
I fully expect to be roasted here, but I, for one, welcome our new vibe coding AI overlords.

I've never been good at coding for one major reason: I cannot commit to memory or manage to effectively use enormous sets of commands, principles, techniques, frameworks, etc., that are often required for this work. I've always been a huge source of frustration for team members who have that innate ability, in that I slow them down, make lots of mistakes, and generally don't know what I'm doing in the context of everything available to me to use. Especially in comparison to them. The only good ability I seem to have is picking through code others wrote, updating it, debugging it, and generally comparing it to best practices to either ask them to fix it or figure out a way to get it done myself. But it takes me a long time, and it's super frustrating.

"Vibe coding" has been world-changing for me in that regard. I know. It's not "pure coding". I know. It's "stealing our jobs". I know. "It's making us all dumb and dependent".

I don't care. I'm trading that for FINALLY being able to realize the vision of all the projects my right brain WANTED to do for so long, but that I never tried because I knew my limited left brain couldn't manage it. I knew the UI and the requirements, but I just couldn't get started. Or, if I got started, I couldn't figure out what to do next. I knew how to explain it, but it would take me many more hours than necessary to write the first working class and functions.

I'm in this to make money. I'll leave the "coding poetry" to the purists. I need an MVP, then a v1, then a v10+ as soon as I can possibly get them done, so then I can get the software to market before some other competitor.

If that makes me some kind of terrible person or shit coder who's "ruining everything" (really?), so be it. I'm due to retire in the next 5-7 years. If I can make that happen earlier with more sold software, all the better.

Comments
  • 3
    To each their own path. I have a huge reason to be sad, because programming is my whole identity and it's changing into something low-effort, which Python basically already was. I already considered Python too fast for development, and people appreciated the craft less for it. It will be worse with AI. But a reasoning model I embrace with both arms, because we can't fight it and I want to remain modern and top-notch. I've learnt that there's much to learn about effective vibe coding. Just like with programming, we have winners who know how their code works, and we have losers too. Nothing changes.

    But for me personally, I'm happy that I can finally make something visual that looks good. I never could do that and didn't care about it. But now it's free. Pretty sure that'll piss off some designers, but hey, it's just the way it is. We all have to adapt.
  • 3
    @whimsical Exactly. I remember in 1990 being told in my earliest college-level computer programming courses that "someday" we'd have an nth-generation language system that we could make programs in with natural language English. Nobody could describe just how that would work since, as everyone knew, anything resembling artificial intelligence was decades, or hundreds, maybe thousands of years away. I feel like I've been waiting for this moment my whole career, where I can get shit done and finally go outside and touch grass. Being locked in a lab, a cubicle maze, or a room in my house, slaving away at a keyboard over some damn missing semicolon is so yesterday.
  • 2
    I see your point, but I'm completely different.
    I'm not in it for the money but for the passion.

    I like to offload boring/tedious tasks to AI but for the most part I want to be in full control.
  • 2
    @stackodev

    > where I can get shit done and finally go outside and touch grass.

    Can you though?
    Most dev jobs don't allow you to leave early when you are faster.
    There is no such thing as being finished with the work. You are paid by the hour.

    Is your job different?
  • 3
    past: figuring out the api to do tasks and creating test code to integrate into other code

    now: ask chatgpt to create test code. spend a while adapting the test code to work with my codebase. double checking the api calls to make sure they do what I actually want. converting code that won't work in embedded to code that can.

    Maybe it was faster to just read the api and write my own test code? I will have to check my gpt generated code to see if it can convert to heapless code for embedded.
  • 4
    So I asked gpt to change my C parsing code to not use the heap. It decided it couldn't use strtok, even though strtok operates in place. So I had to tell it to use strtok. Weird. You have to know when it does things you don't want. So I spend time babysitting the output.
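
    For reference, the in-place pattern gpt refused is roughly this (a minimal sketch, not my actual parsing code): strtok writes '\0' terminators straight into the caller's buffer, so nothing is heap-allocated.

    ```c
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* strtok tokenizes in place: it writes '\0' into this stack
           buffer, so no heap allocation happens. Buffer must be writable. */
        char line[] = "key=value;flag;count=3";
        for (char *tok = strtok(line, ";"); tok != NULL; tok = strtok(NULL, ";"))
            printf("token: %s\n", tok);
        return 0;
    }
    ```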
  • 1
    Yeah this works until something subtle goes wrong (which happens to be the MOST common type of issue that LLMs produce) and you're dead in the water.

    You're right - you deserve to be roasted for falling short of expertise your entire career and celebrating the offload of cognitive work to an unreliable machine.

    If you never got good at programming, maybe you should have stopped trying.

    If you work with a new framework, tool, whatever - your job is to be able to learn it, and quickly. This doesn't happen naturally and requires research and practice. Maybe you did these things and had a hard time for reasons beyond your control. Who knows.

    Bottom line is, if it never worked for you before, this solution might seem good now but you're BONED the moment it doesn't work right.
  • 4
    @Lensflare I'm the owner of my business. I'm trying to be a better boss to myself and let myself out to play more. And that's the reason I started this business...to escape the tyranny of the typical dev shop.
  • 4
    @YourMom GPT is terrible. Claude is so much better. I just tell it to write the tests (and dozens of other things I never used to bother to do because it was so tedious).
  • 3
    @YourMom Nobody should use GPT for coding. It's not good at it. At all. Claude 4 is the best I've found so far. Here's a good (and long) article about someone else's experience. https://wordfence.com/blog/2025/...
  • 3
    @stackodev well, our company paid for gpt. I don't know if they would pay for another.

    So strtok has state (fuck). I knew this. But I asked gpt to make a version that keeps its state locally (yeah, C11 is not available, otherwise I would use strtok_s). So I will test this version out.
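
    For anyone curious, a local-state version looks something like this strtok_r-style sketch with caller-owned state (illustrative, not the exact code gpt produced; strspn/strcspn are standard C):

    ```c
    #include <stdio.h>
    #include <string.h>

    /* strtok_r-style tokenizer: all state lives in the caller-provided
       'save' pointer instead of a hidden static, so it's reentrant and
       embedded-friendly. Tokenizes in place, no heap. */
    static char *tok_next(char *s, const char *delim, char **save) {
        if (s == NULL) s = *save;
        s += strspn(s, delim);            /* skip leading delimiters */
        if (*s == '\0') { *save = s; return NULL; }
        char *tok = s;
        s += strcspn(s, delim);           /* advance to end of token */
        if (*s != '\0') *s++ = '\0';      /* terminate token in place */
        *save = s;
        return tok;
    }

    int main(void) {
        char line[] = "a,b,,c";
        char *save;
        for (char *t = tok_next(line, ",", &save); t; t = tok_next(NULL, ",", &save))
            printf("token: %s\n", t);
        return 0;
    }
    ```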
  • 2
    @YourMom as one of our colleagues here pointed out with asm example code, heap or stack doesn't matter. But you're doing embedded, huh. Still, I expect it gave good results. LLMs are good at C.
  • 3
    @AlgoRythm That's an opinion, but not my actual experience. I've never found a situation I haven't been able to get out of using vibe coding. In many ways, it frees up my wetware compute cycles to see the bigger picture and understand the root cause. Sometimes you're staring into the abyss of mismatched brackets when you need to stare into the abyss of the architecture.
  • 2
    @stackodev honestly, if writing tests is tedious, you are probably doing it wrong.
    Like trying to achieve a specific % test coverage and writing completely useless test cases.
  • 1
    @YourMom you probably mean Copilot, and that works with Claude too.
  • 2
    @stackodev nah, I'm far in favor of Claude too, too many reasons to list here. But every LLM requires a different prompt for the same result. If you're very used to gpt prompting you can get great results with it. Especially with the new one. It's a big boost. But I read online that yes, gpt got way better at coding but now sucks at anything else. And well, they put deep research behind a paywall after just five uses or so. Fuckers. But also, a 32k context window? Is this the '90s? GPT-4o is back as well... don't care. I want o3 back.

    One thing is for sure: gpt5 does have benefits, but especially for THEM.

    But gpt5 is so far from Claude, like astronomically far.
  • 2
    @stackodev Modern models make fewer mistakes, but the mistakes they do make are almost always difficult to spot. Ask me why I know so much about two-way binding in Blazor. Too late, I'll tell you. It's because an LLM gave me code that looked right, I didn't bother to read about two-way binding in Blazor beforehand, and I wasted a few hours and STILL had to learn it myself anyway.

    Granted, this was GPT and GPT is much worse at code. But all LLMs fall into the same traps, just at different rates. They're all the same tech with different datasets.

    Funny you should mention architecture - LLMs are so awful at it that I don't think any LLM has successfully generated something more complex than a chat app on its own. They also have some trouble following existing architecture, though that's generally a smaller problem than the others that come with vibe coding.
  • 2
    @stackodev in a similar way to how leaded gasoline lowered the global IQ of humanity, I think vibe coding is going to increase the number of vulnerabilities and hacks worldwide, as code written without any organic intelligence meets the market. Not to mention an enormous increase in shitty bloatware and the enshittification of existing products.

    The tragedy - even if you don't use crappy LLM code in your product, it might appear somewhere in your supply chain anyways. What a shame.
  • 2
    @AlgoRythm I had the same experience learning regex decently from gpt; in the end I needed a site. I tried to learn the whole thing while I was writing an interpreter for it.

    But knowing what you can and can't do with an LLM is part of the journey. That's what it is: a journey. And the path of the journey (the LLM) is changing constantly.

    But I don't produce any code anymore that isn't corrected or checked by an LLM. It has so many advantages. But I keep my LLMs very strict by telling them to do exactly as I say. Real drill-sergeant stuff. It ended up in the retoor philosophy. Mind you, it's only for LLMs: https://static.molodetz.nl/retoor.h...
  • 1
    @whimsical I actually agree with LLMs as assistants. Boilerplate code has almost never really been written by good devs. Either it comes from your IDE or it comes from Stack Overflow, but I hardly ever fucking write the small, self-contained code that LLMs spit out very well.

    The process of learning tools, commands, and frameworks is FUN for me, and I assume for the majority of other devs who are devs because they LIKE programming. OP has flipped that idea on its head, which is probably why I'm so personally upset by it.
  • 3
    @AlgoRythm I understand why this is a sensitive subject in general. For people like me, programming is my identity and LLMs are destroying it. But I see a lot of challenge in decent prompting too. It's a different kind of work, but I like it as well. The art is crafting one prompt so good that Claude keeps generating three times more than it's allowed, runs you out of tokens for the next five hours, and still finishes. Because these days Claude doesn't stop in the middle anymore. Only when finished.

    Btw, anyone else notice Claude crashing? Sometimes it crashes and I'm left with nothing and all tokens wasted.
  • 1
    @YourMom is this about the way I say gpt5? You knew what I meant. Your screenshot makes it clear.
  • 1
    @whimsical I do feel threatened by it, but I would feel more so if I were a junior. LLMs can pass for junior devs mostly because, thinking back to my first internship and the first year of my first job, I was a fucking bonehead. An LLM could probably out-perform me on my best days back then.

    It would take a lot for an LLM to out-architect me these days. It would need to be an extraordinarily lucky day for the LLM, with a prompt that basically gives it the answer.

    As the LLM race cools down, I'm accepting LLMs into my workflow as code reviewers, rubber duckies, and boilerplate generators. I do not give it any further tasks because I do not trust it with further tasks, based on both my emotions and genuine experience.
  • 2
    @whimsical no, i was confused when you said copilot. I don't know what the fuck I am even talking about apparently.
  • 3
    awww it's chill by me

    if it works, it works
  • 2
    @YourMom The assumption was that you use a tool that wraps gpt, like Copilot, rather than the web interface.

    But yes, it does clear up the boilerplate. I wonder if it can create a basic Vulkan renderer.
  • 2
    @AlgoRythm maybe now, but in the future it may actually make things more secure. With the default security we have now, it's already an art to write insecure code. It's only written by people who really don't give a fuck. And AI may soon give more fucks by default than humans do. And since the makers of AI consider all their users losers, I'm pretty sure our generated code will have many guardrails. Because believe me, they think we're idiots. Like, completely.
  • 1
    @BordedDev I have a little Michael J. Fox thing going on and I don't know why. I just ate some protein, so it shouldn't be a blood sugar issue. My wife told me by text that I probably definitely have Parkinson's. Going to head home soon to eat more food.
  • 2
    @whimsical you might be right about that, but largely on a superficial level. Yes, LLMs tend to do the grunt work that organic programmers (myself included) tend to avoid, such as validation. But that's just the basics of security; the really important security is almost always architectural, which LLMs fail at spectacularly in general.
  • 2
    @AlgoRythm I vote for a low attack surface. That's most important. Security should never be complex. If it's complex, it goes wrong. Never allow complex security.
  • 2
    @whimsical again, I agree on some level and disagree on another. Security at its core is a difficult thing. Look at SSL/TLS and the fact that you really aren't supposed to roll your own encryption unless you're an EXPERT.

    The *core* of security is a complicated science, and is currently a subset of math more than anything else.

    Things like sessions, federated identity, JWT, and encryption at rest are the most complicated things a typical organization should need to deal with. I can tell you from implementing these things across 2 different tech stacks and 3 different frameworks that this is a heavily architectural task, and leaks in the boat can be devastating.
  • 1
    @AlgoRythm I already consider JWT not a good thing. It goes too far.
  • 2
    @whimsical JWT is great - simpler than SAML with its XML tokens, and more secure with a signature already in the basic spec. What's wrong with it?
  • 1
    @AlgoRythm they contain information.
  • 2
    @whimsical is that not the point? You're gonna need to be less cryptic
  • 1
    @AlgoRythm I consider that an issue. It could've been just a hash, with the backend figuring out what data goes with it. No need for a system where everyone knows how it works. JWT is not simple.
  • 1
    @whimsical I think you're confused about the purpose of JWT. It doesn't replace or compete with the typical web token (which could be as simple as 32 bytes of entropy handled 100% on the back-end).

    JWT is about storing identity information in a way that lets you validate the SOURCE of the token. It's a beautifully simple design with only three parts: the header (format info about the token itself), the payload (JSON-encoded freeform data), and the signature.

    Play around with it on this great site, maybe your mind will be changed: https://www.jwt.io/

    Here's a cool tip: with JWT, your web server can validate the identity of a user WITHOUT calling the user info and/or session databases.
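
    For illustration, splitting a token into those three parts is trivial (toy token below; real validation would recompute the HMAC over header.payload with a crypto library, which I'm omitting here):

    ```c
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* A JWT is three base64url segments joined by dots:
           header.payload.signature. This token is illustrative, not real. */
        char jwt[] = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiI0MiJ9.ZmFrZS1zaWc";
        char *dot1 = strchr(jwt, '.');
        char *dot2 = dot1 ? strchr(dot1 + 1, '.') : NULL;
        if (!dot1 || !dot2) { fputs("malformed token\n", stderr); return 1; }
        *dot1 = *dot2 = '\0';
        printf("header:    %s\n", jwt);
        printf("payload:   %s\n", dot1 + 1);
        printf("signature: %s\n", dot2 + 1);
        /* To validate without any database call, recompute
           HMAC-SHA256(header + "." + payload, shared_secret) and compare
           it to the signature segment. */
        return 0;
    }
    ```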
  • 1
    @whimsical The one big downside of JWT is that you cannot invalidate a token without also throwing away the efficiency gains JWT offers. If you want the ability to invalidate tokens after they're issued, you need a database call on EVERY JWT validation, which totally defeats the purpose of JWT in the first place. The most common resolution is short expiry dates (1-15 minutes).
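
    A minimal sketch of that mitigation, checking the exp claim before trusting a token (naive string scan just to show the idea; a real implementation would use a proper JSON parser):

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* Expiry check on a decoded JWT payload: since tokens can't be
       revoked cheaply, the short-lived "exp" claim is the safety valve. */
    static int token_expired(const char *payload_json, time_t now) {
        const char *p = strstr(payload_json, "\"exp\":");
        long exp;
        if (!p || sscanf(p + 6, "%ld", &exp) != 1)
            return 1;                  /* no exp claim: treat as expired */
        return now >= (time_t)exp;
    }

    int main(void) {
        /* Payload as decoded from base64url; exp value is illustrative. */
        const char *payload = "{\"sub\":\"42\",\"exp\":1735689900}";
        printf("expired: %s\n", token_expired(payload, time(NULL)) ? "yes" : "no");
        return 0;
    }
    ```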