It's like learning to cook and regularly making your own meals, then shifting to a "new paradigm" of hiring a personal chef to cook for you. Food's getting made either way, but it's not really the same deal.
No, it's more like moving from line cook to head chef in charge of 30 cooks.
Food's getting made, but you focus on the truly creative part -- the menu, the concept, the customer experience. You're not boiling pasta or cutting chives for the thousandth time. The same way, you're now focusing on architecture and design instead of writing your 10,000th list comprehension.
Except the cooks don't exist anymore, as they've all become head chefs (or changed careers), and the food is being cooked by magical cooking black boxes.
Would you consider drudgery the in-depth thinking that's required to actually go and write that algorithm, think out all the data ownership relationships, name the variables, and think through the edge cases for the tests?
For me, the act of sitting down and writing the code is what actually leads to true understanding of the logic, in a similar way to how the only way to understand a mathematical proof is to go through it. Sure, I'm not doing anything useful by showing that the square root of 2 is irrational, but by doing that I gain insights that are otherwise impossible to transfer between two minds.
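(For concreteness, that proof fits in a few lines; a sketch in LaTeX, nothing beyond the textbook contradiction argument:)

    % Classic contradiction argument that sqrt(2) is irrational.
    \begin{proof}
    Suppose $\sqrt{2} = p/q$ in lowest terms, with $p, q$ integers.
    Then $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even: write $p = 2k$.
    Substituting, $4k^2 = 2q^2$, so $q^2 = 2k^2$ and $q$ is even as well.
    Both $p$ and $q$ even contradicts lowest terms, so no such $p/q$ exists.
    \end{proof}

Writing it out yourself is exactly where the insight lives -- reading it isn't the same.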
I believe that coding was one of the few things (alongside, for example, writing math proofs, or that strange process of crafting something with your hands, where the object you are building becomes intimately evident) that get our brains to a higher level of abstraction than normal mammal "survival" thinking. And it makes me very sad to see it thrown out the window in the name of a productivity that may not even be real.
> Would you consider drudgery the in-depth thinking that's required to actually go and write that algorithm, think out all the data ownership relationships, name the variables, and think through the edge cases for the tests?
For 99% of the functions I've written in my life? Absolutely drudgery. They're barely algorithms. Just bog-standard data transformation. This is what I love having AI replace.
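To be concrete, the kind of function I mean looks something like this (field names invented, it's just the shape of it):

    # Pure data reshaping: no algorithm, no cleverness, just moving
    # fields from one dict to another. This is the 99%.
    def to_display_row(user: dict) -> dict:
        return {
            "name": f"{user['first_name']} {user['last_name']}".strip(),
            "email": user.get("email", "").lower(),
            "active": user.get("status") == "active",
        }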
For the other 1% that actually requires original thought, truly clever optimization, and smart naming to make it literate? Yes, I'll still be doing that by hand, although I'll probably be getting the LLM to help scaffold all the unit tests and check for any subtle bugs or edge cases I may have missed.
The point is, LLMs let me spend more time at the higher level of abstraction that is more productive. They're not taking it away!
I do agree with this, and in fact I do often use LLMs for these tasks! I guess my message is more intended towards vibe-only coders (and, I guess, the non-technical higher-ups drooling at the idea of never having to hire another developer).
I've found AI handy as a sort of tutor sometimes, like "I want to do X in Y programming language, what are some tools / libraries I could use for that?" And it will give multiple suggestions, often along with examples, that are pretty close to what I need.
The naysayers said we’d never even get to this point. It’s far more plausible to me that AI will advance enough to de-slopify our code than it is to me that there will be some karmic reckoning in which the graybeards emerge on top again.
What point have we reached? All I see is HN drowning in insufferable, identical-sounding posts about how everything has changed forever. Meanwhile at work, in a high stakes environment where software not working as intended has actual consequences, there are... a few new tools some people like using and think they may be a bit more productive with. And the jury's still out even on that.
The initial excitement of LLMs has significantly cooled off, the model releases show rapidly diminishing returns if not an outright plateau, and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.
I think I'm done with HN at this point. It's turned into something resembling Moltbook. I'll check back in a couple of years, when maybe things will have changed a bit around here.
Same as it was for "blockchain" and NFTs. Tech "enthusiasts" can be quite annoying, until whatever they hype is yesterday's fad. Then they jump on the next big thing. Rinse, repeat.
I do not look forward to the day when the public commons is trashed by everyone and their claudebot, though perhaps the segmentation of discourse will be better for us in the long run, given how most social media sites operate.
I am not in a high-stakes environment and work on one-person-sized projects.
But for months I have almost stopped writing actual lines of code myself.
The frequency and quality of my releases have improved.
I got very good feedback on those releases from my customer base, and the number of bugs reported is no larger than for code written by me personally.
The only downside is that I no longer know the code inside out; even if I read it all, it feels like code written by a co-worker.
> The initial excitement of LLMs has significantly cooled off, the model releases show rapidly diminishing returns if not an outright plateau, and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.
I am absolutely baffled by this take. I work in an objectively high-stakes environment (Big 3 cloud database provider) and we are finally (post Opus 4.5) seeing the models and tools become good enough to drive the vast majority of our coding work. DevOps and livesite work is a harder problem, but even there we see very promising results.
I was a skeptic too. I was decently vocal that AI could work for single devs but would never scale to large, critical enterprise codebases and systems. I was very wrong.
> The naysayers said we’d never even get to this point. It’s far more plausible to me that AI will advance enough to de-slopify our code than it is to me that there will be some karmic reckoning in which the graybeards emerge on top again.
"The naysayers"/"the graybeards" have never been on top.
If they had been, many of the things the author here talks about getting rid of never would've been popular in the first place. Giant frameworks? Javascript all the things? Leftpad? Rails? VBA? PHP? Eventually consistent datastores?
History is full of people who successfully made money despite the downsides of all those things because the downsides usually weren't the most important thing in the moment of building.
It's also full of people who made money cleaning it all up when the people who originally built it didn't have time to deal with it anymore. "De-slopifying" is going to be a judgment call that someone will need to oversee; there's no one-size-fits-all software pattern, and the person who created the pile of code is unlikely to be in a position to drive that process.
Step 1: make money with shortcuts
Step 2: pay people to clean up and smooth out most of those shortcuts
I've already bounced between both roles a lot, thanks to the business cycles of startup life. When you're trying to out-scale your competitor you want to find every edge you can, and "how does this shit actually work" is going to be one of those edges for making the best decisions about how to improve cost/reliability/perf/usability/whatever. "It doesn't matter what the code looks like" is still hard to take seriously compared to the last few iterations of tools whose pitches claimed the same. The turnaround loop of modifying code is faster now; the risk of a tar-pit of trying to tune a pile of ill-fitting spaghetti on the fly is not. It's going to be good enough for a lot of people (Sturgeon's law; most people aren't great at knowing what usefully testable code looks like, for example). So let's push past today's status quo of software.
If I were working on a boring product at a big tech co I'd be very worried, since many of those companies have been hiring at high salaries for non-global-impact product experiments that don't need extreme scale or shipping velocity. But if you want to push the envelope, the ability to write code faster should make you think about what you can do with it that other people aren't doing yet. Things beyond "here's a greenfield MVP of X" or "here's a port of Y."
The AI agents can ALREADY "de-slopify" the code. That's one of the patterns people should be using when coding with LLMs. Keep an agent that only checks for code smells, testability, "slop", scalability problems, etc. alongside whatever agent you have writing the actual code.
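A minimal sketch of that pattern, assuming only a generic `llm` callable (a stand-in for whatever model API you use; the prompt text and names here are invented):

    from typing import Callable

    # Reviewer-only agent: it never writes code, it only flags problems.
    REVIEW_PROMPT = """You are a code reviewer. Do not write or fix code.
    Flag only: code smells, poor testability, duplicated logic, unclear
    naming, and scalability problems. Cite file and line for each finding
    and explain why it matters. If the diff is clean, reply "LGTM".

    Diff under review:
    {diff}
    """

    def review_diff(diff: str, llm: Callable[[str], str]) -> str:
        """Run one diff past the reviewer agent and return its findings."""
        return llm(REVIEW_PROMPT.format(diff=diff))

    # Typical loop: the writer agent produces a patch, this reviewer
    # critiques it, and the patch only lands once the reply is "LGTM".

Keeping any "fix it" instruction out of the reviewer's prompt is the point: it stays adversarial instead of rubber-stamping the writer agent's output.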
I guess juniors are different these days. In my generation a lot of people's first contact with code was doing basic (HTML, CSS, bits of JS) web development. That was how I got started at like 12 or 13.
Another thing I've noticed is that using AI, I'm less likely to give existing code another look to see if there's already something in it that does what I need. It's so simple to get the AI to spin up a new class / method that gets close to what I want, so sometimes I end up "giving orders first, asking questions later" and only later realizing that I've duplicated functionality.
Yeah, I really don't get it. So instead of using someone else's framework, you're using an AI to write a (probably inferior and less thoroughly tested and considered) framework. And your robot employee is probably pulling a bunch of stuff (not quite verbatim, of course) from existing relevant open source frameworks anyway. Big whoop?
AI adoption is being heavily pushed at my work and personally I do use it, but only for the really "boilerplate-y" kinds of code I've already written hundreds of times before. I see it as a way to offload the more "typing-intensive" parts of coding (where the bottleneck is literally just my WPM on the keyboard) so I have more time to spend on the trickier "thinking-intensive" parts.