They're so obviously going to fail, but in a good way. The plan was to get the world addicted and then raise prices, but the reality is that there's going to be a race to the bottom on pricing, because none of them are significantly better than the others. They don't own anything; it's just math, and they can be undercut by an OSS bomb from China at any given moment.
Even worse, they've bet against the math advancing. If it gets significantly more power-efficient, which literally could happen tomorrow if the right paper goes up on arXiv, maybe a 10-year-old laptop could give "good enough" results. All those data centers are now trash and your companies are now worth a negative trillion dollars.
I think all of these factors are completely independent of whether AI works or not, or how well it works. Personally, I don't care if it replaces programmers: get another job. I have simply experienced it, and at this point it is mediocre.
Of course I am not using the bleeding edge, and I am not privy to the top-secret insider stuff, which may well be orders of magnitude better. But if they've got it, why would they keep it a secret when people are desperate to give them money? If they're hiding it, it's something that they know somebody could analyze and knock off, and then it's a race to the bottom again.
In a race to the bottom, we all win. Except the people and economies who bet their lives on it being a race to the top.
It’s funny reading this parallel world that some portion of people have constructed for themselves.
It has been three years and these tools can do a considerable portion of my day-to-day work. Salvage the wreckage? Unfortunately, I think many people’s jobs are essentially in the “Coyote running off a cliff but not realizing it yet” phase, or soon will be.
I think this comment is reacting to a different argument than the one the article is actually making.
The piece isn’t claiming that AI tools are useless or that they don’t materially improve day-to-day work. In fact, it more or less assumes the opposite. The critique is about the economic and organizational story being told around AI, not about whether an individual developer can ship faster today.
Saying “these tools now do a considerable portion of my work” operates on the micro level of personal productivity. Doctorow is operating on the macro level: how firms reframe human labor as “automation,” push humans into oversight and liability roles, and use exaggerated autonomy claims to justify valuations, layoffs, and cost-cutting.
Ironically, the “Wile E. Coyote running off a cliff” metaphor aligns more with the article than against it. The whole “reverse centaur” idea is that jobs don’t disappear instantly; they degrade first. People keep running because the system still sort of works, until the ground is gone and the responsibility snaps back onto humans.
So there’s no contradiction between “this saves me hours a day” and “this is being oversold in ways that will destabilize jobs and business models.” Those two things can be true at the same time. The comment seems to rebut “AI doesn’t work,” which isn’t really the claim being made.
You can read my reply to another comment making a similar point. In short, I think you are giving Doctorow far too much credit - the assumption that these tools are fundamentally incapable is woven throughout the essay; the risk always comes from the fact that managers might think these tools (which are obviously inferior) can do your job. The notion that they can actually do your job is treated as invariably absurd, pie-in-the-sky, bubble thinking, or unmentionable.
My point is that I don’t think a technology that went from ChatGPT (cool, useless) to opus-4.5+ in 3 years is obviously being oversold when the claim is that it can do your entire job rather than just being a useful tool.
I think we have to be careful when assuming that model capabilities will continue to grow at the same rate they have grown in recent years. It is very well documented that their growth in recent years has been accompanied by an exponential increase in the cost of building these models; see, for example (one of many), [1]. These costs include not just the cost of GPUs but also the cost of reinforcement learning from human feedback (RLHF), which is not cheap either -- there is a reason that SurgeAI has over $1 billion in annual revenue (and ScaleAI was doing quite well before they were purchased by Meta) [2].
Maybe model capabilities WILL continue to improve rapidly for years to come, in which case, yes, at some point it will be possible to replace most or all white collar workers. In that case you are probably correct.
The other possibility is that capabilities will plateau at or not far above current levels because squeezing out further performance improvements simply becomes too expensive. In that case Cory Doctorow's argument seems sound. Currently all of these tools need human oversight to work well, and if a human is being paid to review everything generated by the AI, as Doctorow points out, they are effectively functioning as an accountability sink (we blame you when the AI screws up, have fun.)
I think it's worth bearing in mind that Geoffrey Hinton (infamously) predicted ten years ago that radiologists would all be out of a job in five years, when in fact demand for radiology has increased. He probably based this on some simple extrapolation from the rapid progress in image classification in the early 2010s. If image classification capabilities had continued to improve at that rate, he would probably have been correct.
No, models have significantly improved at the same cost. Last year's Claude 3.7 has since been beaten by GPT-OSS 120B, which you can run locally and which is much cheaper to train.
They justified it with a paper that states what you say, but that's exactly the problem. The paper's statement is significantly weaker than the claim that there's no progress without an exponential increase in compute.
The paper's statement that SotA models require ever-increasing compute does not support "be careful when assuming that model capabilities will continue to grow", because it only speaks of ever-growing models; the capabilities of models at the same compute cost continue to grow too.
I do not agree with your reading of the article. The premise - both implicit and stated explicitly throughout the article - is that companies are hyping this up because they want to be seen as growing, that this technology cannot do your job, that these are statistical tools foolishly being used to replace real workers. Look at the bits I quote in my other comment.
I would have been much more interested in reading the article you’re suggesting.
You need to read the article again with a more charitable lens. He starts with
>What I do not do is predict the future. No one can predict the future, which is a good thing, since if the future were predictable, that would mean we couldn’t change it.
It feels like putting words in Cory's mouth to make statements like "he's saying it can't replace us". That is exactly the point he avoids in the article, in order to focus on the human, not the tech and its capabilities.
> The piece isn’t claiming that AI tools are useless or that they don’t materially improve day-to-day work
Would you call something that could replace your labor "spicy autocomplete"? He also evokes NFTs and blockchain, for some reason. To me this phrasing makes it sound like he thinks they are damn near useless.
What Cory thinks now doesn't change the piece, which focuses on the future and how humanity shapes itself around it. It feels like missing the forest for the trees to focus on what he feels about AI circa 2025 when the piece is talking about ramifications.
We’re supposed to pretend people read articles instead of just the headline (it is in the posting guidelines). To play along with that rule, people will write as if the poster they are responding to missed some nuance of the article.
I don't have much to offer here (and yes, sorry, after I made my snarky remark I realized you had indeed read the article). I recognize AI's capabilities but mostly don't use it, primarily for political reasons but also because I just enjoy writing code. I'll sometimes use up the ChatGPT free limit using it as a somewhat better search engine (and it's not always better), but there's no way I'm paying for agents, which has everything to do with where the money is going, not the money itself. Of course there are other reasons, outside of how AI is used by programmers, that would derail the general theme of these threads.
I'm just drawn to these threads for the drama and sometimes it triggers me and I write a snarky throwaway comment. If the discussions, and particularly the companies themselves, could shift to actual societal good it can do and how it is concretely getting there, that would hold my attention. Instead we get Sona etc.
I was accepting sodapopcan’s premise while responding to them. My joke was intended to be aimed at the posting guidelines and these little Hacker News traditions. But it was a bit dismissive toward you, which is a little rude. Sorry.
I think you’ll find the essay much more nuanced than that. It only incidentally discusses what you’re thinking about.
> Think of AI software generation: there are plenty of coders who love using AI. Using AI for simple tasks can genuinely make them more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it’s clear they are not hoping to make some centaurs.
The article does a pretty lazy* job of defending its assumption that "solving really gnarly, abstract puzzles" is going to remain beyond AI capabilities indefinitely, but that is a load-bearing part of the argument and Doctorow does try to substantiate it by dismissing LLMs as next-word predictors. This is a description which is roughly accurate at some level of reduction but has not helped anyone predict the last three years of advances and so seems pretty unlikely to be a helpful guide to the next three years.
The other argument Doctorow gives for the limits of LLMs is the example of typo-squatting. This isn't an attack that's new to LLMs and, while I don't know if anyone has done a study, I suspect it's already the case in January 2026 that a frontier model is no more susceptible to this than the median human, or perhaps less; certainly in general Claude is less likely to make a typo than I am. There are categories of mistakes it's still more likely to make than me, but the example here is already looking out of date, which isn't promising for the wider argument.
*to be fair, it's clearly not aimed at a technical audience.
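For what it's worth, the usual guard against that failure mode is mechanical rather than model-side: verify that a suggested dependency actually exists before installing it. Here is a minimal sketch of that check, assuming Python and PyPI's simple index (the script is illustrative, not anything from the article, and it only catches names that don't exist at all - a malicious look-alike that someone has already registered would still pass and needs an allowlist or review):

    # Hypothetical pre-install check for hallucinated dependency names.
    # A 404 from PyPI's simple index means no such package exists, which is
    # the failure mode described above; a registered typo-squat would still
    # resolve and needs manual review.
    import sys
    import urllib.error
    import urllib.request

    def package_exists(name: str) -> bool:
        """Return True if `name` resolves on https://pypi.org/simple/."""
        url = f"https://pypi.org/simple/{name}/"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            return False

    if __name__ == "__main__":
        for pkg in sys.argv[1:]:
            verdict = "exists" if package_exists(pkg) else "NOT FOUND (possible hallucination)"
            print(f"{pkg}: {verdict}")

Running it against the article's example (`python check_deps.py lib.pdf.text.parsing requests`) would flag the hallucinated name while the real package passes.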
I disagree. The article leads with the sentiment that I mention and has it woven throughout. The theme is that AI is obviously not capable of doing your job; the problem is that the stupid managerial class will get convinced it is and make things shitty.
> This is another key to understanding – and thus deflating – the AI bubble. The AI can’t do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can’t do your job.
> Now, AI is a statistical inference engine. All it can do is predict what word will come next based on all the words that have been typed in the past. That means that it will “hallucinate” a library called lib.pdf.text.parsing,
I think it is a convenient, palatable, and obviously comforting lie that lots of people right now are telling themselves.
To me, all the ‘nuance’ in this article is just because the coyote in Doctorow has begun looking down but still cannot quite believe it. He is still leaning on the same tropes of statistical autocomplete that have been a mainstay of the fingers-in-ears gang for the last 3 years.
You're in half the comment replies with a confrontational tone, and at times you're quite aggressive. It does not feel as though you're sincerely engaging, but instead have an ideological world view that makes it difficult to reconcile different perspectives.
I'm working directly with these tools and have several colleagues who do as well. Our collective anecdotal experience keeps coming back to the conclusion that the tech just isn't where the marketing is on its capabilities. There's probably some value in the tech here, which leads others like yourself to be so completely sold on it, but it's just not materializing much in my day-to-day outside of creating the most basic code/scaffolding, which I then have to go back and fix because there are subtle errors. It's actually hard to tell if my productivity is better, because I have to spend time fixing the generated output.
Maybe it would help to recognize that your experience is not the norm. And if the tech were there, where are the actual profits from selling it? It's increasingly common for it to be "under development" when it comes to selling to consumers, or only deployed as a chatbot in scenarios where it's acceptable to be wrong, with warnings to verify the output yourself.
I’m replying to the people replying to me, which is hopefully permissible? I will respond aggressively to people who say that my work must not be very important or hard if I feel that AI can do a considerable portion of my day-to-day, because I feel that is initiating rudeness, and I find that the HN tendency to talk down to people expressing this opinion is chilling important conversations.
If my other replies come off as aggro, I apologize - I definitely can struggle with moderating tone in comments to reflect how I actually feel.
> Our collective anecdotal experience keeps coming back to the conclusion that the tech just isn't where the marketing is on its capabilities. There's probably some value in the tech here, which leads others like yourself to be so completely sold on it
Let me be clear - I am not so completely sold on the current iteration. But I think there has been a significant improvement even since the midpoint of last year, the number of diffs I am returning mostly unedited is sharply increasing, and many people I am talking to are privately telling me they are no longer authoring any code themselves except for minor edits in diffs. Given that it has only been 3 years since ChatGPT, really I am just looking at the curve and saying ‘woah.’
It's unfortunately the case that even understanding what AI can and cannot do has become a matter of, as you say, "ideological world view". Ideally we'd be able to discuss what's factually true of AI at the beginning of 2026, and what's likely to be true within the next few years, independently of whether the trends are good for most humans or what we ought to do about them. In practice that's become pretty difficult, and the article to which we're all responding does not contribute positively.
>independently of whether the trends are good for most humans or what we ought to do about them.
This whole article is about the trends and whether they are good for humans. I was pleasantly surprised that this was not yet another argument over "AI is (not) good enough", since people at this point have their positions set on that. I don't think it's too late to talk about how we as humans can manage Pandora's box before it opens.
Responses like this dismissing the human element seem to want to isolate themselves from societal effects for some reason. The box will affect you.
Neither in my previous comment nor in my actual views do I dismiss the human element or expect to isolate myself from societal effects.
> I was pleasantly surprised that this was not yet another argument of "AI is (not) good enough"
The article does assert that, and that's important for its argument that ordinary workers just need to convince decisionmakers that things will go poorly if they replace us.
"Now, if AI could do your job, this would still be a problem... But AI can’t do your job."
>independently of whether the trends are good for most humans or what we ought to do about them.
Saying "the writer shouldn't talk about this" is about as dismissive of a topic as you can be. You could have simply said "this topic isn't as interesting to delve into", but the framing that "the article to which we're all responding does not contribute positively." suggests that.
>This isn't ambiguous.
It's also talking about the present. The article made clear at the very beginning that it is not going to predict the future of the tech. It's looking at the here and now for AI, and at the human element for any possible future, whether or not that remains the case.
Also note this response. It is again trying to focus on the tech arguments. That isn't the focus of this one.
That two things can or should be discussed independently doesn't mean either is unimportant. And insisting that you know what I meant better than I do is not a good way to have a productive conversation.
As for the Doctorow article, I don't understand exactly what you're trying to say about "focus", but it's incoherent to read the discussion about replacement of your current job as talking purely about the present - since the job is currently yours, the replacement must happen in the future.
The technology is different, but we've been told that we'll be replaced by foreign teams for years. It's had some impact, but in all but two of the cases I've personally seen it has been significantly detrimental. In the majority of cases, the offshore team was either replaced or relegated to pointless busy work.
My work involves reading papers, doing high level math, coding it, reasoning about the business environment, etc. etc. My work is important and I find the classic HN impulse to stigmatize people for saying this to be silly.
It is you who is the fool if you haven’t managed to use these things to massively accelerate what you can do and if you cannot see the clear trend. Again, it has been three years since chatgpt came out.
So the work is important and I am the fool because ... you think so? That is not a very intellectually defensible position to hold. One could even reason that the argument is so flawed because LLMs degrade thinking capacity.
You’re arguing in a tautological fashion - anyone who suggests that AI can do a lot of their job must not have been doing important work in the first place. It’s a convenient psychological self defense mechanism that I see often here. Take care.
> My work involves reading papers, doing high level math, coding it, reasoning about the business environment, etc. etc. My work is important and I find the classic HN impulse to stigmatize people for saying this to be silly.
This is what every person who's been laid off by AI says. Every single time. People really like to assume that the work they do is important, except companies don't care about important; they care about pushing shit out the door, faster and cheaper. Your high-level math and business reasoning do not matter when they can just let someone cheaper go wild and deliver faster with no guard rails.
My job is essentially delivering faster with no guard rails. I know it is a common HN sentiment that nobody can do my job better than me because I am the most thoughtful person to ever do it.
This is explicitly not what I am saying, given that I am leading with AI getting close to being able to do much of what is currently my job. I find it hard to imagine a world where we stagnate right where we are and it takes a decade to do anything more; in other words, I cannot imagine a world where a considerable portion of jobs are not automatable soon - and I do not even think it will be shittier.
And yet you did not read this essay, or at least did not understand it.
Whatever LLM you used to summarize it has let you down. I wonder how often that is happening in your day-to-day work; perhaps that's why you feel your job is at risk.
I am commenting on something in the background space of assumptions of the essay. Just because it is not the main thrust doesn’t mean I didn’t read the article or that I used an LLM to summarize it for me. Take care
I would have thought that my issue with what you did was pretty clear from the original message, and would not require any particularly sophisticated interpretation.
">" is the notation for quotes.
You did not use it for a quote. Instead you used it to present your own distortion (not a paraphrase) as whimsicalism's words. That is not cool. Stop doing it.
Given the points I can see, other people understood what I was saying and appreciated it. I'm sorry that you're so confused but make no mistake this is your problem, not mine.
I mostly do normal React crap, ostensibly the easiest thing for these tools to do, and these tools cannot do a considerable portion of my work. Yes I've used the latest model. Yes I've used the latest agentic IDE. Yes I've tweaked my prompts and added repository rule files. Yes I've done this approximately every three months for the last two years. This shit does not work. Nobody ever posts proof of it working well in any <great project>.
I am at the point where if I read something from a software developer like, "these tools can do a considerable portion of my day to day work", I have to just assume that person's day to day work was garbage. And this is not terribly surprising, because a lot of software developers I have personally worked with did produce mostly garbage. Some amount of those people are surely using AI and posting about it, and that explains what we continually see online. Sorry to any offended.
> It has been three years and these tools can do a considerable portion of my day to day work.
Agreed.
> Unfortunately I think that many people’s jobs are essentially in the “Coyote running off a cliff but not realizing it yet” phase or soon to be.
Eh… some people maybe. But history shows that nearly every time a tool makes people more efficient, we get more jobs, not fewer. Jevons paradox and all that: https://en.wikipedia.org/wiki/Jevons_paradox
Of course cars wouldn't lead to more horses, because horses were the thing being replaced. But cars sure as hell led to a lot more drivers, which is more akin to the analogy.
To take software engineers as an example, Jevons paradox would say that since software engineering is now so much easier due to LLMs, demand will increase due to the reduced cost, which will lead to more software needing to be created, which paradoxically leads to more software engineers. There's no equivalent of the "horse" in the analogy, because the same people who were coding before ("driving" the horse) will be aided by LLMs in the future ("driving" a car).
> But history shows nearly every time a tool makes people more efficient, we get more jobs, not less.
I hope so, but do you have any ideas what they could be? This time feels different, especially because all the ultra-pro-AI people keep saying that "this time it's different" from past technological revolutions. This is aiming to replace people across many industries, whereas historically it has happened in smaller increments as new inventions were (more slowly) rolled out.
> AI is a statistical inference engine. All it can do is predict what word will come next based on all the words that have been typed in the past.
If we keep saying this hard enough over and over, maybe model capabilities will stop advancing.
Hey, there's even a causal story here! A million variations of this cope enter the pretraining data, the model decides the assistant character it's supposed to be playing really is dumb, human triumph follows. It's not _crazier_ than Roko's Basilisk.
There's a fundamental disconnect: OP refers to senior engineers being replaced with AI, whereas the evidence and logical reasoning points much more to junior engineers being replaced by AI. And that premise seems like quite a plausible one...
>OP refers to senior engineers being replaced with AI, whereas the evidence and logical reasoning points much more to junior engineers being replaced by AI.
If the industry cared about future seniors, they'd invest in juniors. But that's not what's happening. AI will effectively replace seniors in 20 years on the current trajectory. Whether or not that replacement is adequate is the bigger question.
I think the junior thing started around 2024 or early 2025, because back then the models of the day were at or above that level, with somewhat flaky reliability. In the past year that's changed. We are now at "mostly reliable" for any junior-level stuff, and "surprisingly capable, maybe still needs some hand-holding" for advanced/senior-level stuff. And somewhat superhuman if the problem is easily verifiable in a feedback loop (see the AtCoder stuff).
The whole AGI industry is like one of those projects that claims "90% finished" from the time of the first demo, then for the next N years, all the way up until the project is eventually canceled.
Yeah we can spew out millions of lines of unmaintainable slop code! Now we can even write a slop unusable browser!
All this shit looks like progress, but it's all really a cover for lack of progress. And now we've got the entire economy as a bet on it.
None of this is to say there's nothing useful coming out of the industry. I use it productively for a ton of things. But, the reverse centaur thing is a great analogy. The money getting ploughed into it is assuming reverse centaur will be the final outcome, not a set of useful productivity tools. Once investors start to realize that all we're going to get out of it is the latter, we'll be in for a world of hurt.
Granted, one nice thing about the AI wave is that I bet it'll be able to keep slinging new and idiotic slop for decades that'll keep successfully unburdening investors from their money, because, "hey look, it's 90% finished!" Who knows, maybe that's the point.
It's disappointing how so many people blame AI for our problems. I see this pattern over and over; people never blame the socio-economic system and blame technology instead. Technological improvement is the only thing which allows us to survive the social, cultural and moral decline that we've been experiencing. People blame tech because it allows the system to be highly inefficient and still hold together. But if people blame tech, root issues will not be addressed.
I don’t think the article is blaming AI as a technology. It’s criticizing how the current socio-economic system uses AI.
The argument isn’t “tech is the problem,” but that autonomy narratives are used to shift risk, degrade labor, and justify valuations without real system-level productivity gains. That’s a critique of incentives and power structures, not of technological progress itself.
In that sense, “don’t blame tech, blame the system” is very close to the article’s point, not opposed to it.
It is pretty plain to see that technology enables socio-economic disharmony, to say the least. While it may not be the "cause", it is certainly a potent accelerant.
Yes but we can't stop progress while scarcity is still causing suffering. I think that communication works long term, even with algorithm suppression. Insightful ideas tend to bubble up, especially during chaotic times.
I think technology has always been a tool to impose will over others. Computing was just such a unique kind of technology where, for a decently long time, only a subset of people knew how to use it, and that subset didn’t have existing wealth and power (or not enough). It’s taken up to now for the ones with real power to catch up, or, in some mix, for the ones who didn’t to now have real power. And they will use technology for what it is ultimately for: to impose their will on others.
Well it's kinda both. One step towards socio-economic change would be if everyone just stopped giving billionaires upwards of $200/month, and didn't have their companies give it to them on their behalf.
How to fix the human/society instead? Technology has enabled a lot of evil: the society that had guns came and colonized the society without, and made them slaves (here's the opening to argue that Genghis Khan managed to enslave many societies without guns). The rise of the Internet and online shopping ruined "main street shops". "Uber for ___" enabled the exploitative gig-economy with retirement meaning dropping dead...
Yeah, we're back to feudal lords having the power to control society; they can even easily buy governments... Seems like the problem is with neoliberal capitalism: without any controls coming from society (i.e. democratically elected governments), it will maximize exploitation.
It’s so strange to see people accusing tech companies of using AI to concentrate power and wealth when, thus far, AI has almost entirely been consumer surplus. You have crazily high competition in the industry, which allows you, the consumer, to use SOTA models for free, or even run them yourself.
My prediction is that this will keep going all the way to the AGI stage. Someone will release (or leak) an AGI-capable model that’s able to design AI chips, as well as the fabs needed to build them, as well as robots to build and operate the fabs, robot factories, raw-material mines, and refineries.
> when thus far, AI has almost entirely been all consumer surplus.
Tell that to the 2025 job numbers. Who do you think benefits from a million-plus layoffs? The consumers? The new grads who can't even get their careers started?
OpenAI and Microsoft have defined AGI as a revenue number, so yeah, maybe by that definition.
I believe AGI will require the ability to self-tune its own neural-network coefficients, which the current tech cannot do because it can’t deduce its own errors. Oh sorry, “hallucinations”. Developing brains learn from both pain and verbal feedback (no, not food!), etc.
It’s an interesting problem: just telling an LLM it’s wrong is not enough to adjust billions of parameters.
TBF on the LLM side we currently have 4 big players, and a bunch of smaller ones. Plus a healthy bunch of open models, lagging ~1y behind SotA. The best thing for us consumers is that it stays this way. Any of them winning would be bad in general.