Hacker News | new | past | comments | ask | show | jobs | submit | login

I am so, so, so tired of hearing this argument. At a minimum, AI provides efficiency gains. Skilled engineers can now produce more code. This puts downward pressure on jobs. We’re not going to eliminate every software engineering job, but the options are to build more software or to hire fewer engineers. I am not convinced that software has a growing market (it’s already everywhere), so that implies downward pressure. The same is true for customer support, photography, video production (ads), paralegal work, pharma, and basically any job that involves filing paperwork.

Eliminating jobs has absolutely happened. How many jobs exist today for newspaper printing? Photograph development? Film development? Call switchboard operation? Technology absolutely eats jobs. More jobs have been created over time, but the current economic situation makes large-scale job adjustments work less well.



> Skilled engineers can now produce more code

No, they can't. AI cannot produce code.

> The same is true for customer support,

AI cannot provide customer support. It cannot answer questions.

> photography, video production (ads)

AI cannot take photographs or make videos. Or at least, not ones that don't look like utter trash.

> paralegal work, pharma, and basically any job that involves filing paperwork.

Right, so you'd be happy with a random number generator with a list of words picking what medication you're supposed to get, or preparing your court case?

AI is useless, and always will be. It is not "intelligence", it's crude pattern matching - a big Eliza bot.


I am so, so, so tired of hearing this argument. At a minimum, switching from assembly language to high-level programming languages provided efficiency gains. Skilled engineers were able to produce more code. This put upward pressure on jobs. The demand for new software is effectively infinite.


Unlike higher-level programming languages, AI doesn't actually make programmers more efficient (https://arxiv.org/abs/2507.09089). Many people who are great programmers and love programming aren't interested in having their role reduced to QA, just reviewing the bad code the AI designed and wrote all day long.

In a hypothetical world where AI is actually decent enough to be any good at writing software, infinite demand for software won't save even one programmer's job, because zero programmers will be needed to create any of it. Everyone who needs software will just ask AI to do it for them. Zero programming jobs needed ever again.


Pretending 16 samples is authoritative is absolutely hilarious and wild; copium this pure could kill someone. Also, working on a codebase you already know biases results in the first place -- they missed out on what has become a cornerstone of this stuff for AISWE people like me: repo tours. Tree-sitter feeds the codebase to the LLM, and I get to find all the stuff in the code I care about with either a single well-formatted meta prompt or by just asking questions when I need to.
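The "repo tour" idea above can be sketched in a few lines. The commenter uses tree-sitter; as a stand-in (to keep the sketch self-contained), this uses Python's stdlib `ast` module, which only handles Python files but shows the same shape: walk the repo, extract top-level signatures, and emit a prompt-ready outline. All names here are my own illustration, not anyone's actual tooling.

```python
# Sketch of a "repo tour": summarize a codebase's structure so an LLM
# can be pointed at the parts you care about. Uses the stdlib ast
# module as a stand-in for tree-sitter (Python files only).
import ast
from pathlib import Path

def outline_file(path: Path) -> list[str]:
    """Return one-line signatures for top-level defs and classes."""
    tree = ast.parse(path.read_text(encoding="utf-8"))
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args})  # line {node.lineno}")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}  # line {node.lineno}")
    return lines

def repo_tour(root: Path) -> str:
    """Concatenate per-file outlines into one prompt-ready codebase map."""
    parts = []
    for py in sorted(root.rglob("*.py")):
        outline = outline_file(py)
        if outline:
            parts.append(f"## {py.relative_to(root)}")
            parts.extend(outline)
    return "\n".join(parts)
```

The output of `repo_tour` is what would get stuffed into the model's context before asking questions about the code; a real tree-sitter version would do the same across many languages.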

I'll concede one thing to the study's authors: Claude Code is not that great. Everyone I know has moved on since July. I personally am hacking on my own fork of Qwen CLI (which is itself a Gemini fork), and it does most of what I want with the models of my choice, which I swap out depending on what I'm doing. Sometimes they're local on my 4090, and sometimes I use a frontier or larger open-weights model hosted somewhere else. If you're expecting a code assistant to drop into your lap and just immediately deliver all of its benefits, you'll be disappointed. This is not something anyone can offer without just prescribing a stack or workflow. You need to make it your own.

The study is about dropping just 16 people into tooling they're unfamiliar with, have no mechanical sympathy for, and aren't likely to shape and mold to their own needs.

If you want conclusive evidence, go make friends with people who hack their own tooling. Basically everyone I hang out with has extended BMAD, written their own agents.md for specific tasks, and made their own slash commands and "skills" (a convenient name and PR hijacking of a common practice, but whatever, thanks for MCP I guess). Literally what kind of dev are you if you're not hacking your own tools???

You've got four ingredients to keep in mind when thinking about this stuff: the model, the context, the prompt, and the tooling. If you're not intervening to set up the best combination of each for each workflow you run, then you're just letting someone else determine how that workflow goes.
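The four ingredients can be made concrete as an explicit per-workflow configuration. Everything below is hypothetical (the model names, file names, and tool names are invented for illustration); the point is only that model, context, prompt, and tooling get chosen together per task rather than left to someone else's defaults.

```python
# Illustrative only: the "four ingredients" as a per-workflow config.
# All names are hypothetical -- invented to show the shape of the idea.
from dataclasses import dataclass, field

@dataclass
class Workflow:
    model: str                     # which weights answer (local or hosted)
    context: list[str]             # what the model sees (repo tour, docs, diffs)
    prompt: str                    # how the task is framed
    tooling: list[str] = field(default_factory=list)  # what the model can do

refactor = Workflow(
    model="local-coder-32b",       # hypothetical local model on a 4090
    context=["repo_tour.md", "style_guide.md"],
    prompt="Refactor the module below; preserve the public API.",
    tooling=["run_tests", "grep", "apply_patch"],
)

review = Workflow(
    model="frontier-hosted",       # hypothetical hosted frontier model
    context=["pr_diff.patch"],
    prompt="Review this diff for correctness and missed edge cases.",
    tooling=["grep"],              # read-only: no patching during review
)
```

Note how the two workflows deliberately differ on every axis: a review task gets a stronger model but weaker tooling, which is exactly the kind of intervention the comment is arguing for.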

"Universal function approximators that can speak English got invented, and nobody wants to talk to them" is not the sci-fi future I was hoping for back in 2014, when, as a young NLP practitioner learning Python for the first time, I was longing for statistical language modeling to lead to code generation.

If you can't make it work fine, maybe it's not for you, but I would probably turn violent if you tried to take this stuff from me.


> the options are to build more software or to hire fewer engineers.

To be cheeky, there are at least three possibilities you are writing off here: we build _less_ software, we hire _more_ engineers, or things just kinda stay the same.

More on all of these later.

> I am not convinced that software has a growing market

Analysis of market dynamics in response to major technological shocks is reading tea leaves. These are chaotic systems with significant nonlinearities.

The rise of the ATM is a classic example. An obvious but naive predicted result would be fewer employed bank tellers. After all, they're automated _teller_ machines.

However, the opposite happened. ATMs drastically reduced the cost of running a bank branch (which previously required manually counting lots of cash). More branches, fewer tellers per branch... but the net result was _more_ tellers employed thirty years later. [1]

They are, of course, now doing very different things.

Let's now spitball some of those other scenarios above:

- Less "software" gets written. LLMs fundamentally change how people interact with computers. More people just create bespoke programs to do what they want instead of turning to traditional software vendors.

- More engineers get hired. The business of writing software by hand is mostly automated. Engineers shift focus to quality or other newly prioritized business goals, possibly enabled by LLM automation in place of, e.g., traditional end-to-end tests.

- Things stay mostly the same, employment- and software-wise. If software engineers are still ultimately needed to check the output of these things, the net effect could just be that they spend a bit less time typing raw code. They might work a bit less; attempts to turn everyone into an "LLM tech lead" managing multiple concurrent LLMs could go poorly. Engineers might mostly take the efficiency gains for themselves as recovered free-ish time (HN / Reddit, for example).

Or, let's be real, the technology could just mostly be a bust. The odds of that are not zero.

And finally, let's consider the scenario you dismiss ("more software"). It's entirely possible that making something cheaper drastically increases demand for it. The bar for "quality software" could rise dramatically due to competition between increasingly LLM-enhanced firms.

I won't represent any of these scenarios as _likely_, but they all seem plausible to me. There are too many moving parts in the software economy to make any serious prediction on how this will all pan out.

1. https://www.economist.com/democracy-in-america/2011/06/15/ar... (while researching this, I noticed a recent twist to this classic story. Teller employment actually _has_ been declining in the 2020s, as has the total number of ATMs. I can't find any research into this, but a likely culprit is yet another technological shock: the rise of mobile banking and payment apps)



