AI isn't going to kill the software industry (dustinewers.com)
108 points by mooreds on Jan 24, 2025 | 147 comments


I've been trying to put this effect into words for a while; now I don't have to, because this states it really clearly:

"AI tools create a significant productivity boost for developers. Different folks report different gains, but most people who try AI code generation recognize its ability to increase velocity. Many people think that means we’re going to need fewer developers, and our industry is going to slowly circle the drain.

This view is based on a misunderstanding of why people pay for software. A business creates software because they think that it will give them some sort of economic advantage. The investment needs to pay for itself with interest. There are many software projects that would help a business, but businesses aren’t going to do them because the return on investment doesn’t make sense.

When software development becomes more efficient, the ROI of any given software project increases, which unlocks more projects. [...] Cheaper software means people are going to want more of it. More software means more jobs for increasingly efficient software developers."
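
(To put made-up numbers on it: a project that costs $100k to build and returns $120k barely clears a 1.2x bar; halve the build cost and the same $120k return becomes 2.4x, so a long tail of previously borderline projects suddenly pencils out.)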


I think this effect will be even greater this time (last time being higher-level “slow” languages like Python and JS), because AI will allow for a new wave of developers who won’t care that much about “right” code and will perceive it as a disposable resource rather than a form of art. This aligns well with many smaller businesses that are by nature temporary or very dynamic and actually have to fight developers’ tendencies to create beautiful solutions after the ship has sailed.


IME, software quality (including my own) is already on the edge of the unmaintainable-slop threshold. I don't think it can slip much further without taking its hosts down too. Even Apple's software quality seems to be hit or miss lately.


That's really okay for most software that faces non-clueless users. In-house software has worked like that forever: half-assed, barely working outside the happy path, dangerous if you tick that box, but doing its job anyway. There's still a large margin to drop from in making software that isn't "boxed/SaaS" but tailored to very specific needs. Many devs may have felt that moment when some personal tool or script worked just fine, but any attempt to turn it into "an app for a user" resulted in a complexity mess. Maybe the answer is not better software, but less software and more code doing the job.

I don't think it's a bad thing, if it will help real people rather than the usual bigcorp/unicorn critters.


I feel like a lot of those problems are already addressed by no-code products. CRMs, project management tools, website builders… those with relatively straightforward needs who aren’t that fussy about how it gets done are already served. I don’t doubt AI will help some people here, but I’m not convinced it’ll be by an industry-changing amount.


As it becomes easier and more profitable to accrue tech debt, more tech debt will be accrued.

See also: Jevons paradox

> [Tech companies], both historical and modern, typically expect that higher [revenues] will lower [tech debt], rather than expecting the Jevons paradox

https://en.wikipedia.org/wiki/Jevons_paradox


It depends on the definition of “right”

As software becomes more essential to a business its reliability becomes more important. If your customers can tolerate defects or downtime it’s a signal that:

A) you’re not providing any real value

B) You provide so much additional value compared to your competition that you still come out ahead in the wash

C) Your customers are hostages via vendor lock-in

A and C are the most common cases of persistent bad software.


I just witnessed an example of more or less this at a startup I’ve been contracting at. The engineering team on the core product is tiny, with no slack for extra projects. One non-technical founder and a few other non-technical people built a prototype piece of software, using AI and low-code tools, to facilitate another revenue stream. They started using it with a few customers and raised more money around it. The money they raised is going directly into expanding the engineering team to work on both products.


This is essentially Jevons paradox [1] applied to software development. As it becomes easier and more efficient to create software, more of it will be consumed/demanded/created.

[1] https://en.wikipedia.org/wiki/Jevons_paradox

edit: whoops, that's the point of the original article


I'd go further than TFA and say the same about tech debt:

As it becomes easier and more profitable to accrue tech debt, more tech debt will be accrued.


> More software means more jobs for increasingly efficient software developers

This assumes software developers will be the ones needed to meet the demand. I think it will be more like technical product managers.


You're splitting hairs over a title, but effectively talking about the same individuals.

Whatever juice there is to squeeze out of generative AI coding, the people who will squeeze the most from it in the near future (10-20 years) are current and incoming software developers. Some may adopt different titles as the role matures, especially if there's a pay differential. This already happened with data engineering, database administration, etc.

So it's possible that the absolute number of people carrying a title like "software developer" may be lower in 10 years than it is today, although I personally find that very unlikely. But the people leading and maturing whatever alternate path comes up, for the next generation or so, will more often than not have a professional or academic background in software development regardless.

The whole point of the article is that generative AI represents a lever that amplifies the capabilities of people who best know how to apply it. And for software development, that's ultimately going to be software developers.


> You're splitting hairs over a title, but effectively talking about the same individuals.

I disagree. SWE skills are not the same as TPM skills. Those who like doing SWE work may not like TPM work, nor be good at it. And my point is that TPM skills are likely what will be needed more. Therefore they may not be the same individuals, though many will need to adapt. Or go into woodworking or something.


I think it will be the software developers that lean more into the kind of skills you see in technical product managers, or maybe vice-versa.


> I think it will be more like technical product managers

Even with traditional software development, you'll be a much more effective software developer with good product management skills in a lot of niches (basically anything that's building something user-facing).


But who will they blame when the solution inevitably fails to live up to the inherently contradictory demands of the different stakeholders?


> A business creates software because they think that it will give them some sort of economic advantage. The investment needs to pay for itself with interest. There are many software projects that would help a business, but businesses aren’t going to do them because the return on investment doesn’t make sense

All this is fine but then they take this to the extreme by laying off developers when there are no more ROI projects. Then bit rot sets in, systems degrade, customers abandon ship, and suddenly there are no engineers to help the business. And no, hiring contractors last minute doesn't cut it because they don't know what pile of gunk has been running and what caused the degradation.

Software requires maintenance, much like a bespoke car. For every bespoke car you produce, you need to continuously service it with bespoke skills, or else it will stop working and cause your revenue to drop.


Doesn't this assume there will always be significant business needs that can be met by software, but are not yet? Or at least not as efficiently as they could be?

I imagine there is an upper bound on how much of the world can be eaten by software, and the trend seems to be getting closer to that point. Unless there are some massive breakthroughs in robotics or cybernetics which open more of the physical world to software.

There's also a point where incremental software improvement requires bulldozing, paving, or burning so much nature that we'll be worse off in the end. Watching billionaires squabble about who gets the new AI data centers makes me wonder if we haven't crossed over that point a few funding rounds ago.


I don't think we’re anywhere near the upper bound yet, and IMO the prevalence of SaaS software that could be done better in-house, if resources permitted, demonstrates that. The future will be a lot more bespoke.

Like Simon’s quote above says, software is a competitive advantage so when one company develops software that makes them more efficient or grows revenue, competitors have to follow suit or get left behind. It’s an economic arms race. That’s why the dreaded outsourcing wave of the 2000s never materialized: companies ended up hiring a bunch more software engineers in the US and outsourced a bunch of other engineering to India and other countries.

The interesting question is how this will interact with current interest rates and the end of ZIRP.


I think it really depends how much improvement we get in AI from here.

I work at a small business and understand the entire system as a whole.

Most of the complexity we have is in the UI, so non-technical people can interact and work with company data/databases. Our SaaS is really the same thing, but stuff we didn't want to build in house. There is no rocket science being done here.

At some level of AI accuracy, though, it would seem there would be a phase transition where most of this UI goes away, along with the need for many of the employees.

Right now there is no risk of this happening, but at some level of accuracy we go from 200 people to like 20, with the only future headcount coming in sales. Most likely, though, I would expect a competitor to eat our lunch in the phase transition and we go from 200 to zero.

Pushing 50 here, I am trying to think right now of my AI hedge/career change to something more creative/artistic, where being a human itself has value. I managed to avoid working in a factory like my father by being a shitty computer, but there might be no market for shitty human computers in 10 years.


Software is being commoditized by SaaS. Generally that means it's table stakes, not necessarily a competitive advantage.


90% of the small SaaS products I’ve seen used within businesses introduced tons of pain points and inefficiencies, ranging from constant bugs to manual data entry moving data from one SaaS to the next. Once you throw in all the per-seat licensing, their value proposition isn’t always that great.

I think a product like GitHub Next's Spark [1] that can handle deployment, SSO, databases, uploads, etc. with an LLM fine-tuned on the managed runtime API will be a very productive tool. It already works surprisingly well in my experience, although it’s an MVP and they haven't fleshed out the database stuff yet. Kind of like a higher-code Retool that can still be used by non-programmers.

[1] https://githubnext.com/projects/github-spark/


It is all the rage nowadays in enterprise consulting: SaaS products being plugged together like LEGO bricks, with a couple of serverless deployments for business logic, aka MACH architecture.

The original dream of Web Services and SOA.


I actually strongly believe the universe has an effectively infinite carrying capacity for software. This is because all systems can be improved upon recursively.


> This is because all systems can be improved upon recursively

Until it becomes cosmic code.

( https://minimaxir.com/2025/01/write-better-code/ / https://news.ycombinator.com/item?id=42584400 )


It doesn't take infinite iterations to solve a problem.


> and the trend seems to be getting closer to that point.

What evidence suggests this to you?


Things that seem borderline worse when being done in software. Like touchscreens in car controls, beta self driving, shitty/broken websites that should be paper flyers on bulletin boards, a tablet at the barbers where a sign in sheet or paper numbers would do better, those "smart" doors some grocery stores tried, etc.


> AI tools create a significant productivity boost for developers.

I think the first step betrays the weakness in this reasoning.

We already know that more powerful software tools do not make all developers more efficient. So the reverse can be true.

For software jobs to remain, not only must the tools keep making human developers more efficient, but the human developers need to continue making the automated developers more efficient.

Both directions must be accounted for.

Question: what is something our best human developers can do that will always enhance the artificial developers' results, regardless of how far automated developers scale in quantity, quality, speed, economy, and complexity of work?

It is the same question for any new automation vs. previous process.

You need to identify that, for anything past your first reasoning step to have a chance.

But if you can definitively answer it, I don’t think you need any more steps.


AI has been drastically improving for the past couple of decades, and if that continues, then AI is obviously going to kill the software industry and many other industries.

Sure, maybe apparent exponential growth tails off into an S curve at some point, as often happens. But this blog post seems to assume that is guaranteed to happen, which is a big leap of faith, and it also makes this not a very interesting blog post, because it's basically just: "Well, if I assume the best arguments against my position are wrong, then I don't even need to address them, and I can instead focus on lesser arguments that are easier for me to discuss".


"This view is based on a misunderstanding of why people pay for software."

Do many people pay for software? Other than some small, one-time donations to shareware back in the day, and to more recent projects that operate outside of "app stores", I pay zero dollars for software. It is a negligible expense.

No doubt businesses still pay for software. Having friends in software sales, I know it does not necessarily sell itself. It takes (non-developer) work. Whether businesses truly need to purchase these licenses is an open question. Most companies I know want to reduce their software bill; some believe they are being overcharged.


Essentially, software is going to become like email: dirt cheap to produce. The folks who will get employed and continue to make money like crazy will be the equivalents of the email automation companies. Perhaps a bad analogy, but the jobs are not gonna go anywhere.

Engineers will still continue to work like crazy and produce 100x the output. The pay could still remain the same because the profit margins on this newly developed software are gonna be so much better.

That being said, I think there will be a cycle of adjustment, maybe 2-5 years, for this reality to set in. So in that interim there may be job losses.


The counterargument is that if businesses could always find more projects that bring in revenue, they would not be laying off people or shutting down.

The reality is businesses can't really just scale their revenue by adding more profitable or positive-ROI projects. There are not enough of them to go around. Eventually you hit a plateau in terms of growth, and the R&D/engineering productivity can't translate into revenue anymore.


Like the grandparent, I think at a macro scale there will be more projects and more software engineering jobs over time as productivity increases.

But that’s across the whole economy. Individual businesses will very much behave idiosyncratically. Businesses tend to have an upside-down U in terms of their growth; unlike the economy as a whole, they don’t go up and to the right forever. And the economy as a whole, if you zoom out, does go up and to the right, but it is bumpy.

I guess, in summary, I don’t think it makes sense to extrapolate long-term trends by focusing on just the last 2 years, when the market for software engineers has cooled. After all, these cooler years followed what felt like 3 or 4 years of a scalding-hot market for software engineers.


I do believe on a macro level there will be more software engineering jobs in the short term (which is not contradictory to software engineers being laid off from some companies as part of AI workforce reshuffling among companies).

However, there are at least 3 tipping points in the medium term that can alter these dynamics:

- AI becoming more capable and cost-effective than humans in most aspects of software engineering. This would dramatically shift the ROI calculations and tip them towards AI over humans.

- We reach a stage where software resources and products are abundant, they solve most of our problems, and we don't need more software anymore. Sort of like WALL-E. Obviously we don't need so many software engineers then.

- AI systems take over from software systems as the backbone of our technology layer, with AI being cheaper and more capable than software. Maybe we only need 100 different AI systems instead of 100k software systems. At that stage, we would be replacing software with AI, so we would need fewer software engineers.


Not all businesses can scale their revenue, but the businesses that can, eat the ones that can't.


Which then kills off even more high paying jobs.

Machines shifted things away from manual labor, but AI shifts them away from intellectual labor.


I don't follow.

If AI can make high paying workers more efficient, then the companies that leverage that efficiency to grow will outpace the companies that can't. There's no net loss of high paying workers.


I think there are greatly diminishing returns. 10x of me would probably not give even 2x the output. So instead it might make sense to employ 0.2x of me to get 2x the output.


If tractors can make farm labor more efficient why would that reduce the number of farmers?

The argument boils down to diminishing returns.

Honda can’t scale to producing 10,000x as many cars. What they can do is sell the cars they are already producing for less, so if some aspect of their business becomes more efficient they don’t just do more of it endlessly, they quickly cross a threshold and then spend less on it.


No, they don't do more of one thing endlessly; they expand into more products and markets.

If you accept that AI as we know it is a tool (and not a wholesale replacement for human beings) then it will follow the same trend of any other productivity tool of expanding markets and increasing demands for labor.


> they expand into more products and markets.

That changes nothing about diminishing returns here.

Expanding into a market doesn’t dramatically increase the demand for products in that market. If Tesla started to make Hybrids they could sell more cars and employ more people, but other companies would experience fewer sales and thus cut back on staff.

Competition is great for overall productivity, but the gains aren’t magically spread evenly across the economy.

> expanding markets and increasing demands for labor.

Many industries like banking, mining, and farming have drastically cut back on labor. When the majority of those lost jobs were low-skilled, the productivity benefits allowed more high-skilled labor to be employed. But cut high-skill labor specifically, and the new jobs are less likely to also be high-skill.


Very well put. I don't think the efficiency improvement is the piece to debate; it's pretty clear (like having autocomplete for typing out text messages). I have yet to feel threatened (job-security-wise) by LLMs.


It depends how good the AI becomes. There aren't a lot of horses employed for commercial transportation any more.


Cars didn’t eliminate the need for someone to operate transportation; going from horses to cars to ships just changed the vessel, and if anything opened up more jobs as efficiency increased. We may have to adjust our roles as software engineers, but wholesale eliminating the role as a job family feels a bit difficult to picture. When software written by an LLM (which has been trained on human-written code) ultimately fails or has some bug/edge case, I don’t think TPMs will have much fun debugging outages via their LLM.


Horses were never employed for operating transportation and so the analogy was never about the change to operating transportation. It's about the mode of transportation itself.

If AI gets as good as the people building it hope it will, then it doesn't matter what new jobs crop up or what opportunities there are left to uncover, the machine will just handle it.


Wrong analogy. The horse wasn't employed, the driver was. The horse was just a tool used by the driver.


Faulty logic. The horse was the best means to an end; currently, human programmers are the best means to a different end.


Your argument will only apply when humans are no longer needed at any point in the pipeline.


I fail to see why you think so. The better AI becomes, the less need for humans in the pipeline there may be, even if that number is not zero.


Someone still needs to communicate to the computer exactly what they want and the computer has to understand that. So far, that has been tried unsuccessfully many times, and I don't see that changing with AI


If we had some sort of enhancement that made horses 500% faster and stronger, we probably would still employ them.


Don't think so. They'd still need medical care and food. And a 500%-strength horse is still not as fast as a semi truck.


Automobiles need maintenance and fuel. Both of which created additional jobs.


Those jobs will only exist as long as they're necessary.


Maybe that's why we measure the transportation vehicles they were replaced by in horsepower.


This is the way.


It's beneficial for executives to say AI will kill the software industry. It's a way to stoke fear in workers, and a convenient way to say "with AI you could be X times more productive," which sets the expectation that the worker should be that many times more productive than they already are, with or without AI "help". This is an attempt to increase hours worked at the same wages, which is itself an attempt to lower wages.


While this behavior may exist in pockets, my experience suggests that this is not broadly the general mental model/approach of executives. I think this is a narrative in some people's heads, but it is neither an accurate reflection of the world nor a healthy way to approach employment.


You're probably right, though I speak from anecdotal experience, as the CTO of my company recently said that developers should be 5x more productive with AI. So what if I'm not 5x more productive with AI? Doesn't matter, because there are more tickets in the sprint that, in his head, I should theoretically be spending the same time working on as "before AI."


Sounds like your CTO is dangerously incompetent and isn't actively using the AI tooling they are endorsing.


> While this behavior may exist in pockets, my experience suggests that this is not broadly the general mental model/approach of executives. I think this is a narrative in some people's heads, but it is neither an accurate reflection of the world nor a healthy way to approach employment.

But it may be consistent with workers' experience of the effects of executives' actual decisions, and a better fit than the executives' stated mental model.


While I found myself in furious agreement with the section titled "Jevons Paradox", I'm less convinced by this argument from the "Comparative Advantage" section:

"While AI is powerful, it’s also computationally expensive. Unless someone decides to rewrite the laws of physics, there will always be a limit on how much artificial intelligence humanity can bring to bear."

The cost of running a prompt through the best-available model has collapsed over the past couple of years. GPT-4o is about 100x less expensive than GPT-3 was, and massively more capable.

DeepSeek v3 and R1 are priced at a fraction of OpenAI's current prices for the GPT-4 and o1 models, and appear to be highly competitive with them.

I don't think we've hit the end of that trend yet - my intuition is that there's a lot more performance gains still to be had.

I don't think LLM systems will be competitive with everything that humans can do for a long time, if ever. But for the things they CAN do the cost is rapidly dropping to almost nothing.


A human brain runs on about 20 W, and I see no reason to believe that that is the absolute limit of efficiency. It's probably quite efficient as far as mammalian meat goes, but evolution can only optimize somewhat locally.


Indeed, the laws of physics don't put any limits on artificial intelligence that don't also apply to natural intelligence. It's a strange place to look for a comparative advantage argument.


> I don't think LLM systems will be competitive with everything that humans can do for a long time

It's looking like about 3 years. That will be another ~100x cost reduction, hundreds of billions of dollars more in infra, new training algorithms, new capabilities.


I agree that the industry won't be killed, but I do have some worries about what the future will look like.

- If we keep making AI-assistance tools that make mid- and senior-level ICs more and more efficient, where does that leave entry-level junior positions? It's already tough enough for juniors to get a foot in the door, but will it get even harder as we continue to make the established older devs more and more efficient?

- The current crop of AI-assistance tools are being tailored to meet the needs of mid- and senior-level ICs that learned programming in a pre-AI world. But incoming junior devs are "AI native" and may approach software development in a very different way.

- I would wager that there will be substantial workplace/generational divides between devs that learned programming before using AI assistance later vs "AI native" devs that had AI assistance the whole time. I have no idea what these new ways of working will be, but I'm curious to see how it plays out.


I remember when Hinton said that we should "stop training radiologists now" in 2016 [1]. Meanwhile, radiologists are in high demand and are getting paid better than ever. I believe the same will be true for programmers in the future. Sure, some of the boilerplate will be handled for you, just like segmentation is for radiologists. That's great for everyone.

1 = https://www.youtube.com/watch?v=2HMPRXstSvQ&t=30s


The optimist in me is waiting for all clinics to have MRI machines that cost about as much as a house, can be placed between steel racks, and plug into a wall socket. Then every doc will need to become a radiologist.

https://www.news-medical.net/news/20240510/Machine-learning-...


A radiologist is training for a 35-40 year career though. I definitely would not have wanted to have started that training in 2016.


The notion of no longer training radiologists because computer vision algorithms and deep learning are good at detecting cancers in imagery strikes me as troublingly naïve. Who fucking trained the AIs? Will the AIs magically be able to detect yet-to-be-discovered maladies?


Fwiw this is not new. I worked on a machine vision project to evaluate x-rays for cancer...in 1985.


Did the radiologists suddenly get better than the AI at reading images? Or is the system simply unchangeable?


A few responses:

- Since when have the radiologists ever been worse than AI at reading images?

- AI is doing mechanical tasks for the radiologists, so it stands to reason that this makes radiologists more efficient.

- The radiologist is a liability sponge, so if at all possible it of course makes sense to augment the radiologist with AI rather than to try to do away with them. (This roughly gets at your point about the system being unchangeable.)


> Since when have the radiologists ever been worse than AI at reading images?

Since about 2 years ago?


For segmentation? Sure. For their role as a clinical consultant? Not yet.


I am quite tired of seeing titles like this. No, you _don't know_. The vast majority of definitive statements like this are going to be meaningless. The whole point is that it's an uncertainty; the impact of AI on our society is unpredictable. You could be right, but you could be wrong too. And merely assigning a probability to this is going to be very non-trivial.

I just can't understand where people find the kind of confidence to say AI is (or is not) going to <insert your scenario here>.


AI is going to make building software way cheaper and more profitable, but that's actually bad news for a lot of developers out there. Think about how many people are only employed because they know the basics of React or Django or whatever. They can copy-paste code and tweak existing patterns, but that's exactly what AI is getting really good at.

The developers who are actually going to thrive are the ones who can architect complex systems and solve gnarly technical problems. That stuff is getting more valuable, not less.

But a lot of folks have built careers on pretty basic skills. They've gotten by because there just aren't many humans who can do even simple technical work. That advantage is disappearing fast.


The barrier to entry will be raised, though isn't that always happening in software, whether it's AI or not? Companies used to hire HTML email developers, for example. There are many HTML email builders out there that do the job for a marketing person.

Better tooling, whether it's AI tooling or a framework, continuously changes the job requirements. Even your average React developer still has to deal with plenty of things people in the past didn't have to think about, e.g. dependency management, responsive screen sizes for all screen widths, native apps, state management, etc.


"Better tooling, if it's AI tooling or a framework, continuously changes the job requirements."

This might be true if you work at a tech company, but it's not universally true. There are many people who are gainfully employed as software developers, based solely on technical knowledge they acquired years ago.


Ironically though, if you do need to build raw HTML emails in 2025, it's still a huge PITA due to the horrible support for HTML / CSS across many email clients, and a dearth of good resources and information out there.


What we are currently calling AI is just a fancy programming language/REPL/compiler anyway, so obviously software developers aren't going away any time soon. You fundamentally must be a software developer to use these tools.

Elevator operators never went away either. In fact, there have never been more elevator operators in human history! Not a good career choice, though. That is what these warnings, realistic or not, are actually calling attention to.


> there have never been more elevator operators in human history

Press X to doubt


When were there more? We keep building more and more buildings with elevators, in places where there are more and more people. With dedicated elevator attendants being almost unheard of nowadays, leaving elevator users to be the operator in nearly every case, anything else is mathematically unlikely.

Software developers aren't going anywhere, but, like the elevator operator, everyone might become a software developer. At least that is the theory the grifters are grifting on. They aren't literally saying software developers are going away. That couldn't work given that you become, if you weren't already, a software developer when you use these tools.


That’s very easily googled: https://www.researchgate.net/figure/Number-of-Elevator-Opera...

There were more elevator operators before elevators became easily operated by the passengers, as expected.


That indicates that there are effectively no elevator operators today, which is clearly false. Elevator manufacturers put a lot of effort into incorporating an array of buttons for the operator to push for good reason – and push them the operator does. I witness it every time I enter an elevator.

Are you confusing operating an elevator with operating an elevator professionally? We were never talking about the latter, and we even called attention to how it is not a good career choice today to really drive home that idea.


So your point is that since anyone can use an elevator, we’re all “elevator operators”?

So basically that everyone will be a software writer, therefore nobody will get paid to write software anymore?


Not GP, but I'd say yes: if and when computers become as easy to operate to their desired potential, with as simple an interface as elevator buttons, then of course. The operations we desire from computing systems have a somewhat broader scope than elevators.

Just as we still need experts in electrical hardware systems to fix/improve the implementations of those simple elevator interfaces, we will still need people who understand the "hard part" underneath all of this, even if Average Joe can make apps for himself (as long as they only involve 'solved problems' the models can apply). The AI grifters calling for children to stop studying computer science seems transparently reckless and self-serving, though I'd love to hear if any informed users on HN have any insightful arguments in support of Jensen Huang et al.


Does having a point not imply some kind of effort to communicate with a human, not sending strings of text to a faceless, inanimate computer program like I am doing?

Furthermore, even if I had some reason to make a point to a computer program, if such a thing is possible, it still could not be my point. It explicitly states that it is taken from the perspective of the grifter. If there is a point found in there somewhere, it would be their point.


...what?


There were more professional elevator operators. But now, there are far, far more people operating elevators. My kids have been doing it since they were 2.


And the point is…?


If everyone can simply tell a computer what to do, and it does that, we don't need professional programmers. Just like we no longer need human computers.


Whoosh


Good one. I get the point now.

Though I don't necessarily agree with it. Maybe if the AI is AGI, but then everything would be moot.


This is a great analogy. Maybe someday, computers will work like they are supposed to. You tell it what you want, and it does the work of understanding you instead of vice-versa. And then it just actually does what you want. That would be amazing. Our world would change so much, so fast that we can't really predict whether we'll actually be better off or not.


The job of "software engineer" as we know it will end.

Before the industrial revolution, shoemakers would make shoes. It was a specialized skill, meaning shoes were very expensive, so most people couldn't afford them.

Then factories were invented. Now shoes could be made cheaply and quickly, by machines, so more people could afford them. This meant that far more people could be employed in the shoe industry.

But those people were no longer shoemakers. Shoemakers were wiped out overnight.

Think of how huge the shoe industry is now. There are jobs ranging from factory worker to marketing manager. But there are zero shoemakers.

AI writing software doesn't mean it's the end of the industry. Humanity will benefit greatly, just like we did from getting cheaper shoes.

But the software engineers are screwed.


> But those people were no longer shoemakers. Shoemakers were wiped out overnight.

Have you seen the cost and popularity of "Made in X" handmade boots, though? Red Wing, Origin, Redback. It's absolutely crazy.

The difference is, all of a sudden we could make a lot of CHEAP shoes, and yes, I'm sure it wiped out a lot of shoemaker jobs, but there are still a lot of good shoemakers around, and there is still high demand for handmade shoes and boots.


Shoemaking is an interesting analogy, and it brings to mind a few other facts that might be relevant. I have zero experience in the shoemaking industry, so these are my impressions; they could be wrong:

1. There are still many thousands of people in the US alone employed today as traditional shoemakers at boutique firms. It's a very niche career, but it does exist.

2. As the cost of shoes plummeted and our ability to create more complex designs exploded, we also got a huge proliferation of innovation and creativity in shoe design.

3. Yes, today's shoe industry has lots of factory workers and marketing managers...but it also has many tens of thousands of more specialized roles like shoe designers, materials specialists, process automation engineers, etc.

I can see a future where software is almost entirely created by AI, but we have many specialized roles of people who know how to apply AI tools to software creation, or who sit in the interface between the business and the AI in some way that it's hard to foresee now.

On the flip side, if we truly get ASI, then it is hard to see what exactly those specialized roles represent that can't be replaced.

What does Sam Altman do that an ASI won't be able to?


The shoe comparison is ridiculous. For let's say 99% of people, shoe requirements are the same (in function), with almost all variations being purely esthetic. There are, let's say, 10 kinds of shoes, or perhaps 100. Make it a thousand, for argument's sake.

Meanwhile, every single business has different workflows and therefore different needs. The most common ones (browsers, etc.) are answered by traditional software. If you can write down in detail the business needs as they pertain to workflows - business rules, let's call them - you've effectively made the software already. The only difference between telling ChatGPT to do something in English and telling the computer to do it in code is that one is non-deterministic.

Software is, primarily, a means to process information, which is to say reality (in a business setting). An AI that can replace software developers can, in effect, replace every job that happens on a computer, in every company on Earth. Apart from Jevons paradox (which is much more applicable to software than shoes), this shift would be so gargantuan that it's barely worth thinking about, in the same way that it's not worth thinking about a supervolcanic eruption: the consequences would be earth-shattering, and finding employment would be the least of your worries.


To add to this: the author is missing a major aspect of the Jevons paradox.

They keep referencing "more efficient software developers," but the Jevons paradox isn't only about efficiency. The efficiency creates lower cost, which in turn increases demand.

The main cost of software is software engineers. It's a specialized skill, so it's a high-salary job.

With AI doing most of the work, salaries will begin to fall. It will no longer make sense to study computer science, or spend years learning to code, for such a low salary. There will no longer be people doing what we call software engineering today.

So the author is right, Jevons paradox will take effect. But like I said above, it will replace the current industry with a very different-looking industry.


I really don't see AI generating safe code for automotive embedded systems that is maintainable and MISRA and HIS compliant. And there will need to be software engineers who are trained to debug these systems.


Lol no


Tons of shoemakers exist! And unlike cheap AI swill, the best shoes are handmade by them.


Working with nontechnical people as they write prompts is interesting.

I’m seeing a lot of frustration from people dealing with markdown. Even though it’s free-form and not really like code at all, the hashes, dashes, etc. throw them off.

Also seeing a lot of people having a hard time expressing their desired behavior in a concrete way. It reminds me of 3rd grade, when we had to write recipes and the teacher had a classmate maliciously comply with only what was written.

Overall, I think tools will improve and barriers will continue to disappear, but for the time being there is still big demand for people who can convert abstract intention into a concrete, machine-usable format. It's just that how those ideas are expressed gets more flexible with LLMs.


"Cheaper software means people are going to want more of it. More software means more jobs for increasingly efficient software developers. Economists call this Jevons Paradox."

If we accept there will be increased demand for software, it's a big jump from that to concluding the efficiency of AI will be outpaced by the demand for software, specifically along the dimension of required developers.

Software isn't wheat or fuel, it can be reused and resold.


“Crack the books”—can anybody recommend good books for this shared future of ours? I’m tired of trying to piece it together from blog posts, READMEs, and short video tutorials.


I haven't read these all the way through myself but I've seen enough of them that I'm confident suggesting them:

- Prompt Engineering for LLMs by John Berryman and Albert Ziegler: https://www.amazon.com/Prompt-Engineering-LLMs-Model-Based-A...

- AI Engineering by Chip Huyen, which I recommend based on the strength of this extract about "agents": https://huyenchip.com/2025/01/07/agents.html


I like hands-on learning more than I like textbooks, so in case that matches your requirements, maybe try training your own GPT to have a sense for how it works. I wrote a Rust version of the famous https://github.com/karpathy/nanoGPT (which is in Python) so that I could learn how it's built.

I wrote it in Rust because I wanted to improve my skills in that language, be forced to write code instead of just reading the existing implementation so that I would truly learn, and test the quality of the nascent Rust AI/ML ecosystem, but you could pick your own language.
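
If you want a taste of the core before committing to a full port, here's a rough sketch (mine, not nanoGPT's exact code, and the sizes are made up) of the causal self-attention head that everything else in a GPT wraps around, in plain PyTorch:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalSelfAttentionHead(nn.Module):
        """A single attention head with a causal mask, roughly the shape used in nanoGPT."""
        def __init__(self, n_embd, head_size, block_size):
            super().__init__()
            self.key = nn.Linear(n_embd, head_size, bias=False)
            self.query = nn.Linear(n_embd, head_size, bias=False)
            self.value = nn.Linear(n_embd, head_size, bias=False)
            # lower-triangular mask: position t may only attend to positions <= t
            self.register_buffer("tril", torch.tril(torch.ones(block_size, block_size)))

        def forward(self, x):
            B, T, C = x.shape  # batch, sequence length, embedding dim
            k, q, v = self.key(x), self.query(x), self.value(x)
            att = q @ k.transpose(-2, -1) * k.shape[-1] ** -0.5  # (B, T, T) scaled scores
            att = att.masked_fill(self.tril[:T, :T] == 0, float("-inf"))
            att = F.softmax(att, dim=-1)
            return att @ v  # (B, T, head_size)

    # illustrative sizes, not a real training config
    x = torch.randn(4, 8, 32)  # batch of 4, 8 tokens, 32-dim embeddings
    head = CausalSelfAttentionHead(n_embd=32, head_size=16, block_size=8)
    print(head(x).shape)  # torch.Size([4, 8, 16])

Once this piece clicks, the rest of a nanoGPT-style model is mostly stacking these heads with MLPs, residual connections, and a training loop.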


Would you say more about your experience writing it in Rust? What worked well, what didn't, anywhere you found that you struggled unexpectedly or that was easier than you expected?


Hey, thanks for asking. I'm the furthest from an authority in this so I encourage you to take everything I say with a grain of salt.

I was using the burn[0] crate which is pretty new but in active development and chock-full of features already. It comes with a lot of what you need out of the box including a TUI visualizer for the training and validation steps.

The fact that it's so full of features is a blessing and a curse. The code is very modular so you can use the pieces you want the way you want to use them, which is good, but the "flavor" of Rust in which is written felt like a burden compared to the way I'm used to writing Rust (which, for context, is 99% using the glorious iced[1] GUI library). I can't fault burn entirely for this, after all they are free to make their own design choices and I was a beginner trying to do this in less than a week. I also think they are trying to solve for getting a practitioner to just get up and going right away, whereas I was trying to build a modular configuration on top of the crate instead of a one-and-done type script.

But there were countless generic types, several traits to define and implement in order to make some generic parameter fit those bounds, and the crate has more proc_macro derives than I'd like (my target number is 0) such as `#[derive(Module, Config, new)]` because they obfuscate the code that I actually have to write and don't teach me anything.

TL;DR the crate felt super powerful but also very foreign. It didn't quite click to the point where I thought it was intuitive or I felt very fluent with it. But then again, I spent like 5 days with it.

One other minor annoying thing was that I couldn't download exactly what I wanted out of HuggingFace directly. I ended up having to use `HuggingfaceDatasetLoader::new("carlosejimenez/wikitext__wikitext-2-raw-v1")` instead of `HuggingfaceDatasetLoader::new("Salesforce/wikitext")` because the latter would get an auth error, but this may also be my ignorance about how HF is supposed to work...

Eventually, I got the whole thing to work quite neatly and was able to tweak hyperparameters and get my model to increasingly better perplexity. With more tweaks, a better tokenizer, possibly better data source, and an NVIDIA GPU rather than Apple Silicon, I could have squeezed even more out of it. My original goal was to try to slap an iced GUI on the project so that I could tweak the hyperparameters there, compare models, plot the training and inference, etc. with a GUI instead of code. Sort of a no-code approach to training models. I think it's an area worth exploring more, but I have a main quest I need to finish first so I just wrote down my findings in an unpublished "paper" and tabled it for now.

________

[0]: https://github.com/tracel-ai/burn

[1]: https://github.com/iced-rs/iced


Some that I am looking into; these are "practical" books which do not focus on the theory/algorithms but, given that they are available (library/models/whatever), on how to build your apps using them:

1) Designing Machine Learning Systems: An Iterative Process for Production-Ready Applications by Chip Huyen.

2) AI Engineering: Building Applications with Foundation Models by Chip Huyen (this is a very recent book).

3) Generative Deep Learning: Teaching Machines To Paint, Write, Compose, and Play by David Foster.

4) Building LLMs for Production: Enhancing LLM Abilities and Reliability with Prompting, Fine-Tuning, and RAG by Bouchard & Peters.


It honestly feels like books are a poor fit for this topic, because things are moving so much faster than the publishing industry. A book published in 2024 would have been written in 2023, and would give you a pretty skewed picture of the SOTA.


> Cheaper software means people are going to want more of it.

This is the key insight.

Most software today is built for the common user (individual, business, etc.).

With the cost of writing curly braces and semi-colons dropping drastically, we’ll actually see an increase in the number of programmers worldwide.

This will come at some cost; your average WordPress agency will need to evolve or get eaten, and similarly if you primarily build CRUD apps.

As more software is written, the upper bound should also go higher. Great engineers will be greater, both in capability and compensation.

LLMs are the new compiler, and the world of software is going to get a lot more bespoke.


This cannot be true forever or for everything. I don't want cheaper software controls in my car; I want physical buttons. I don't want unpredictable and worse-than-human self-driving at any price, not even if you paid me. (That my kids are beta-tester pedestrians for the neighborhood Teslas is a regulatory failure, IMO.)


> LLMs are the new compiler, and the world of software is going to get a lot more bespoke.

This is true only if your data sources and sinks are accessible, well documented, and their governing app is cooperative, such that your LLM coding tool can get access to it.

For instance, if you want to add a custom filter email plugin, your email app has to provide a plugin API and document it, thereby enabling a LLM-based development tool to seek and find all the necessary hooks for you. But is doing that in the interest of the email app's vendor? If not, no amount of AI will be able to add value to that app, ever.


I think the API in that case is just going to be the browser.


Thank you for this article, I really needed a positivity boost today.

It's a great reminder to not only consider what wonderful things could happen in the future, but also to want them and work towards them. Clarifying a positive future like this helps others consider it, want it, and make it happen.

I understand the case being made is more of a prediction than a wish, but it's also a vision. I believe clarifying a vision makes it more likely to happen, especially when people gravitate to it.

There is plenty of negativity already. Thanks again for the positive outlook.


AI is going to create more work for software developers, in the shorter run.

Big efficiency gains motivate the million small steps, planned and muddled through, that it takes to harness the new efficiencies most effectively at scale. Not just somewhere.

But in the long run (where “long” these days can be pretty short), how can generations of improving automated software developers, improving in both quality and capacity, not replace lots of software developers?

> There’s no shortage of people saying “this time it’s different”, but those people have been around for every other major technological advance and they have yet to be correct. I wouldn’t bet on the doomers.

This is not a valid argument. It is logical gibberish.

Let’s prep it for Lean: “Many people have been wrong before; let’s call those people ‘Doomers1’. There are people who disagree with me; let’s call those people ‘Doomers2’. Clearly, ‘Doomers1’ = ‘Doomers2’, proof by similarity of set names. ‘Doomers1’ have been wrong about something before, therefore ‘Doomers2’ will certainly be wrong.”

Lean crashes…

—-

Question: “What software development task is intrinsically human?”

Thought experiment: Could there be alien non-human programmers? Based on silicon instead of carbon, that could ever be better than us?

An argument that software developers cannot be replaced by exponentially scaling and improving learning tech would have to involve some quite special reasoning.

It would have to tie humans to fundamental math and computing in a way that shook our understanding of all three.


I haven’t seen a study yet that suggests AI tools enhance productivity of developers.

I wouldn’t take it as a given.

The study I have seen was from a company selling AI developer tools, whose researchers were employed by said company. Not exactly an independent and bias-free study.

Personally they don’t work for me.

It’s not AI that is going to take our jobs. It’s the capital class that will do that. That’s what is changing the industry.

I suspect this will be the year we start to hear stories of folks getting let go for not using LLM codegen.


I feel anecdotally that Stack Overflow significantly increased my productivity. By induction therefore so should an LLM.


Interesting. I found SO not to be that helpful, even during the golden years.

I should note that I tend to work in codebases that have little to no public information, though, so that might be the differentiator. Presumably an LLM would be less useful due to that, but I'm looking forward to trying again in the future.


From my experience, AI tools can help a lot in writing some parts of your code. You could, for example, generate a parameterized test from several similar tests. Doing the same by hand might take half an hour, where it would only take a minute to feed the right things to the AI tool and let it generate the code.
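
To make that concrete, here is roughly the before/after shape in pytest; `add` is just a made-up stand-in for whatever is under test:

    import pytest

    def add(a, b):  # stand-in for the real code under test
        return a + b

    # Before: test_add_positive, test_add_negative, test_add_zero, each a
    # copy of the others with different literals. After: one table-driven test.
    @pytest.mark.parametrize("a, b, expected", [
        (2, 3, 5),      # positive numbers
        (-2, -3, -5),   # negative numbers
        (0, 0, 0),      # zeros
    ])
    def test_add(a, b, expected):
        assert add(a, b) == expected

The transformation is mechanical, which is exactly why it's a good fit for an AI tool: the cases are already written, they just need to be lifted into the parameter table.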


This one? https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566

So it's more accurate to say you have seen a study, you just don't believe it.


Yes. Funded by Microsoft with two Microsoft employees as authors.

Maybe they did everything right?

However, Microsoft owns a codegen AI tool and has vested interest in the positive results of such studies.

If the results could be reproduced independently I would consider it more credible.

As is, I feel it’s more marketing than research.


AI is not going to kill the software industry, but is there still a software industry?

Or, rather, is there any non-software industry left?


That's like asking "is there a building industry" just because every industry uses buildings. Every (well, most) industries use software; but the production of that software is itself an industry.


Is there a taxi industry?

There certainly used to be one in the past, but where I live now, each taxi fleet is owned by what used to be a "software company".


There's still a difference between sales and retail, investment and finance, and between software and tech.


I don't buy applying comparative advantage in this case, even after reading the more extended article that was linked to. The argument makes the fundamental assumption that the marginal value of one unit of a specific type of work is fixed. Whatever is the most profitable use case for AGI is the one that all the compute and power will be plowed into.

But what kind of economic activity actually has a fixed marginal utility, no matter how much of that activity there is? I think it's like, none at all? No matter what you're doing, there's eventually going to be diminishing returns as you do more of it. As the highest value use cases are saturated by AGI, the opportunity cost of doing something else diminishes. And comparative advantage only works because of the opportunity cost.


Yeah, I always suspected this. Increased software productivity facilitated by AI would lead to even more, and more complex, software.

But are we software workers going to see the results of our increased productivity in our paychecks, or are we now in the same boat as the other proletarians, who haven't seen their salaries increase in proportion to the productivity gains automation has delivered since the 70s?

Are we going to enter history as the last profession to resist feudalization, the last one that gave a lower-class person some chance of upward mobility, only to finally be conquered by the power of the ultra-rich?


Seems that way. I’ve also been thinking this lately. As the bootcamp grads of the last ten years know, getting that six-figure salary was a blessing; with prices rising faster and faster, well, that takes the shine off it.


> Software development has always been a career where you are either learning new things or stagnating. AI doesn’t change the need to keep learning and evolving.

Keep in mind, there are still COBOL developers. If you want to stagnate, there's a market for that.


Legacy markets are niche, and often shrinking.


ChatGPT and similar tools have made getting started programming way more accessible. My girlfriend just learned to program starting a few months ago and I’m constantly impressed what she’s able to do with the help of AI tools.


There's also the effect that reviewing AI generated code is a lot less fun than writing your own code. It becomes more of an admin task, and less of a creative one, thereby attracting less motivated developers.


I think the two different sides of the view are really just a matter of perspective and definition.

Do you view the next evolution of software engineering with AI as still software engineering? Or do you think it's something else that replaces software engineering? Something akin to the Ship of Theseus.

I belong to the replacement camp, but I don't think the underlying thinking differs much. Just a matter of how you look at it.

My personal take: https://github.com/paradite/ai-replace-swe


> While AI is powerful, it’s also computationally expensive. Unless someone decides to rewrite the laws of physics, there will always be a limit on how much artificial intelligence humanity can bring to bear. This means that we’ll eventually allocate our scarce AI resources towards the things they are best at, which leaves plenty of things for humans to do.

Unfortunately, this argument doesn't hold up because of "cheap" models such as Deepseek R1.


If Thing B is better and cheaper than Thing A, people are going to use Thing B until it runs out. Consequently, people are going to make more Thing B. In the short term some jobs are safe only because we temporarily don't have enough Thing B. But that's no reason to claim "we are safe, don't worry!"


I don't know what the future holds, but developing software in the present is amazing. I've always loved writing software, but I love writing software with AI even more.

All the boring stuff, like converting data from one format to another, is a prompt away. All the annoying scripts you know how to write but cbf to, like conditionally editing files across directories, are a prompt away.
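
To give a concrete (entirely made-up) example of the kind of throwaway script I mean:

    # Hypothetical one-off task: bump a version string in every
    # pyproject.toml found under the current directory.
    import re
    from pathlib import Path

    for path in Path(".").rglob("pyproject.toml"):
        text = path.read_text()
        updated = re.sub(r'version = "0\.1\.\d+"', 'version = "0.2.0"', text)
        if updated != text:  # only rewrite files that actually matched
            path.write_text(updated)
            print(f"updated {path}")

Nothing here is hard to write, but it's exactly the ten minutes of fiddly work I used to put off, and now it's a prompt away.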

Plus, I feel secure with my AI buddy. It's like having a helpful lead who's always up to help me debug, give me a second opinion, or show me what I'm missing. I feel like I can work in any language I want, and learning new things is so much easier with an AI tutor.


I have to say, it's not every day that you find a well-written article without loads of jargon, a clickbait title, or loads of personal storytelling!


Also all the jobs for replacing unreliable/insecure systems where LLM output is being given wayyy too much trust and system authority.


"When software development becomes more efficient, the ROI of any given software project increases, which unlocks more projects." - this is just wrong. It assumes every company hiring software developers has infinite scope. In reality most companies hire programmers to create something and then scale down and maintain it. (Obviously, there are companies with nearly unlimited scope, but those are usually huge companies).

When some AI can perform the work at 100x the rate of an average developer, you will run out of requirements pretty quickly. You will need 1/100th of the developers, to oversee the process.


There’s a silent majority not using AI, in many cases actually building complex AI tools and models with many moving parts, with nothing to gain from saying that they’re not using AI. And the grifters have everything to gain by pouncing on anyone commenting that they’re not using AI.


I agree with and like this article, but something about calling those who believe otherwise grifters feels a bit too harsh and polarizing to me. I know some people who do believe AI will cause immense software-role displacement, and I wouldn't label them this way.


For coders/devs: AI is useful only to novice coders. It gives them a basic core/template code to start with that they can modify.

But actual coders who work in companies on projects have to follow particular styles/languages/algorithms as coding practices, and AI can't conform to that. If you are working on legacy code or refactoring, you need to maintain the style and format the code is already in, and AI can't give you that exactly. You can't tell an AI: here are my million lines of code scattered across a thousand files, analyze them and add this feature while maintaining the sanctity of the current code practice.

AI will just introduce new bugs, and you will waste more time finding and fixing them.

AI gives a productivity boost only if you are starting from scratch; it can give you core/boilerplate code to rapidly start adding features to.


Disagree. AI can increase productivity in a bunch of ways in this scenario:

1. "Help me understand what this code does." Both via text and by building diagrams.

2. "Help me understand how to solve this problem or how this tech works."

3. Normal code-completion type things

4. Helping build scripts that you can use to apply changes.

Also, in general you can ask the AI to maintain the existing style.



