You could say the same things about assemblers, compilers, garbage collection, higher-level languages, etc. In practice the effect has always been an increase in the height of the mountain of software that can be built before development grinds to a halt due to complexity. LLMs are no different.
In my own experience (and from everything I’ve read), LLMs as they are today don’t help us as an industry build a higher mountain of software because they don’t help us deal with complexity — they only help us build the mountain faster.
I see this response a lot but I think it's self-contradictory. Building faster, understanding faster, refactoring faster — these do allow skilled developers to work on bigger things. When it takes you one minute instead of an hour to find the answer to a question about how something works, of course that lets you build something more complex.
Could you say more about what you think it would look like for LLMs to genuinely help us deal with complexity? I can think of some things: helping us write more and better tests, helping us ship fewer bugs, helping us get to the right abstractions faster, helping us write glue code so more systems can talk to each other, and helping us port things to one stack so we don't have to maintain polyglot piles of stuff (or, conversely, letting us pick and choose the best stuff from every language ecosystem without worrying about the maintenance burden).
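To make the first item concrete, here's a minimal sketch of what "more and better tests" could mean in practice: a property-based test that checks an invariant across many generated inputs rather than a few hand-picked cases. This is my own illustration, not something from the thread; it uses the real `hypothesis` library, and `encode`/`decode` are hypothetical stand-ins for project code.

```python
# Minimal sketch (assumption: the `hypothesis` library is installed).
# `encode`/`decode` are hypothetical stand-ins for real project code.
import json

from hypothesis import given, strategies as st


def encode(items: list[str]) -> str:
    # Serialize a list of strings to a single blob.
    return json.dumps(items)


def decode(blob: str) -> list[str]:
    # Inverse of encode().
    return json.loads(blob)


# The invariant: decoding an encoded list always returns the original list.
# Hypothesis generates many inputs, including nasty edge cases (empty lists,
# empty strings, unicode) that hand-written example tests often miss.
@given(st.lists(st.text()))
def test_round_trip(items: list[str]) -> None:
    assert decode(encode(items)) == items
```

A test like this arguably raises the complexity ceiling a little, because it pins down an invariant the system must keep rather than a snapshot of current behavior.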
> I see this response a lot but I think it's self-contradictory. Building faster, understanding faster, refactoring faster — these do allow skilled developers to work on bigger things. When it takes you one minute instead of an hour to find the answer to a question about how something works, of course that lets you build something more complex.
I partially agree. LLMs don't magically increase a human's mental capacity, but they do allow a given human to explore the search space of, e.g., abstractions faster than they otherwise could before running out of time or patience.
But (to use GGP's metaphor) do LLMs increase the ultimate height of the software mountain, the height at which complexity grinds everything to a halt?
To be more precise, this is the point at which the cost of changing the system becomes prohibitively high because any change you make will likely break something else. Progress becomes impossible.
Do current LLMs help us here? No, they don't. It's widely known that if you vibe code something, you'll pretty quickly hit a wall where any change you ask the LLM to make will break something else. To reliably make changes to a complex system, a human still needs to really grok what's going on.
Since the complexity ceiling is a function of human mental capacity, there are two ways to raise that ceiling:
1. Reduce cognitive load by building high-leverage abstractions and tools (e.g. compilers, SQL, HTTP; see the sketch after this list)
2. Find a smarter person/machine to do the work (i.e. some future form of AI)
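A toy sketch of #1, to make "high-leverage abstractions" concrete (my own example, not GP's; the data and query are made up). The same question is answered twice, once with hand-rolled loops and once declaratively via Python's built-in `sqlite3`; the SQL version lets the reader hold less of the "how" in their head:

```python
# Toy illustration: the same computation at two abstraction levels.
# The orders data and the "big spenders" query are made up for this example.
import sqlite3

orders = [("alice", 30), ("bob", 15), ("alice", 25), ("carol", 40)]

# Low-leverage: hand-rolled aggregation; the reader must re-verify the loop
# logic and the bookkeeping on every change.
totals = {}
for customer, amount in orders:
    totals[customer] = totals.get(customer, 0) + amount
big_spenders = sorted(c for c, t in totals.items() if t > 20)

# High-leverage: declare the result you want; the engine handles the "how"
# (grouping, summing, filtering, ordering).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?)", orders)
rows = db.execute(
    "SELECT customer FROM orders"
    " GROUP BY customer HAVING SUM(amount) > 20 ORDER BY customer"
).fetchall()

assert big_spenders == [r[0] for r in rows]  # both say: ['alice', 'carol']
```

The computation is identical; what changes is how much bookkeeping a maintainer has to hold in their head and re-verify on every change.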
So while current LLMs might help us do #1 faster, they don't fundamentally alter the complexity landscape. Not yet.
Thanks for replying! I disagree that current LLMs can't help build tooling that improves rigor and lets you manage greater complexity. However, I agree that most people are not doing this. Some threads from a colleague on this topic: