You're getting hung up on the technical meaning of terms rather than on what the author actually wrote.
They're explicitly saying that most software will no longer be artisanal - a great literary novel - and will instead become industrialized - mass-produced paperback garbage books. But they're also saying that good software, like literature, will continue to exist.
Yes, I read the article. I still think it's incorrect. Most software (especially by usage) is already not artisanal. You get the exact same browser, database server and (whatsapp/signal/telegram/whatever) messenger client as basically everyone else. Those are churned out by the millions from a common blueprint and designed by teams and teams of highly skilled specialists using specialized tooling, not so different from the latest iPhone or car.
As such, the article's point fails right at the start, when it tries to argue that software production is not already industrial. It is. But if you look at actual industrial design processes, their equivalent of "writing the code" is relatively small. Quality assurance, compliance with various legal requirements, balancing different requirements for the product at hand, endless meetings with customer representatives to figure out the requirements in the first place - those are where most of the time goes, and those are exactly the places where LLMs are not very good. So the part that is already fast will get faster and the slow part will stay slow. That is not a recipe for revolutionary progress.
I think the author of the post envisions more code-authoring automation, more generated code/tests/deployment, exponentially more - to the degree that what we have now would look "quaint", as he says.
Your point that most software uses the same browsers, databases, tooling and internal libraries is actually a weakness - a sameness that current AI can exploit to push that automation capability much further. Hell, why even bother with any of the generated code and infrastructure being "human readable" anymore? (Of course, there are all kinds of reasons that's bad, but just watch that "innovation" get a marketing push and take off. Which would only mean we'd need viewing software to make whatever was generated readable - as if anyone would read hundreds of thousands or millions of lines of generated anything to understand it.)
LLMs produce human-readable output because they learn from human-readable input. It's a feature: it allows the output to be much less precise than, say, bytecode, which wouldn't help at all.
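To make that concrete, a toy sketch of my own (not from the article): the same little function as readable source and as the bytecode CPython actually runs. A model can be a bit sloppy with the former and a human can still review and fix it; the latter has to be exact down to offsets and stack effects.

    import dis

    def total(prices, tax=0.2):
        # readable source: easy to review, easy to tweak
        return sum(prices) * (1 + tax)

    # prints the low-level opcodes (names vary by Python version),
    # e.g. LOAD_GLOBAL, BINARY_OP, RETURN_VALUE
    dis.dis(total)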
There is a large mass of unwritten software. It would add value but it is too bespoke to already have an open source solution. Think about a non-profit organization working with proprietary file formats and databases. They will be able to generate automation tools that they could otherwise not afford. This will be repeated over and over. This is what I think the author is getting at.
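As a sketch of the kind of tool I mean (file name, delimiter and field layout are invented for illustration): a one-off converter that pulls a proprietary '|'-delimited export into plain CSV that the rest of the org's software can read.

    import csv
    import sys

    def convert(src_path, dst_path):
        # the hypothetical export uses '|' as its delimiter, header line included
        with open(src_path, encoding="utf-8") as src, \
             open(dst_path, "w", newline="", encoding="utf-8") as dst:
            writer = csv.writer(dst)
            for line in src:
                writer.writerow(f.strip() for f in line.rstrip("\n").split("|"))

    if __name__ == "__main__":
        convert(sys.argv[1], sys.argv[2])  # e.g. members.export members.csv

A small org could never justify paying a contractor for this, but generating and lightly checking it is within reach.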
> You get the exact same browser, database server and (whatsapp/signal/telegram/whatever) messenger client as basically everyone else.
Hey! I'm going to passionately defend my choice over a really minor difference. I mean do you see how that app does their hamburger menu?! It makes the app utterly unusable!
Maybe I'm exaggerating here, but I've heard things pretty close to this in "Chrome vs Firefox" and "Signal vs ..." threads. People are really passionate about tiny details. Or at least they think that's what they're passionate about.
Unfortunately I think what they don't realize is that this passion often hinders the revolutionary progress you speak of. It just creates entrenched players and monopolies in domains where it should be near trivial to move (browsers should definitely be trivial to jump ship from).
> It just creates entrenched players and monopolies in domains where it should be near trivial to move (browsers should definitely be trivial to jump ship from)
I think this is understating the cost of jumping. Basically zero users care about the "technological" elements of their browser (e.g. the render engine, JS engine, video codecs) so long as it offers feature equivalence, but they do care a lot about comparatively "minor" UX elements (e.g. password manager, profile sync, cross-platform consistency, etc) which probably actually dominate their user interaction with the browser itself and thus understandably prove remarkably sticky ("minor" here is in terms of implementation complexity versus the rest of a browser).
Yeah, I think you're right. Weirdly enough, it's the little things that get people upset rather than the big things. But I think people should have a bit more introspection. Are their complaints things they seriously care about, or justifications for their choices? Can they themselves tell the difference? It might seem obvious, but the easiest person to fool is yourself, and we're all experts at it.
I guess two things can be true at the same time. And I think AI will likely matter a lot more than detractors think, and nowhere near as much as enthusiasts think.
Perhaps a good analogy is the spreadsheet. It was a complete shift in the way that humans interacted with numbers. From accounting to engineering to home budgets - there are few people who haven't used a spreadsheet to "program" the computer at some point.
It's a fantastic tool, but has limits. It's also fair to say people use (abuse) spreadsheets far beyond those limits. It's a fantastic tool for accounting, but real accounting systems exist for a reason.
Similarly, AI will allow lots more people to "program" their computer. But making the programming task go away just exposes limitations in other parts of the "development" process.
To your analogy: I don't think AI does mass-produced paperbacks. I think it is the equivalent of writing a novel for yourself. People don't sell spreadsheets, they use them. AI will allow people to write programs for themselves, just as digital cameras turned us all into photographers. But when we need it "done right" we'll still turn to people with honed skills.
I think existing skilled programmers are leveraging AI to increase productivity.
I think there are some people with limited, or no, programming experience who are vibe coding small apps out of nothing. But I think this is a tiny fraction of people. As much as the AI might write code, the tools used to do that, plus compile, distribute etc are still very developer focused.
Sure, one day my pastor might be able to download and install some complete environment which allows him to create something.
Maybe it'll design the database for him, plus install and maintain the local database server for him (or integrate with a cloud service.)
Maybe it'll get all the necessary database and program security right.
Maybe it'll integrate well with other systems, from email to text-import and export. Maybe that will all be maintainable as external services change.
Maybe it'll be able to do support when the printing stops working, or it all needs to be moved to a new machine.
Maybe this environment will be stable enough for the years and decades that the program will be used for. Maybe updating or adding to the program along the way won't break existing things.
Maybe it'll work so well it can be distributed to others.
All this without my pastor even needing to understand what a "variable" is.
That day may come. But, however well it might or might not write code today, we're a long, long way from that future. Mass-producing software is a lot more than writing code.
We could have LLMs capable of doing all that for your pastor right now, and it would still take time before these systems can effectively reason through troubleshooting this bespoke software. Right now the effectiveness of LLM-powered troubleshooting of software platforms relies upon the gravity induced by millions of programmers sharing experiences on more or less the same platforms: gigabytes to terabytes of text training data on all sorts of things that go bonkers on each platform.
We are now undergoing a Cambrian explosion of bespoke software vibe coded by a non-technical audience, and each one brings with it new sets of failure modes only found in their operational phase. And compared to the current state, effectively zero training data to guide their troubleshooting response.
Non-linearly increasing the surface area of software to debug, and inversely decreasing the training data to apply to that debugging activity will hopefully apply creative pressure upon AI research to come up with more powerful ways to debug all this code. As it stands now, I sure hope someone deep into AI research and praxis sees this and follows up with a comment here that prescribes the AI-assisted troubleshooting approach I’m missing that goes beyond “a more efficient Google and StackOverflow search”.
Also, the current approach is awesome for me to come up to speed on new applications of coding and new platforms I’m not familiar with. But for areas that I’m already fluent in and the areas my stakeholders especially want to see LLM-based amplification, either I’m doing something wrong or we’re just not yet good at troubleshooting legacy code with them. There is some uncanny valley of reasoning I’m unable to bridge so far with the stuff I’m already familiar with.
> All this without my pastor even needing to understand what a "variable" is.
Missing the point. The barrier to making software has been lowered substantially. This makes mediocre devs less mediocre, and for a lot of businesses out there, being slightly less mediocre is all they need most of the time. Needing decent devs only 20-40% of the time is already a big win in terms of expenses: make small, quick, mediocre software, then bring in a decent dev for a couple of months to clean it up, as opposed to paying and keeping that dev for several years to make the software from scratch.
Yes, it is not very efficient, but neither are those COBOL apps in old banks. It's always about being just good enough that it works, not about beautifully crafted software that never breaks. The market can stay alive longer than you can keep a high-salary job as a very experienced dev when you are competing against 100 other similarly experienced devs for your job.
This was already true before LLMs. "Artisanal software" was never the norm. The tsunami of crap just got a bit bigger.
Unlike clothing, software always scaled. So, it's a bit wrongheaded to assume that the new economics would be more like the economics of clothing after mass production. An "artisanal" dress still only fits one person. "Artisanal" software has always served anywhere between zero people and millions.
LLMs are not the spinning jenny. They are not an industrial revolution, even if the stock market valuations assume that they are.
Agreed, software was always kind of mediocre. This is expected given the massive first mover advantage effect. Quality is irrelevant when speed to market is everything.
Unlike speed to market, it doesn't manifest in an obvious way, but I've watched several companies lose significant market share because they didn't appreciate software quality.
“Garbage books” are mass-printed, but aren’t mass-written in a mass production sense. Mass production is about producing fairly exact copies of something that was designed once. The design part has always remained more artisanal than industrial. It’s only the production based on the design (or manuscript) that is industrial.
The difference with software is that software is design all the way down. It only needs to be written once, similar to how a mass-produced item needs only be designed once. The copying that corresponds to mass production is the deployment and execution of the software, not the writing of it.
Isn't this already the case? Your company doesn't build its own word processor, they license it from Microsoft, or they pay Google for G Suite, or whatever. Great books are sold in paperback, after all.
The syntactic representation will become just that. At the end of the day it's all just math ops and state sync between memory and display. Even semantic objects like an OS's protected memory are a special case of access control that can be mathematically computed around. There is nothing important about special semantics.
The user experience will be less constrained as the self-arrangement of pixels improves and users no longer run into designer constraints, usually due to the limited granularity some button widget or layout framework is capable of.
"Artisanal" software engineers probably never were their own self selected identity.
Have been writing code since the late 80s, when Windows and commercial Unix were too expensive and we all wrote shoddy but functional kernels. Who does that now? Most gigs these days are glue code to fetch/cache deps and template concrete config values for frameworks. Artisanal SaaS configuration is not artisanal software engineering.
And because software engineers were their own worst enemy over the last decade - living big as they ate others' jobs and industries - hate for the industry has gone mainstream. Something politicians have to react to. Non-SWEs don't want to pay middlemen to use their property. GenAI can get them to that place.
As an art teacher once said: making things for money is not the practice of a craft, it's just capitalism. Anyone building SaaS apps through contemporary methods is a Subway sandwich artist, not the old-timey, well-rounded farmer and hunter who also bakes bread.
What he's missing is that there's always been a market for custom-built software by non-professionals. For instance, spreadsheets. Back in the 1970s engineers and accountants and people like that wrote simple programs for programmable calculators. Today it's Python.
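For a sense of what those simple programs look like (figures invented), here's the kind of thing that used to live on a programmable calculator and is a few lines of Python today: a standard loan amortization check.

    def monthly_payment(principal, annual_rate, years):
        r = annual_rate / 12   # monthly interest rate
        n = years * 12         # number of payments
        return principal * r / (1 - (1 + r) ** -n)

    # a 30-year loan of 250,000 at 6% works out to roughly 1498.88/month
    print(round(monthly_payment(250_000, 0.06, 30), 2))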
The most radical development in software tools, I think, would be more tools for non-professional programmers to build small tools that put their skills on wheels. I did a lot of biz dev around something that encompassed "low code/no code", but a revolution there involves smoothing out 5-10 obstacles with a definite Ashby character: if you fool yourself into thinking you can get away with ignoring the last 2 requirements, you get just another Wix that people will laugh at. For now, AI coding doesn't have that much to offer the non-professional programmer, because a person without insight into the structure of programs, project management, and a sense of what quality means will go in circles at best.
I think the thinking in the article is completely backwards about the economics. The point of software is that you write it once and the cost to deploy a billion units is trivial in comparison. Sure, AI slop can put the "crap" in "app", but if you have any sense you don't go cruising the app store for trash; you find out about best-of-breed products, or products that are the thin edge of a long wedge (like the McDonald's app, which is valuable because it has all the stores backing it).