Slightly OT, but it's quite interesting that the most decisive impact generative AI is having right now is on tech workers, and software developers in particular.
I'm more and more convinced that we're on the edge of a major shake-up in the industry with all these tools.
Not that we're getting replaced, but at this rate of improvement I can't unsee the major changes coming.
A junior who recently joined my team built his first app entirely with ChatGPT a year ago. He still didn't know how to code, but he could figure out how to fix the imperfect code by reasoning about it, all as a non-coder, and he actually released something that worked for other people.
I'm a solo business owner who knows enough JS/TS/HTML/CSS to get by -- my product is a website full of information -- but I've never been 'a developer'.
ChatGPT et al. are a miraculous boost to my productivity. This morning I needed a function to iterate over some JSON and do stuff with it. Fairly mundane, and I could have written it myself.
Doing so would have been boring, routine, and would have taken me at least an hour. I asked ChatGPT 4o and I got exactly what I wanted in 30 seconds.
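For context, the kind of thing I asked for was roughly like the sketch below (the data shape, file name and field names here are made up; the real ones differ):

    // Minimal sketch: read an array of records from a JSON file and tally a field.
    // "Entry", "data.json" and the field names are hypothetical stand-ins.
    import { readFileSync } from "node:fs";

    interface Entry {
      category: string;
      value: number;
    }

    const entries: Entry[] = JSON.parse(readFileSync("data.json", "utf8"));

    const totals = new Map<string, number>();
    for (const entry of entries) {
      totals.set(entry.category, (totals.get(entry.category) ?? 0) + entry.value);
    }

    console.log(Object.fromEntries(totals));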
I can only hope that these tools enable more people like me to build more cool things. That's how it's affected me: I never would have hired another dev. No job is lost. I'm just exponentially better at mine.
It needs to be said that web dev is by far the area in which LLMs are best versed, I suppose due to the incredible amount of training data available. Other languages produce more hallucinations for now.
I have had a similar experience. I build wordpress websites and ChatGPT has allowed me to extend their functionality a great deal without needing to learn how to code PHP.
I'm sure that's true but I've also seen non-developers build systems using Access or solve complex critical business problems using excel and similar. I've seen a lot of junior developers tinker with apps and do great work from reading docs and tutorials. I was one of those myself a long time ago.
For some problems this is a perfect solution. For a lot it's a short-term fix that turns into a long-term issue. I've been on many a project that's had to undo these types of setups, for very valid reasons and usually at a very high cost. Often you find them in clusters, with virtually no one having a full understanding of what they actually do anymore.
Building the initial app is only a very small part of software engineering. Maintaining and supporting a service/business and helping them evolve is far harder, but essential.
My experience is that complexity builds very quickly to a point where it's unsustainable if not managed well. I fear AI could well accelerate that process in a lot of situations if engineering knowledge and tradeoffs are assumed to be included in what it provides.
The more I think about it, the more I am convinced developers will be the "first to go" when AGI takes over. Before bloggers and youtubers. Because programming is an activity that requires the least amount of "grounding to reality" among all human activities. We made sure of this with layers and layers of convenient abstraction.
What about the developers who code the AI systems? Well, I am sure AGI will come from other "bootstrapping AIs", just like we see with compilers that compile themselves. When I see Altman and Sutskever talking about AGI being within reach, I feel they are talking about this bootstrapping AI being within reach.
More seriously, the output quality of LLMs for code is pretty inconsistent. I think there's an analogy to be made with literature. For instance, a short story generated by an LLM can't really hold a candle to the work of a human author.
LLM-generated code can be a good starting point for avoiding tedious aspects of software development, like boilerplate or repetitive tasks. When it works, it saves a lot of time. For example, if I need to generate a bunch of similar functions, an LLM can sometimes act like an ad-hoc code generator, helping to skip the manual labor. I’ve also gotten some helpful suggestions on code style, though mostly for small snippets. It’s especially useful for refreshing things you already know—like quickly recalling "How do I do this with TypeScript?" without needing to search for documentation.
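To make the "ad-hoc code generator" point concrete, the sketch below shows the kind of repetitive boilerplate I mean; the base URL, endpoint names and types are made up, but an LLM will happily stamp out a dozen of these from one example plus a list of names:

    // Hypothetical near-identical fetch wrappers: tedious to write by hand,
    // trivial for an LLM to generate from a single example.
    const BASE_URL = "https://api.example.com";

    async function getJson<T>(path: string): Promise<T> {
      const res = await fetch(`${BASE_URL}${path}`);
      if (!res.ok) throw new Error(`GET ${path} failed with status ${res.status}`);
      return res.json() as Promise<T>;
    }

    export const fetchUsers    = () => getJson<unknown[]>("/users");
    export const fetchProjects = () => getJson<unknown[]>("/projects");
    export const fetchInvoices = () => getJson<unknown[]>("/invoices");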
Anyway, literature writers and software engineers aren't going to be replaced anytime soon.
Human study participants have a safer job than firemen. The trouble is that the work is already woefully underpaid, and unsettled industries will increase the supply.
There are crucial quality issues with Mechanical Turk, though, and when these really start damaging AI in obvious ways, the system (and the compensation, vetting procedures and oversight) seems likely to change.
Those jobs are already outsourced if possible (remember RLHF for ChatGPT outsourced to Kenya, Facebook content moderation to India?) And if they aren't, that's usually for regulatory reasons.
Isn't scaling already a big problem in generative AI? Apparently there is not enough data to actually appreciably increase the quality of outputs, and lots of data is also being polluted by AI generations. This is why these companies are now shifting to more ancillary features like in this article (which Claude already has to some extent, as well as Copilot in VSCode and other editors).
This was pretty much refuted by Meta with their Llama 3 release. Two key points I got from a podcast with the lead data person, right after release:
a) Internet data is generally shit anyway. Previous generations of models are used to sift through, classify and clean up the data
b) post-processing (aka finetuning) uses mostly synthetic datasets. Reward models based on human annotators from previous runs were already outperforming said human annotators, so they just went with it.
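To illustrate point a), conceptually the cleanup stage is something like the sketch below; scoreWithPreviousModel is a hypothetical stand-in for whatever classifier an earlier-generation model provides:

    // Conceptual sketch only: use an earlier-generation model as a quality
    // filter over raw web documents before they enter the training set.
    declare function scoreWithPreviousModel(doc: string): Promise<number>; // hypothetical, returns 0..1

    async function filterCorpus(docs: string[], threshold = 0.7): Promise<string[]> {
      const kept: string[] = [];
      for (const doc of docs) {
        const quality = await scoreWithPreviousModel(doc); // 0 = junk, 1 = high quality
        if (quality >= threshold) kept.push(doc);
      }
      return kept;
    }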
This also invalidates a lot of the early "model collapse" findings when feeding the model's output to itself. It seems that many of the initial papers were either wrong, used toy models, or otherwise didn't use the proper techniques to avoid model collapse (or, perhaps they wanted to reach it...)
We got a jump start with "a ton of data", and we're now discovering that less but better data is actually better.
So yes, we will see manual labor to fine-tune the data layer, but this will only be necessary for a certain amount of time. And in parallel we also help just by using it, with the feedback we give these systems.
A feedback loop mechanism is a fundamental part of AI ecosystems.
It's the same thing that happened with mechanical looms. Programming will go from an artisan craft to the sweatshops, and software programming will become low-paying, precarious gig work.
This is all about labor and capital. When people toss hundreds of billions at something it almost always is.
The social relationship doesn't have to be this way. Technological improvement could help us instead of screw us over. But we'd first have to admit that profit exploitation isn't absolutely the best thing ever and we'll never do that. Soooo here we are.
That's an interesting example. Lawyers, unlike software developers, as a group go out of their way to ensure that they'll be needed and cannot be replaced by others or automation. They push for certain processes to require lawyers. We, on the other hand, are more eager to automate ourselves than anything else. Maybe that will boost our productivity and make us even more valuable and highly paid, or maybe we'll end up unemployed. Fascinating contrast between the professions.
Programmers have been against unions/licenses in software development because we saw it as slowing down our ability to job hop, potentially massively lowering wages, and making a large barrier to entry (which hurts a lot of us who started as kids/teens).
Now there's a chance that this unregulated wild west with a low barrier to entry that's benefited us for so long will come back to bite us in the ass. Kind of spooky to think about.
I don't know if that's true. If I was in a WGA/DGA equivalent in my field that offered health care and scale pay that would be great!
I bet if you asked most programmers whether they'd like to have a professional guild similar to the writers who just went on strike, you'd probably be surprised, especially for gaming devs.
I would be in favor of some kind of state approved exam/certification to ensure programmers have at least some basic knowledge of computer security and engineering ethics.
> making a large barrier to entry (which hurts a lot of us who started as kids/teens)
I doubt it. In my experience autodidacts are the best programmers I know.
Yeah, plus one on this one, extremely curious to hear as well.
I am aware that remote robot surgeries have been a thing for quite a bit of time, but this is the first time ever I am hearing about unassisted robot surgeries being a thing at all.
A follow-up question: if an unassisted robot surgery goes wrong, who is liable? I know we have a similar dilemma with self-driving cars, but I was under the impression that things are way more regulated and strict in the realm of healthcare.
This fundamentally misunderstands what lawyers do. Your prediction might be right for paralegals and very junior attorneys. But the lawyers who make real money are not doing rote work. It's lunches, negotiation, politics, and, for trial attorneys, performance and debate. Social activities, human skills. They'll always be around.
Agree with that; the startup lawyer I used to use now charges $1,100/hour, which is untenable. I would much rather get the basics covered by an AI lawyer.
Live counsel in sensitive situations is definitely in the works, if not already in beta. Get pulled over by cops, or have the authorities asking to enter the premises, bring up your AI counsel and let them talk to the officer, before giving you advice on how to proceed. I can even envision an eventual future where public pressure results in law enforcement being paired with an AI assistant to help refresh their memory on some of the articles of the law.
FWIW I used various LLMs to draft a freelance work contract with good results. Of course I carefully read, thought about every clause, edited, etc. It’s probably not as bulletproof as something a lawyer could produce, but it was definitely a big help.
I think there's a lot of evidence out there that supports your theory.
- Open-source hosting sites like GitHub provide the biggest, highest-quality training corpus out there, one that captures all aspects of dev work (code, changes, discussions about issues, etc.)
- Synthetic data is easy to generate and verify: you can just run unit tests or a debugger in a loop until you get it right, roughly like the sketch below. Try doing that with contracts or tax statements.
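A minimal sketch of that loop, assuming a hypothetical generateCandidate call into whatever model you use and "npm test" as a stand-in test command:

    // Conceptual sketch: keep regenerating code until the test suite passes,
    // feeding failures back into the next attempt.
    import { writeFileSync } from "node:fs";
    import { execSync } from "node:child_process";

    declare function generateCandidate(prompt: string, feedback?: string): Promise<string>; // hypothetical LLM call

    async function generateUntilGreen(prompt: string, maxAttempts = 5): Promise<boolean> {
      let feedback: string | undefined;
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        const code = await generateCandidate(prompt, feedback);
        writeFileSync("src/candidate.ts", code);
        try {
          execSync("npm test", { stdio: "pipe" }); // throws if the tests fail
          return true; // passing sample: usable as verified synthetic data
        } catch (err) {
          feedback = err instanceof Error ? err.message : String(err);
        }
      }
      return false;
    }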
> Because programming is an activity that requires the least amount of "grounding to reality" among all human activities.
Maybe once you're deep into APIs that talk to other APIs, but near the surface where the data is collected there's nothing but "grounding to reality".
As my professor of Software Engineering put it: when building a system for counting the number of people inside a room most people would put a turnstile and count the turns. Does this fulfill the requirement? No - people can turn the wheel multiple times, leave through a window, give birth inside the room, etc. Is it good enough? Only your client can say, and only after considering factors like "available technology" and "budget" that have nothing to do with software.
Maintenance and long-term development will still require grounding to reality. A super-CEO might build the app themself, but keeping it running 5, 10, 20 years is a completely different deal. I imagine developers will eventually start to act more like librarians, knowing their system very well but not necessarily in charge of making new content.
We didn't need humans in the first place. The collective "we" can decide that we want to do anything. People have this crazy fatalistic attitude about AI taking over, billionaires ejecting to Mars and humans becoming irrelevant. Let me tell you, humans have been irrelevant since forever.
That would be nice, to gambol in a lush, sunny meadow, perusing a leather-bound volume that we read in the original Greek or Sanskrit.
Unfortunately, I fear we will instead end up sweaty and dirty and bloodied, grappling in the parched terrain, trying to bash members of a neighboring clan with rocks and wooden clubs, while a skyline of crumbling skyscrapers looms in the distance.
Increased productivity has led to more, not less, toil in the last 50 or so years.
This is not gonna be different with AGI (if it ever happens). Some people will get very rich while the rest are still gonna work just as much as they do now. The jobs are just gonna suck even more.
I envision lots of solo coders now able to compete with bigger companies, creating more niche software that meets people's needs better than generic solutions. Truly an exciting time to be in software.
I have the exact opposite concern. Software is/was one of the few industries where a solo person can already compete with big companies, or at least live comfortably alongside. Just look at the market for niche or boutique software that does stuff like calendars or emails or task management in a particular way.
To me the current direction LLMs are headed seems like it will just further entrench the power of the trillion dollar megacorps because they’re the only people that can fund the creation and operation of this stuff.
Yes, this is more or less my life. I run a small bootstrapped startup and do some consulting on the side. I have a few decades of experience, so it's not like I can't do things myself. But ChatGPT has enormously enhanced my output. It's rarely perfect, but I can usually bridge the gap by completing the job myself. My role is increasingly directing changes and telling it what needs doing next.
Canvas sounds useful. I'll be playing with that as soon as I can access it.
Another useful thing in ChatGPT that I've been leveraging is its memory function. I just tell it to remember instructions so I don't have to spell them out the next time I'm doing something.
I agree, and want to add: AI will make it possible to inexpensively produce small, tailored applications that only support locally required functionality. The advantage is very small code bases that are easier to understand. In other words, it makes it possible to avoid huge mega-apps where any particular user might use only a small percentage of the functionality.
Also, is it possible that smaller, focused apps will have fewer edge cases and be more reliable?
I think that's probably more because programmers are early adopters of new technologies and the people building the technologies are programmers. There are lots of roles that are easier to automate completely with an LLM as they improve, but harder to make inroads with. I expect that as the initial waves of LLM startups mature and winnow, we will see some fields almost entirely automated. For instance, medical coding feels totally ripe, since it's basically a natural language classification exercise that's easily fine-tuned for.
I think a lot of developers will get replaced by AI. I've worked in digitalisation and automation for a couple of decades now, and I've gone into a role which specialises in helping start-ups grow their IT into something that will actually work as they transition into enterprise organisations. I think almost all the work I replace or optimise will be done without developers in a few years (or maybe in a decade). This covers everything related to data transformation, storage and transport, through to applications and websites.
In the organisation I currently work in, we're already seeing rather large amounts of digitalisation done by non-developers. This is something organisations have tried to do for a long time, but all those no-code tools, robotic process automation and so on quickly require some sort of software developer despite all their lofty promises. That isn't what I'm seeing with AI. We have a lot of people building things that automate or enhance their workflows, and we're seeing API usage and data warehouse work done by non-developers in ways that are "good enough", often on the same level as or better than what software developers would deliver. They've already replaced their corporate designers with AI-generated icons and such, and they'll certainly need fewer developers going forward, possibly relying solely on external specialists when something needs to scale or has too many issues.
I also think that a lot of "standard" platforms are going to struggle. Why would you buy a generic website for your small business when you can rather easily develop one yourself? All in all I'd bet that at least 70% of the developer jobs in my area aren't going to be there in 10 years, and so far these tools don't seem to open up new software development jobs. So while they are generating new jobs, it's not in software development.
I’m not too worried for myself. I think I’m old enough that I can ride on my specialty in cleaning up messes, or if that fails transition into medical software or other areas where you really, really, don’t want the AI to write any code. I’d certainly worry if I was a young generalist developer, especially if a big chunk of my work relies on me using AI or search engines.
Am I the only one not seeing it? AI is a very useful assistant, boosts productivity, and makes coding easier, but ultimately, in real-life scenarios beyond POCs, it cannot replace a human. You quickly reach a threshold where explaining and getting the AI to do what you want is actually harder than doing it yourself. What happens if your LLM-built app has a bug and the AI does not "get" it?
ChatGPT shows a clear path forward: a feedback loop (consistent improvement), tooling which leverages all of an LLM's powers, writing unit tests automatically, and running code (ChatGPT can run Python already; when will it be able to run Java and other languages?).
And it's already useful today for small things. Copilot is easier and more integrated than googling parameters or looking up documentation.
UIs/IDEs like Cursor are a lot more integrated.
What you see today is just the beginning of something potentially big.
I respect your opinion and you could be right, but I don't buy it so far. While integrations have improved, we don't see major advances anymore in the LLM models everything relies on. Compare the jump from GPT-3.5 to 4 with the next iterations: it still suffers from the same limitations LLMs have (context length, overconfidence, hallucinations). Maybe I'm too impatient.
From a research point of view, context length got a lot better in the last year and continues to become better.
ChatGPT just released a new voice mode.
It took over a year to get GitHub Copilot rolled out in my very big company.
People work left and right to make it better. Every benchmark shows either smaller models or faster models or better models. This will not stop anytime soon.
Flux for image generation came out of nowhere and is a lot better with faces, hands and image descriptions than anything before it.
Yes, the original jump was crazy, but we are running into capacity constraints left and right.
Just buying enough GPUs, building a platform and workflows, transitioning capacity into it, etc. takes a company a long time.
When I say AI will change our industry, I don't know how long it will take. I guess 5-10 years, but it is now a lot more obvious HOW, and the HOW was completely missing before GPT-3. I couldn't have come up with a good idea of how to do something like this at all.
And for hallucinations, there are also plenty of people working left and right. The reasoning of o1 is the first big attempt by a big company to start running a model for longer. But to run o1 for 10 seconds or longer, you need a lot more resources.
Nvidia's chip production is currently a hard limit in our industry. Even getting enough energy into data centers is a hard limit right now.
It's clearly not money that's the limit, if you look at how much money is being thrown at it already.
As an engineer I've spoken to a couple of different designers who are building out prototypes of their startup ideas using LLM assistance with the coding.
While no actual engineer is involved at that stage, if they got funded then I'm sure their next step will be to hire a real engineer to do it all properly.
<insert manic laughter>
I mean that might happen, but why get funding? Why not move to market immediately? Without debt. Get in the thick of it. Just do. Do you want a wage, or do you want a product that does the thing? Because sometimes, with the LLM, you can just build the thing. The marketing, the compliance, you might hire for that, or you might also outsource it to the LLM.
Why would you hire? Either it works, in the sense that it does the job and is cost effective, or it doesn't.
Is there a situation where paying hundreds of thousands in wages suddenly makes a thing a good idea? I have doubts.
Let's see if your little app can handle millions of daily users without an actual engineer. Your average application will fall over before that.
It'll be some time before an AI will be able to handle this scenario.
But by then, your job, my job and everyone else's job will be automated; it's entirely possible the current economic system will collapse in that scenario.
I know a similar non-coding founder who was using LLMs to create a full fledged TypeScript based SaaS product and regularly comes to me with high level architecture questions, but also doesn't know or care to figure out what HTTP methods are, variable names are a mishmash of whatever case the LLM decided to generate that day, and there are no tests whatsoever. It's held together by sheer force of manual QA.
This is my technique: test the output, make sure it works (in the sense of outputs) the way I want, test the input edge cases, move on. Occasionally, when I can't get it to do what I want, the LLM suggests things like logging output between functions, in which case that gets added; at the end I ask it to take out all the logging and make the code more concise.
And sometimes it breaks in ways I can't fix, so rolling back or picking a new patch from a known break point becomes important.
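For what it's worth, the "test the input edge cases" part is usually nothing fancier than a handful of assertions like the sketch below; parsePrice is just a made-up example of the kind of LLM-written function I'd be checking:

    // Minimal edge-case checks using Node's built-in test runner.
    import { test } from "node:test";
    import assert from "node:assert/strict";
    import { parsePrice } from "./parsePrice"; // hypothetical LLM-written function under test

    test("handles the happy path", () => {
      assert.equal(parsePrice("$1,234.50"), 1234.5);
    });

    test("handles the edge cases I actually care about", () => {
      assert.equal(parsePrice(""), null);     // empty input
      assert.equal(parsePrice("free"), null); // non-numeric input
      assert.equal(parsePrice("0"), 0);       // zero is not "missing"
    });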
16 hours for my first Azure pipeline: auto-updates from code to prod for a static app, including setting up Git, VS Code, Node, Azure creds, etc. I chose a stack I have never seen at work (I mostly see AWS), and I am not a coder. The last code I wrote was Pascal in the 1980s.
Which is awesome, and if you wanted to understand the code, it would do an amazing job of tutoring you. I love seeing people being able to solve their own problems without the need for a professional programmer.
The downside I've noticed is if I do this, I can't explain how I "solved the problem" during job interviews. I tried once, "I didn't, chatgpt solved it for me," and they laughed and I didn't get the job, so I stopped admitting that and said I just use chatgpt to quickly write up boilerplate for me.
How was admitting this supposed to help you in an interview? Anyway you won't learn anything if you don't review and go deeper into the code you've written with ChatGPT.
A few times a month I now build something I have wanted in the past but can now afford the time to build. I have always prided myself on being pretty good at working with other human developers, and now I feel pretty good at using LLM-based AI as a design and coding assistant when it makes sense not to just do all the work myself.
I also wonder what the person would have been hired for... maybe QA? I was doing this with random relevant scripts nearly 20 years ago but wasn't given a job where code would be relevant for the task until loooooong after I could comprehend what I was doing
So he’s a naturally talented developer who learned to code as he created his first app. Maybe he didn’t understand specifics, but you have to be able to intuit a lot to string a bunch of AI snippets into an app.
With how easy it is to do util functions, there is gonna be a new baseline in functionality and speed of development for what people expect of software in general. It's gonna ramp up from here till we get to generative UIs.