I'm continually surprised by the amount of negativity that accompanies these sorts of statements. The direction of travel is very clear - LLM-based systems will be writing more and more code at all companies.
I don't think this is a bad thing - if it can be accompanied by an increase in software quality, which is possible. Right now it's very hit and miss and everyone has examples of LLMs producing buggy or ridiculous code. But once the tooling improves to:
1. align produced code better to existing patterns and architecture
2. fix the feedback loop - with TDD, other LLM agents reviewing code, feeding in compile errors, letting other LLM agents interact with the produced code, etc. (a rough sketch of such a loop is below)
Then we will definitely start seeing more and more code produced by LLMs. Don't look at the state of the art now, look at the direction of travel.
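To make point 2 concrete, here's a rough sketch of what such a loop could look like. Everything in it is a stub standing in for a model call and real build/test tooling - hypothetical names, not any particular product's API:

```python
# Hypothetical generate-and-repair loop. generate_code(), try_compile(),
# run_tests() and review() are stubs standing in for a code model and
# real build/test infrastructure.

MAX_ATTEMPTS = 5

def generate_code(task, feedback):
    # Stub: in practice, a call to a code-generation model with the
    # task description plus the accumulated feedback.
    return f"# patch for {task!r} given feedback {feedback!r}"

def try_compile(patch):
    return []  # stub: list of compiler errors to feed back

def run_tests(patch):
    return []  # stub: list of failing tests (the TDD signal)

def review(patch):
    return []  # stub: notes from a second, reviewing model

def produce_patch(task):
    feedback = ""
    for _ in range(MAX_ATTEMPTS):
        patch = generate_code(task, feedback)
        issues = try_compile(patch) + run_tests(patch) + review(patch)
        if not issues:
            return patch  # clean build, green tests, review passed
        feedback = "\n".join(issues)  # feed the problems into the next attempt
    return None  # still failing after N attempts: hand it to a human

print(produce_patch("add input validation"))
```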
> if this can be accompanied by an increase in software quality
That’s a huge “if”, and by your own admission not what’s happening now.
> other LLM agents reviewing code, feeding in compile errors, letting other LLM agents interact with the produced code, etc.
What a stupid future. Machines which make errors being “corrected” by machines which make errors in a death spiral. An unbelievable waste of figurative and literal energy.
> Then we will definitely start seeing more and more code produced by LLMs.
We’re already there. And there’s a lot of bad code being pumped out. Which will in turn be fed back to the LLMs.
> Don't look at the state of the art not, look at the direction of travel.
That’s what leads to the eternal “in five years” which eventually sinks everyone’s trust.
> What a stupid future. Machines which make errors being “corrected” by machines which make errors in a death spiral. An unbelievable waste of figurative and literal energy.
Humans are machines which make errors. Somehow, we got to the moon. The suggestion that errors just mindlessly compound and that there is no way around it is what's stupid.
Even if we accept the premise (seeing humans as machines is literally dehumanising and a favourite argument of those who exploit them), not all machines are created equal. Would you use a bicycle to file your taxes?
> Somehow, we got to the moon
Quite hand wavey. We didn’t get to the Moon by reading a bunch of text from the era then probabilistically joining word fragments, passing that around the same funnel a bunch of times, then blindly doing what came out, that’s for sure.
> The suggestion that errors just mindlessly compound and that there is no way around it
Is one that you made up, as that was not my argument.
LLMs are a lot better at a lot of things than a lot of humans.
We got to the moon using a large number of systems to a) avoid errors where possible and b) build in redundancies. Even an LLM knows this and knew what the statement meant:
> LLMs are a lot better at a lot of things than a lot of humans.
Sure, I'm a really poor painter; Midjourney is better than me. Are they better than a human trained for that task, on that task? That's the real question.
The real question is whether they can do a good enough job quickly and cheaply to be valuable, i.e. quick and cheap at some level of quality is often "better". Many people are using them in the real world because they can do in 1 minute what might take them hours. I personally save a couple of hours a day using ChatGPT.
Ah, well then, if the LLM said so then it’s surely right. Because as we all know, LLMs are never ever wrong and they can read minds over the internet. If it says something about a human, then surely you can trust it.
You’ve just proven my point. My issue with LLMs is precisely people turning off their brains and blindly taking them at face value, even arduously defending the answers in the face of contrary evidence.
If you’re basing your arguments on those answers then we don’t need to have this conversation. I have access to LLMs like everyone else, I don’t need to come to HN to speak with a robot.
You didn't read the responses from an LLM. You've turned your brain off. You probably think self-driving cars are also a nonsense idea. Can't work. Too complex. Humans are geniuses without equal. AI is all snake oil. None of it works.
You missed the mark entirely. But it does reveal how you latch on to an idea about someone and don’t let it go, completely letting it cloud your judgement and arguments. You are not engaging with the conversation at hand, you’re attacking a straw man you have constructed in your head.
Of course self-driving cars aren’t a nonsense idea. The execution and continued missed promises suck, but that doesn’t affect the idea. Claiming “humans are geniuses without equal” would be pretty dumb too, and is again something you’re making up. And something doesn’t have to be “all snake oil” to deserve specific criticism.
The world has nuance, learn to see it. It’s not all black and white and I’m not your enemy.
Humans are obviously machines. If not, what are humans then? Fairies?
Now once you've recognized that, you're better equipped for the task at hand - which is augmenting and ultimately automating away every task that humans-as-machines perform, by building an equivalent or better machine that performs said tasks at a fraction of the cost!
People who want to exploit humans are the ones that oppose automation.
There's still a long way to go, but now we've finally reached a point where some tasks that were very elusive to automation are starting to show great promise of being automated, or at least being greatly augmented.
Profoundly spiritual take. Why is that the task at hand?
The conceit that humans are machines carries with it such powerful ideology: humans are for something, we are some kind of utility, not just things in themselves, like birds and rocks. How is it anything other than an affirmation of metaphysical/theological purpose to particularly humans? Why is it like that? This must be coming from a religious context, right?
I, at least, cannot see how you could believe this while sustaining a rational, scientific mind about nature, cosmology, etc. Which is fine! We can all believe things, just know you can't have your cake and eat it too. Namely, if anybody should believe in fairies around here, it should probably be you!
Because it's boring stuff, and most of us would prefer to be playing golf/tennis/hanging out with friends/painting/etc. If you look at the history of humanity, we've been automating the boring stuff since the start. We don't automate the stuff we like.
Recognizing that humans, just like birds, are self-replicating biological machines is the most level-headed way of looking at it.
It is consistent with observations and there are no (apparent) contradictions.
The spiritual beliefs are the ones with the fairies, binding of the soul, made of a special substrate, beyond reason and understanding.
If you have a desire to improve the human condition (not everyone does) then the task at hand naturally arises - eliminate forced labour, aging, disease, suffering, death, etc.
This all naturally leads to automation and transhumanism.
I don’t think LLMs can easily find errors in their output.
There was a recent meme about asking LLMs to draw a wineglass full to the brim with wine.
Most really struggle with that instruction. No matter how much you ask them to correct themselves they can’t.
I’m sure they’ll get better with more input but what it reveals is that right now they definitely do not understand their own output.
I’ve seen no evidence that they are better with code than they are with images.
For instance, if the time to complete only scales with the length of the output and not with the complexity of its contents, then it's probably safe to assume it's not being comprehended.
In my experience, if you confuse an LLM by deviating from the "expected", then all the shims of logic seem to disappear, and it goes into hallucination mode.
Tbf that was exactly my point. An adult might use 'inference' and 'reasoning' to ask for clarification, or go with an internal logic of their choosing.
ChatGPT here went with lexicographical order in Python for some reason, and then proceeded to make false statements from false observations, while also defying its own internal logic.
"six" > "ten" is true because "six" comes after "ten" alphabetically.
No.
"ten" > "seven" is false because "ten" comes before "seven" alphabetically.
No.
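For reference, what Python's lexicographic comparison actually returns contradicts both verdicts and both "alphabetically" explanations:

```python
# Python compares strings character by character, by code point.
print("six" > "ten")    # False: 's' < 't', so "six" sorts before "ten"
print("ten" > "seven")  # True:  't' > 's', so "ten" sorts after "seven"
```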
From what I understand of LLMs (which - I admit - is not very much), logical reasoning isn't a property of LLMs, unlike information retrieval. I'm sure this problem can be solved at some point, but a good solution would need development of many more kinds of inference and logic engines than there are today.
Do you believe that the LLM understands what it is saying and is applying the logic that you interpret from its response, or do you think it's simply repeating similar patterns of words it's seen associated with the question you presented it?
If you take the time to build an (S?)LM yourself, you'll realize it's neither of these. "Understands" is an ill-defined term, as is "applying logic".
But an LLM is not "simply" doing anything. It's extremely complex and sophisticated. Once you go from tokens into high-dimensional embeddings... it seems these models (with enough training) figure out how all the concepts go together. I'd suggest reading the word2vec paper first, then think about how attention works. You'll come to the conclusion these things are likely to be able to beat humans at almost everything.
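As a toy illustration of the embedding idea (hand-made vectors, not real word2vec output): related concepts end up pointing in similar directions, and that geometry is what the later layers get to work with:

```python
import numpy as np

# Hand-made 3-dimensional "embeddings", purely illustrative -- real models
# learn vectors with hundreds or thousands of dimensions from data.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: close to 1.0 means same direction, near 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["king"], vectors["queen"]))  # high: related concepts
print(cosine(vectors["king"], vectors["apple"]))  # much lower
```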
I don't like the use of "hallucinate". It implies that LLMs have some kind of model of reality and sometimes get confused. They don't have any kind of model of anything; they cannot "hallucinate", they can only output wrong results.
that "hallucinate" term is a marketing gimmick to make it seem to the gullible that this "AI" (i.e. LLMs) can actually think, which is flat out BS.
as many others have said here on hn, those who stand to benefit a lot from this are the ones promoting this bullcrap idea (that they (LLMs) are intelligent).
greater fool theory.
picks and shovels.
etc.
In detective or murder novels, the cliche is "look for the woman".
in this case, "follow the money" is the translation, i.e. who really benefits (the investors and founders, the few), as opposed to who is grandly proclaimed to be the beneficiary (us, the many).
When it comes to bigness, there's grand and then there's grandiose. Both words can be used to describe something impressive in size, scope, or effect, but while grand may lend its noun a bit of dignity (i.e., “we had a grand time”), grandiose often implies a whiff of pretension.
Machines are intelligently designed for a purpose. Humans are born and grow up, have social lives, a moral status and are conscious, and are ultimately the product of a long line of mindless evolution that has no goals. Biology is not design. It's way messier.
I don't see how this is sustainable. We have essentially eaten the seed corn. These current LLMs have been trained by an enormous corpus of mostly human-generated technical knowledge from sources which we already know to be currently being polluted by AI-generated slop. We also have preliminary research into how poorly these models do when training on data generated by other LLMs. Sure, it can coast off of that initial training set for maybe 5 or more years, but where will the next giant set of unpolluted training data come from? I just don't see it, unless we get something better than LLMs which is closer to AGI or an entire industry is created to explicitly create curated training data to be fed to future models.
These tools also require the developer class that they are intended to replace to continue to do what they currently do (create the knowledge source to train the AI on). It's not like the AIs are going to be creating the accessible knowledge bases to train AIs on, especially for new language extensions/libraries/etc. This is a one and f'd development. It will give a one-time gain and then companies will be shocked when it falls apart and there are no developers trained up (because they all had to switch careers) to replace them. Unless Google's expectation is that all languages/development/libraries will just be static going forward.
One of my concerns is that AI may actually slow innovation in software development (tooling, languages, protocols, frameworks and libraries), because the opportunity cost of adopting them will increase, if AI remains unable to be taught new knowledge quickly.
It also bugs me that these tools will reduce the incentive to write better frameworks and language features if all the horrible boilerplate is just written by an LLM for us rather than finding ways to design systems which don't need it.
The idea that our current languages might be as far as we get is absolutely demoralising. I don't want a tool to help me write pointless boilerplate in a bad language, I want a better language.
It’s not, unless contexts get as large as comparable training materials. And you’d have to compile adequate materials. Clearly, just adding some documentation about $tool will not have the same effect as adding all the gigabytes of internet discussion and open source code regarding $tool that the model would otherwise have been trained on. This is similar to handing someone documentation and immediately asking questions about the tool, compared to asking someone who had years of experience with the tool.
Lastly, it’s also a huge waste of energy to feed the same information over and over again for each query.
You’re assuming that everything can be easily known from documentation. That’s far from the truth. A lot of what LLMs produce is informed by having been trained on large amounts of source code and large amounts of discussions where people have shared their knowledge from experience, which you can’t get from the documentation.
The LLM codegen at Google isn't unsupervised. It's integrated into the IDE as both autocomplete and prompt-based assistant, so you get a lot of feedback from a) what suggestions the human accepts and b) how they fix the suggestion when it's not perfect. So future iterations of the model won't be trained on LLM output, but on a mixture of human written code and human-corrected LLM output.
As a dev, I like it. It speeds up writing easy but tedious code. It's just a bit smarter version of the refactoring tools already common in IDEs...
I mean, what happens when a human doesn't realize the LLM-generated code is wrong, accepts the PR, and it becomes part of the corpus of 'safe' code?
maybe most of the code in the future will be very different from what we’re used to.
For instance, AI image processing/computer vision algorithms are being adopted very quickly, given the best ones are now mostly transformer networks.
My main gripe with this form of code generation is that is primarily used to generate “leaf” code. Code that will not be further adjusted or refactored into the right abstractions.
It is now very easy to sprinkle in regexes to validate user input, like email addresses, on every controller instead of using a central lib/utility for that.
In the hands of a skilled engineer it is a good tool. But for the rest it mainly serves to output more garbage at a higher rate.
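To make the contrast concrete, a minimal sketch of the centralized alternative (module and function names are hypothetical):

```python
# validators.py -- one shared definition that every controller imports,
# instead of each controller pasting its own ad-hoc regex.
import re

# Deliberately simple pattern; real email validation is a rabbit hole
# and often better delegated to a maintained library.
_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(value: str) -> bool:
    """Return True if value roughly looks like an email address."""
    return bool(_EMAIL_RE.match(value))
```

Controllers then call `is_valid_email(user_input)`, and if the rules change they change in one place.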
>It is now very easy to sprinkle in regexes to validate user input, like email addresses, on every controller instead of using a central lib/utility for that.
Some people are touting this as a major feature. "I don't have to pull in some dependency for a minor function - I can just have AI write that simple function for me." I, personally, don't see this as a net positive.
Yes, I have heard similar arguments before. It could be an argument for including the functionality in the standard lib for the language. There can be a long debate about dependencies, and then there is still the benefit of being able to vendor and prune them.
Because there seems to be a fundamental misunderstanding producing a lot of nonsense.
Of course LLMs are a fantastic tool to improve productivity, but current LLMs cannot produce anything novel. They can only reproduce what they have seen.
But they assist developers and collect novel coding experience from their projects all the time. Each application of an LLM creates feedback on the AI-generated code - the human might leave it as is, slightly change it, or refuse it.
> LLM based systems will be writing more and more code at all companies.
At Google, today, for sure.
I do believe we still are not across the road on this one.
> if this can be accompanied by an increase in software quality, which is possible. Right now it's very hit and miss
So, is it really a smart move for Google to enforce this today, before quality has increased? Or did this set them on the path to losing market share because their software quality will deteriorate further over the next couple of years?
From the outside it just seems Google and others have no choice, they must walk this path or lose market valuation.
I'm not really seeing this direction of travel. I hear a lot of claims, but they are always third-person. I don't know or work with any engineers who rely heavily on these tools for productivity. I don't even see any convincing videos on YouTube. Just show me one engineer sitting down with these tools for a couple of hours and writing a feature that would normally take a couple of days. I'll believe it when I see it.
Well, I rely on it a lot, but not in the IDE; I copy/paste my code and prompts between the IDE and the LLM. By now I have a library of prompts in each project that I can tweak and reuse. It makes me 25% up to 50% faster. Does this mean every project is done in 50-75% of the time? No, the actual completion time is maybe 10% faster, but I do get a lot more time to spend on thinking about the overall design instead of writing boilerplate and reading reference documents.
Why no YouTube videos, though? Well, most dev YouTubers are actual devs that cultivate an image of "I'm faster than an LLM, I never re-read library references, I memorise them on first read" and so on. If they then show you a video of how they forgot the syntax for this or that Maven plugin config, and how an LLM fills it in in 10s instead of a 5-minute Google search, that makes them look less capable on their own. Why would they do that?
Why don’t you read reference documents? The thing with bite-sized information is that it never gives you a coherent global view of the space. It’s like exploring a territory by crawling instead of using a map.