I think what Andrej is describing is more "automation" than AGI. His discussion of self-driving is more analogous to robots building cars in a Tesla factory displacing workers than anything AGI. We've already had "self driving" trains where we got rid of the human train driver. Nothing "AGI" about that. The evolution of getting cars to self-drive doesn't necessarily make the entity controlling the car more human-like in its intelligence. It's more like meeting somewhere in between the human driver and the factory robot, +/- some technology.
So how to define AGI? I'm not sure economic value factors in here. I would lean towards a definition around problem solving. When computers can solve general problems as well as humans, that's AGI. You want to find a drug for cancer, or drive a car, or prove a math theorem, or write a computer program to accomplish something, or whatever problems humans solve all the time. (EDIT: or reason about what problems need to be solved as part of addressing other problems.) There are already classes of problems, like chess, where computers outperform humans. But then calculators did that for arithmetic a long time ago. The "G" part is whether we have a generalized computer that excels at everything.
It's a meaningless distinction. You basically get sucked into a "what has AI ever done for us?" style debate analogous to Monty Python's Life of Brian. It's impossible to resolve. But the irony of course is the huge and growing list of things it is actually doing quite nicely.
We'll have decently smart AIs before we nail down what that G actually means, should mean, absolutely cannot mean, etc. Which is usually what these threads on HN devolve into. Andrej Karpathy is basically sidestepping that debate and using self driving as a case study for two simple reasons: 1) we're already doing it (which is getting hard to deny or nitpick about) and 2) it requires a certain level of understanding of things around us that goes beyond traditional automation.
You are dismissing self driving as mere "automation". But that of course applies to just about everything we do with computers. Driving is sufficiently hard that it has taken some of the best minds many years to get there, and we're basically getting people like Andrej Karpathy and his colleagues from Google, Waymo, Microsoft, Tesla, etc. bootstrapping a whole new field of AI as a side effect. The whole reason we're even talking about AGI is those people. The things you list, most people cannot do either. Well over 99% of the people I meet are completely useless for any of those things. But I wouldn't call them stupid for that reason.
Some people even go as far as to say that we won't nail self driving without an AGI. But since we already have some self driving cars that are definitely not that intelligent yet, they are probably wrong. For varying definitions of the G in AGI.
> You basically get sucked into a "what has AI ever done for us?" style debate analogous to Monty Python's Life of Brian.
Except today the bit (which wasn’t really a debate in the sketch because everyone agreed) would start with real current negatives such as accelerating the spread of misinformation and getting artists fired. In your analogy, it would be as if they were asking “what have the Romans ever done for us” during the war. Doesn’t really work.
I don't consider people having to adjust a negative. We don't have a right to never have to adjust or adapt to a changing world. Things change, people adapt. Well some of them. The rest just gets old and dies off. Artists will be fine; so will everybody else. If anything, people will have a lot more time to do artistic things. More than ever probably, and possibly at a grander scale than past generations of artists could only dream about.
Misinformation, aka. propaganda, is as old as humanity. Probably even the Romans were whining about that back in the day. AIs are doing nothing new here. And it's not AIs spreading misinformation but people with an agenda that now use AIs as tools to generate it. People like that have always existed and they've always been creative users of whatever tools were available. We'll just have to deal with that as well and adapt.
> Things change, people adapt. Well some of them. The rest just gets old and dies off.
Which, continuing the analogy, is like watching your neighbour be slaughtered and defending the war by saying we’ll be fine because those who won’t be will eventually die. Sure, in a few generations we could be better off, but there are people living right now to think about. Those who dismiss it are the lucky ones who (think they) won’t be affected. But spare some empathy for your fellow human beings, dismissing their plight because they’ll eventually “grow old and die off” is not a solution and could even be labelled as cruel. Surely you’re not expecting them to read your words and go “yeah, they’re right, I’ll just roll over and die”.
> If anything, people will have a lot more time to do artistic things. More than ever probably, and possibly at a grander scale than past generations of artists could only dream about.
That’s an unproven utopian ideal with flimsy basis in reality. The owners of the technology think of one thing: personal profit. If humanity can benefit, that’s a side benefit. It’s definitely not something we should take for granted will happen.
> And it's not AIs spreading misinformation but people with an agenda that now use AIs as tools to generate it.
Correct. And they can do so at a much faster rate and higher accuracy than before. That is the issue. Dismissing that is like comparing a machine gun to a hand gun. The principle is the same but one of them is a bigger problem.
They’re a bigger problem because there are more of them and they’re easier to get. Which isn’t a metric that applies here. Analogies seldom map on every metric, they’re a tool for exemplification. In this case it’s like anyone having equal access to either a handgun or machine gun.
Even if the analogy were wrong, that wouldn’t make the point invalid. I know the point I’m making (and presumably so do you). Again, the analogy is for exemplification, it does not alter the original problem.
I don't think shitposts are the same thing as bullets, and choosing machine guns/handguns as your analogy is a poor exemplification, considering you could instead have chosen an IMO more apt fax machines/email analogy while making the same underlying point of "...much faster rate and higher accuracy than before..."
Yes, spam is worse with email, but we're still in a better place overall than before in my opinion.
While I agree that issues such as artists not being able to support themselves or rampant misinformation are ultimately contingent on social issues, I think we should try to mitigate the negative impact of AI in the meantime. Otherwise, there will be lasting consequences that won't be retroactively fixed by adapting.
Also, it may be that having powerful AI tools worsens the social problem by normalizing the generated art/misinformation.
I recall Norvig's AI book preaching decades ago that "intelligent" does not mean able to do everything, and that for an agent to be useful it was enough to solve a small problem.
Which in my mind is where the G came from.
And yet we now suddenly go back to the old narrow definition?
I still see no path from LLMs and autonomous driving to AGI.
> "Yeah, it seems as if he has forgotten the G. ... I still see no path from LLMs and autonomous driving to AGI."
That is exactly my view too. While LLMs and autonomous driving can be exceptionally good at what they do, they are also incredibly specialist, they completely lack anything along the lines of what you might call "common sense".
For example, (at least last time I looked) autonomous driving largely works off object detection at discrete time intervals, so objects can pop into and out of existence. Humans, by contrast, develop a sense of "object permanence" from a young age (i.e. they know that just because something is no longer visible doesn't mean it is no longer there), and many humans also know about the laws of physics (i.e. they know that if an object has a certain trajectory then there are probabilities and constraints on what can happen next).
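To make the contrast concrete, here is a minimal sketch of what per-frame "object permanence" could look like in a tracker: a track whose detection goes missing for a few frames is coasted along its last trajectory instead of vanishing. This is purely illustrative; the names (Track, MAX_MISSES, update_tracks) are made up and this is not how any real autonomy stack is implemented.

    # Minimal sketch (hypothetical, not a production system): keep a track
    # alive for a few frames after its detection disappears, extrapolating
    # its position from the last known velocity.
    from dataclasses import dataclass

    MAX_MISSES = 5       # frames a track may survive without a fresh detection
    MATCH_RADIUS = 2.0   # metres: naive nearest-neighbour gating

    @dataclass
    class Track:
        x: float
        y: float
        vx: float = 0.0
        vy: float = 0.0
        misses: int = 0

    def update_tracks(tracks: list[Track],
                      detections: list[tuple[float, float]],
                      dt: float = 0.1) -> list[Track]:
        """One update step: match detections to tracks, coast unmatched tracks."""
        unmatched = list(detections)
        for t in tracks:
            # Predict where the object should be now (constant-velocity model).
            px, py = t.x + t.vx * dt, t.y + t.vy * dt
            best = min(unmatched, default=None,
                       key=lambda d: (d[0] - px) ** 2 + (d[1] - py) ** 2)
            if best and ((best[0] - px) ** 2 + (best[1] - py) ** 2) ** 0.5 < MATCH_RADIUS:
                # Detection confirms the track: update state, reset miss count.
                t.vx, t.vy = (best[0] - t.x) / dt, (best[1] - t.y) / dt
                t.x, t.y, t.misses = best[0], best[1], 0
                unmatched.remove(best)
            else:
                # No detection this frame: the object hasn't ceased to exist,
                # so coast it along its trajectory and count the miss.
                t.x, t.y, t.misses = px, py, t.misses + 1
        # Start new tracks for leftover detections, drop long-lost tracks.
        tracks += [Track(x, y) for x, y in unmatched]
        return [t for t in tracks if t.misses <= MAX_MISSES]

Real perception stacks do something far more sophisticated (Kalman filters, learned association, and so on), but the principle is the same: the world model has to outlive any single frame of detections.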
Thanks, interesting read (it was a while ago I looked into this). I think the point still remains though - a self driving car doesn't have any general knowledge which can be applied to other areas, e.g. what a pedestrian is, or why a pedestrian who sees you is unlikely to step out in front of you. And similarly, the ordered tokens that an LLM outputs sometimes appear "stupid" because it has no "common sense".
Just like the term "AI" was co-opted and ruined, "AGI" has now been co-opted and ruined, and we're going to need a replacement term to describe that concept.
> I think what Andrej is describing is more "automation" than AGI
I think you're basically right - incrementally automating aspects of one human job. However, it really ought to require AGI, since I personally would never trust my life to an autonomous car that didn't have human-level ability to react appropriately to an out-of-training-set emergency.