For a company that is actively pursuing AGI (and probably the #1 contender to get there), this type of behaviour is extremely concerning.
There’s a very real/significant risk that AGI either literally destroys the human race, or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company with a charter that would firmly put the needs of humanity over their own pocketbooks, highly focused on the alignment problem. Instead they’ve closed up, become your standard company looking to make themselves ultra wealthy, and they seem like an extra vicious, “win at any cost” one at that. This plus their AI alignment people leaving in droves (and being muzzled on the way out) should be scary to pretty much everyone.
Even more than from these egregious gag contracts, OpenAI benefits from the image that they are on the cusp of world-destroying science fiction. This meme needs to die: if AGI is possible at all, it won't be achieved any time in the foreseeable future, and it certainly will not emerge from quadratic-time brute force over a fraction of the text and images scraped from the internet.
Clearly we don’t know when/if AGI would happen, but the expectation of many people working in the field is that it will arrive in what qualifies as the ‘near future’. It probably won’t result from just scaling LLMs, but then that’s why there are a lot of researchers trying to find the next significant advancement, in parallel with others trying to commercially exploit LLMs.
I don't think we disagree, but I will say that "a handful of people in SF and AZ taking rides in cars that are remotely monitored 24/7" is not the drivers-are-obsolete-now, near-term future being promised in 2016. Remember the panic because long-haul truckers were going to be unemployed Real Soon Now? I do.
Back then, I said that the future of self-driving is likely to be the growth in capability of "driver assistance" features to an asymptotic point that we will re-define as "level 5" in the distant future (or perhaps: the "levels" will be memory-holed altogether, only to reappear in retrospective, "look how goofy we were" articles, like the ones that pop up now about nuclear airplanes and whatnot). I still think that is true.
Self-driving taxis are available in only a handful of cities around the world. That is hardly the progress that was promised. And how often are those taxis secretly controlled by an Indian call center?
Sure, but blanket pessimism isn't very insightful either. I'll use the same example you did: self-driving. The public (or "median nerd") consensus has shifted from "right around the corner" (when it struggled to lane-follow if the paint wasn't sharp) to "it's a scam and will never work," even as it has taken off with the other types of AI and started hopping hurdles every month that naysayers said would take decades. Negotiating right-of-way, inferring intent, handling obstructed and ad-hoc roadways... the nasty intractables turned out to not be intractable, but sentiment has not caught up.
For one where the pessimist consensus has already folded, see: coherent image/movie generation and multi-modality. There were loads of pessimists calling people idiots for believing in the possibility. Then it happened. Turns out an image really is worth 16x16 words.
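For anyone who missed the reference: "an image is worth 16x16 words" is the title of the Vision Transformer paper, and its core move is slicing an image into 16x16 pixel patches and feeding them to a transformer as if they were word tokens. A minimal numpy sketch of that patchify step (my own illustration, not the paper's code):

    import numpy as np

    def patchify(image, patch=16):
        # Split an (H, W, C) image into a sequence of flattened 16x16 patches,
        # i.e. the "words" a vision transformer reads.
        h, w, c = image.shape
        assert h % patch == 0 and w % patch == 0
        return (image
                .reshape(h // patch, patch, w // patch, patch, c)
                .transpose(0, 2, 1, 3, 4)
                .reshape(-1, patch * patch * c))

    print(patchify(np.zeros((224, 224, 3))).shape)  # (196, 768): 196 patches, 768 numbers each

A 224x224 RGB image becomes a 196-token "sentence".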
Pessimism isn't insight. There is no substitute for the hard work of "try and see."
The same thing happened with nuclear fusion. People working on it have been saying sustainable fusion power is right around the corner for decades, and we still don't have it.
And it _could_ be just one clever breakthrough away, and that could happen tomorrow, or it could be centuries away. There's no way to know.
>but the expectation of many people working in the field is that it will arrive in what qualifies as the ‘near future’.
They think this because it serves their interest in attracting an enormous amount of attention and money to an industry they personally stand to make millions of dollars from.
My money is firmly on environmental/climate collapse wiping out most of humanity in the next 50-100 years, hundreds of years before anything like an AGI possibly could.
Ah yes, the “our brains are somehow inherently special” coalition. Hand-waving away the capabilities of LLMs as dumb math while not having a single clue about the math that underlies our own brains’ functionality.
I don’t know if you’re conflating capability with consciousness but frankly it doesn’t matter if the thing knows it’s alive if it still makes everyone obsolete.
This isn't a question of understanding the brain. We don't even have a theory of AGI, the idea that LLMs are somehow anywhere near even approaching an existential threat to humanity is science fiction.
LLMs are a super impressive advancement, like calculators for text, but if you want to force the discussion into a grandiose context then they're easy to dismiss. Sure, their outputs appear remarkably coherent through sheer brute force, but at the end of the day their fundamental nature makes them unsuitable for any task where precision is necessary. Even as just a chatbot, the facade breaks down with a bit of poking and prodding or just unlucky RNG. The only threat LLMs present is the risk that people will introduce their outputs into safety-critical systems.
> or makes life much shittier for most humans by making most of us obsolete
I'm not sure this is true. If all the things people are doing are done so much more cheaply they're almost free, that would be good for us, as we're the buyers as well as the workers.
> If all the things people are doing are done so much more cheaply they're almost free, that would be good for us ...
Doesn't this tend to become "they're almost free to produce" while the actual pricing for end consumers doesn't get any cheaper, with sellers just expanding their margins instead?
I'm sure businesses will capture some of the value, but is there any reason to assume they'll capture all or even most of it?
Over the last ~50 years, worker productivity is up ~250%[0], profits (within the S&P 500) are up ~100%[1], and real personal (not household) income is up 150%[2].
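As a quick sanity check on what those headline numbers mean per year (my own back-of-the-envelope arithmetic assuming simple compounding over 50 years, not something from the cited sources):

    years = 50
    for label, growth in [("productivity", 2.50),
                          ("S&P 500 profits", 1.00),
                          ("real personal income", 1.50)]:
        annual = (1 + growth) ** (1 / years) - 1
        print(f"{label}: ~{annual:.1%} per year")

That works out to roughly 2.5%, 1.4%, and 1.8% per year respectively.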
It should go without saying that a large part of the rise in profits is attributable to the rise of tech. It shouldn't surprise anyone that margins are higher on digital widgets than physical ones!
Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.
> Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.
This does not make sense to me. While a higher profit margin is a signal to others that they can earn money by selling equivalent goods and services at lower prices, it is not inevitable that they will be able to. And even if they are, it behooves a seller to take advantage of the higher margins while they can.
Earning less money now in the hopes of competitors being dissuaded from entering the market seems like a poor strategy.
The premise wasn't that there weren't competitors already, I don't think. With most things the price is (usually) floored by the cost of production, ceilinged by the value it provides people, and then competition is what moves it from the latter to closer to the former.
> The higher your margins, the more attractive your market becomes to would-be competitors.
Only in very simplistic theory. :(
In practical terms, businesses with high margins seem able to afford government protection (aka "buy some politicians").
So they lock out competition, and with their market captured, price gouging (or close to it) is the order of the day.
Not really sure why anyone thinks the playbook would be any different just because "AI" is used on the production side. It's still the same people making the calls, just with extra tools available to them.
This is also pretty simplistic. All the progress that's been made on a variety of fronts implies that we don't have loads of static, locked-in businesses that bribe bureaucrats.
We won't be buyers anymore if we aren't getting paid to work.
Perhaps some kind of guaranteed minimum income would be implemented, but we would probably see a shrinkage or complete destruction of the middle class, and massive increases in wealth inequality.
In an ideal world where GPUs are a commodity, yes. Btw, at least today AI is owned/controlled by the rich and powerful, and that's where the majority of the research dollars are coming from. Why would they just relinquish AI so generously?
With an ever-expanding AI, everything should be quickly commoditized, including reductions in the energy needed to run AI and in the cost of energy itself (i.e. viable commercial fusion or otherwise).
That's the thing I am struggling with. I agree things will improve exponentially with AI. What I am not seeing is who will actually capture the value. Or rather, how will those other than the rich and powerful get to partake in this value capture? Take viable commercial fusion, for example. Best case, it ends up looking like another PG&E. Worst case, it is owned by yet another Musk-like gatekeeper. How do you see this being truly democratized and accessible for the masses?
The most rosy outcome would be benevolent state actors controlling it, with the value captured simply going to everyone as the costs of everything (food, energy, housing, etc.) go to zero. It would be post-capitalist, post-consumer.
Of course the problem is whether or not it could be controlled, and in that case, the best hope is simply 'it' being benevolent and naturally incentivized to create such a utopia.
Where are you getting the energy and land for these AIs to consume and turn into goods?
Moreover, if you build such a magically powerful AI as you've described, the number one thing some rich, controlling asshole with more AI than you would do is create an army and take what they want, because AI does nothing to solve human greed.
Up to the point of AGI, most productivity increases have resulted in less physical / menial labor, and more white collar work. If AGI is smarter than most humans, the pendulum will swing the other way, and more humans will have to work physical / menial jobs.
Could higher-level former employees with more at stake pool together comp for lower-level ones with much less at stake, so they can speak to it? Obviously they may not be privy to some things, but there’s likely lots to go around.
> There’s a very real/significant risk that AGI either literally destroys the human race
If this were true, intelligent people would have taken over society by now. Those in power will never relinquish it to a computer just as they refuse to relinquish it to more competent people. For the vast majority of people, AI not only doesn't pose a risk but will only help reveal the incompetence of the ruling class.
>> There’s a very real/significant risk that AGI either literally destroys the human race
> If this were true, intelligent people would have taken over society by now
The premise you're replying to - one I don't think I agree with - is that a true AGI would be so much smarter, so much more powerful, that it wouldn't be accurate to describe it as merely "smarter".
You're probably smarter than a guy who recreationally huffs spraypaint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.
To every other mammal, reptile, and fish, humans are the intelligence explosion. The fate of their species depends on our goodwill, since we have so utterly dominated the planet by means of our intelligence.
Moreover, human intelligence is tied to the weakness of our flesh. Human intelligence is also balanced by greed and ambition. Someone dumber than you can 'win' by stabbing you, and your intelligence ceases to exist.
Since we don't have the level of AGI we're discussing here yet, it's hard to say what it will look like in its implementation, but I find it hard to believe it would mimic the human model of its intelligence being tied to one body. A hivemind of embodied agents that feed data back into processing centers to be captured in 'intelligence nodes' that push out updates seems way more likely. More like a hive of super intelligent bees.
>You're probably smarter than a guy who recreationally huffs spraypaint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.
This is pseudoscientific nonsense. We have the very rigorous field of complexity theory to show how much improvement in solving various problems can be gained from further increasing intelligence/compute power, and the vast majority of difficult problems benefit minimally from linear increases in compute. The idea of there being a higher "class" of intelligence is magical thinking, as it implies there could be a superlinear increase in the ability to solve NP-complete problems from only a linear increase in computational power, which goes against the entirety of complexity theory.
It's essentially the religious belief that AI has the godlike power to make P=NP even if P != NP.
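To put rough numbers on that point, here's a toy Python sketch of my own, using naive brute-force SAT search as the stand-in NP-complete workload (the 2**n is just the size of the search space, not any particular solver):

    import math

    # Brute-force SAT over n variables means checking up to 2**n assignments,
    # so a k-fold increase in compute only buys about log2(k) extra variables.
    def extra_variables(speedup):
        return math.log2(speedup)

    for speedup in (10, 1_000, 1_000_000):
        print(f"{speedup:>9,}x more compute -> ~{extra_variables(speedup):.1f} more variables")

A millionfold increase in raw compute absorbs only about 20 extra variables against exponential blow-up, which is the sense in which linear gains barely move the needle.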
Even if lots of real-world problems are intractable in the computational complexity theory sense, that doesn't necessarily mean an upper limit to intelligence or to being able to solve those problems in a practical sense. The complexities are worst-case ones, and in case of optimization problems, they're for finding the absolutely and provably optimal solution.
In lots of real-world problems you don't necessarily run into worst cases, and it often doesn't matter if the solution is the absolute optimal one.
That's not to discredit computational complexity theory at all. It's interesting and I think proofs about the limits of information processing required for solving computational problems do have philosophical value, and the theory might be relevant to the limits of intelligence. But just because some problems are intractable in terms of provably always finding correct or optimal answers doesn't mean we're near the limits of intelligence or problem-solving ability in that fuzzy area of finding practically useful solutions to lots of real-world cases.
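To make the worst-case vs. typical-case gap concrete, here's a toy sketch of my own, with vertex cover standing in for an NP-hard problem: the greedy take-both-endpoints heuristic carries only a 2x worst-case guarantee, but on any concrete instance you can simply measure how close it gets instead of reasoning from the worst case.

    import itertools, random

    def greedy_cover(edges):
        # Classic 2-approximation: take both endpoints of any uncovered edge.
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    def optimal_cover(nodes, edges):
        # Exhaustive search over subsets: fine at this size, hopeless in general.
        for size in range(len(nodes) + 1):
            for subset in itertools.combinations(nodes, size):
                s = set(subset)
                if all(u in s or v in s for u, v in edges):
                    return s
        return set(nodes)

    random.seed(0)
    nodes = list(range(12))
    edges = [e for e in itertools.combinations(nodes, 2) if random.random() < 0.25]
    print(len(greedy_cover(edges)), len(optimal_cover(nodes, edges)))

Worst-case hardness tells you what an adversary could construct, not how hard the instances you actually encounter turn out to be.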
What does P=NP have to do with anything? Humans are incomparably smarter than other animals. There is no intelligence test a healthy human would lose to another animal. What is going to happen when agentic robots ascend to this level relative to us? This is what the GP is talking about.
Succeeding at intelligence tests is not the same thing as succeeding at survival, though. We have to be careful not to ascribe magical powers to intelligence: like anything else, it has benefits and tradeoffs and it is unlikely that it is intrinsically effective. It might only be effective insofar that it is built upon an expansive library of animal capabilities (which took far longer to evolve and may turn out to be harder to reproduce), it is likely bottlenecked by experimental back-and-forth, and it is unclear how well it scales in the first place. Human intelligence may very well be the highest level of intelligence that is cost-effective.