The author seems to be entirely unaware of the current crop of open source / public language models like LLaMA and Falcon. People are already engaging in the behavior that, according to the article,
might present an existential threat to humanity.
The existential threat argument is a ridiculous notion to begin with. Pretending open source LLMs don’t already exist makes the argument even sillier.
They're presented as a hypothetical scary technology, not something that actually exists today. That framing is important to the conversation. The author is acting like there's a decision to be made. Pretending that open source models don't exist yet makes the decision artificially more difficult.
Why not examine the usage of existing models and judge them on their merits?
Can you say more about why you think it is ridiculous?
AI is already increasingly helpful to AI researchers. One convincing argument for existential risk is that, if this trend continues, we could see exponential growth in the efficacy of AI research. Given that the goal of AI research is useful intelligence, it seems reasonable to say that an exponential increase in AI research will produce at least a linear increase in useful intelligence.
Another argument is that there are breakthrough discoveries ahead that will dramatically increase useful intelligence, or dramatically increase the rate of increase. We've had some of these already, so it seems likely there are others awaiting us. And it's impossible to say with any certainty that there is no chain of such discoveries leading to human or superhuman intelligence. All we can do is try to estimate the odds of it happening.
Both arguments point pretty clearly to an existential threat, because as soon as you have even human-level intelligence integrated that seamlessly with the power of conventional computing, you have something much more powerful than most (if not all) human organizations. And once that power exists, there's a non-zero risk that it will pursue some objective that is harmful to us, in a comprehensive enough way that it also aims to stop us from being able to stop it.
We don’t have a solid understanding of what it means to be intelligent, and despite the advances with LLMs, we have yet to produce anything close to general cognition.
I wouldn’t say dangerous AI is out of the realm of possibility, only that we are far from a point technologically where it makes sense to have the discussion. It would be like speculating about the dangers of nuclear energy before the discovery of the atom.
Our current models do not pose an existential threat to humanity. Period. And yet here we are, discussing the merits of banning open source LLMs. Language modeling is genuinely useful. I’m worried that the AI hysteria will push us toward a future where the best models are needlessly regulated and controlled by corporations.
>Both arguments point pretty clearly to an existential threat, because as soon as you have even human-level intelligence integrated that seamlessly with the power of conventional computing, you have something much more powerful than most (if not all) human organizations.
Incorrect. Power comes from the ability to command resources, not from intelligence. The United States military is the most powerful human organization because it can marshal the largest arsenal of deadly weapons. Some rinky-dink AI company cannot do this.
I think it's safe to say that, if a large entity with a monopoly on force wants to stop these models from being used, it probably could. We already have a global surveillance machine watching us all, and normalized content takedowns for lesser reasons like "copyright infringement" and "obscene/exploitative material". Actual manufacturing and distribution of compute power is fairly consolidated, and hacker culture seems to have lost its activist edge in demanding legal rights around general-purpose computing. The future seems bleak if your threat model includes state actors enforcing AI regulation over state-level safety concerns (terrorism, WMD production, large-scale foreign propaganda, etc.).
He seems to be well aware of the idea that diverse models will exist.
But what happens if regulation makes using these models more legally dangerous than using a few "well regulated" models approved by some sanctioning entity?