> then why in the world is he trying so hard to make sure his dystopian worldview actually happens?
If he's also working under the assumption that AGI is inevitable, it would make sense to want to be first at it so it aligns with his values more, while also preparing for a post-AGI world.
I don't follow that logic. If he thinks that the development of an AGI is inevitable, why would it make sense to be the first one to do it, when he also clearly thinks that its existence is a grave danger? What does it matter who does it first? The results are the same regardless.
If I thought that AGI was anywhere close to imminent (which I don't), then this perspective seems to me like it has a great risk of being a self-fulfilling prophecy. Why risk being the one to bring the bad thing into existence? Wouldn't it make more sense to let someone else be the bad guy and instead focus on defense?
Because they believe the results are not the same regardless. AGI's impact on humanity will hinge on whether we are able to correctly impart human-loving values onto what is essentially an unhuman system, so-called "alignment." If we align AGI, it will make us obsolete but at least give us a good life. If we don't, and it has goals of its own and the superpower to subvert our attempts to thwart those goals, we will end up as ants are to Google. Not hated, but a tiny nuisance to be disregarded when a new data center needs to be built. The only defense is slowing down AI capabilities until alignment has been rigorously verified.
> Wouldn't it make more sense to let someone else be the bad guy and instead focus on defense?
Maybe an analogy is more like an unsafe building collapsing randomly versus a controlled demolition after getting everyone out? The thing is going to happen, but if you're the one to make it happen, you can prevent it from being as much of a disaster. Not all possible AGIs are equal; he sees it as ensuring that when there is an AGI, it's aligned to his values.
It's being defensive against bad-AGI by developing good-AGI first. It's not clear if it will work, but it's better than just hoping that setting up a good framework for UBI will keep you from being turned into paperclips.