
This answered a question I had: “I wonder what that guy who wrote that piece on ‘the superintelligence/fast-takeoff idea eating smart people’ thinks of all this new AI stuff.” Thanks HN!

I still can’t understand the “supersmart AI is so smart we can’t unplug it/patch it/restart it before it transfers itself into every pacemaker” idea.

Until these things are literally in bodies with some autonomy that allows them to control what happens to their brains, we will shut them off when they cause trouble.



Yeah, this is why the Cuban Missile Crisis was a total farce. Lol, to avoid catastrophe you just don’t push the button. Simple! The missiles don’t launch themselves, therefore no risk.


How do people join cults? How do people get radicalised? How come there are still shootings and terrorists? People can be convinced and coerced to do things by silver-tongued slick talkers promising great rewards, and some people would press the button regardless, given half a chance.


Actually, if an LLM could become good at propaganda, it could quickly come to rule the world. I never considered that angle before, but it’s legitimately scary.

UPDATE: Thought of a good clarifying analogy. In one of the sequels to “Ender’s Game”, Ender’s brother and sister adopted anonymous online personas and began writing. They were so skilled at politics and propaganda that they disrupted the entire world, and the brother soon became world leader.


Is that what happened to create the Cuban Missile Crisis?

Or WW1? Or Vietnam?

Nope. Just pretty much rational people making locally rational decisions inside a system where a series of rational decisions yielded catastrophic outcomes. It’s entirely possible, and history is full of such examples.


Just taking examples from history:

Why didn't we just "unplug" Hitler and Goebbels? Or Marshall Applewhite? You don't need a powerful physical body (or bodies) to cause tremendous amounts of harm before anyone can stop you. To most people of the time, Hitler was a persuasive, powerful voice on the radio, or words in a newspaper - things SOTA generative AI is already phenomenal at.


You’re being downvoted for mentioning the H-man (bad), but I think your analogy has some merit:

A super-smart AI may be intensely popular with many people in the way that some politicians are. It may understand us and speak to us on a seemingly personal level, the way the best politicians do. A lot of us could support the super-smart AI for that reason.


There is a difference between "X didn't happen" and "X wasn't possible".

Hitler could have been assassinated. It was tried multiple times:

https://en.wikipedia.org/wiki/List_of_assassination_attempts...

None of these attempts failed because assassinating Hitler was technically impossible; they failed due to chance, unfavorable conditions, intervention, human error, etc. Given enough attempts, eventually one of them would have succeeded.


What kind of upside-down bar for safety is that? "Hey, our car isn't a death trap - given enough collisions, someone will survive eventually!"

"We tried to shut down the AI multiple times. It killed many millions of people but eventually we did it! You see, AI is safe!"



