
> There’s a very real/significant risk that AGI either literally destroys the human race

If this were true, intelligent people would have taken over society by now. Those in power will never relinquish it to a computer, just as they refuse to relinquish it to more competent people. For the vast majority of people, AI poses no risk; if anything, it will only help reveal the incompetence of the ruling class.



>> There’s a very real/significant risk that AGI either literally destroys the human race

> If this were true, intelligent people would have taken over society by now

The premise you're replying to (one I don't think I agree with) is that a true AGI would be so much smarter, so much more powerful, that it wouldn't even be accurate to describe it as merely "smarter".

You're probably smarter than a guy who recreationally huffs spray paint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.


To every other mammal, reptile, and fish, humans are the intelligence explosion. The fate of their species depends on our goodwill, since we have so utterly dominated the planet by means of our intelligence.

What's more, human intelligence is tied to the weakness of our flesh, and it is balanced by greed and ambition. Someone dumber than you can 'win' by stabbing you, and your intelligence ceases to exist.

Since we don't yet have the level of AGI we're discussing here, it's hard to say what its implementation will look like, but I find it hard to believe it would mimic the human model of intelligence tied to a single body. Far more likely is a hivemind of embodied agents that feed data back to processing centers, where it is captured in 'intelligence nodes' that push out updates: something like a hive of superintelligent bees.


>You're probably smarter than a guy who recreationally huffs spray paint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.

This is pseudoscientific nonsense. We have the very rigorous field of complexity theory to show how much improvement in solving various problems can be gained from increasing intelligence/compute power, and the vast majority of difficult problems benefit minimally from linear increases in compute. The idea of a higher "class" of intelligence is magical thinking, as it implies a superlinear increase in the ability to solve NP-complete problems from only a linear increase in computational power, which goes against the entirety of complexity theory.

It's essentially the religious belief that AI has the godlike power to make P=NP even if P != NP.
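To make the scaling point concrete, here's a rough sketch (the trillion-operation budget and brute-force subset-sum, an NP-complete problem, are arbitrary choices for illustration): with a 2^n search space, even ten times the compute only buys a handful of extra input items.

```python
def max_solvable_n(ops_budget):
    """Largest n such that brute-force subset-sum (checking all
    2**n subsets) fits within the given operation budget."""
    n = 0
    while 2 ** (n + 1) <= ops_budget:
        n += 1
    return n

base = 10 ** 12                   # say, a trillion operations
print(max_solvable_n(base))       # 39 items
print(max_solvable_n(10 * base))  # 43 items: 10x the compute, only +4 items
```

The exponential term swallows the linear budget increase, which is the sense in which "more compute" barely moves the needle on worst-case hard instances.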


Even if lots of real-world problems are intractable in the computational-complexity sense, that doesn't necessarily imply an upper limit on intelligence, or on solving those problems in a practical sense. The stated complexities are worst-case ones, and in the case of optimization problems, they apply to finding the absolutely and provably optimal solution.

In lots of real-world problems you don't necessarily run into worst cases, and it often doesn't matter if the solution is the absolute optimal one.

That's not to discredit computational complexity theory at all. It's interesting, and I think proofs about the limits of information processing required to solve computational problems have philosophical value; the theory may well be relevant to the limits of intelligence. But the fact that some problems are intractable, in the sense of provably always finding correct or optimal answers, doesn't mean we're near the limits of intelligence or problem-solving ability in the fuzzier business of finding practically useful solutions to lots of real-world cases.
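As a toy illustration of that gap between worst-case hardness and practical usefulness (the items and capacity below are made up): 0/1 knapsack is NP-hard, yet a cheap greedy heuristic lands within a few percent of the exact optimum on this instance.

```python
from itertools import combinations

def exact_knapsack(items, cap):
    """Exact optimum by checking all 2**n subsets (intractable for large n)."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for w, _ in combo) <= cap:
                best = max(best, sum(v for _, v in combo))
    return best

def greedy_knapsack(items, cap):
    """Heuristic: take items by value/weight ratio; fast, often near-optimal."""
    total = 0
    for w, v in sorted(items, key=lambda i: i[1] / i[0], reverse=True):
        if w <= cap:
            cap -= w
            total += v
    return total

items = [(12, 24), (7, 13), (11, 23), (8, 15), (9, 16)]  # (weight, value)
print(exact_knapsack(items, 26))   # 51
print(greedy_knapsack(items, 26))  # 47, about 92% of the optimum, in O(n log n)
```

Worst-case instances exist where the greedy answer is far worse, but on many practical inputs "good enough, fast" is exactly what matters.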


What does P=NP have to do with anything? Humans are incomparably smarter than other animals. There is no intelligence test a healthy human would lose to another animal. What is going to happen when agentic robots ascend to this level relative to us? This is what the GP is talking about.


Succeeding at intelligence tests is not the same thing as succeeding at survival, though. We have to be careful not to ascribe magical powers to intelligence: like anything else, it has benefits and tradeoffs, and it is unlikely to be intrinsically effective. It may only be effective insofar as it is built upon an expansive library of animal capabilities (which took far longer to evolve and may prove harder to reproduce); it is likely bottlenecked by experimental back-and-forth; and it is unclear how well it scales in the first place. Human intelligence may very well be the highest level of intelligence that is cost-effective.


Look up where the people in power got their college degrees from and then look up the SAT scores of admitted students from those colleges.


Of course intelligent people have taken over society.



