Ha. Do people understand that time for humanity to save itself is running out? What is the point of having a superhuman AGI if there's no human civilization left for it to help?
"We can totally control an entity with 10^x faster and stronger intelligence than us. There is no way this could go wrong, in fact we should spend all of our money building it as soon as possible."
> We can totally control an entity with 10^x faster and stronger intelligence than us.
Unless you're referencing an unreleased model that can count the number of 'r' occurrences in "strawberry", I don't think we're even dealing with 0.01*10^x intelligence right now. Maybe not even 0.001x, depending on how bad a Chomsky apologist you are.
An equal but faster and more numerous intelligence will still mop the floor with you.
If you pit organization A with Y engineers against organization B with 100Y engineers (who also think 100x faster and never need sleep), who do you think will win?
Even a 0.3x-strength intelligence might beat you. Maybe it can't invent nukes, but if it brute-forces its way to inventing steel weapons while you're still working on agriculture, you still lose.
No, a true quantum computer will not necessarily solve NP-complete (NPC) problems efficiently. Quantum algorithms like Grover’s provide quadratic speedups, but this is insufficient to turn exponential-time solutions into polynomial-time ones. While quantum computers excel in specific tasks (e.g., Shor’s algorithm for factoring), there’s no evidence they can solve all NP-complete problems efficiently.
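To put rough numbers on why a quadratic speedup doesn't rescue you from exponential blowup, here's a back-of-the-envelope sketch in Python (the SAT framing and the sample variable counts are just illustrative assumptions, not measurements of any real machine):

```python
import math

# Brute-force search over the assignments of an n-variable SAT instance
# examines ~2^n candidates; Grover's algorithm needs ~sqrt(2^n) = 2^(n/2)
# oracle queries. The square root only halves the exponent -- the cost
# is still exponential in n.
for n in (40, 80, 160):
    classical = 2 ** n
    grover = math.isqrt(classical)  # floor(2^(n/2))
    print(f"n={n}: classical ~{classical:.2e}, Grover ~{grover:.2e}")
```

Doubling n from 80 to 160 takes the Grover cost from ~1e12 to ~1e24 queries, so the asymptotic wall is still there, just shifted.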
Current complexity theory suggests that BQP, the class of problems solvable by quantum computers in polynomial time, does not encompass NP. Quantum computers may aid in approximations or heuristics for NPC problems but won't fundamentally resolve them in polynomial time unless NP ⊆ BQP, which remains unlikely.
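And for intuition about where the quadratic speedup comes from, here's a toy statevector simulation of Grover's algorithm, a minimal sketch in plain numpy (the function name, qubit count, and marked index are all made up for illustration):

```python
import numpy as np

def grover_search(n_qubits: int, marked: int):
    """Toy statevector simulation of Grover search over N = 2^n items."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))          # uniform superposition
    iters = int(round(np.pi / 4 * np.sqrt(N)))  # optimal iteration count ~ sqrt(N)
    for _ in range(iters):
        state[marked] *= -1                     # oracle: phase-flip the target
        state = 2 * state.mean() - state        # diffusion: inversion about the mean
    return int(np.argmax(state ** 2)), iters

found, iters = grover_search(10, marked=637)
print(found, iters)  # finds 637 after ~25 iterations; classical search expects ~512 probes
```

Each iteration nudges a little amplitude onto the marked item, which is why you need ~sqrt(N) of them. That's the entire speedup, and it's exactly quadratic, nothing more.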