
More like Claude Code's descendant has human-level autonomy with generalized superhuman abilities and is connected to everything. We task it with solving difficult global problems, but we can't predict how it will do so. The risk is that it optimizes one or more of those goals in a way that threatens human existence. For example, it might decide to keep increasing its capacity to solve the problems, and humans end up being in the way.

Or it gets militarized to defeat other powerful AI-enhanced militaries, and we get WW3.

More likely, though, AGI would cause an economic crash by automating too many jobs too quickly.
