Hacker News

Anthropic also expects that superintelligence will be reached in the somewhat near future. The intuition is that there's just not that much distance between a chatbot that can manage a vending machine poorly and a chatbot that can manage it well.


Even if that intuition is correct and they can fix the vending machine with a bit more data and RLHF, I fail to see where the "super" comes in here. How the fk are they going to get superintelligent training data? A time machine?


You're getting at a deep point of disagreement: should we expect a modern or near-future LLM to be limited by the intelligence of the people who generated its training data? I don't think anyone claims to have a provably correct answer. There's one intuition that says yes (how can a statistical average of N people's most likely responses be smarter than any of those N?) and another that says no (why should it be impossible to derive new insights from data collected by people who didn't have those insights?).


I think you are reading a little too much into my comment but I understand where you are coming from. My point is that even if you agree with this:

  There’s just not that much distance between a chatbot that can manage a vending machine poorly and a chatbot that can manage it well.
it is a huge leap to conclude this:

  There’s not much distance between a chatbot that is as intelligent as a human and a chatbot that is more intelligent than a human.
But that seems to be what Anthropic is assuming.


that intuition is wrong


[citation needed]


exactly, the burden of proof is on the person making the claim

so SpicyLemonZest, not me


Never had a spicy lemon, so I'm already sceptical.


To be clear, I personally don't think that current models point the way towards superintelligence. But it does us no good to pretend that this is some absurd opinion from a guy who's casting about for the next big thing after the Metaverse didn't work out. Zuckerberg thinks superintelligence is close because a number of experts actively engaged in the field say that it's close. When you and I say it's not close, we're disagreeing not with crazy randos who can be dismissed out of hand, but with smart people who generally know what they're doing.


In what ways?



