Anthropic also expects that superintelligence will be reached in the somewhat near future. The intuition is that there's just not that much distance between a chatbot that can manage a vending machine poorly and a chatbot that can manage it well.
Even if that intuition is correct and a bit more data and RLHF can fix the vending machine problem, I fail to see where the "super" comes in here. How the fk are they going to get superintelligent training data? A time machine?
You're getting at a deep point of disagreement: should we expect a modern or near-future LLM to be limited by the intelligence of the people who generated its training data? I don't think anyone claims to have a provably correct answer. There's one intuition that says no (why should it be impossible to derive new insights from data collected by people who didn't have those insights?) and another that says yes (how can a statistical average of N people's most likely responses be smarter than any of those N?).
To be clear, I personally don't think that current models point the way toward superintelligence. But it does us no good to pretend that this is some absurd opinion held only by a guy looking for the next world revolution after the Metaverse didn't work out. Zuckerberg thinks superintelligence is close because a number of experts actively engaged in the field say it's close. When you and I say it's not close, we're disagreeing not with crazy randos who can be dismissed out of hand, but with smart people who generally know what they're doing.