While all those look like leaps in capability, they are quantitative rather than qualitative advances. For example, we've always had tools; now we can make very complex tools, i.e. machines. Additionally, those advances all developed over a long, or even very long, time, and they went hand-in-hand with similar advances in other technologies, not to mention scientific understanding.
That marks a crucial difference from the capability to develop superintelligence: we have no idea how to do it, and we've never yet created anything even remotely similar to it. It's impossible to see how it might happen just by mixing up some components and stirring well.
I'm not arguing for a fast AI takeoff this decade; 10k years ago we had no idea how to create a jet engine and had never created anything remotely similar to it, yet now we have. Saying "we've always had tools" in the sense of a flint axe doesn't feel like enough to make a jet engine inevitable. We've also always had tools of thought, like notches in wood, stone trails in the woods, or singing to help remember things, and we have very complex 3D world models and face recognition systems and so on - doesn't that make intelligent machines inevitable by the same argument?
Putting global collapse aside, another 10k years will pass, and another 10k after that. Is there good reason to think either that today is approximately as close to superintelligence as we can ever get (suspiciously arbitrary), or that the "next step" is so far out of reach that no lone genius, no thousand-year focused group, no brute force, no study of differences in human intelligence, no unethical human experiments, can ever climb it? "We don't know how to do it today" doesn't convince me. For the last 10k years we have hardly stopped understanding new things and making new things; that's more convincing.
All that is reasonable, but I have asked both "when" and "how" above. If we don't know "how" now, then "when" becomes the crucial question. That's because if superintelligent AI is 10k years away, then it might as well be impossible: we have no idea whether we will still have the same technological capability, or social structures, as today in 10k years. Also, any action we take now to avert AGI, or control it, or align it, or anything, will be pointless, because it will be forgotten long before 10k years have passed.
I'm not talking about global collapse, btw. I'm mainly expecting that scientific advances in the next couple of hundred years will leap-frog today's unscientific research into artificial intelligence. I'm guessing that we will eventually understand intelligence and its relation to computation, and that we will find out that today's ideas about artificial intelligence never made any sense, nor had any chance of leading to artificial intelligence of any sort.
You see, I trust science. And it's obvious to me that the current dominant paradigm of AI research is not science. So I don't believe for a second that that paradigm can really achieve anything approaching intelligence running on a digital computer. That sounds like a very hard thing, and the kind of very hard thing we can only do with science.