This is highly unlikely to be a mechanical limitation of the robotic arms. As others have said, it's more likely an inference-speed limitation: the model is understanding, reacting, and producing outputs as fast as its supporting hardware allows.
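A rough back-of-the-envelope sketch of why that bottlenecks motion (all numbers are illustrative assumptions, not figures from any particular demo):

```python
# Back-of-the-envelope: inference latency vs. control rate.
# All numbers below are assumptions for illustration, not measurements.

inference_latency_s = 0.20   # assumed: one forward pass of a large vision-language-action model
classic_loop_hz = 500        # assumed: typical low-level joint-controller rate

model_hz = 1.0 / inference_latency_s
print(f"model-driven action rate: {model_hz:.0f} Hz")     # -> 5 Hz
print(f"classic control loop:     {classic_loop_hz} Hz")  # -> 500 Hz
print(f"gap: {classic_loop_hz / model_hz:.0f}x")          # -> 100x
```

If the model can only emit a few actions per second while a conventional controller runs hundreds of times faster, the arm will look sluggish no matter how capable the hardware is.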
But that all just poofs away in a year or two as inferencing hardware gets better/faster. And for many use cases, the slowness/awkwardness doesn't really matter as long as the job gets done.
"AI working in meatspace" was supposed to be hard, and its rapidly becoming clear that isn't going to be the case at all.
Robot demos have been stuck at that sort of speed for more than a decade and they didn't have to wait for a giant LLM to do inference. Why's that going to get any better now? And just in a year or two?