I agree that it's a technical challenge. And that it wouldn't work with a heavy/uncomfortable form factor. But c'mon, even if you don't agree with "so close", you see how we could get there with just a few more iterative improvements?
Criticizing XR for displaying redundant information feels a little like criticizing smart phones for needing expensive multi-touch screens. Or laptops for not hooking up to your desktop where all the power is. Or personal computers for not hooking up to the school's mainframe. In other, less snarky, terms: the cycles aren't wasted if they get you 9 monitors IMO :)
EDIT because apparently I'm obsessed with this topic: at my latest company the only program we ran on our laptops was Chrome, and all computing happened in the cloud.
Obviously not an option for all jobs/people/locations, but I'm confident such a setup would work for many of the current owners of heavy expensive "Pro" laptops. LLMs and their need for datacenter-level compute during training may accelerate this among certain dev circles, too.
No, we are nowhere near close to having a full-powered laptop in a form factor that could be worn on the face. I don't see any reasonable chance it will happen in the next 10 years, for example.
And I don't agree that that's a fair comparison.
First of all, because my main point was not to criticize AR/VR, but to point out that the display system itself will inevitably consume a huge portion of a laptop's compute cycles - so a pair of glasses with the same compute power as a laptop will be far slower in running any kind of software than the laptop. You might be able to display 9 huge windows in the AR space, but you will probably not be able to run 9 different programs at the same time.
Second of all, because smartphones and laptops are not actually wasting cycles compared to a desktop to achieve a version of the same workflow. They are more expensive for the same compute power, and they are more thermally-throttled, but they don't use extra compute power to achieve the same performance.
Compute cycles: CPU and GPU are very, very different here, so I'm not sure that logic holds. Unless you're doing work that's heavily GPU-bound and truly using the same resource (which describes a non-trivial number of jobs, but far from a majority; maybe even less than 1%?).
For the rest, it shouldn't cost much more compute than two 4K screens, and people already do that successfully. It doesn't do much to increase the latency or cost of running a million Chrome tabs or using other CPU- or IO-bound programs.
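Back-of-the-envelope, with numbers I'm assuming rather than quoting from a spec sheet (roughly 23M total pixels for a headset like the Vision Pro, which is about Apple's advertised figure), the raw pixel budget isn't wildly beyond two 4K monitors:

    // Rough pixel-budget comparison; the 23M figure is my assumption,
    // approximately the Vision Pro's advertised total across both eyes.
    let twoFourK = 2 * 3840 * 2160               // ~16.6M pixels for two 4K monitors
    let headsetPixels = 23_000_000               // ~23M pixels, both eyes combined
    let ratio = Double(headsetPixels) / Double(twoFourK)
    print(ratio)                                 // ~1.4x, before reprojection/compositing overhead

So the display itself is maybe 40% more pixels than a dual-4K desk setup, though admittedly that's before you count re-rendering at headset refresh rates.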
But I will agree that this all consumes more electricity, which is certainly a problem for mobile use.
I'm not sure how much of the image processing for AR passthrough, environment recognition and spatial placement of app windows is actually done on the GPU. I would expect at least some of it has to be done on the CPU.
Additionally, laptops are rarely known for stellar GPUs, so I'm also expecting some need to offload GPU work to the CPU, at least in terms of power and thermal budget.
> No, we are nowhere near close to having a full-powered laptop in a form factor that could be worn on the face. I don't see any reasonable chance it will happen in the next 10 years, for example.
Um.
The Vision Pro runs an M2, the same processor Apple puts in its best laptops. I think it's a huuuuge reach to just assume, without having used the device, that it must be visibly slow compared to a modern laptop. Most of the pixel pushing is done on the GPU and is totally independent of what limits your ability to run lots of apps. The M2 is already massively powerful for a typical laptop workload.
> You might be able to display 9 huge windows in the AR space, but you will probably not be able to run 9 different programs at the same time
What are you basing this on? You realize that iOS runs the same kernel as macOS, right? It has supported "multitasking" in the sense of "multiple processes" since day one. It's just that iOS has intentionally limited the UI so that only one app is focused, and it eagerly kills (to save battery) apps that you've tabbed away from, with a lot of help from the software to make sure apps can persist their state and quickly recover. The limitations of multitasking on iOS are not there because the CPU isn't powerful enough; they're there to save battery. And Apple's laptops use the same CPU and have the best battery life in the industry, so I don't see how you can arrive at the conclusion that a device with the same specs as their best laptops just obviously can't multitask.
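To make the "persist their state and quickly recover" part concrete, here's a minimal sketch of my own (not anything from Apple's XR stack): SwiftUI's @SceneStorage is one of the hooks that lets the OS evict a backgrounded app to save battery while the app still snaps back to where you left it.

    import SwiftUI

    // Minimal sketch: @SceneStorage persists lightweight UI state for the scene,
    // so even if the system kills the process in the background, the draft text
    // is restored the next time the app comes to the foreground.
    struct NotesView: View {
        @SceneStorage("draftText") private var draftText: String = ""

        var body: some View {
            TextEditor(text: $draftText)
                .padding()
        }
    }

    @main
    struct DemoApp: App {
        var body: some Scene {
            WindowGroup { NotesView() }
        }
    }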
Yeah. Your average U-CPU (what you'd get in a 13" or 15" ultrabook, and what we use) has a TDP of 9-28W. That's well within the range of what a modern phone can boost to.
Thanks for the response! I still don't agree with your conclusion that the nascent proXR industry is doomed, but I definitely concede the point that XR displays will be a strain on the computer they're hooked up to. I'm personally more interested in software than hardware, so I can't say I've done a careful analysis of the required compute myself - I'm trusting the words of (biased) experts on that one.
Re: the metaphor and your last paragraph, I think I was being unclear. I'm not saying that phones and laptops were wasting compute, I'm saying that they were both heavily criticized for not having enough compute when they were introduced; obviously now it's a simple money vs. portable compute tradeoff, which is my ultimate point - I see that same pattern holding in this case, too :)