It is fantastic for CPU parallelism. The problem is that the CPU/GPU boundary is difficult to deal with, and exposing an API that is simultaneously fast, safe, and flexible is almost impossible.
I don't believe it's possible to make an efficient API at a similar level of abstraction to Vulkan or D3D12 that is safe (as in, not marked `unsafe` in Rust). To do so requires recreating all the complexity of D3D11- and OpenGL-style APIs to handle resource access synchronization.
The value proposition of D3D12 and Vulkan is that the guardrails are gone and it's up to the user to do the synchronization work themselves. The advantage is that the synchronization decisions can be made at a higher level of abstraction, where more assumptions can be made and enforced by a higher-level API. Generally this is more efficient, because you can use much simpler algorithms to decide when to emit your barriers, rather than having the driver reverse-engineer that high-level knowledge from the low-level command stream.
Rust is just not capable of representing the complex interwoven ownership and synchronization rules for using these APIs without mountains of runtime checks that suck away all the benefit. Lots of Vulkan maps quite well to Rust's ownership rules; the memory allocation API surface maps very well. But anything that's happening on the GPU timeline is pretty much impossible to do safely. Rust's type system is not sufficiently capable of modeling this stuff without tons of runtime checks, or making the API so awful to use nobody will bother.
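To make that concrete, here is a minimal sketch (using the `ash` bindings; the wrapper type and its fields are made up for illustration) of why the allocation side maps nicely onto ownership while the GPU timeline doesn't:

```rust
use ash::{vk, Device};

// The CPU-side lifetime maps cleanly onto Drop...
struct GpuBuffer {
    device: Device,
    buffer: vk::Buffer,
    memory: vk::DeviceMemory,
}

impl Drop for GpuBuffer {
    fn drop(&mut self) {
        // ...but the borrow checker can't see the GPU timeline: nothing stops
        // this Drop from running while a previously submitted command buffer
        // is still reading `self.buffer`. Making it safe means tracking
        // submissions at runtime (fences, timeline semaphores) or deferring
        // the destroy, which is exactly the cost being discussed.
        unsafe {
            self.device.destroy_buffer(self.buffer, None);
            self.device.free_memory(self.memory, None);
        }
    }
}
```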
I've seen GP around a lot and afaik they're using WGPU, which is, among other things, Firefox's WebGPU implementation. The abstraction that WebGPU provides is entirely the wrong level to most efficiently use Vulkan and D3D12 style APIs. WebGPU must be safe because it's meant to be exposed to JS in a browser, so it spends a boatload of CPU time doing all the runtime checks and working out the synchronization requirements.
Rust can be more challenging here, because if you want a safe API you have to be very careful about where you set the boundary between the unsafe internals and the safe API. And Rust's safety rails will be of limited use for the really difficult parts. I'm writing my own abstraction over Vulkan/D3D12/Metal, and I've intentionally decided not to make my API safe, leaving it to a higher layer to construct a safe API.
I'm currently writing a Vulkan renderer in Rust, and I decided against wgpu for this reason - its synchronization story is too blunt. But I don't necessarily agree that this style of programming is very much at odds with Rust's safety model, which is fundamentally an API design tool.
The key insight with Rust is to not try to use borrowing semantics unless the model actually matches, which it doesn't for GPU resources and command submission.
I'm modeling things using render graphs. Nodes in the graph declare what resources they use and how, such that pipeline barriers can be inserted between nodes. Resources may be owned by the render graph itself ("transient"), or externally by an asset system.
Barriers for transient resources can be statically computed when the render graph is built (no per-frame overhead, and often barriers can be elided completely). Barriers for shared resources (assets) must be computed based on some runtime state at submission time that indicates the GPU-side state of each resource (queue ownership etc.), and I don't see how any renderer that supports mutable assets or asset streaming can avoid that.
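In toy form, that build-time pass looks roughly like this (all names are invented for the sketch; a real implementation would also track image layouts, pipeline stages, and write-after-write hazards):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ResourceId(u32);

#[derive(Clone, Copy, PartialEq)]
enum Access {
    ColorWrite,
    ShaderRead,
    TransferWrite,
}

// A node declares what it touches and how.
struct Node {
    name: &'static str,
    accesses: Vec<(ResourceId, Access)>,
}

// Walk the nodes in submission order and emit a barrier wherever a
// resource's access changes between nodes. For transient resources this
// runs once at graph build time; back-to-back identical accesses produce
// no barrier in this sketch.
fn compute_barriers(nodes: &[Node]) -> Vec<(ResourceId, Access, Access)> {
    let mut last_access: HashMap<ResourceId, Access> = HashMap::new();
    let mut barriers = Vec::new();
    for node in nodes {
        for &(res, access) in &node.accesses {
            if let Some(&prev) = last_access.get(&res) {
                if prev != access {
                    barriers.push((res, prev, access));
                }
            }
            last_access.insert(res, access);
        }
    }
    barriers
}
```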
I don't think there's anything special about Rust here. Any high-level rendering API must decide on some convenient semantics, and map those to Vulkan API semantics. Nothing in Rust forces you to choose Rust's own borrowing model as those semantics, and consequently does not force you to do any more runtime validation than you would anywhere else.
> Lots of Vulkan maps quite well to Rust's ownership rules; the memory allocation API surface maps very well. But anything that's happening on the GPU timeline is pretty much impossible to do safely.
I agree with this, having dabbled with Vulkan and Rust for a few years now. Destructors and ownership can make a pretty ergonomic interface to the CPU side of GPU programming. It's "safe" as long as you don't screw up your GPU synchronization, which isn't perfect, but it's an improvement over "raw" graphics API calls (with little to no overhead).
As for the GPU timeline, I've been experimenting with timeline semaphores. E.g. all the images (and image views) in descriptor set D must be live as long as semaphore S has value less than X. This coupled with some kind of deletion queue could accurately track lifetimes of resources on the GPU timeline.
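In rough form the deletion queue ends up looking something like this (sketch only; the semaphore query is left to the caller, e.g. whatever vkGetSemaphoreCounterValue returned this frame, so the example stays API-agnostic):

```rust
// Resources parked here stay alive until the timeline semaphore has passed
// the value recorded alongside them.
struct DeletionQueue<R> {
    pending: Vec<(u64, R)>,
}

impl<R> DeletionQueue<R> {
    fn new() -> Self {
        Self { pending: Vec::new() }
    }

    /// Keep `resource` alive at least until the semaphore reaches `value`.
    fn defer(&mut self, value: u64, resource: R) {
        self.pending.push((value, resource));
    }

    /// Drop everything the GPU is provably done with. `completed` is the
    /// current counter value of the timeline semaphore.
    fn flush(&mut self, completed: u64) {
        self.pending.retain(|(value, _)| *value > completed);
    }
}
```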
On the other hand, basic applications and "small world" game engines have a simpler way out. Most resources have a pre-defined lifetime: they live as long as the application, the "loaded level", or the current frame. You might even use Rust lifetimes to track this (but I don't). This model is not applicable when streaming textures and geometry in and out of the GPU.
What I would really like to experiment with is using async Rust for GPU programming. Instead of using `epoll/kqueue/WaitForMultipleObjects` in the async runtime to switch between "green threads", the runtime could call `vkWaitSemaphores` with `VK_SEMAPHORE_WAIT_ANY_BIT` (sadly this function does not report which semaphore(s) were signaled). Each green thread would need its own semaphore, command pools, etc.
Unfortunately this would be a 6-12 month research project and I don't have that much free time at hand. It would also be quite an alien model for most graphics programmers so I don't think it would catch on. But it would be a fun research experiment to try.
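Very roughly, the building block I have in mind would be something like this (pure sketch: the reactor thread that actually sits in `vkWaitSemaphores` and publishes the counter value is left out, and none of this is an existing crate's API):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

// Shared view of one timeline semaphore.
struct TimelineState {
    completed: AtomicU64,       // last counter value observed by the reactor
    waiters: Mutex<Vec<Waker>>, // tasks parked on this semaphore
}

/// Future that resolves once the timeline semaphore reaches `target`.
struct GpuWait {
    state: Arc<TimelineState>,
    target: u64,
}

impl Future for GpuWait {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.state.completed.load(Ordering::Acquire) >= self.target {
            return Poll::Ready(());
        }
        // Park until the reactor observes progress on the semaphore.
        self.state.waiters.lock().unwrap().push(cx.waker().clone());
        // Re-check to avoid a lost wakeup if the reactor advanced in between.
        if self.state.completed.load(Ordering::Acquire) >= self.target {
            return Poll::Ready(());
        }
        Poll::Pending
    }
}

/// Called by the reactor thread after the semaphore advances
/// (e.g. after vkWaitSemaphores returns or the counter is re-queried).
fn publish_progress(state: &TimelineState, new_value: u64) {
    state.completed.store(new_value, Ordering::Release);
    for waker in state.waiters.lock().unwrap().drain(..) {
        waker.wake();
    }
}
```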
> As for the GPU timeline, I've been experimenting with timeline semaphores. E.g. all the images (and image views) in descriptor set D must be live as long as semaphore S has value less than X. This coupled with some kind of deletion queue could accurately track lifetimes of resources on the GPU timeline.
> What I would really like to experiment with is using async Rust for GPU programming.
Most of the waiting required is of the form "X can't proceed until A, B, D, and Q are done", plus "Y can't proceed until B, C, and R are done". This is not a good match for the async model.
That many-to-many dependency pattern keeps coming up in game work.
Outside the GPU, it appears when assets such as meshes and textures come from an external server or files, and are used in multiple displayed objects.
> The abstraction that WebGPU provides is entirely the wrong level to most efficiently use Vulkan and D3D12 style APIs.
I agree with this, although the WGPU people disagree.
There could be a Vulkan API for "modern Vulkan" - bindless only, dynamic rendering only, asset loading on a transfer queue only, multithreaded transfers. That would simplify things and potentially improve performance. But it would break code that's already running and would not work on some mobile devices.
We'll probably get that in the 2027-2030 period, as WebGPU devices catch up.
WGPU suffers from being limited to the feature set supported by all its back ends - DX12, Vulkan, Metal, WebGPU, and even OpenGL. It's amazing that it works, but a price was paid.
> But anything that's happening on the GPU timeline is pretty much impossible to do safely. Rust's type system is not sufficiently capable of modeling this stuff without tons of runtime checks, or making the API so awful to use nobody will bother.
I wonder if there's any thinking / research around what the PLT (programming language tech) would look like that could manage this. Depending on what kind of safety is sought, compile-time safety is not necessarily the only way to ensure this.
Of course it depends greatly on what kind of safety we are looking for (guaranteed to run without error vs memory-safe behaviour that might bail out in some cases, etc.).
I indeed wouldn't expect a safe approach here to necessarily be efficient. But you aren't forced to make everything safe, even if it would be nicer. I've seen some Rust Vulkan wrappers before that tried to do that, but as you say it comes at a cost.
So I'd guess you can always use raw Vulkan bindings and deal with the related unsafety, leaving the areas that aren't tied to synchronization to safer logic.
Dealing with hardware in general is unsafe, and GPUs are so complex that it's sort of expected.