This issue captures my thinking on 3D, but I'm open to discussion, especially if people bring a lot of energy and motivation.
There's a lot of interest in allowing access to 3D graphics from druid, but at the same time it's not in any way blocking Runebender, so it's hard to prioritize.
That said, one reason I'm very interested in 3D APIs is to support better 2D rendering. There are at least two paths to that right now: Pathfinder and piet-gpu. These take fairly different approaches, as Pathfinder is designed for compatibility with installed GPU hardware and drivers, while piet-gpu explores cutting-edge compute capabilities. Pathfinder now exposes enough of the 2D imaging model that we could consider it, and a Pathfinder piet backend is already in progress.
Thus, we need to consider the approach to 3D in layers. The first question is what druid-shell should expose; that layer can be consumed by druid to provide the best piet experience possible, even if druid itself exposes no 3D. The second question is what druid should expose to applications.
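To make the layering concrete, here is a minimal sketch of the kind of thing druid-shell might expose. All names here are hypothetical, not actual druid-shell API; I'm assuming something like the raw-window-handle crate (pre-0.6 API) as the lowest common denominator for handing a platform window to a GPU stack:

```rust
// Hypothetical sketch only. The shell layer answers "can this window back
// a GPU surface?"; druid decides what to do with the answer (accelerate
// piet, expose 3D to apps, or neither).
use raw_window_handle::HasRawWindowHandle;

/// Hypothetical extension trait on the shell's window handle.
pub trait GpuWindow: HasRawWindowHandle {
    /// True if the platform brought up a usable GPU device; when false,
    /// druid falls back to the existing pure-2D piet path.
    fn gpu_available(&self) -> bool;
}
```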
There is also the question of complexity. There are many, many 3D APIs out there, with a complex matrix of compatibility and capabilities. Any approach to 3D must involve runtime detection, with some sort of fallback. Adding to the complexity, using 3D codepaths creates integration problems unique to desktop apps, not shared by the more typical game use cases: incremental present capability, low-latency present modes (based on wait objects), smooth resize, etc.
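As a strawman, the runtime detection might boil down to a capability record plus a fallback, along these lines (everything here is hypothetical, just to fix ideas):

```rust
/// Hypothetical capability record filled in by runtime probing.
struct GpuCaps {
    incremental_present: bool, // can we present only damaged regions?
    low_latency_present: bool, // wait-object-based present available?
    smooth_resize: bool,       // can we resize without visual glitches?
}

/// Hypothetical negotiation result: a GPU path with known capabilities,
/// or a software fallback when probing fails.
enum Renderer {
    Gpu(GpuCaps),
    Software,
}

fn negotiate(probe: impl Fn() -> Option<GpuCaps>) -> Renderer {
    match probe() {
        Some(caps) => Renderer::Gpu(caps),
        None => Renderer::Software,
    }
}
```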
One very appealing approach is to adopt wgpu as the primary integration point. The runtime detection would be wgpu or no wgpu, plus of course finer-grained feature detection as provided by wgpu. Not all platforms can support wgpu, but compatibility work is envisioned (according to the wgpu web page, OpenGL is currently unsupported but in progress).
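With wgpu, the "wgpu or no wgpu" check plus feature detection would look roughly like the following. The exact API surface has shifted across wgpu's 0.x releases, so treat this as a sketch rather than code against a specific version:

```rust
// Sketch only: details (Instance::new arguments, async plumbing) may
// differ in your wgpu version.
async fn detect_wgpu() -> Option<(wgpu::Adapter, wgpu::Features)> {
    let instance = wgpu::Instance::new(wgpu::Backends::all());
    // None here means no usable backend/adapter on this platform:
    // fall back to the pure-2D path.
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await?;
    // Finer-grained feature detection, as provided by wgpu.
    let features = adapter.features();
    Some((adapter, features))
}
```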
There is another question of how to composite 3D content with the GUI. Again, there are two main approaches. One is to leverage the compositor capabilities of the platform, keeping loose coupling between the 2D and 3D pathways. The other is to use a GPU-resident texture as the integration point. This would involve synchronization primitives to signal a frame request to the 3D subsystem (and similarly to negotiate resizes, which can get quite tricky in the presence of asynchrony), and a semaphore or fence of some kind to signal back to the 2D world that the texture is ready. The 2D world can then consume that texture as it likes: applying clipping and translation (needed for scrolling), drawing other UI on top of it, and so on. My preference is fairly strongly for the latter, though as always there are tradeoffs.
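To illustrate the shape of that handshake, here is a CPU-side sketch with stand-ins for the GPU objects (everything here is hypothetical; a real version would carry an actual texture handle and a real fence/semaphore):

```rust
use std::sync::mpsc;

struct FrameRequest {
    width: u32,  // size negotiation rides along with the request,
    height: u32, // which is where resize gets tricky under asynchrony
}

struct FrameReady {
    texture: u64, // stand-in for a GPU-resident texture handle
    fence: Fence, // signalled by the 3D subsystem when rendering is done
}

struct Fence; // placeholder for a real GPU fence/semaphore
impl Fence {
    fn wait(&self) { /* block until the GPU signals completion */ }
}

fn main() {
    let (req_tx, req_rx) = mpsc::channel::<FrameRequest>();
    let (done_tx, done_rx) = mpsc::channel::<FrameReady>();

    // 3D subsystem: renders on its own thread at the requested size.
    std::thread::spawn(move || {
        for req in req_rx {
            let _ = (req.width, req.height); // render into the texture here
            done_tx.send(FrameReady { texture: 0, fence: Fence }).ok();
        }
    });

    // 2D world: request a frame, wait on the fence, then composite the
    // texture (clipping, translation) and draw other UI on top.
    req_tx.send(FrameRequest { width: 800, height: 600 }).unwrap();
    let frame = done_rx.recv().unwrap();
    frame.fence.wait();
    let _ = frame.texture; // consume as a texture in the 2D scene
}
```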
Since wgpu is not really mature yet (among other things, Pathfinder does not yet have a wgpu backend, though it likely will soon), making faster progress would mean adding dynamic negotiation across a broader range of GPU interfaces. That's possible, but I'm certainly not enthusiastic enough about it to put time into it myself.
Discussion is welcome; we can use this issue for it.