Overview of issues
There are currently a few issues related to how we deal with viewports and overlays.
- The current approach prevents using the values in the depth buffer.
  - Details and some experiments: Show depth buffer #491
  - An earlier discussion: Access to depth buffer #320
  - Another related discussion: Visualizing a triangular terrain model using pygfx #318
  - This also affects post-processing that needs the depth buffer, like fog: API for adding fog and custom full-screen post-processing effects #72
- When rendering to a viewport, code that uses events must offset mouse positions by the viewport rect (see the sketch after this list).
  - E.g. in controllers: Make controllers configurable and apply damping #490
  - Probably the Gizmo too?
- Subplots must share renderer props like blend_mode, so users may be forced to use multiple widgets/canvases.
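To make the offsetting issue concrete, here is a minimal sketch of the status quo, using the current `Viewport` API (the rect values and the handler body are just illustrative):

```python
import pygfx as gfx
from wgpu.gui.auto import WgpuCanvas

canvas = WgpuCanvas()
renderer = gfx.renderers.WgpuRenderer(canvas)

# A viewport covering a sub-region of the canvas: (x, y, w, h) in logical pixels.
viewport = gfx.Viewport(renderer, rect=(100, 50, 400, 300))

@renderer.add_event_handler("pointer_down")
def on_pointer_down(event):
    # Events are emitted by the renderer in canvas coordinates, so code that
    # works in viewport coordinates must offset by the viewport rect itself.
    x, y, w, h = viewport.rect
    local_x, local_y = event.x - x, event.y - y
    if 0 <= local_x < w and 0 <= local_y < h:
        print("click inside viewport at", local_x, local_y)
```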
Proposal
This is a proposal that I think fixes these issues.
Edited on 19-07-2024: the plan, as part of #495, is to make the renderer simpler and lower-level, and to make the viewport play a role in interactions / events:
- The renderer no longer emits events.
- The renderer has a certain size, but is no longer associated with a canvas; it just does a `flush(canvas, offset)` into one.
- The viewport will take the role of a higher-level / evented / convenience object. It takes over the role from the current `Display` class, and emits events.
- The renderer gets an option to only use color textures (omitting depth and picking).
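A rough sketch of what this could look like in user code. Note that none of this is existing API: the `size` / `depth` / `picking` constructor args, the `flush(canvas, offset)` signature, and the evented `Viewport` constructor are proposed names, or assumptions on top of the proposal:

```python
import pygfx as gfx
from wgpu.gui.auto import WgpuCanvas

canvas = WgpuCanvas()
scene, camera = gfx.Scene(), gfx.PerspectiveCamera(70)

# Proposed: the renderer has a size but no canvas; depth/picking are optional.
renderer = gfx.renderers.WgpuRenderer(size=(400, 300), depth=True, picking=False)

# Proposed: the viewport becomes the higher-level, evented convenience object,
# taking over the role of the current Display class.
viewport = gfx.Viewport(renderer, canvas=canvas, rect=(100, 50, 400, 300))

@viewport.add_event_handler("pointer_down")
def on_pointer_down(event):
    # Events come from the viewport, already relative to its rect.
    print("click at", event.x, event.y)

def animate():
    renderer.render(scene, camera)
    renderer.flush(canvas, offset=(100, 50))  # proposed signature
```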
The original proposal was:
- We drop the viewport class.
- A renderer can render to a specific region of the canvas.
- The renderer offsets event coordinates, so that everything is relative to the renderer's rect. (We can still expose the canvas coords if that's useful.)
- The renderer keeps its own internal render textures (which may be smaller than the canvas).
- We introduce `renderer.render_overlay()`, which allows rendering stuff like fps counters, gizmos, axis ticks, labels, etc. This overlay pass does not clear the depth buffer.
- The renderer gets an API to add post-processing steps. When the renderer flushes its color buffer to the canvas, these steps are applied in the same shader (the flush render pass). (This step is less important and can be implemented later.)
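Continuing the earlier sketch, the overlay and post-processing parts could be used like below. `render_overlay()` is the name from the proposal; `add_post_processing_step()` is a made-up name for the (later) post-processing API:

```python
# A separate scene/camera for screen-space decorations.
overlay_scene, overlay_camera = gfx.Scene(), gfx.ScreenCoordsCamera()

renderer.render(scene, camera)
# The overlay pass does not clear the depth buffer, so overlay objects can
# still depth-test against the scene where that is wanted.
renderer.render_overlay(overlay_scene, overlay_camera)

# Hypothetical: steps applied in the flush render pass (same shader).
# my_fog_step is a user-defined step; the shape of this API is TBD.
renderer.add_post_processing_step(my_fog_step)
renderer.flush(canvas, offset=(100, 50))
```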
We may also add the ability to do an overlay pass over the total canvas. (May not be needed; let's look at this later.)
This should ensure that each renderer has a depth buffer that is "intact" and can be sampled by the user. In the flush step, the depth can also be made available. I guess you could still render with the same renderer multiple times for specific use-cases, but then the depth buffer contains the values of the last pass.
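Reading the depth back could then look something like this; `snapshot_depth()` is a hypothetical name, and what this should actually look like is explored in #491:

```python
import numpy as np

renderer.render(scene, camera)

# Hypothetical read-back of the now-intact depth buffer, at the renderer's
# internal resolution; the method name is illustrative only.
depth = renderer.snapshot_depth()
assert depth.dtype == np.float32  # e.g. normalized 0..1 depth values
```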
The ability to render an overlay can also be achieved by creating an extra renderer that targets the same region on the canvas. But the memory consumption for an approach like that would be substantial, so we probably don't want to promote that as the standard solution. In most cases, "stuff drawn on top" does not need a depth buffer, or can do with just a few bits (we could e.g. move content close to the near plane, or move the far plane further out).
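For comparison, the extra-renderer alternative could look like the sketch below (again using the proposed `size` / `flush` API); the overlay renderer omits the depth texture to keep the memory cost down:

```python
# Two renderers flushing to the same canvas region: one for the scene
# (with depth), one color-only for the overlay. Proposed API, illustrative.
main = gfx.renderers.WgpuRenderer(size=(400, 300), depth=True)
overlay = gfx.renderers.WgpuRenderer(size=(400, 300), depth=False)

main.render(scene, camera)
overlay.render(overlay_scene, overlay_camera)

main.flush(canvas, offset=(100, 50))
overlay.flush(canvas, offset=(100, 50))  # blended on top of the main output
```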
Use cases
Also some use-cases that we should support; let's make sure to have a working example of each (some are already covered):
- An fps counter (see the sketch after this list).
- An `AxesHelper` in a corner of the screen.
- The transform Gizmo.
- A plot layout, showing multiple subplots with axes, ticks and labels. Can be a mock-up, just to show the idea for downstream plotting libs.
- Taking snapshots of viewports / whole canvas, also see Capture frames from the canvas #754
- Having multiple groups of objects, each overlaid on the previous, rendered to the same target, but blended differently.
- Picking transparent objects.
- Make it possible to create a near-invisible transparent object that is pickable.
- ... anything else?
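As a starting point for the fps counter use-case, a sketch that draws the counter via the proposed overlay pass, continuing the earlier sketches (`renderer`, `scene`, and `camera` as set up above). The text setup follows the current pygfx API as I understand it; `render_overlay()` is the proposed call:

```python
import time
import pygfx as gfx

# A screen-space overlay scene holding just the fps text.
overlay_scene = gfx.Scene()
fps_text = gfx.Text(
    gfx.TextGeometry(text="0 fps", screen_space=True, font_size=20),
    gfx.TextMaterial(color="#0f0"),
)
fps_text.local.position = (10, 20, 0)  # position in logical pixels
overlay_scene.add(fps_text)
overlay_camera = gfx.ScreenCoordsCamera()

last, frames = time.perf_counter(), 0

def animate():
    global last, frames
    frames += 1
    now = time.perf_counter()
    if now - last >= 1.0:
        fps_text.geometry.set_text(f"{frames / (now - last):.0f} fps")
        last, frames = now, 0
    renderer.render(scene, camera)
    renderer.render_overlay(overlay_scene, overlay_camera)  # proposed API
```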