Description
I am trying to change the vertex indices for a shader depending on the camera position (as an optimization, to reduce the number of vertex shader invocations). My current attempt is to access `shared.uniform_buffer.data["cam_transform"]` in my shader's `get_render_info`. However, this doesn't work: `get_render_info` is not called when I update the camera. As far as I can tell, the root cause of this issue is that the changes to `self._shared.uniform_buffer.data` in `WgpuRenderer._update_stdinfo_buffer` are not tracked by the `Tracker` mechanism.
- Could we fix this issue by changing `WgpuRenderer._update_stdinfo_buffer` to somehow give access to the camera matrix through `Shared` in a way that can be tracked?
- Is there a better approach for what I'm trying to do? (I found a workaround by explicitly setting the rendered object's `geometry.positions.draw_range` in my top-level `animate` function and using this value in `get_render_info`, but this top-level code feels like a workaround for something that should be handled within the rendering engine/shaders.)
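For reference, the draw-range computation behind that workaround can be sketched with plain NumPy. The helper name `visible_draw_range` and its margin handling are my own illustration (not pygfx API); it assumes the line's x-coordinates are `np.arange(n_points)`:

```python
import numpy as np

def visible_draw_range(cam_x_min, cam_x_max, n_points, margin=2):
    """Hypothetical helper: map the camera's visible x-extent to a
    (first_vertex, count) draw range for a line whose x-coordinates
    are np.arange(n_points). The margin keeps a couple of extra
    vertices on each side so segments crossing the viewport edges
    are still drawn."""
    first = max(0, int(np.floor(cam_x_min)) - margin)
    last = min(n_points, int(np.ceil(cam_x_max)) + margin)
    return first, max(0, last - first)

# e.g. a camera showing x in [1000.5, 1234.9] over 10**8 points:
first, count = visible_draw_range(1000.5, 1234.9, 10**8)
print(first, count)  # → 998 239
```

Only `count` vertices then need to go through the vertex shader, instead of all `n_points`.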
Context (feel free to ignore): I am interested in plotting very long time series (e.g., `10**8` points) and allowing interactive viewing of the series (i.e., a 2D line plot where the x-coordinates of the points are `np.arange(N)` and the y-coordinates are the data). `pygfx` looks like an amazing tool for this (thank you so much for building it!), but as expected simply using `Line` doesn't give good performance. My current plan is:
- For high zoom levels (e.g., zoom such that at most a few thousand data points are visible): adapt the `Line` shader to only run the vertex shader for the points near the viewport along the x-axis (I encountered the present issue while making a "clean" implementation of this).
- For lower zoom levels (possibly millions of data points in the viewport): use pre-computed downsampled versions of the data (with max/min pooling), and fill a polygon from the min/max downsampled data (with multiple levels of downsampling according to the zoom level).
- Handle numerical accuracy issues: when zooming into the end of the time series, the numerical accuracy of float32 might not be sufficient for the x-axis. I think that the simplest solution is to use a camera that is not directly tied to the renderer, but instead directly control the behavior of a new set of "camera-aware" objects. In this way, we can keep all shaders using float32 without numerical issues, while using float64 for camera position.
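The min/max pooling step above can be sketched with plain NumPy. The helper name and the min/max interleaving scheme are my own illustration, not existing pygfx code:

```python
import numpy as np

def minmax_downsample(y, factor):
    """Min/max-pool a 1-D series by an integer factor: each bin of
    `factor` samples is reduced to its (min, max) pair, so extreme
    values survive the downsampling. Assumes len(y) is a multiple
    of factor."""
    bins = y.reshape(-1, factor)
    lo = bins.min(axis=1)
    hi = bins.max(axis=1)
    # Interleave min/max so a filled polygon (or zig-zag line) drawn
    # through them covers the envelope of the original data.
    out = np.empty(2 * len(lo), dtype=y.dtype)
    out[0::2] = lo
    out[1::2] = hi
    return out

y = np.array([3, 1, 4, 1, 5, 9, 2, 6], dtype=np.float32)
print(minmax_downsample(y, 4))  # → [1. 4. 2. 9.]
```

Precomputing this at several factors (4, 16, 64, ...) gives the multiple zoom levels mentioned above.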
If any part of this work may be interesting to you, I'd be happy to work on upstreaming it.
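To illustrate the numerical accuracy concern above: near x = 10**8 the spacing between adjacent representable float32 values is 8, so consecutive integer x-coordinates collapse onto each other, while float64 has no such problem at this scale. A quick NumPy check:

```python
import numpy as np

# Near x = 10**8, float32 cannot distinguish consecutive integers:
x = np.float32(10**8)
print(x + np.float32(1) == x)   # → True: adding 1 is lost entirely
print(np.spacing(x))            # → 8.0: the float32 ULP at 1e8

# float64 still resolves unit steps comfortably at this scale.
print(np.float64(10**8) + 1.0 == np.float64(10**8))  # → False
```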
Edit: here is a patch that seems to fix the issue. `get_render_info` can now rely on `shared.uniform_data`:
```diff
diff --git a/pygfx/renderers/wgpu/engine/renderer.py b/pygfx/renderers/wgpu/engine/renderer.py
index 874236d..9d2600a 100644
--- a/pygfx/renderers/wgpu/engine/renderer.py
+++ b/pygfx/renderers/wgpu/engine/renderer.py
@@ -727,17 +727,20 @@ class WgpuRenderer(RootEventHandler, Renderer):
         self, camera: Camera, physical_size, logical_size, ndc_offset
     ):
         # Update the stdinfo buffer's data
-        stdinfo_data = self._shared.uniform_buffer.data
-        stdinfo_data["cam_transform"] = camera.world.inverse_matrix.T
-        stdinfo_data["cam_transform_inv"] = camera.world.matrix.T
-        stdinfo_data["projection_transform"] = camera.projection_matrix.T
-        stdinfo_data["projection_transform_inv"] = camera.projection_matrix_inverse.T
-        # stdinfo_data["ndc_to_world"].flat = la.mat_inverse(stdinfo_data["cam_transform"] @ stdinfo_data["projection_transform"])
-        stdinfo_data["ndc_offset"] = ndc_offset
-        stdinfo_data["physical_size"] = physical_size
-        stdinfo_data["logical_size"] = logical_size
-        # Upload to GPU
-        self._shared.uniform_buffer.update_full()
+        stdinfo_data = self._shared.uniform_data
+        new_stdinfo_data = stdinfo_data.copy()
+        new_stdinfo_data["cam_transform"] = camera.world.inverse_matrix.T
+        new_stdinfo_data["cam_transform_inv"] = camera.world.matrix.T
+        new_stdinfo_data["projection_transform"] = camera.projection_matrix.T
+        new_stdinfo_data["projection_transform_inv"] = (
+            camera.projection_matrix_inverse.T
+        )
+        # new_stdinfo_data["ndc_to_world"].flat = la.mat_inverse(stdinfo_data["cam_transform"] @ stdinfo_data["projection_transform"])
+        new_stdinfo_data["ndc_offset"] = ndc_offset
+        new_stdinfo_data["physical_size"] = physical_size
+        new_stdinfo_data["logical_size"] = logical_size
+        if new_stdinfo_data.tobytes() != stdinfo_data.tobytes():
+            self._shared.uniform_data = new_stdinfo_data

     # Picking
diff --git a/pygfx/renderers/wgpu/engine/shared.py b/pygfx/renderers/wgpu/engine/shared.py
index 822b6dd..73e6d9f 100644
--- a/pygfx/renderers/wgpu/engine/shared.py
+++ b/pygfx/renderers/wgpu/engine/shared.py
@@ -93,9 +93,8 @@ class Shared(Trackable):
         # Create a uniform buffer for std info
         # Stored on _store so if we'd ever swap it out for another buffer,
         # the pipeline automatically update.
-        self._store.uniform_buffer = Buffer(
-            array_from_shadertype(stdinfo_uniform_type), force_contiguous=True
-        )
+        self._store.uniform_data = array_from_shadertype(stdinfo_uniform_type)
+        self._store.uniform_buffer = Buffer(self.uniform_data, force_contiguous=True)
         self._store.uniform_buffer._wgpu_usage |= wgpu.BufferUsage.UNIFORM

         # Init glyph atlas texture
@@ -126,6 +125,22 @@ class Shared(Trackable):
         """The shared WGPU device object."""
         return self._device

+    @property
+    def uniform_data(self):
+        """The shared uniform data in which the renderer puts
+        information about the canvas and camera (same content as uniform_buffer).
+        """
+        return self._store.uniform_data
+
+    @uniform_data.setter
+    def uniform_data(self, value):
+        """The shared uniform data in which the renderer puts
+        information about the canvas and camera (same content as uniform_buffer).
+        """
+        value.flags.writeable = False
+        self._store.uniform_data = value
+        self._store.uniform_buffer.set_data(value)
+
     @property
     def uniform_buffer(self):
         """The shared uniform buffer in which the renderer puts
```