Alpha mode and Transparent #989
Conversation
Another issue to pay attention to is whether the renderer should enable z-sorting by default. For now I have kept the behavior of not enabling it by default, so that certain transparent examples behave the same as before. But in that case the rendering result may actually be incorrect.
Looks good to me. Curious to hear Almar's thoughts.
Is transparency still properly supported in points, lines and text?
(Referenced code snippet: lines 263 to 285 in e08965a.)
What currently happens with …? If we sort the objects by z, we can iterate over them in reverse in one of these cases, so that we go near-to-far for opaque objects and far-to-near for transparent objects.
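If it helps, here is a minimal sketch of that idea (the `Item` class, its `z_camera` field, and `render_order` are illustrative names, not Pygfx's actual render loop): sort once by camera-space depth, then consume the list front-to-back for opaque objects and back-to-front for transparent ones.

```python
from dataclasses import dataclass

@dataclass
class Item:
    z_camera: float      # depth in camera space (hypothetical precomputed value)
    transparent: bool    # the material.transparent flag from this PR

def render_order(items):
    """Return items in draw order: opaque near-to-far, then transparent far-to-near."""
    srt = sorted(items, key=lambda it: it.z_camera)               # one z-sort
    opaque = [it for it in srt if not it.transparent]             # near-to-far
    transparent = [it for it in reversed(srt) if it.transparent]  # far-to-near
    return opaque + transparent

print(render_order([Item(5.0, True), Item(1.0, False), Item(3.0, True), Item(2.0, False)]))
```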
I'm not quite sure, because in theory setting `transparent` only tells the renderer to process the object through the transparent-object flow; it doesn't have to be a truly "transparent" object. And if it's not set to True, that doesn't necessarily mean the object isn't transparent (e.g. typical physically based transmissive objects). It is an identifier for the renderer's reference and has no direct relationship with the rendering process (shader).
I think there should be no problem, and the tests don't seem to show any abnormalities.
A major consideration in deciding whether to enable z-sorting by default is the additional performance overhead: for each object, z-sorting requires an extra position conversion (from world space to camera space). I have thought about this issue again today and did a test:

```python
import numpy as np
import pylinalg as la
import timeit

v = np.array([1, 2, 3], dtype=np.float32)
v4 = la.vec_homogeneous(v)
m = la.mat_compose(np.array([1, 2, 3]), la.quat_from_euler((1, 2, 3)), np.array([1, 2, 1]))
print(la.vec_transform(v, m))

def apply_matrix(v, m):
    vv = m @ v.T
    return vv.T

print(apply_matrix(v4, m))

v4_batch = np.stack([v4] * 100000)
vo = apply_matrix(v4_batch, m)
print(vo[:5])

print(timeit.timeit(lambda: la.vec_transform(v, m), number=100000))
print(timeit.timeit(lambda: apply_matrix(v4, m), number=100000))
print(timeit.timeit(lambda: apply_matrix(v4_batch, m), number=1))
```

Output:
If I haven't made any mistakes, there seems to be a lot of room for optimization here, so the performance cost of z-sorting could be almost negligible; almost only the cost of the sorting itself remains.
I believe `vec_transform` also accepts batches. But I see the major difference is that you are not performing the division by w. Is that right? Otherwise the code looks identical...
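If I read the benchmark correctly, the difference in question amounts to something like this (a sketch with made-up helper names; not pylinalg's actual implementation):

```python
import numpy as np

def transform_no_divide(v4, m):
    # Plain homogeneous multiply, as in apply_matrix() above.
    return (m @ v4.T).T

def transform_with_divide(v4, m):
    # Same multiply, followed by the perspective division by w,
    # which a full vector transform is expected to perform.
    out = (m @ v4.T).T
    return out[..., :3] / out[..., 3:4]

m = np.eye(4)
v4 = np.array([1.0, 2.0, 3.0, 1.0])
print(transform_no_divide(v4, m), transform_with_divide(v4, m))
```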
I'm not quite sure, but it seems like there's more to it. The performance gap is still quite noticeable.
I looked into this, see pygfx/pylinalg#102
Awesome how discussions like this result in performance boosts 😄 🚀. Making sorting the default feels better now.
I want to propose:
@almarklein how do you feel about that?
This PR highlights multiple issues with the current blending, and possibilities for improvements. This is very valuable indeed! But I also have a few objections to (the current state of) this PR. Let's discuss these. The alpha_test makes sense, no objections. The depth_write is also a good addition, but I think it perhaps needs a default "auto" value, because transparent objects should by default not write their depth? I believe the stats object does not need special handling, because it's always rendered in a separate pass anyway. You make good points about the currently incorrect sorting, and I can see it being enabled by default now. Also a good point to try and make the … The greater objection that I have: the new …
Er, the purpose of adding this setting is to provide users with better control over rendering behavior. Although transparent rendering typically disables depth_write, there is no inherent connection between transparent rendering and depth writing. Regardless, depth_test should remain enabled. Previously, we only had the … Note that some transparent scenes may require depth_write (for example, if you consider transmissive objects as transparent, they need depth_write enabled), depending on the user's logic. My initial idea is that when users render transparent objects and explicitly set … However, if we automatically set …
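For what it's worth, the "auto" behavior discussed above could be resolved along roughly these lines (a sketch only; an "auto" value for `depth_write` is the proposal here, not an existing Pygfx API):

```python
def resolve_depth_write(depth_write, transparent):
    """Hypothetical resolution of a depth_write setting that defaults to 'auto'."""
    if depth_write == "auto":
        # Transparent objects skip depth writes by default,
        # but the user can still force True/False explicitly.
        return not transparent
    return bool(depth_write)

print(resolve_depth_write("auto", transparent=True))   # False
print(resolve_depth_write("auto", transparent=False))  # True
print(resolve_depth_write(True, transparent=True))     # True (explicit override)
```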
I have tried it before, because Stats contains both opaque (text) and transparent (background) objects. If it participates in the sorting of scene objects, the text is rendered first and then the background, and the rendering result is incorrect. In addition, if these are treated as ordinary scene objects, there are other issues, such as their participation in the generation of the transmitted-light sampling texture, which is incorrect in any case. My idea is that they should never be equated with ordinary renderable objects in the scene.
This PR does not enable sorting (by z-value) by default; it only distinguishes between transparent and opaque objects, and renders opaque objects first and then transparent objects.
I don't think so. It is meaningful to distinguish between transparent and opaque objects before rendering, and model file formats also clearly indicate whether a material is transparent or not.
I actually want to get rid of the Blender, and I use the "ordered1" blend mode just because it effectively does nothing. 😓 I have always felt that the Blender is a somewhat strange design. We have defined many object-related properties (specifically, material-related attributes) in the Blender, but we associate them with the renderer, which is strange. For example, in the Blender we define a "depth_descriptor" to describe objects' behavior regarding depth test and stencil test. This is clearly an object- or material-related property, not a renderer property. Different materials or objects in the scene have their own depth_test and stencil_test logic (and this is important for implementing certain functions and effects, such as post-processing that needs stencil_test information for drawing object edges). The "color_descriptor" is the same: the blend method is a property of the material itself, not a behavior of the renderer. For example, in a scene, ordinary transparent objects generally use a "Translucent Blend Mode" (blending via the alpha value), while flame particles etc. use an "Additive Blend Mode" (direct color addition). In the UE engine there is also an explanation of the "Material Blend Mode".
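To make the blend-mode point concrete, the two modes mentioned map to different color-target blend states; roughly, in wgpu-py's dict/string style (illustrative only, not Pygfx's internal descriptors):

```python
# Classic "translucent" alpha blending: out = src.rgb * src.a + dst.rgb * (1 - src.a)
alpha_blend = {
    "color": {"src_factor": "src-alpha", "dst_factor": "one-minus-src-alpha", "operation": "add"},
    "alpha": {"src_factor": "one", "dst_factor": "one-minus-src-alpha", "operation": "add"},
}

# "Additive" blending (e.g. flame particles): out = src.rgb * src.a + dst.rgb
additive_blend = {
    "color": {"src_factor": "src-alpha", "dst_factor": "one", "operation": "add"},
    "alpha": {"src_factor": "zero", "dst_factor": "one", "operation": "add"},
}
```

If such a state lived on the material, each object could pick its own blending without the renderer-wide blend mode having to know about it.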
The approach of this PR assumes …

I strive to make things Just Work for our users. In the ideal case, a user populates a scene with different types of objects (meshes, lines, volumes, etc.) and everything looks as one would expect. That way scientists can explore data without having to be viz experts. We got a lot of things right in Pygfx, but making things Just Work for transparency is particularly tricky. It always has been.

The blender is my attempt to resolve this, by providing alternative blend modes, like weighted blending. In order to implement these advanced blend modes, they need control over the blending ops and whether or not depth is written. Maybe this helps understand the reason why the blender defines them and not the material.

I admit that the blender is not really successful at making things plug and play in regard to transparency. Users have to be aware of it, and choose (and understand) between the different modes. And even then, the results vary. E.g. weighted blending works well in some cases, but not so much in others. I can also see how this inhibits the more game-engine-like features, like specifying additive blending on a specific object. But I have an idea ...

The blend mode which I find most interesting is the one based on dither (stochastic transparency), since it's the only one that always produces correct results, regardless of ordering. And it's a single pass. And it can deal with objects that are partially transparent. Ok, it looks noisy, but we may be able to improve that. Another big advantage (I realize now) is that it's compatible with classic blending; you can mix objects that use dither and alpha blending (you simply consider the dithered object as opaque when you sort the objects). So, a proposal ...
With this:
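For readers unfamiliar with it, the dither (stochastic transparency) mode mentioned above boils down to a per-fragment threshold test: compare the fragment's alpha against a screen-position-dependent threshold (e.g. from a Bayer matrix) and discard the fragment if alpha falls below it. A language-agnostic sketch of the test, written in Python here for brevity (the real thing would live in the WGSL shader):

```python
import numpy as np

# 4x4 Bayer matrix, normalized to thresholds in (0, 1).
BAYER_4X4 = (np.array([
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
], dtype=np.float32) + 0.5) / 16.0

def keep_fragment(alpha, frag_x, frag_y):
    # A fragment survives if its alpha exceeds the dither threshold at its
    # screen position; on average a fraction `alpha` of the fragments survives.
    return alpha >= BAYER_4X4[frag_y % 4, frag_x % 4]

# Roughly 40% of a uniform block of alpha=0.4 fragments should survive:
grid = [[keep_fragment(0.4, x, y) for x in range(8)] for y in range(8)]
print(sum(map(sum, grid)) / 64)
```

Because surviving fragments are written as fully opaque (with depth), the result does not depend on draw order and mixes naturally with regular opaque geometry, which is the compatibility advantage referred to above.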
Thank you for your detailed explanation. Taking this opportunity, I would like to share some of my thoughts here, and I hope it doesn't come across as too presumptuous. 😅

As a rendering engine, I think Pygfx's core value lies in its high-level encapsulation of wgpu functionality and rendering pipelines. It abstracts and simplifies wgpu capabilities and rendering processes through Pygfx-specific concepts and data structures. In the rendering pipeline, we've introduced several key abstractions. The traditional programmable rendering pipeline typically consists of a geometry processing stage (vertex shader), a rasterization stage, and a pixel processing stage (fragment shader). For the geometry processing stage, we've designed the Geometry class as an abstraction, while for the rasterization and pixel processing stages we use the Material class. Therefore, our renderable objects (world objects) consist of geometry and material components.

The core module of the engine should automatically handle the mapping between these abstract concepts and their underlying implementations, freeing users from dealing with the low-level details of the wgpu API. This includes managing the lifecycle of various GPU objects, assembling rendering pipelines, data transfer, byte alignment, and other complex technical details. Ideally, when users define vertex attributes in Geometry, these attributes should be automatically available in the vertex shader without additional effort. Similarly, when users define uniform properties or textures in Material, these resources should also be automatically accessible in shaders. The corresponding WGSL structure definitions, GPU object generation, and binding should all be handled automatically by the engine.

In essence, Pygfx users can conveniently define, configure, and implement their rendering pipelines using intuitive concepts like Geometry and Material, and this can cover all or most of the capabilities provided by wgpu. With these features, developing various applications based on Pygfx becomes straightforward, without requiring in-depth knowledge of wgpu's underlying implementation. This level of functionality primarily targets developers who use Pygfx as a rendering engine.

Additionally, our various built-in Material classes (with accompanying shaders) and objects can be seen as predefined "classic rendering pipelines" built on the core functionality mentioned above. These components are ready to use, and by adjusting parameters they can meet the needs of most general scenarios and typical tasks. This level of functionality is aimed at more "front-end" users or developers who may not need to deeply understand the details of rendering pipelines.

As for higher-level functionality targeting specific scenarios (such as scientific computing and data visualization), I believe these are better suited for implementation by libraries built on Pygfx (like fastplotlib). Even if some of these capabilities are to be included in Pygfx, they should be positioned as "core capability demonstrations and applications" rather than framework essentials.
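As a small illustration of that division of responsibilities (using the public pygfx API as I understand it; details such as exact constructor arguments may differ):

```python
import numpy as np
import pygfx as gfx

# Geometry holds the per-vertex data (the vertex-processing side of the pipeline) ...
geometry = gfx.Geometry(
    positions=np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.float32),
    indices=np.array([[0, 1, 2]], dtype=np.int32),
)

# ... while the Material describes how it is rasterized and shaded.
material = gfx.MeshBasicMaterial(color="#ff0000")

# The engine maps both onto wgpu buffers, bind groups and pipelines automatically.
mesh = gfx.Mesh(geometry, material)
```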
Thanks for sharing your view on Pygfx. It's very helpful. And I mostly agree 😄
I do think that Pygfx's core framework can include features that make it more suitable for scientific purposes. We've always positioned Pygfx as a render engine suitable for scientific use. Which parts of my proposal are you referring to exactly? As for dithering, making it possible in the core means that a library or user that wants to use it does not have to subclass the shader for every object they want to support. As for the shader determining whether the object is opaque or transparent: this is a task that the shader can do relatively easily, but which is much harder to implement in higher-level code (because it's shader-specific). It's a relatively small effort from the engine's end that makes the user experience a lot more friendly.
I'm not sure if it was clear from what I wrote, but I'm basically proposing what you are proposing in this PR and comments, with two additions: 1) in addition to the common options of …
The comments above are not directed at a specific proposal you have put forward; I'm just taking this opportunity to express my thoughts. 😅 My idea is that we should expose and map more of wgpu's capabilities through basic abstractions such as Material, rather than hiding them (such as depth- and blend-related configuration), in order to provide maximum flexibility. Ideally, for every configurable and programmable place in wgpu's programmable rendering pipeline, we should provide corresponding APIs through this level of abstraction (Geometry, Material, etc.), rather than embedding some fixed pattern.
This is superseded by #1002. The discussions in this PR initiated that work. 😉 @panxinmiao I think the main thing to salvage here is the addition in …
This pull request refactors the logic related to rendering transparent objects, which is essentially part of #974.
Key Changes:
- Materials now feature a settable `transparent` property. To ensure correct rendering of transparent objects, developers must explicitly set this property to True, which triggers the appropriate transparent rendering workflow in the renderer.
- Added support for `alpha_test`: an efficient technique primarily used for achieving mask-like effects (e.g., vegetation rendering for grass and foliage). A minimal usage sketch follows below.
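Here is that sketch of the two additions in use (property names as described in this PR; `leaf_texture` is an assumed texture with an alpha channel, used only for illustration):

```python
import pygfx as gfx

# Classic alpha blending: mark the material so the renderer uses the transparent flow.
glass = gfx.MeshPhongMaterial(color="#88ccff", opacity=0.4)
glass.transparent = True   # new settable property from this PR

# Mask-like cutout (e.g. foliage): keep the object in the opaque flow, but discard
# fragments whose alpha falls below the threshold.
leaf = gfx.MeshPhongMaterial(map=leaf_texture)  # leaf_texture: an assumed gfx.Texture
leaf.alpha_test = 0.5      # new alpha_test support from this PR
```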
With these changes, we can better support logic related to transparent objects, and these enhancements improve pygfx's compliance with the glTF specification for transparency handling (alpha mode).
The following test uses "gltf_viewer.py" and the "ordered1" blend mode (no need for two renderings).
Test case 1: Alpha Blend Mode Test
Test case 2: Compare Alpha Coverage
PS: Most of the work for this pull request was completed before #974. However, I considered that this constitutes a significant behavioral change in pygfx. To provide a more comprehensive rationale for approval, I opened draft PR #974 in advance to facilitate thorough discussion. This is also the work that the implementation of physically based transparency in #974 relies on.