Zbuffer with Volume Rendering

Hello everyone,
I’m trying to read the Zbuffer / depth buffer during volume rendering. For this I use
renWin->GetZbufferData(x1, y1, x2, y2), but when I look at the values, they are all equal to 1. If I output the image with vtkWindowToImageFilter, it is simply black.
Does anyone know where my problem is and how to read the ZBuffer in volume rendering?

Best regards,
Alex

The volume mapper just writes the bounding polygon to the Z buffer. To access the true depth data of the ray casting, enable RenderToImage mode and, once rendering is complete, access the color and depth data using the GetColorImage / GetDepthImage methods.
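A minimal sketch of that workflow (assuming the mapper is a vtkGPUVolumeRayCastMapper already set up with your volume and transfer functions; all names are placeholders):

```cpp
// Sketch: fetch the ray-cast color and depth via RenderToImage mode.
// Assumes "mapper" is the vtkGPUVolumeRayCastMapper used by the volume
// and "renWin" is the render window it renders into.
#include <vtkGPUVolumeRayCastMapper.h>
#include <vtkImageData.h>
#include <vtkNew.h>
#include <vtkRenderWindow.h>

void FetchVolumeColorAndDepth(vtkGPUVolumeRayCastMapper* mapper,
                              vtkRenderWindow* renWin)
{
  mapper->RenderToImageOn();          // ray cast into an internal image
  renWin->Render();                   // perform the volume render pass

  vtkNew<vtkImageData> colorImage;
  vtkNew<vtkImageData> depthImage;
  mapper->GetColorImage(colorImage);  // RGBA result of the ray casting
  mapper->GetDepthImage(depthImage);  // per-pixel depth from the ray casting
}
```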

Hello everyone,
do you know if it is possible to enable writing volume depth data in render-to-window mode, or could you explain why it is not possible and only the bounding box is written?
Best regards,
Konrad

I think this old answer still applies: Zbuffer with Volume Rendering - #2 by sankhesh

Or actually, it doesn’t really answer why it’s not possible.

Volume rendering involves shooting rays through the image volume, sampling at regular intervals, and accumulating the color and alpha values using custom blend modes to get the resultant color and opacity for the fragment. In other words, the fragment color is a composition of all the underlying voxels, which makes it simply impossible to deduce a single depth value for that fragment.
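For intuition, front-to-back compositing along a single ray looks roughly like the illustrative loop below (a simplified sketch, not VTK's actual shader code; sample fetching and classification are left out):

```cpp
// Illustrative front-to-back compositing along one ray (not VTK shader code).
// "samples" holds the classified color/opacity of each step along the ray.
#include <vector>

struct Sample { float r, g, b, a; };

Sample CompositeRay(const std::vector<Sample>& samples)
{
  Sample dst{0.f, 0.f, 0.f, 0.f};
  for (const Sample& src : samples)
  {
    float w = (1.f - dst.a) * src.a;  // contribution of this sample
    dst.r += w * src.r;
    dst.g += w * src.g;
    dst.b += w * src.b;
    dst.a += w;
    if (dst.a >= 1.f) break;          // early ray termination
  }
  // The final fragment color blends many samples at many different depths,
  // so there is no single "correct" depth for the fragment.
  return dst;
}
```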

For example, take a look at the images in VTK Volume Rendering improvements and specifically at the image below:

[image: side-by-side volume renderings from that post; left: opaque skin surface, right: translucent skin over bone]

One can argue that the depth values for the left image would be straightforward, i.e. the sample depth at the skin (where the opacity gets saturated). But what about the right image? Using the same logic, the depth buffer would contain depths at the bone, entirely missing the translucent skin layer in front of it. Further, volume rendering often never accumulates full opacity at all, e.g. in a scene with fog or cloud volumes.

One more consideration - to enable the fragment shader over the entire bounding box of the volume, the vertex shader needs the coordinates at the bounds of the volume. To be able to intermix opaque geometry with volume rendering, depth testing has to be enabled, which means that if the volume mapper also writes to the depth buffer, you'd end up with weird artifacts. See the Intermixing translucent geometry with volumes blog for the complexity involved when the geometry is translucent.

Having read this, one question you might have is: what does the Render To Image mode do then? Well, it makes an oversimplified assumption: it writes the depth value of the first sample with an alpha value > 0, which is by no means accurate. It is about as correct as assuming the depth should be the sample depth where alpha accumulates to 1.0, or where alpha becomes > 0.5. :person_shrugging:
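As a sketch (again illustrative, not VTK's actual shader code), that heuristic and its equally arbitrary alternatives look like this, where sampleDepths stands in for the depth of each sample along the ray:

```cpp
// Sketch of the "first sample with alpha > 0" depth heuristic.
// Sample is the same per-sample color/opacity struct as in the sketch above;
// sampleDepths[i] is the depth of the i-th sample along the ray.
#include <cstddef>
#include <vector>

struct Sample { float r, g, b, a; };

float CaptureDepth(const std::vector<Sample>& samples,
                   const std::vector<float>& sampleDepths)
{
  float accumulated = 0.f;
  for (std::size_t i = 0; i < samples.size(); ++i)
  {
    accumulated += (1.f - accumulated) * samples[i].a;

    // Render-to-image style heuristic: first sample with any opacity at all.
    if (samples[i].a > 0.f)
    {
      return sampleDepths[i];
    }
    // Equally (in)valid alternatives: return when accumulated > 0.5f,
    // or when accumulated >= 1.f; each choice picks a different "surface".
  }
  return 1.f;  // the ray never hit anything visible: report the far plane
}
```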

I hope this helps clarify the technical reasoning. To better assist you, could you please provide more information about your specific use case and the reason why you would like the volume mapper to write to the depth buffer? I am sure we can come up with a tailored solution for your needs.


Hello again,

Thank you @sankhesh for the detailed explanation and feedback. I love it!

I completely understand the issue with depth values in volume rendering. Maybe I’ll start with my project goal, which was also described here (Volume Rendering Stereo Depth). However, that description is now outdated, so I’ll try to describe it better again.

Project Goal:

In brief, I need to share a VTK render texture into Unity and mix it with Unity scene content, which is why I need both color and depth images. Importantly, the VTK scene contains just a volume.

Current Implementation:

In my stereo rendering setup, each renderer has its own dedicated volume mapper, which operates in render-to-image mode. I override Unity’s Z-buffer with the depth map from VTK, and to be honest, with Slicer transfer functions that is completely sufficient as long as the data looks more like the case on the left side of your screenshot :slight_smile: Attaching a screenshot of the result.
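Roughly, the per-eye arrangement looks like this (a simplified sketch of my setup; input data and transfer functions are omitted, and all names are placeholders):

```cpp
// Simplified per-eye setup: each renderer gets its own volume mapper in
// render-to-image mode so that color and depth can be fetched per eye.
// Input data and transfer functions are omitted for brevity.
#include <vtkGPUVolumeRayCastMapper.h>
#include <vtkImageData.h>
#include <vtkNew.h>
#include <vtkRenderWindow.h>
#include <vtkRenderer.h>
#include <vtkVolume.h>

struct EyePass
{
  vtkNew<vtkRenderer> renderer;
  vtkNew<vtkGPUVolumeRayCastMapper> mapper;
  vtkNew<vtkVolume> volume;
  vtkNew<vtkImageData> depthImage;  // copied into the Unity depth texture
};

void SetupEye(EyePass& eye, vtkRenderWindow* renWin)
{
  eye.mapper->RenderToImageOn();
  eye.volume->SetMapper(eye.mapper);
  eye.renderer->AddVolume(eye.volume);
  renWin->AddRenderer(eye.renderer);
}

void FetchEyeDepth(EyePass& eye)
{
  // After renWin->Render(), grab this eye's depth image.
  eye.mapper->GetDepthImage(eye.depthImage);
}
```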

Problem:

I tried to integrate the SSAO feature (which is awesome, by the way!) but it is not available with render-to-image mode.

Possible Solutions:

Either write depth information to the Z-buffer in render-to-window mode or enable SSAO in render-to-image mode. I have a feeling that the first solution should be easier to achieve.

I’d be grateful for a tip on how to mix SSAO with getting depth information together.

PS.

Is SSAO using the same depth values that render-to-image mode produces? It’s very interesting that even though selecting depth values from the ray marching algorithm is not straightforward, SSAO does such a great job here. Also, it is a bit odd to me that you need depth information for SSAO but you don’t have access to its depth map, but maybe I’m missing something here.

You specify the “Opacity threshold” (in the Lights module of 3D Slicer’s Sandbox extension) that you want to use for SSAO.


I would note that Unity is a gaming engine; it provides a nice framework and a good rendering engine that work very well for game developers, but it is not a friendly or efficient environment for medical image computing software development.

You need completely different rendering techniques for medical image visualization than for gaming, as you have input data that is unusual in gaming, requirements that go beyond looking cool, and 99% of the Unity community does not care about you and your medical visualization application at all. You would be in a much better place if you used VTK: it can render to AR/VR headsets directly, similarly to Unity, it is already optimized for scientific/technical/medical computing, and medical applications are considered very important in the VTK community.

If you use Unity, you also need to redevelop/integrate and maintain all basic medical imaging features, such as DICOM networking/import, image reslicing, transforms, registration, segmentation, quantification tools, and all advanced features as well, such as treatment planning and guidance tools, surgical navigation, AI segmentation, AI analysis tools, etc. By switching from a gaming platform to a VTK-based medical image computing platform, such as 3D Slicer, you get all these things for free (and, unlike Unity, completely free, without any restrictions). You also get VTK-based medical image AR/VR visualization out of the box, get access to the entire Python scientific computing ecosystem, and can do everything in one programming language, in one integrated environment.

Thank you @lassoan for the explanation and technical advice! However, I have a remote rendering engine written in Unity, and the VTK integration is only a part of it, as the whole project is not aimed at medical imaging only.

@sankhesh it turned out the SSAO shader overrides the Z-buffer, so the problem was only with standard volume rendering. However, I managed to paste a similar line at the //VTK::RenderToImage::Exit location and let those shader lines run in render-to-window mode as well, and it worked out! Thank you for the support.
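In case it is useful to anyone else, here is a rough sketch of how such a hook can be attached without patching VTK, assuming your VTK version honors vtkShaderProperty replacements for the GPU volume mapper (the GLSL body below is only a placeholder; the variable that actually holds the ray-cast depth has to be taken from the generated shader source):

```cpp
// Hedged sketch: attach extra fragment shader code at the
// //VTK::RenderToImage::Exit tag via the volume's shader property.
// The GLSL body is only a placeholder comment; the real depth-write line
// must use the depth variable found in the generated shader source.
#include <vtkShaderProperty.h>
#include <vtkVolume.h>

void AddDepthWriteReplacement(vtkVolume* volume)
{
  volume->GetShaderProperty()->AddFragmentShaderReplacement(
    "//VTK::RenderToImage::Exit",  // tag to hook into
    true,                          // apply before VTK's own substitutions
    "//VTK::RenderToImage::Exit\n" // keep the tag so VTK still expands it
    "  // placeholder: write the captured sample depth to the Z buffer,\n"
    "  // e.g. gl_FragDepth = <depth computed by the ray-cast loop>;\n",
    false);                        // only replace the first occurrence
}
```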
Best regards,
Konrad
