We are visualizing a TIFF stack as a volume in VTK, plus neuron data as simple polylines.
The current rendering pipeline uses one renderer for everything, and when both are rendered together a 'problem' occurs: the opaque volume 'dims' the rendered polylines. That is of course 'correct', but in some cases we would like to render the polylines together with the volume as if the volume were not present (no dimming of the polylines).
Below is an example image showing how the colored polylines get 'dimmed' as they travel into the volume:
I tried the tips from this post (Surface and volume rendering in the same render), but they don't do it.
I have read 'somewhere' (can't find where) that I may have to use two different renderers, one for the volume and one for the solid wires, and render them into the same render window.
Any ideas on that?
In 3D Slicer, we implemented this "occluded visibility" feature using two rendering layers. In the first layer we render the polydata normally, properly composited with the volume in 3D. In a layer above it we render the same polydata again, with an opacity of about 20-40%. With a higher opacity the geometry is more visible, but you lose 3D depth cues.
That looks like exactly what we want!
About the term 'layer': is that the same as an individual vtkRenderer?
So, to get it working, I guess this requires the polydata to be rendered twice? Once together with the volume (renderer 1), and once by itself (renderer 2). By setting the opacity to 1 on the polydata in renderer 2, we can see the full geometry of the polydata.
I will go ahead and experiment using this technique.
In the past we actually used separate rendering layers, but now we just adjust the relative coincident topology parameters. See full implementation here:
You render once, but use two actors instead of just one.