Volume rendering issue

Mapping data for volume rendering gives strange results when the max value is set to something lower than the highest value in the vtk-file. The volume that is supposed to be shown is there, but there is also a “ghost-volume”, which follows the interface between values higher than the max value and values lower than the min value. I suppose the “ghost-volume” is the result of some interpolation between neighbouring values.

I’m including an image with screenshots of the problem. The first screenshot shows the result of the threshold filter with min=16.5 and max=19, i.e., how I would want the result of mapping data to look. The second screenshot shows the result when using mapping data with min=16.5 and max=19. There is a “ghost-volume”, which can be seen to follow the outline of the volume in the third screenshot. The third screenshot shows the result when using mapping data with min=16.5 and max=31 (31 is the highest value in the VTK file).

Is there any way to remove the “ghost-volume”? We’re developing a VTK-based Java application and want to include a mapping-data filter with both a min and a max value, but only if we can get rid of the “ghost-volume”.

I think this is normal. There used to be a “CompositeMethod” flag that could be set to classify first or interpolate first, but it no longer exists. See more information about this in the VTK textbook and Lisa’s comment on this blog post. Thresholding before volume rendering seems like a good solution. You may also reduce this effect by decreasing the sampling distance.

Ok, thank you. I’ve tried thresholding; the problem is that the volume rendering is so much slower after thresholding. Is there any way to fix that? And how would you decrease the sampling distance?

Thresholding should not slow down volume rendering (of course, a denser volume may render faster due to early ray termination, but then you are not rendering the data you need). Check that you don’t accidentally change the scalar type of the volume.
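One quick way to verify this in a VTK-based Java application is a small helper like the sketch below (assuming vtk.jar and the VTK native libraries are loaded; the helper is hypothetical and only prints what vtkImageData reports):

```java
import vtk.vtkImageData;

// Minimal sketch: print the scalar type of a volume before and after a
// filter, to verify that the filter did not change it.
public final class ScalarTypeCheck {
    public static void printScalarType(String label, vtkImageData image) {
        // GetScalarTypeAsString() reports e.g. "unsigned char", "short" or "double".
        System.out.println(label + ": " + image.GetScalarTypeAsString());
    }
}
```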

How big is your volume? GPU volume raycast mapper should be able to deal with all volumes that fit into GPU memory.

Search for methods in your volume mapper with names containing “sample” or “sampling”.
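For example, with the GPU ray cast mapper the relevant calls look roughly like the sketch below (assuming the standard VTK Java wrappers; the 0.25 sample distance is only a placeholder, given in world-space units):

```java
import vtk.vtkGPUVolumeRayCastMapper;

// Minimal sketch (assumes vtk.jar and the VTK native libraries are loaded):
// sample more finely along each ray to reduce interpolation artefacts.
public final class SamplingDistanceExample {
    public static vtkGPUVolumeRayCastMapper createMapper() {
        vtkGPUVolumeRayCastMapper mapper = new vtkGPUVolumeRayCastMapper();
        // Disable automatic adjustment so the mapper keeps the distance set
        // below instead of coarsening it while you interact with the scene.
        mapper.AutoAdjustSampleDistancesOff();
        // Distance between samples along a ray, in world coordinates.
        // 0.25 is a placeholder; smaller values look better but render slower.
        mapper.SetSampleDistance(0.25);
        return mapper;
    }
}
```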

We haven’t implemented a thresholding filter yet. I just assumed it would be slow because it’s so slow in ParaView. With a relatively small structured points dataset (11 MB), after thresholding I get an unstructured grid dataset (65 MB) for which the volume rendering is extremely slow. Here’s a link to a video screen capture of the volume rendering:

https://chalmersuniversity.box.com/s/90f290k3390z4gf5sqtw07a8a6l0hdbz

It takes no time at all to update the volume rendering of the original geometry. After thresholding it takes 8 seconds just to load the volume rendering, and then 5 seconds to update the rendering after rotating the geometry.

Is this because the thresholding result is an unstructured grid? I don’t see any “GPU Based” option for the “Volume Rendering Mode” after thresholding.

Using the GPU volume raycast mapper you should be able to volume render such a small dataset in a couple of milliseconds. The problem is that ParaView’s thresholding seems to convert your nicely structured image data to an unstructured grid, which takes orders of magnitude more time to render. If you use an image threshold filter instead of the generic threshold filter, rendering will remain very fast.
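In your Java application that could look roughly like the sketch below (assuming the standard VTK Java wrappers; vtkImageThreshold keeps the output as vtkImageData, so the GPU mapper remains usable; the 16.5–19 range comes from your description, and the out-value 0 is a placeholder that your opacity transfer function should map to fully transparent):

```java
import vtk.vtkGPUVolumeRayCastMapper;
import vtk.vtkImageData;
import vtk.vtkImageThreshold;

// Minimal sketch (assumes vtk.jar and the VTK native libraries are loaded):
// threshold the 16.5-19 range while keeping the data as vtkImageData, then
// feed the result to the GPU ray cast mapper. "image" is the volume read
// from your VTK file.
public final class ImageThresholdExample {
    public static vtkGPUVolumeRayCastMapper buildPipeline(vtkImageData image) {
        vtkImageThreshold threshold = new vtkImageThreshold();
        threshold.SetInputData(image);
        threshold.ThresholdBetween(16.5, 19.0); // keep values in [16.5, 19]
        threshold.ReplaceInOff();               // leave values inside the range untouched
        threshold.ReplaceOutOn();               // replace values outside the range...
        threshold.SetOutValue(0.0);             // ...with 0 (scalar type is preserved by default)

        vtkGPUVolumeRayCastMapper mapper = new vtkGPUVolumeRayCastMapper();
        mapper.SetInputConnection(threshold.GetOutputPort());
        return mapper;
    }
}
```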

You can probably get ParaView to convert your thresholded data back to an image, but my experience with ParaView is that it has lots of small problems when you use it with image data (small inconveniences, missing basic features, lack of optimizations, subtle bugs). If you primarily work with images, you may consider using 3D Slicer, which is developed mainly for image data visualization and processing.

Thank you, both the image threshold filter and 3D Slicer will be very useful. So it seems there’s no way to use mapping data without getting these ghost-volumes, but using the image threshold filter instead is a workable solution.
