Hi,
I am using VTK for volume rendering and I am now having issues rendering big volumes. Is there any method in VTK to render out-of-core data?
Thanks
Hello @Limz90,
AFAIK VTK doesn’t support out-of-core rendering. We need more details regarding your issue to check whether a solution or a workaround is possible.
Also note that there are some filters to reduce the data size, like vtkResampleToImage
(I assume here that you use vtkImageData).
FYI @sankhesh
Best,
Hello @lgivord,
Thanks for your quick response; yes indeed, I am using vtkImageData to load some TIFF images and render them like this:
// Read a stack of TIFF files into a single volume.
vtkSmartPointer<vtkTIFFReader> reader = vtkSmartPointer<vtkTIFFReader>::New();
reader->SetFileNames(tiffFilePaths);
reader->Update();
// Take the reader's output directly; the smart pointer keeps it alive,
// so no separate vtkImageData needs to be allocated.
vtkSmartPointer<vtkImageData> imageData = reader->GetOutput();
... then do the rendering.
But some of my volumes are too big to be loaded into both RAM and GPU memory.
Is there a way to reduce the volume to a size that can be loaded into memory?
I’ve seen the vtkStreamingDemandDrivenPipeline class, but I don’t really know how it works or whether it is the solution.
Thanks.
In that case, I see 2 options (see the sketch after this list):
- Readers often expose an option named Extent to control the region we want to manipulate; I believe vtkTIFFReader could support that (you can check in the Doxygen documentation, and there is probably an example on the VTK examples website).
- Use vtkResampleToImage to downsample your image after reading it.
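Here is a minimal sketch of both options, reusing the tiffFilePaths array from your snippet. Two assumptions to verify: whether vtkTIFFReader actually streams only the requested sub-extent (instead of reading the whole file), and the extent/sampling values, which are placeholders:

#include <vtkImageData.h>
#include <vtkResampleToImage.h>
#include <vtkSmartPointer.h>
#include <vtkTIFFReader.h>

vtkSmartPointer<vtkTIFFReader> reader = vtkSmartPointer<vtkTIFFReader>::New();
reader->SetFileNames(tiffFilePaths);

// Option 1: ask the pipeline to produce only a sub-extent (a VOI).
// Placeholder extent; only helps if the reader honors sub-extent requests.
int voiExtent[6] = { 0, 511, 0, 511, 0, 63 };
reader->UpdateExtent(voiExtent);
vtkImageData* block = reader->GetOutput();

// Option 2: downsample the whole volume to a fixed target resolution.
vtkSmartPointer<vtkResampleToImage> resampler =
  vtkSmartPointer<vtkResampleToImage>::New();
resampler->SetInputConnection(reader->GetOutputPort());
resampler->SetSamplingDimensions(256, 256, 256); // placeholder target size
resampler->Update();
vtkImageData* reduced = resampler->GetOutput();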
FWIW, you/we could implement a custom solution where the whole volume is split up into sub-blocks and then individual blocks are streamed to the volume mapper for rendering via the vtkMultiBlockVolumeMapper.
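A rough sketch of that idea, assuming the sub-blocks have already been read (e.g., one per extent as above); MakeBlock and numBlocks are hypothetical placeholders, not VTK API:

#include <vtkMultiBlockDataSet.h>
#include <vtkMultiBlockVolumeMapper.h>
#include <vtkSmartPointer.h>
#include <vtkVolume.h>

// Assemble previously loaded sub-blocks into a multi-block dataset.
vtkSmartPointer<vtkMultiBlockDataSet> blocks =
  vtkSmartPointer<vtkMultiBlockDataSet>::New();
blocks->SetNumberOfBlocks(numBlocks); // numBlocks: placeholder
for (unsigned int i = 0; i < numBlocks; ++i)
{
  // MakeBlock: hypothetical helper returning one vtkImageData piece.
  blocks->SetBlock(i, MakeBlock(i));
}

// The mapper renders each image block of the composite dataset.
vtkSmartPointer<vtkMultiBlockVolumeMapper> mapper =
  vtkSmartPointer<vtkMultiBlockVolumeMapper>::New();
mapper->SetInputDataObject(blocks);

vtkSmartPointer<vtkVolume> volume = vtkSmartPointer<vtkVolume>::New();
volume->SetMapper(mapper);
// ...then add the volume to the renderer as usual.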
Thank you; I tried the first option by using vtkStreamingDemandDrivenPipeline to render a block of interest with Extent. But for your second option, the data must be fully read before resampling, which is a problem because the volume is too big to be loaded into memory.
Hi @sankhesh, thank you for your response; I will look at the vtkMultiBlockVolumeMapper class in detail.
Hello,
Partitioning a large volume into smaller volumes is a good strategy. This has been used in the petroleum industry to render terabyte-scale seismic surveys.
Best,
PC
VTK’s little-used memkind feature (experimental even then, and maybe not actively regression-tested today) might be useful for implementing this. Memkind gives vtkObjects the ability to work directly from on-disk memory. Back in 2020 or so, I volume rendered a couple-terabyte dataset within ParaView on a single workstation, for example. You could read directly into a vtkImageData backed by on-disk memory, and then use vtkResampleToImage or the like to get it into a more reasonable size to work with interactively.
hth
That is interesting to know. Is memkind still in use elsewhere? I saw that the memkind GitHub repo is archived and points to another project, UMF. Should VTK use UMF in order to stay up to date?
Yes, /s/memkind/UMF/ is definitely a good strategy going forward.