Hello everyone, I am a beginner with visualization software, and I am running macOS. I have managed to get some experience with 3D Slicer and carry out some volume rendering; I have also downloaded ParaView and Blender.
From what I have learned so far, there are two ways to do cinematic rendering with open-source software: 1. using the BVTKNode add-on in Blender; 2. using the Looking Glass VTK module.
Unfortunately, I have had no success with either of those so far. With BVTKNode I get the following error: “BVTKNode add-on failed to access the VTK library.” I presume this is because I have VTK 9.2 while the add-on expects VTK 9.1, and I cannot revert to 9.1 because my Python version is not compatible with it.
Furthermore, I cannot get the Looking Glass VTK module working, since I neither have a holographic display nor can I run the code, being extremely inexperienced. I found it quite difficult to find a thorough tutorial on using VTK on macOS; I finally figured out how to install it via the terminal, but I am not sure how to run code with it. Is it via the terminal or somewhere else? I have never coded anything in my life, so a thorough explanation would really help.
Overall, my goal is to carry out photorealistic volume rendering of DICOM files in any way possible. As detailed above, I have not managed to do that so far, so I would like some support on this matter.
If anyone has experience with OsiriX on macOS, please let me know whether it can do cinematic rendering and how. I know Siemens Healthineers provides it too, but their Cinematic Anatomy is only compatible with Windows; if you know of any other paid way of doing this, please let me know as well.
Overall, my goal is to carry out photorealistic volume rendering of DICOM files
Just to be precise, what Siemens calls cinematic rendering is not photorealistic rendering: cinematic rendering is a set of tools and algorithms for producing a usable, nice-looking image for medical use. Photorealistic means you cannot tell whether what you are looking at is real or not.
That aside, you have a few options ahead if you want to use VTK:
First, you’ll need some nice color/opacity maps to bring out what you want. Slicer already has some really nice presets (I can’t find where they are stored anymore; feel free to explore the software and its sources).
Then you can use either the new volume rendering options that Mathieu mentioned, or you can try the OSPRay path tracer.
If these two options are not to your liking, you could export your dataset as a VDB volume using vtkOpenVDBWriter and open it in Blender.
Using BVTKNode may also be an option, but I’m not familiar with it.
Thank you Timothee! Mathieu’s recommendation really helped; nevertheless, I will definitely try what you suggested with Blender at some point, to get experience with both approaches. I will also try the OSPRay path tracer.
I feel so relieved after two days of confusion and frustration; I hope this also helps others.
Is it possible to build a custom ray tracer based on the VTK library? I would also like to produce cinematic-looking images using VTK in Python for a project.
For this I am currently weighing surface-based rendering against volume rendering. How could I implement my own algorithm in Python within the VTK pipeline? On the topic of volume rendering, I understood that you should inherit from the vtkVolumeMapper class and then override its methods.
Can you guys help me get an overview on this?
Hey @mwestphal!
I am also new to the visualization community and really like your developments. I’m wondering whether it’s possible to integrate the methods you’ve developed, outside of ParaView, into a Python project of my own based on VTK? I would be very happy to receive an answer.
@mwestphal Now I’m just wondering how to integrate ParaView into a custom Python environment. I heard something about paraview.simple, but are the included modules sufficient?
Added: And pvpython of course!
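For context, this is the kind of minimal paraview.simple script I mean (a sketch only; the file names are placeholders, and the paraview package ships only with ParaView itself, so it has to be run with pvpython rather than a plain system Python):

```python
def render_volume(path="volume.vti"):  # placeholder file name
    # paraview.simple is bundled with ParaView; run this via pvpython
    from paraview.simple import OpenDataFile, Show, Render, SaveScreenshot

    data = OpenDataFile(path)
    display = Show(data)
    display.SetRepresentationType("Volume")
    Render()
    SaveScreenshot("render.png")  # placeholder output name

# Call render_volume() from a script executed with:
#   pvpython my_script.py
```

Alternatively, ParaView can generate such scripts for you via its Python trace feature, which records GUI actions as paraview.simple calls.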
Many greetings!
I’ve tried to use these multiple-scattering approximations in ParaView, but the resulting images looked messy and artificial. The computation was also really slow, and on some computers it did not work at all. Most of the medical examples showcased in Kitware blogs looked quite odd too: different, but not really better than plain volume rendering.
With the help of @pieper and @LucasGandel we implemented two improvements in VTK and 3D Slicer that dramatically improve clarity and depth perception in volume rendering of medical images:
colorized rendering (coloring images using AI-generated masks while taking the alpha channel from the original image)
screen-space ambient occlusion (SSAO) for volume rendering
The methods are computationally inexpensive, so they are usable on everyday computers and even in demanding applications, such as virtual reality.
I’ll make sure to work on this for the SSAO part; I just need to figure out a clean way to integrate it in VTK first.
With the agreement and help of @lassoan, we should also include the vtkImageBlend improvements from the colorized rendering module.
I’ll gather screenshots and will start a draft post to present low level aspects. A higher level post to present all the volume rendering results (colorized volume, ssao, scattering) could potentially make sense too.