I’m currently working on a medical imaging project where I need to implement interactive 3D visualization using DICOM series, volume rendering, and integrate an STL model representing a screw into the scene. Additionally, I aim to enable multiplanar reconstruction (MPR) and panoramic views from the 3D scene, along with slice views from the panoramic perspective.
Here’s what I have achieved so far:
Successfully loaded DICOM series and implemented volume rendering.
Implemented basic multiplanar reconstruction (MPR) functionality.
However, I’m facing challenges with the following aspects of the project:
Integrating an STL or OBJ model representing a screw into the 3D scene and allowing for interactive manipulation of its position and angle.
Ensuring that changes made to the position and angle of the screw model in one view (coronal, axial, or sagittal) are accurately reflected in all other views.
Implementing panoramic views from the 3D scene and ensuring that changes in the STL model's position and angle are reflected in these panoramic views.
Enabling slice views from the panoramic perspective and ensuring that changes in the STL model's position and angle are reflected accurately in these slice views.
I’m using VTK (Visualization Toolkit) and Qt for the implementation. While I have a basic understanding of these libraries, I’m struggling with the intricacies of synchronizing the views and updating the STL model’s position and angle across different perspectives.
Any guidance, code snippets, or resources that could help me address these challenges would be greatly appreciated. Thank you!
@mdeepakcdac Have you looked at 3D Slicer? It is a customizable platform toolkit based on VTK and Qt, and it provides built-in mechanisms for all the needs you’ve listed and more. Moreover, the UI of Slicer can be extended, rebranded, minimized, etc. to suit the application workflow’s needs.
It seems from your description that you need to get the STL object “registered” with the VTK volume?
When a DICOM image is loaded into a vtkImageData in 3D, it is important to take the direction cosines of the DICOM image into account. Often these amount to an identity matrix, but they define the layout of the voxels in “patient coordinates”. In LPS mode (X voxels move towards patient LEFT, Y voxels move towards patient POSTERIOR, and Z, the slice direction, moves towards patient SUPERIOR, i.e. the head), this means that the slices run from the “feet” (inferior) of the patient up to the head (superior).
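If it helps, here is a minimal Python sketch (VTK 9+) of attaching that direction information to the image yourself; note that the basic vtkDICOMImageReader does not apply the orientation for you. The directory path and the cosine values are placeholders:

```python
import vtk

# Read the series; the basic vtkDICOMImageReader ignores
# Image Orientation (Patient), so the direction is set manually below.
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("/path/to/dicom/series")  # placeholder path
reader.Update()
image = reader.GetOutput()

# Placeholder direction cosines, as read from DICOM tag (0020,0037):
# column 0 = row direction, column 1 = column direction,
# column 2 = their cross product (the slice normal).
row_cos, col_cos, slice_cos = (1, 0, 0), (0, 1, 0), (0, 0, 1)

direction = vtk.vtkMatrix3x3()
for i in range(3):
    direction.SetElement(i, 0, row_cos[i])
    direction.SetElement(i, 1, col_cos[i])
    direction.SetElement(i, 2, slice_cos[i])

# VTK >= 9: store the direction on the image so that pipeline stages
# which honor it place the voxels correctly in patient (LPS) coordinates.
image.SetDirectionMatrix(direction)
```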
So, where is the STL file relative to the patient coordinate system defined above? (Assuming that is correct, and your DICOM image is of a human; even if it isn’t, you have to “pretend” it’s a human regardless.)
As good as Slicer is, it uses RAS coordinates, which is a different way of orienting the DICOM image. The DICOM standard assumes LPS in its documentation, so you have to take that difference into account. Using Slicer as a base for a commercial app is not always the best choice (although it can be).
So, to summarize: how is the STL model placed in the scene?
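For illustration, one common pattern is to load the STL with vtkSTLReader and drive its pose with a single vtkTransform. A minimal Python sketch, where the file path and the pose values are placeholders:

```python
import vtk

stl_reader = vtk.vtkSTLReader()
stl_reader.SetFileName("/path/to/screw.stl")  # placeholder path

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(stl_reader.GetOutputPort())

# A single vtkTransform holds the screw's pose in patient coordinates.
screw_transform = vtk.vtkTransform()
screw_transform.Translate(10.0, -5.0, 30.0)      # placeholder position (mm)
screw_transform.RotateWXYZ(15.0, 0.0, 1.0, 0.0)  # placeholder angle/axis

actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.SetUserTransform(screw_transform)
```

If you create one such actor per renderer (3D, axial, coronal, sagittal, panoramic) but let them all share screw_transform, then updating the transform once and calling Render() on each render window keeps every view consistent, because there is a single source of truth for the pose.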
3D Slicer uses LPS for all file input and output (unless another coordinate system is explicitly specified in the file header). Internally, Slicer indeed uses RAS as the world coordinate system, for historical reasons.
The STL file format does not have a standard way of specifying coordinate system axis directions (not even the units: mm, cm, m). To make STL files a bit safer, Slicer writes this information into the STL file header in the comment field.
All the features related to STL visualization, panoramic views, etc. are already implemented in Slicer. Panoramic reconstruction is provided by the Curved Planar Reformat module in the Sandbox extension. This module provides a two-way mapping between the original and the straightened space, so you can either transform the model into the straightened space, or transform annotations, plans, etc. defined in the straightened space back into the original image.
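For a sense of how little code the model/transform part takes in Slicer, here is a minimal sketch (run in Slicer’s Python console; the file path is a placeholder):

```python
import slicer

screw_model = slicer.util.loadModel("/path/to/screw.stl")  # placeholder path

# Parent the model under a linear transform node; moving/rotating that
# transform (e.g. with the Transforms module sliders) updates the model
# in the 3D view and in every slice view automatically, because all
# views render the same MRML scene.
screw_transform = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode")
screw_model.SetAndObserveTransformNodeID(screw_transform.GetID())
```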
While 3D Slicer is not the only option for implementing VTK/Qt-based medical imaging applications, it is the most flexible and feature-rich free, open-source platform, and it has proven to be suitable for building research applications and regulatory-approved medical devices.
Yes, as I wrote above, within Slicer we still use RAS. It comes from classic neuroimaging conventions. The decision was made about 30 years ago; at that time it was impossible to know whether RAS or LPS was going to win, and later it was hard to change. Conversion is not difficult (multiply by diag(-1, -1, 1, 1), i.e. negate the first two axes), but I agree that it can be an inconvenience.
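For completeness, the conversion as a tiny NumPy sketch (the point coordinates are placeholders):

```python
import numpy as np

# LPS -> RAS: negate the first two axes; the matrix is its own inverse,
# so the same multiplication also maps RAS back to LPS.
lps_to_ras = np.diag([-1.0, -1.0, 1.0, 1.0])

p_lps = np.array([10.0, 20.0, 30.0, 1.0])  # placeholder point (mm)
p_ras = lps_to_ras @ p_lps                 # -> [-10., -20., 30., 1.]

# A 4x4 pose matrix M_lps defined in LPS converts to RAS as:
# M_ras = lps_to_ras @ M_lps @ lps_to_ras
```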