How to properly clip and cut multiple actors

Hello everyone,
I am new to vtk and have a question regarding the proper process to clip and cut multiple actors. In case this matters: I am doing this in Python.
Currently I have a set of vtkActors that are stored in a dictionary, along with a vtkClipPolyData object for each actor. All of the clippers are supplied the same plane and updated, and the output is rendered.
This occurs whenever the user rotates the camera far enough, with a new plane depending on the viewing direction of the camera.
What I now want to do is: every time this clip is “refreshed”, the original actors are also cut (with vtkCutter) on the same plane, and the resulting contours on this plane are used to generate a triangulated cut plane with vtkContourTriangulator and vtkPolyDataMapper.
My problem is that I haven’t yet figured out how to combine all of these contours. Since the objects are partially nested, but not intersecting otherwise, I would want to create a combined plane which shows a cut-through view of all the actors, including the nested ones.
Is there a way to do this?
And is there a smarter way to clip all the actors together, instead of one at a time?
If this is possible, would the resulting output still maintain the properties of the original actors? So, would they keep their colour, for example?

Thanks in advance for any help you can give me with this!


As far as I understand, your problem is that the cut faces of nested actors overlap.

You can solve this at the rendering level by ensuring that the cut plane is just slightly closer to the camera for the embedded actors. This requires almost no change to your code, and performance will be the same as now.

If you cut with a single plane then you can use vtkPlaneCutter, which is probably faster. For each polydata, you can turn the embedded polydata (the next nested polydata) inside out and pass it to the cutter.

The vtkContourTriangulator can cut out holes, too, so probably all you need to do is, for each polydata, append the cross-section of the next embedded polydata.

If you generate the contours from a labelmap volume, then you may also consider displaying cross-sections directly using volume rendering, which might be several orders of magnitude faster if you have many segments or the segments are complex (e.g., not drawn by hand but created by thresholding a somewhat noisy image).

Thanks for the fast reply!
I think your suggestion to append the inside-out embedded polydata to achieve the holes in the outer objects solves my problem already.
As far as I understand, your second suggestion is to use volume rendering: that would require something like 3D Slicer, right? Given the modules I currently use to create the scene, I don't think I can implement that approach, since I am not experienced enough to connect them to something like 3D Slicer.
Do you think there is another way to improve the performance, by just using vtk? Is there a way to group the data before clipping and cutting, which still allows the result to be read as independent actors (for selections, etc.)? The reason is that the clipping and cutting process could also occur in scan-like movement through the objects and that would require a decent performance to avoid too much lag.
Also, before I saw your reply, I tried to follow this example. So far I managed to create the combined contours, but didn't manage to apply unique colours, or even the appropriate colours from the original actors, to the resulting filled contours.
Which way would you think is better performance-wise: filling the contours this way or with the vtkContourTriangulator?

VTK provides volume rendering, so with a bit of work you can set it up to render your segments.

I would recommend using 3D Slicer as a platform for medical imaging applications (as opposed to just plain VTK), because you'll realize that there are thousands of small things that you need to develop (and test, and maintain) on top of VTK to reach the minimum feature set that users nowadays expect from a desktop application.

If you are absolutely sure that you will only ever need a “lightweight” application with very basic image viewing and editing features, then developing a simple VTK-based application may look like a good idea. However, it is not 2010 anymore, and lightweight applications are expected to run in the web browser. Nobody wants to install a desktop application, download the data from somewhere, and load it into the application just for some small viewing and editing task. Today, only high-end desktop applications are viable: those that offer significantly more features, better performance, and a more convenient user experience than any web application can. Note that the bar is being raised even for web applications: you cannot release anything less capable than existing free, open-source web viewers, such as OHIF.

So, the time for starting development of new medical applications from scratch is probably over, for both desktop and web environments: you need to choose a good platform and extend/customize it. Medical AR/VR application frameworks are still quite immature (none of the game engines are well suited for medical imaging, and 3D Slicer's VR solution is still nascent), so if you really want to develop an application from scratch, you may look into this area.

I found a video showing volume rendering in VTK and it looks really good!
And I am also aware that there is no point in programming an application from scratch.
But the thing is, I already intend to extend an existing application.
I am working in research, and we are using the Allen Brain Atlas to plan our brain surgeries. I wanted to use the Brainrender module from Brainglobe and its associated GUI to display a user-defined selection of brain regions from that atlas, and then write a modification which allows free clipping and cutting through the resulting model. This would look and feel close to the books and 2D cuts which are currently used in my lab.
That is why I wanted to improve the performance of that cutting step: since this is the only thing I (currently) want to add to the app, I wouldn't want to introduce any unnecessary lag.
Do you think there is any way to speed up the process of clipping and then capping the clipped surface with filled contours, besides using vtkPlaneCutter, especially if the user might potentially add many smaller actors (from .obj files)?
Also, is it, in your opinion, better to follow the procedure in the example, or to use vtkContourTriangulator?

This is just an example of the lightweight desktop visualization applications that I think are no longer viable in the long term. It may fulfill the needs of its single developer and a handful of users for a few projects, but, for example, if you want to see the images (not just extracted surfaces), or you need to segment additional structures, visualize surgical tools, import DICOM data, export plans, or interface with guidance systems, then you realize that there is a lot of implementation and integration work to do. Such small tools/libraries can serve as a feasibility prototype for a full application, or they can be used as a library within an application (for example, if you like the visualization modes it offers, you could pip install it in 3D Slicer and use it in combination with the existing surgical planning tools).

If you want to find the best possible solution (robust, accurate, and fast enough), then you need to implement all the promising options and test them with your typical data sets, on typical hardware configurations.

If you find that none of the methods are fast enough to operate on the full-resolution data, then you can use a level-of-detail (LOD) technique to achieve arbitrarily fast clipping, at the cost of temporary image degradation (use a lower-resolution mesh while interacting with the clipping plane and compute the full-resolution result once the plane stops moving).

I understand what you mean. I found this. Do you know if there are any brain atlases of mice which are supported in, for example, 3D Slicer? Or are there other applications I should look at?
In my case I would be mainly interested in planning and visualizing injections with pipettes.

I will work for now with vtkPlaneCutter and vtkContourTriangulator, after appending the inverted inner meshes with vtkAppendPolyData, as you suggested.

I should probably use vtkDecimatePro for this, right? Or are there better approaches?

You should be able to load any atlas (images, models, etc.) into 3D Slicer that fits into memory. Needle-based interventions have been an important use case for Slicer from the very beginning, so there are readily usable planning and visualization tools.

The output mesh quality of vtkDecimatePro is not very good; quadric decimation (vtkQuadricDecimation) should work better. Note that VTK already has built-in LOD support, but maybe it is only for rendering.

You may also improve speed by identifying largest, most complex meshes and focus on reducing those (decimate and/or split them).

That's good to hear, but I would also need access to the structure tree, for example, or to load the tractography from previous experiments. Is there any program or extension that you could recommend for this?

I will try to implement the decimation with vtkQuadricDecimation, since the current modules I use utilize vtkFollower and I don't want to change too much.

What metric would you suggest I use to decide which of the meshes are the most complex? I would use the number of points or polygons, but I would like to hear your opinion.

Also, in the interface I currently use, there is a neat selection feature which allows the users to pick and edit an actor. Do you know of a way to connect the “cap” created by vtkContourTriangulator to an actor, without appending the polydata, so that the user can still pick the actor by selecting its cap?

For this, you need to write a short Python script that creates the data tree in Slicer from the input CSV file. See, for example, this post (and the attached script) on importing the FMA atlas.

SlicerDMRI extension can import/generate/visualize tractography data.

Yes, these are good metrics.

It is up to you. You can either append the polydata and get the selection in a single actor, or you can display the cap and the intersection plane as two actors.

This looks great! Thanks. I should be able to work with this. And I think SlicerDMRI should work well, too. The only thing I am worried about is whether the format of the tractography data in the mouse brain projects I have access to is the same as in SlicerDMRI.

I will probably display them as two actors, but I will have to find some way to translate the selection of either actor into the selection of the original actor.

Thanks a lot for your time and help on this question! Also thank you for the perspective on the best way forward with this extension. If I run into any other problems I will ask them in a new thread, so that the thread stays on topic.