how to correctly pass a custom polygon to vtkPolyDataToImageStencil?

I’m trying to extract ROIs from 3D medical images. Each ROI consists of multiple 2D polygons, which themselves are represented by lists of points. Each polygon lies in a plane perpendicular to one of the coordinate axes, meaning all of its points share the same x, y, or z value. Example:
[[133, 31, 127], [143, 146, 127], [172, 81, 127]]
[[103, 74, 27], [103, 48, 172], [103, 201, 161], [103, 139, 48], [103, 103, 10]]

I tried to at least write a simple prototype for a single polygon on a single 2D slice, like the example, by replacing the vtkSphereSource with my own polyData, but it does not work.
As you can see in the image, I tried 3 ways:
1- Similar to the example.
2- Similar to the example with the difference that the sphere’s vtkPolyData is passed instead of the vtkAlgorithmOutput.
3- My custom polygon polyData is passed to vtkPolyDataToImageStencil.

The first and second methods work fine, but mine (the third) does not. What could I be doing wrong?

Another problem is that, in my main project, I only have access to the vtkImageData of the loaded NIfTI/DICOM file. When I tried to simulate this here, e.g. with stencil_sphere2.SetInputData(reader.GetOutput()), nothing would render.

Any feedback on whether I’m on the right track with this problem (3D ROI extraction in general, not just this code snippet) would be highly appreciated.


#!/usr/bin/env python
import vtk
import argparse


def get_program_parameters():
    parser = argparse.ArgumentParser()
    parser.add_argument('filename', help='Path to the NIfTI file.')
    args = parser.parse_args()
    return args.filename


def main():
    file_name = get_program_parameters()

    reader = vtk.vtkNIFTIImageReader()
    reader.SetFileName(file_name)
    reader.Update()

    slice_n = 100
    sphere = vtk.vtkSphereSource()
    sphere.SetCenter(100, 100, slice_n)
    sphere.SetRadius(50)
    sphere.Update()

    # My custom polygon: a rectangle in the plane of slice_n.
    points = vtk.vtkPoints()
    points.InsertNextPoint(0, 0, slice_n)
    points.InsertNextPoint(100, 0, slice_n)
    points.InsertNextPoint(100, 200, slice_n)
    points.InsertNextPoint(0, 200, slice_n)
    polygon = vtk.vtkPolygon()
    polygon.GetPointIds().SetNumberOfIds(4)
    polygon.GetPointIds().SetId(0, 0)
    polygon.GetPointIds().SetId(1, 1)
    polygon.GetPointIds().SetId(2, 2)
    polygon.GetPointIds().SetId(3, 3)

    polygons = vtk.vtkCellArray()
    polygons.InsertNextCell(polygon)
    polygonPolyData = vtk.vtkPolyData()
    polygonPolyData.SetPoints(points)
    polygonPolyData.SetPolys(polygons)

    # 1- Similar to the example: pass the vtkAlgorithmOutput.
    data2stencil_sphere1 = vtk.vtkPolyDataToImageStencil()
    data2stencil_sphere1.SetInputConnection(sphere.GetOutputPort())
    data2stencil_sphere1.SetOutputOrigin(0, 0, 0)
    stencil_sphere1 = vtk.vtkImageStencil()
    stencil_sphere1.SetInputConnection(reader.GetOutputPort())
    # stencil_sphere1.SetInputData(reader.GetOutput())  # if this is used instead of the above, nothing renders
    stencil_sphere1.SetStencilConnection(data2stencil_sphere1.GetOutputPort())

    # 2- Similar to the example, but the sphere's vtkPolyData is passed
    #    instead of the vtkAlgorithmOutput.
    data2stencil_sphere2 = vtk.vtkPolyDataToImageStencil()
    data2stencil_sphere2.SetInputData(sphere.GetOutput())  # using the vtkPolyData instead
    data2stencil_sphere2.SetOutputOrigin(0, 0, 0)
    stencil_sphere2 = vtk.vtkImageStencil()
    stencil_sphere2.SetInputConnection(reader.GetOutputPort())
    # stencil_sphere2.SetInputData(reader.GetOutput())  # if this is used instead of the above, nothing renders
    stencil_sphere2.SetStencilConnection(data2stencil_sphere2.GetOutputPort())

    # 3- My custom polygon polyData passed to vtkPolyDataToImageStencil.
    data2stencil_polygon = vtk.vtkPolyDataToImageStencil()
    data2stencil_polygon.SetInputData(polygonPolyData)
    data2stencil_polygon.SetOutputOrigin(0, 0, 0)
    stencil_polygon = vtk.vtkImageStencil()
    stencil_polygon.SetInputConnection(reader.GetOutputPort())
    # stencil_polygon.SetInputData(reader.GetOutput())  # if this is used instead of the above, nothing renders
    stencil_polygon.SetStencilConnection(data2stencil_polygon.GetOutputPort())
    stencil_polygon.SetBackgroundValue(500)  # set to 500 to show nothing is selected

    image_append = vtk.vtkImageAppend()
    image_append.SetAppendAxis(0)
    image_append.AddInputConnection(stencil_sphere1.GetOutputPort())
    image_append.AddInputConnection(stencil_sphere2.GetOutputPort())
    image_append.AddInputConnection(stencil_polygon.GetOutputPort())

    viewer = vtk.vtkImageViewer2()
    viewer.SetInputConnection(image_append.GetOutputPort())
    interactor = vtk.vtkRenderWindowInteractor()
    viewer.SetupInteractor(interactor)
    viewer.SetSlice(slice_n)
    viewer.Render()
    interactor.Start()


if __name__ == '__main__':
    main()

vtkPolyDataToImageStencil expects one of two types of input:

The polydata can either be a closed surface mesh or a series of polyline contours (one contour per slice).

So you would only use SetPolys() if your polydata ROI is a closed surface. But for closed 2D contours, use SetLines().

Explanation: A polygon is not a polyline; these are two distinct kinds of VTK cells, and a polyline is what you need here. So what you need to do is build a vtkCellArray of polylines for the contour, and then use polyData.SetLines(polylines) to create a polydata suitable for vtkPolyDataToImageStencil.

Also, the points must take the spacing, origin, etc. of the image data into account. In general, points in VTK represent positions in physical space; for medical images, positions are usually measured in millimeters. So to make an ROI for slice 100, you must first compute the position (in physical space) of slice 100, and then use that position when setting the points.
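As a sketch of that calculation, with made-up origin and spacing values (in a real program these come from image.GetOrigin() and image.GetSpacing(), and this simple formula assumes an axis-aligned image, i.e. an identity direction matrix):

```python
# Hypothetical image geometry; query the real values from the vtkImageData.
origin = (-90.0, -126.0, -72.0)   # physical position of voxel (0, 0, 0), in mm
spacing = (1.0, 1.0, 2.5)         # voxel size along x, y, z, in mm

slice_n = 100
# Physical z position of slice 100:
z_phys = origin[2] + slice_n * spacing[2]
print(z_phys)  # 178.0
```

All in-plane point coordinates need the same index-to-physical conversion before being inserted into the vtkPoints.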

Addendum regarding the “physical space” that I mentioned in the previous message. In VTK, there is a class called vtkImageTransform for converting a polydata from raw voxel (column, row, slice) indices to physical coordinates. Specifically, this method:

  /**
   * Given a vtkImageData (and hence its associated orientation
   * matrix), and an instance of vtkPointSet, transform its points, as
   * well as any normals and vectors, associated with the
   * vtkPointSet. This is a convenience function, internally it calls
   * TranslatePoints(), TransformPoints(), TransformNormals(), and/or
   * TransformVectors() as appropriate. Note that both the normals and
   * vectors associated with the point and cell data are transformed
   * unless the second signature is called, which controls whether to
   * transform normals and/or vectors.
   */
  static void TransformPointSet(vtkImageData* im, vtkPointSet* ps);

I have never used this method, but several VTK filters use it internally and it should be able to take care of the index-to-point conversion so that you don’t have to do those calculations yourself.


Thanks David. Changing .SetPolys to .SetLines did the trick.