Hello!
I am new to VTK and I am trying to use the VTK ICP algorithm in rigid mode to obtain a transform that I can then use in SimpleITK. I am working in Python.
Here is how I proceed:
- Import a NIfTI file and apply the Discrete Marching Cubes algorithm to obtain the surfaces to register
- Feed the source and target surfaces to vtkIterativeClosestPointTransform() and compute the registration
- Apply the transform to the surfaces through vtkTransformPolyDataFilter() -> Everything looks great, my surfaces are aligned in the viewer
- Retrieve the ICP transform matrix from vtkIterativeClosestPointTransform() and convert it into physical coordinates (at least the translation part)
- Apply this transform to my ITK images through a sitk.Euler3DTransform() -> The images are not aligned at all
I have also tried the inverse transform just in case, but the results are bad too. I have tried doing everything in VTK as well, with the same results.
Here is my code (I skip the part where I read the DICOM files in SimpleITK to get fixed_image and moving_image):
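To explain what I mean by "tried with the inverse": since (as far as I understand) sitk.Resample expects the transform mapping points from the fixed image into the moving image, the source-to-target ICP matrix would need to be inverted before use. For a rigid matrix the inverse has a closed form, which I sanity-checked with plain NumPy (the rotation and translation values below are made up for illustration):

```python
import numpy as np

# Illustrative rigid 4x4 (placeholder values): a rotation about z plus a translation.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, -5.0, 2.0])

M = np.eye(4)
M[:3, :3] = R
M[:3, 3] = t

# Closed-form inverse of a rigid transform: rotation becomes R^T,
# translation becomes -R^T @ t.
M_inv = np.eye(4)
M_inv[:3, :3] = R.T
M_inv[:3, 3] = -R.T @ t

# Check: composing the transform with its inverse gives the identity.
assert np.allclose(M @ M_inv, np.eye(4))
```

So at least the inversion itself should not be the source of the error.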
def extract_surface_from_nifty(mask, image, option_visu=0):
    source_reader = vtk.vtkNIFTIImageReader()
    source_reader.SetFileName(mask)
    origin = image.GetOrigin()
    spacing = image.GetSpacing()
    source_reader.SetDataSpacing(spacing)
    source_reader.SetDataOrigin(origin)
    print("Start Discrete Marching Cubes algo")
    source_dmc = vtk.vtkDiscreteMarchingCubes()
    source_dmc.SetInputConnection(source_reader.GetOutputPort())
    source_dmc.SetValue(0, 255)
    source_dmc.Update()
    return source_dmc
def icp_transform(fixed_mask, moving_mask, fixed_image, moving_image):
    target = extract_surface_from_nifty(fixed_mask, fixed_image, option_visu=0)
    source = extract_surface_from_nifty(moving_mask, moving_image, option_visu=0)
    icp = vtk.vtkIterativeClosestPointTransform()
    icp.SetSource(source.GetOutput())
    icp.SetTarget(target.GetOutput())
    icp.GetLandmarkTransform().SetModeToRigidBody()
    icp.DebugOn()
    icp.SetMaximumNumberOfIterations(20)
    icp.StartByMatchingCentroidsOn()
    icp.Modified()
    icp.Update()
    icpTransformFilter = vtk.vtkTransformPolyDataFilter()
    icpTransformFilter.SetInputData(source.GetOutput())
    icpTransformFilter.SetTransform(icp)
    icpTransformFilter.Update()
    transformedSource = icpTransformFilter.GetOutput()
    return target, source, transformedSource, icp, icpTransformFilter
target, source, transformedSource, icp, icpTx = icp_transform(path_fixed_mask, path_moving_mask, fixed_image, moving_image)
translation = [icp.GetMatrix().GetElement(0, 3),
               icp.GetMatrix().GetElement(1, 3),
               icp.GetMatrix().GetElement(2, 3)]
spacing = moving_image.GetSpacing()
origin = moving_image.GetOrigin()
w_translation = np.array(origin) + np.array(spacing) * np.array(translation)
outTx_itk = sitk.Euler3DTransform()
outTx_itk.SetTranslation(w_translation)
moving_resampled = sitk.Resample(moving_image, fixed_image, outTx_itk, sitk.sitkLinear, 0.0, moving_image.GetPixelID())
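One thing I am now second-guessing: since I set the spacing and origin on the reader, the marching-cubes points should already be in physical coordinates, so the ICP translation might already be in mm and the spacing/origin conversion above might be wrong. The other variant I was considering is passing the full 4x4 (rotation included, not just the translation) to SimpleITK; here is a sketch of how I would flatten it (the matrix values are placeholders standing in for icp.GetMatrix(), and the commented SimpleITK calls are the part I am unsure about):

```python
import numpy as np

# Stand-in for the values of icp.GetMatrix(): a 4x4 homogeneous matrix,
# assumed to be in world (mm) coordinates. Placeholder values only.
m = np.array([[1.0, 0.0,  0.0,  12.5],
              [0.0, 0.0, -1.0,   3.0],
              [0.0, 1.0,  0.0, -40.0],
              [0.0, 0.0,  0.0,   1.0]])

rotation = m[:3, :3].flatten().tolist()  # 9 row-major values for SetMatrix
translation = m[:3, 3].tolist()          # 3 values for SetTranslation

# The SimpleITK side would then be something like (not verified):
# tx = sitk.AffineTransform(3)
# tx.SetMatrix(rotation)
# tx.SetTranslation(translation)
# and possibly tx = tx.GetInverse(), since Resample maps fixed -> moving.

assert len(rotation) == 9 and len(translation) == 3
```

Does that direction make more sense than scaling the translation by the spacing?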
Do you have an idea why it isn't working? I suppose it has something to do with coordinate systems, but I have run out of ideas… I have also tried entering the transform's parameters in Slicer, but it gives the same results as SimpleITK…