ICP transform fails when aligning incomplete source polydata with target polydata

Hi there,

I am currently using VTK to process 3D scan data and need help aligning polydata. The 3D scan data I obtain is randomly oriented and somewhat noisy, but it contains fiducial markers, so my first step is to align the scanned and reference fiducials. The initial state is shown below:

First, I manually select point pairs between the reference and scanned fiducials, then align them with a rigid body landmark transformation - no problems here:

Next, I would like to use vtkIterativeClosestPointTransform() to refine the registration of the closely aligned surface data. Here is my code:

    # set up the ICP transform on the pre-aligned polydata
    icp = vtkIterativeClosestPointTransform()
    icp.SetSource(sourcePolyData)
    icp.SetTarget(targetPolyData)
    icp.GetLandmarkTransform().SetModeToRigidBody()
    icp.StartByMatchingCentroidsOff()
    icp.Update()
    # transform the source with the resulting matrix
    icpTransformFilter = vtkTransformPolyDataFilter()
    icpTransformFilter.SetInputData(sourcePolyData)
    icpTransformFilter.SetTransform(icp)
    icpTransformFilter.Update()

Given that my source and target polydata are closely aligned, and the ICP algorithm matches two surfaces based on the closest surface points, I expected an improvement over my initial approximate alignment. However, I am finding that the transformed result is significantly misaligned:

How can I get the ICP transform to consider only the surfaces in the immediate neighbourhood? It appears that the ICP transformation is trying to align the centroids while considering the bottom fiducials at the negative-most y-position, even though the surfaces of those fiducials don’t correspond to any data in my scanned polydata. I see a similar result if a portion of the bottom fiducials is included in the scan: the alignment appears to match only the centroids and not the surface information, even though I have set StartByMatchingCentroidsOff():

Does anyone have any ideas on how to refine the registration?

Thank you for your assistance!

Typically one would introduce a distance cutoff so that points too far away are not included in the correspondences. The ICP of VMTK is modified to do exactly that; see vmtk - the Vascular Modelling Toolkit. If you have compiled VTK yourself, it should be fairly easy to modify the filter for this purpose and compile VTK with Python bindings. Alternatively, you could use the VTK bundled with VMTK, or implement ICP in Python: you can inherit from VTKPythonAlgorithmBase and translate the code from VMTK, since all the VTK objects needed are available from Python.
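To show the cutoff idea without touching VTK's C++ at all, here is a minimal numpy-only sketch (my own illustration, not the VMTK implementation): one rigid ICP loop that simply drops correspondences farther than a threshold before the Kabsch/SVD solve. The names `icp_with_cutoff` and `max_dist` are hypothetical.

```python
import numpy as np

def icp_with_cutoff(source, target, max_dist, n_iter=20):
    """Rigid ICP that ignores correspondences farther than max_dist.

    Returns the source points moved into alignment with the target.
    """
    src = source.copy()
    for _ in range(n_iter):
        # nearest-neighbour correspondences (brute force for clarity)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        dist = np.sqrt(d2[np.arange(len(src)), nn])
        # the distance cutoff: drop source points with no nearby target
        keep = dist < max_dist
        if keep.sum() < 3:
            break
        p, q = src[keep], target[nn[keep]]
        # best rigid transform for the kept pairs (Kabsch / SVD)
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = q.mean(0) - R @ p.mean(0)
        src = src @ R.T + t
    return src
```

Because the cutoff excludes source points (e.g. the bottom fiducials) whose nearest target point is far away, those points no longer drag the solve toward the centroids.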

Partial match is not a problem if only one mesh has missing points. You may just need to switch what is used as source and target.
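To illustrate why the direction matters, here is a small numpy sketch with toy data (the `nn_dist` helper is my own): correspondences are formed from source to target, so matching a complete source against a partial target produces far-away, bogus pairs, while the partial-to-complete direction does not.

```python
import numpy as np

# toy data: a full surface (10x10 grid) and a partial, slightly offset scan
full = np.array([[float(x), float(y), 0.0] for x in range(10) for y in range(10)])
partial = full[:50] + 0.05   # only half of the surface was scanned

def nn_dist(src, tgt):
    """Distance from each src point to its nearest tgt point."""
    d = np.sqrt(((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1)

# partial -> full: every scanned point has a true counterpart nearby
close = nn_dist(partial, full).max()
# full -> partial: points in the unscanned half can only match something
# far away, and those bogus correspondences drag the ICP solution off
far = nn_dist(full, partial).max()
```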

VMTK is available in conda for Python 3.9, or in 3D Slicer’s Python environment.


You are right. Correspondences are made from source to target. It would be a small improvement for ICP in VTK to add what you have done in VMTK.

Hi Andras and Jens, thank you for the prompt reply.

I will look into the VMTK implementation. I am currently working with the pip installation of VTK, so I’m unable to readily modify the source code. We are looking to use VTK for app development within the bounds/environment of MeVisLab, which does have some VTK and pip support. For now, though, I’m prototyping the visualization/interaction workflow in a simplified Python script.

I tried to keep my post straightforward and minimal, but the actual code I’m working with does in fact switch the source and target. I use the target as the moving structure, get the inverted matrix with icp.GetMatrix().Invert(), and then apply that transformation to the source structure. Using the code above, or simply swapping the source and target (then applying the inverse transform) as you suggested, still results in a significant misalignment of the structures. My images show the best alignment I am able to obtain.
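For completeness, here is a numpy sketch of what that inversion amounts to. The matrix values below are made up, but for a rigid-body result the general 4x4 inverse that icp.GetMatrix().Invert() computes coincides with the closed form [R | t]⁻¹ = [Rᵀ | -Rᵀt]:

```python
import numpy as np

# a rigid 4x4 matrix standing in for icp.GetMatrix() (values made up here)
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, -2.0, 0.5])
M = np.eye(4)
M[:3, :3] = R
M[:3, 3] = t

# inverse of a rigid transform: [R | t]^-1 = [R^T | -R^T t]
Minv = np.eye(4)
Minv[:3, :3] = R.T
Minv[:3, 3] = -R.T @ t

# moving the target by M and moving the source by M^-1 produce the same
# relative pose: composing the two gives the identity
roundtrip = Minv @ M
```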

The distance cutoff would be a great option when using the ICP transform. My current consideration is to crop out the bottom-most fiducials, but this would require additional code and would not be as robust as we require.

If you want to register fiducial markers visible on CT with the same markers visible on a surface scan then ICP is probably not the best approach.

The main issue is that the appearance of the optical markers on camera images may be impacted by many things (partial occlusion, stray reflections, etc.), so they may not appear as complete spheres. If they are not complete spheres then ICP will not converge to an optimal solution, as it does not use the prior information that each small surface patch is spherical.

You would probably get significantly more accurate results if you fit a circle or sphere to the camera/3D scanner data and register the center of that sphere with the center of the sphere extracted from the CT. The accuracy is better because there are very sophisticated, high-precision circle- and sphere-fitting algorithms for images and point clouds, while ICP is a very primitive, general-purpose method.
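As a sketch of that sphere-fitting step (my own illustration, not a specific library's API): a linear least-squares fit recovers the center of even a partial spherical patch exactly when the data is noiseless, so occlusion alone does not bias the center the way it biases ICP. With real scanner noise a robust fit would be preferable.

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit.

    Uses |x|^2 = 2 c.x + (r^2 - |c|^2), which is linear in the unknowns
    (c, r^2 - |c|^2), so a single lstsq solve gives center and radius.
    """
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

The recovered center from the scan can then be registered to the sphere center extracted from the CT with the same landmark transform used for the fiducials.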