Recently, a user wanted to align two models from a longitudinal study, but he did not have access to the original image data. If he had the images, he could have used 3DSlicer or Elastix, both of which have state-of-the-art intensity-driven registration algorithms.
I created an example, AlignTwoPolyDatas, that uses a vtkOBBTree to create an oriented bounding box (OBB) for each model. The example uses the corners of the bounding boxes as landmarks for a landmark transform, and then refines that transform with an iterative closest point (ICP) transform. For the original alignment, the OBB alignment, and the ICP refinement, the example computes a metric using vtkHausdorffDistancePointSetFilter. It picks the best of the three approaches and displays the aligned models. The description of AlignTwoPolyDatas provides more details.
Here are the results for the user’s time sequence. The technique can also align “similar” objects.
Here is the user’s data:
and a shark and the great white shark:
and finally a cow and a horse:
Since the orientations of the bounding boxes may differ, the AlignBoundingBoxes function tries ten different rotations. For each rotation, it computes the Hausdorff distance between the target's OBB corners and the transformed source's OBB corners. Finally, it transforms the original source with the rotation that produced the smallest distance.
You will probably need to transform your target object so that its orientation is similar to that of the reference object, as shown in the images above. Generally this is done with vtkTransform and vtkTransformPolyDataFilter, or something similar. To get a feel for what happens, you could open ParaView, load a cone source, and apply the transform filter.
Here are several examples that may also be of use:
I am a bit new to all of this but keen to learn. Could someone advise me on how to save the aligned surface as a .vtk file from this example? I prefer using the Python version. Thanks in advance.