Recently, a user wanted to align two models from a longitudinal study. He did not have access to the original image data. If he had the images, he could have used 3DSlicer or Elastix, both of which have state-of-the-art intensity-driven registration algorithms.
VTK provides several "registration" techniques: vtkLandmarkTransform, vtkIterativeClosestPointTransform, and vtkThinPlateSplineTransform.
Each of these algorithms requires landmarks to exist in both the source and target models.
I created an example, AlignTwoPolyDatas, that uses a vtkOBBTree to create oriented bounding boxes for each model. The example uses the corners of the bounding boxes as landmarks for the landmark transform, and then refines that transform with the iterative closest point transform. For each of the three alignments — the original, the oriented-bounding-box landmark transform, and the iterative closest point refinement — the example computes a metric using the vtkHausdorffDistancePointSetFilter. It then picks the best of the three and displays the aligned models. The description of AlignTwoPolyDatas provides more details.
Here are the results for the user’s time sequence. The technique can also align “similar” objects.
Here is the user’s data:
and a shark and the great white shark:
and finally a cow and a horse:
Enjoy the VTKExamples,