ICP not stitching point clouds together

I have one scan that I split into two point clouds with roughly 10 percent overlap, and I offset one of them by 0.5 units in the X direction. I am trying to use the ICP object to stitch the point clouds back together, but whenever I apply the ICP I always get the same result. I have tried varying the parameters, such as the maximum number of iterations, the number of points loaded, and the maximum mean distance, but the result does not change. Am I using this object wrong? I am following the VTK example for the ICP object: https://kitware.github.io/vtk-examples/site/Python/Filtering/IterativeClosestPoints/
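
For reference, this is roughly how the offset was produced; a minimal sketch assuming both halves are vtkPolyData (scan_half is just an illustrative variable name, not from the actual script):

import vtk

offset = vtk.vtkTransform()
offset.Translate(0.5, 0.0, 0.0)  # shift one cloud by 0.5 units along X

offset_filter = vtk.vtkTransformPolyDataFilter()
offset_filter.SetTransform(offset)
offset_filter.SetInputData(scan_half)
offset_filter.Update()
offset_cloud = offset_filter.GetOutput()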

Before Transformation (just slightly offset to the right):

Results:
1,000,000 points per scan:

This is the function I am using to apply the icp object:

import vtk


def applyICP(source, target):
    # source and target are vtkPolyData; returns the fitted ICP transform
    icp = vtk.vtkIterativeClosestPointTransform()
    icp.SetSource(source)
    icp.SetTarget(target)
    icp.SetMaximumMeanDistance(0.0000001)  # convergence threshold on mean distance
    icp.GetLandmarkTransform().SetModeToRigidBody()  # rotation + translation only
    icp.SetMaximumNumberOfIterations(30)
    icp.CheckMeanDistanceOn()
    icp.Modified()
    icp.Update()
    return icp
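
For completeness, a minimal sketch of how the resulting transform can be applied to the source points, mirroring the VTK example linked above (assuming source and target are vtkPolyData; aligned_source is just an illustrative name):

icp = applyICP(source, target)

# Run the source points through the fitted rigid-body transform
icp_filter = vtk.vtkTransformPolyDataFilter()
icp_filter.SetInputData(source)
icp_filter.SetTransform(icp)
icp_filter.Update()
aligned_source = icp_filter.GetOutput()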

I think you are not transforming the source data with the ICP result; that would explain why they don't match.

In case you have already accounted for that: if you know your points lie on a rectangle, you could use bounding-box matching before the ICP.
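
A minimal sketch of that bounding-box idea, assuming both clouds are vtkPolyData (preAlignByBounds is just an illustrative helper name); note that the ICP object also offers icp.StartByMatchingCentroidsOn() for a similar coarse alignment:

import vtk

def preAlignByBounds(source, target):
    # Translate the source so its bounding-box center matches the target's
    sb = source.GetBounds()  # (xmin, xmax, ymin, ymax, zmin, zmax)
    tb = target.GetBounds()
    shift = [(tb[2 * i] + tb[2 * i + 1]) / 2.0 - (sb[2 * i] + sb[2 * i + 1]) / 2.0
             for i in range(3)]

    t = vtk.vtkTransform()
    t.Translate(shift[0], shift[1], shift[2])

    f = vtk.vtkTransformPolyDataFilter()
    f.SetTransform(t)
    f.SetInputData(source)
    f.Update()
    return f.GetOutput()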

Hope it helps

This is how I am feeding the transform filter's output to the point cloud's mapper:

src.point_cloud_actor.mapper.SetInputConnection(icp_filter.GetOutputPort())

So the goal of this script is to see whether I can register two point clouds without knowing where the overlap is. I have a lot of scans that overlap at different points. They will first go through a function that roughly registers them, and then through this function, whose job is to finely register them. I'm starting out with this data just to test whether the concept is doable.