After reslice, do the world coordinates of a point change?

Hello - I have run into some basic issues and have been unable to find a clear explanation on the web.

I have a brain scan that I want to reorient so that the AC (anterior commissure point in the brain) and PC (posterior commissure point) end up in the same slice.

I am using vtkImageReslice to reslice the original scan into the AC-PC orientation.

However, after the reslice, when I try to plot the AC and PC world coordinates that I had, they do not land in the desired location.

I also tried applying the same transformation that I applied to the original scan (to derive the AC-PC orientation) to the AC and PC points, to see if that puts the points in the same place - however, that still failed.

I am not sure what I am doing wrong.

My confusion now is: what needs to be done after the reslice to plot the same points from before the reslice on the resliced version of my scans?

Thanks

When vtkImageReslice resamples an image, the coordinates of the features in the image (such as AC and PC) will change. To find the new coordinates, apply the inverse of the reslice transform to the old points.
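As a minimal numpy sketch of that idea (assuming the reslice transform is a pure rotation `R` with no translation, and using a made-up rotation and point): vtkImageReslice samples the input at `R` times each output coordinate, so a feature at input point `p` lands at `inv(R) @ p` in the output.

```python
import numpy as np

# Hypothetical reslice rotation (10 degrees about x), standing in for
# whatever matrix was given to vtkImageReslice.
theta = np.radians(10.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])

p = np.array([109.88, 112.77, -88.20])  # e.g. the AC point

# New location of the feature in the resliced output:
p_new = np.linalg.inv(R) @ p

# Sanity check: mapping p_new back through R recovers the original point.
assert np.allclose(R @ p_new, p)
```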

Hi David - Thanks for the reply.

As you said, I did try applying the inverse transform to the old AC/PC points; however, in my case the points still do not show up correctly on the AC-PC resliced view.

I have a 3×3 rotation matrix and I am using vtkImageReslice as below:

rotation = np.array([[0.999911, -0.0131206, -0.00248486],
                     [0.0133539, 0.982447, 0.186062],
                     [0.0, -0.186078, 0.982535]])

self.acpcReslice = vtkImageReslice()
self.acpcReslice.SetInputConnection(self.color.GetOutputPort())
self.acpcReslice.SetResliceAxesDirectionCosines(rotation.flatten())
self.acpcReslice.SetOutputDimensionality(3)
self.acpcReslice.SetInterpolationModeToLinear()

We take the output from here and render it in vtkImageViewer2.

When we try to place the new positions of the old AC/PC points there, they are always wrong:

mat = vtkMatrix4x4()
mat.DeepCopy((0.999911, -0.0131206, -0.00248486, 0,
              0.0133539, 0.982447, 0.186062, 0,
              0, -0.186078, 0.982535, 0,
              0, 0, 0, 1))
transform = vtkTransform()
transform.SetMatrix(mat)

new_AC_point = transform.TransformPoint(AC_worldx, AC_worldy, AC_worldz)

The method I am using to compute the AC-PC transformation:

        ih = []
        ih.append((AC_worldx+ PC_worldx) * 0.5)
        ih.append((AC_worldy+ PC_worldy) * 0.5)
        ih.append(((AC_worldz+ PC_worldz) * 0.5) + 0.01)

        ih = np.array(ih)
        ih.flatten()

        ac = []
        ac.append(AC_worldx, AC_worldy, AC_worldz)
        ac = np.array(ac)
        ac.flatten()

        pc = []
        pc.append(PC_worldx, PC_worldy, PC_worldz)
        pc = np.array(pc)
        pc.flatten()

        pcAc = ac - pc
        yAxis = pcAc / np.linalg.norm(pcAc)

        # Lateral axis
        acIhDir = ih - ac
        xAxis = np.cross(yAxis, acIhDir)
        print("xaxis ", xAxis)

        xAxis /= np.linalg.norm(xAxis)

        # Rostrocaudal axis
        zAxis = np.cross(xAxis, yAxis)

        # Rotation matrix
        rotation = np.vstack([xAxis, yAxis, zAxis])

What could be the issue?

Your code is not inverting the transform.

Hi David - Apologies, I think I missed including the code where I am doing the inverse.

mat = vtkMatrix4x4()
mat.DeepCopy((0.99999, -0.00434001, -0.000769226, 0,
              0.00440765, 0.984644, 0.174519, 0,
              2.81882e-15, -0.174521, 0.984654, 0,
              0, 0, 0, 1))
transform = vtkTransform()
transform.SetMatrix(mat)
transform.Inverse()

The inverse of the above matrix gives me:

     0.999991, 0.00440765, -4.39646e-10, 0, 
    -0.00434001, 0.984644, -0.174521, 0, 
    -0.000769227, 0.174519, 0.984653, 0, 
     0, 0, 0, 1

After the inverse:

new_AC_point = transform.TransformPoint(AC_worldx, AC_worldy, AC_worldz)

This is still not placing my AC point as desired.

I’ll take a closer look to see if I can find anything. Also, try to post the code you are actually running, instead of posting pseudocode. For example, the following code doesn’t run (it gives the error “append() takes exactly one argument”).

Usually, this would be written as one line:

ac = np.array([AC_worldx, AC_worldy, AC_worldz])

The “ac.flatten()” at the end doesn’t do anything.

Thanks David.

The complete code is as follows:

ACPC realignment:

    # self.ac_dict['axial_coords'] , self.pc_dict['axial_coords'] -- this is a tuple 

    ih = []
    ih.append((self.ac_dict['axial_coords'][0] + self.pc_dict['axial_coords'][0]) * 0.5)
    ih.append((self.ac_dict['axial_coords'][1] + self.pc_dict['axial_coords'][1]) * 0.5)
    ih.append(((self.ac_dict['axial_coords'][2] + self.pc_dict['axial_coords'][2]) * 0.5) + 0.01)

    ih = np.array(ih)
    ih.flatten()
    # ih = ih[0]
    # ih value here is:  [109.9285868  101.48622267 -86.18550803]

    ac = []
    ac.append(self.ac_dict['axial_coords']) # this is a tuple 
    ac = np.array(ac)
    ac.flatten()
    ac = ac[0]
    # ac value here is: [109.8788505  112.77020418 -88.19550803]

    pc = []
    pc.append(self.pc_dict['axial_coords']) # this is a tuple 
    pc = np.array(pc)
    pc.flatten()
    pc = pc[0]
    # pc value here is: [109.97832309  90.20224115 -84.19550803]

    pcAc = ac - pc
    yAxis = pcAc / np.linalg.norm(pcAc)
    # Lateral axis
    acIhDir = ih - ac
    xAxis = np.cross(yAxis, acIhDir)
    print("xaxis ", xAxis)

    xAxis /= np.linalg.norm(xAxis)
    # Rostrocaudal axis
    zAxis = np.cross(xAxis, yAxis)

    print("z axis", zAxis)

    # Rotation matrix
    rotation = np.vstack([xAxis, yAxis, zAxis])

    # AC in rotated space
    translation = np.dot(rotation, ac)
    print("trans: ", translation)
    # Build homogeneous matrix
    matrix = np.eye(4)
    matrix[:3, :3] = rotation
    # matrix[:3, 3] = translation

    bounds = self.reader.GetOutput().GetBounds()
    center = ((bounds[1] + bounds[0]) / 2.0, (bounds[3] + bounds[2]) / 2.0, (bounds[5] + bounds[4]) / 2.0)

    self.acpcReslice = vtkImageReslice()
    if self.blend is None:
        self.acpcReslice.SetInputConnection(self.color.GetOutputPort())
    else:
        self.acpcReslice.SetInputConnection(self.blend.GetOutputPort())

    self.acpcReslice.SetResliceAxesDirectionCosines(rotation.flatten())
    self.acpcReslice.SetOutputDimensionality(3)
    self.acpcReslice.SetInterpolationModeToLinear()
    self.acpcReslice.Update()

    print("new origin: ", self.acpcReslice.GetOutput().GetOrigin())
    print("new spacing: ", self.acpcReslice.GetOutput().GetSpacing())
    self.spacing = self.acpcReslice.GetOutput().GetSpacing()
    
    # the imageviewer used is vtkImageViewer2()
    self.axial_imageviewer.SetInputConnection(self.acpcReslice.GetOutputPort())
    self.axial_imageviewer.UpdateDisplayExtent()
    self.axial_imageviewer.Render()

    print(self.acpcReslice.GetResliceAxes())

The last print statement gives:

     (0.99999, -0.00434001, -0.000769226, 0,
      0.00440765, 0.984644, 0.174519, 0,
      2.81882e-15, -0.174521, 0.984654, 0,
      0, 0, 0, 1)

Place AC point on resliced image:

        worldx = (self.ac_dict['axial_coords'][0])
        worldy = (self.ac_dict['axial_coords'][1])
        worldz = (self.ac_dict['axial_coords'][2])

        print("world: ", worldx, worldy, worldz)

        mat = vtkMatrix4x4()
        mat.DeepCopy((0.99999, -0.00434001, -0.000769226, 0,
                      0.00440765, 0.984644, 0.174519, 0,
                      2.81882e-15, -0.174521, 0.984654, 0,
                      0, 0, 0, 1))
        transform = vtkTransform()
        transform.SetMatrix(mat)
        transform.Inverse()
        new_point = transform.TransformPoint(worldx, worldy, worldz)

        worldx = new_point[0]
        worldy = new_point[1]
        worldz = new_point[2]

Even though this message board is for VTK, I can’t help but comment on the way that Python and numpy are used in your code, so I’ll do that first and discuss VTK last.


First, the Python comments. Please don’t use ‘append’ like this:

ac.append(self.ac_dict['axial_coords']) # this is a tuple

Because it will give you a tuple inside a list, like this:

[(109.8788505, 112.77020418, -88.19550803)]

Instead, use ac.extend() if you want to append multiple elements:

ac.extend(self.ac_dict['axial_coords']) # adds tuple's elements to the list
# [109.8788505, 112.77020418, -88.19550803]

Or even better, just convert the tuple to a list:

ac = list(self.ac_dict['axial_coords'])

And I’ll explain my previous comment about “flatten()”. The flatten() method returns a flat array, so if you don’t keep the return value, it actually isn’t doing anything. For example,

a = np.array([(109.8788505, 112.77020418, -88.19550803)])
# a: array([[109.8788505, 112.77020418, -88.19550803]])
b = a.flatten()
# b looks good: array([109.8788505, 112.77020418, -88.19550803])
# a is unchanged: array([[109.8788505, 112.77020418, -88.19550803]])

But if you properly construct the list, then “flatten()” is not needed. Neither is "ac = ac[0]".
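To make the append/extend/list distinction concrete, here is a small runnable comparison (plain Python and numpy, no VTK needed; the coordinate tuple stands in for `ac_dict['axial_coords']`):

```python
import numpy as np

coords = (109.8788505, 112.77020418, -88.19550803)  # stand-in for ac_dict['axial_coords']

# append puts the whole tuple in as one element -> shape (1, 3) after np.array
a = []
a.append(coords)
assert np.array(a).shape == (1, 3)

# extend adds the tuple's elements individually -> shape (3,)
b = []
b.extend(coords)
assert np.array(b).shape == (3,)

# simplest: convert the tuple directly
c = np.array(coords)
assert c.shape == (3,)

# flatten is only needed in the first case, and only if you keep its return value
assert np.allclose(np.array(a).flatten(), c)
```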


Next, the calculation of the rotation matrix is mostly fine, but some things should be changed.

First, when you compute the ‘yAxis’, you should check that it is pointing in the right direction, and reverse it if it isn’t:

if yAxis[1] < 0.0:
    yAxis = -yAxis

Second, your method of computing xAxis is not good, because yAxis and acIhDir are almost exactly parallel to each other:

xAxis = np.cross(yAxis, acIhDir)

Instead, you should cross yAxis with the original (unrotated) z axis, like this:

xAxis = np.cross(yAxis, np.array([0.0, 0.0, 1.0]))
xAxis /= np.linalg.norm(xAxis)
zAxis = np.cross(xAxis, yAxis)

Then, when you create a 4x4 matrix, you should always choose a center of rotation. This is done as follows, for example with ac as the center:

matrix = np.eye(4)
matrix[:3, :3] = rotation
matrix[:3, 3] = ac - np.dot(rotation, ac) # ac is center of rotation
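A quick way to verify the center-of-rotation matrix (with a hypothetical rotation and AC point, since the real ones come from your data) is to check that the center maps to itself while other points rotate about it:

```python
import numpy as np

# Hypothetical 10-degree rotation about x, and an AC point near your values
theta = np.radians(10.0)
rotation = np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(theta), -np.sin(theta)],
                     [0.0, np.sin(theta),  np.cos(theta)]])
ac = np.array([109.88, 112.77, -88.20])

matrix = np.eye(4)
matrix[:3, :3] = rotation
matrix[:3, 3] = ac - rotation @ ac  # ac is center of rotation

# The center of rotation is a fixed point of the transform
assert np.allclose(matrix @ np.append(ac, 1.0), np.append(ac, 1.0))

# Any other point is rotated about ac
p = np.array([100.0, 100.0, -80.0])
expected = rotation @ (p - ac) + ac
assert np.allclose((matrix @ np.append(p, 1.0))[:3], expected)
```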

Next, the rotation matrix can be turned into a vtkMatrix4x4 and put into vtkImageReslice:

resliceAxes = vtkMatrix4x4()
resliceAxes.DeepCopy(matrix.flatten())
resliceAxes.Inverse()  # resliceAxes are inverse of rotation matrix
reslice.SetResliceAxes(resliceAxes)

There are two reasons to use “SetResliceAxes()” instead of “SetResliceAxesDirectionCosines()”:

  1. SetResliceAxes() allows you to define a center of rotation.
  2. SetResliceAxesDirectionCosines() actually sets columns of the matrix instead of the rows, so it sets the transpose of the matrix (which is equivalent to the inverse for a rotation matrix, so you can see how this gets confusing).
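The transpose-equals-inverse point in item 2 can be checked directly for any proper rotation matrix, for example with the 3×3 block of the matrix you posted:

```python
import numpy as np

R = np.array([[0.99999, -0.00434001, -0.000769226],
              [0.00440765, 0.984644, 0.174519],
              [2.81882e-15, -0.174521, 0.984654]])

# For a rotation matrix, transpose == inverse (to numerical precision)
assert np.allclose(R.T, np.linalg.inv(R), atol=1e-4)
assert np.allclose(R @ R.T, np.eye(3), atol=1e-4)
```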

Since vtkImageReslice is using the inverse transform, you use the forward transform for the AC and PC:

transmat = vtkMatrix4x4()
transmat.DeepCopy(matrix.flatten())
transform = vtkTransform()
transform.SetMatrix(transmat)
(worldx, worldy, worldz) = transform.TransformPoint(worldx, worldy, worldz)
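To see why the forward transform is the right one here, a numpy round trip helps (using a hypothetical rotation in place of `matrix`): vtkImageReslice samples the input at `resliceAxes @ x` for each output point `x`, so if `resliceAxes` is `inv(matrix)`, the feature at input point `p` appears at the output point `x = matrix @ p`, i.e. the forward transform.

```python
import numpy as np

# Hypothetical ACPC rotation standing in for 'matrix' above
theta = np.radians(10.0)
M = np.eye(4)
M[:3, :3] = [[1.0, 0.0, 0.0],
             [0.0, np.cos(theta), -np.sin(theta)],
             [0.0, np.sin(theta),  np.cos(theta)]]

resliceAxes = np.linalg.inv(M)  # what gets handed to SetResliceAxes

p = np.array([109.88, 112.77, -88.20, 1.0])  # AC in homogeneous coords

# The output point x that samples the input at p satisfies
# resliceAxes @ x == p, i.e. x = M @ p: the forward transform.
x = M @ p
assert np.allclose(resliceAxes @ x, p)
```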

Sorry about the confusion about where the “inverse” should be done. After reading your code for computing the rotation, I realized the inverse transform should be applied to vtkImageReslice, not to the world coords. Also, it wasn’t until I read the SetResliceAxesDirectionCosines() documentation that I understood that it transposes the matrix.

Also, accounting for the transposition vs. inverse, the only really major difference between my suggestions and your original code is the use of a center-of-rotation. So, to be honest, I think your original code should have worked well enough, but maybe there is something funny going on with the vtkImageViewer2 class.

Also, how are you measuring the AC and PC points? Are you measuring them in VTK, or are you measuring them in some other piece of software? Does VTK correctly place the unrotated AC and PC points on the unrotated volume?