Even though this message board is for VTK, I can’t help but comment on the way that python and numpy are used in your code, so I’ll do that first and discuss VTK last.
First, the Python comments. Please don’t use ‘append’ like this:
ac.append(self.ac_dict['axial_coords']) # this is a tuple
Because it will give you a tuple inside a list, like this:
[(109.8788505, 112.77020418, -88.19550803)]
Use ac.extend() if you want to append the tuple’s individual elements:
ac.extend(self.ac_dict['axial_coords']) # adds tuple's elements to the list
# ac: [109.8788505, 112.77020418, -88.19550803]
Or even better, just convert the tuple to a list:
ac = list(self.ac_dict['axial_coords'])
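To make the difference concrete, here is a small runnable comparison of the three approaches (the coordinate tuple is made up, standing in for self.ac_dict['axial_coords']):

```python
# Hypothetical coordinate tuple, standing in for self.ac_dict['axial_coords']
coords = (109.8788505, 112.77020418, -88.19550803)

nested = []
nested.append(coords)   # nests the whole tuple: [(109.87..., ..., -88.19...)]

flat = []
flat.extend(coords)     # adds the tuple's elements: [109.87..., ..., -88.19...]

ac = list(coords)       # simplest: convert the tuple directly
print(ac == flat)       # True
```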
And I’ll explain my previous comment about “flatten()”. The flatten() method returns a flattened copy of the array, so if you don’t keep the return value, it actually isn’t doing anything. For example,
a = np.array([(109.8788505, 112.77020418, -88.19550803)])
# a: array([[109.8788505, 112.77020418, -88.19550803]])
b = a.flatten()
# b looks good: array([109.8788505, 112.77020418, -88.19550803])
# a is unchanged: array([[109.8788505, 112.77020418, -88.19550803]])
But if you properly construct the list, then “flatten()” is not needed. Neither is “ac = ac”.
Next, the calculation of the rotation matrix is mostly fine, but some things should be changed.
First, when you compute the ‘yAxis’, you should check that it is pointing in the right direction, and reverse it if it isn’t. Since yAxis is a vector, compare it against a reference direction (here the unrotated y axis; adjust to your convention):
if np.dot(yAxis, np.array([0.0, 1.0, 0.0])) < 0.0:
    yAxis = -yAxis
Second, your method of computing xAxis is not good, because yAxis and acIhDir are almost exactly parallel to each other:
xAxis = np.cross(yAxis, acIhDir)
Instead, you should cross yAxis with the original (unrotated) z axis, like this:
xAxis = np.cross(yAxis, np.array([0.0, 0.0, 1.0]))
xAxis /= np.linalg.norm(xAxis)
zAxis = np.cross(xAxis, yAxis)
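Putting the axis construction together, here is a runnable sketch (the initial yAxis is made up for illustration; yours comes from your AC/PC landmarks) showing that the resulting frame is orthonormal:

```python
import numpy as np

# A made-up, roughly-vertical direction standing in for your computed yAxis
yAxis = np.array([0.1, 0.9, 0.2])
yAxis /= np.linalg.norm(yAxis)

# Cross with the unrotated z axis instead of a nearly-parallel vector
xAxis = np.cross(yAxis, np.array([0.0, 0.0, 1.0]))
xAxis /= np.linalg.norm(xAxis)
zAxis = np.cross(xAxis, yAxis)

rotation = np.stack([xAxis, yAxis, zAxis], axis=1)  # axes as columns

# The frame is orthonormal: R^T R == I
print(np.allclose(rotation.T @ rotation, np.eye(3)))  # True
```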
Then, when you create a 4x4 matrix, you should always choose a center of rotation. This is done as follows, for example with ac as the center:
matrix = np.eye(4)
matrix[:3, :3] = rotation
matrix[:3, 3] = ac - np.dot(rotation, ac) # ac is center of rotation
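You can sanity-check the center of rotation: the resulting 4x4 matrix should map ac onto itself. A small sketch with made-up values (a 90-degree rotation about z, and an arbitrary ac):

```python
import numpy as np

# Made-up rotation (90 degrees about z) and made-up center of rotation
theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])
ac = np.array([109.8788505, 112.77020418, -88.19550803])

matrix = np.eye(4)
matrix[:3, :3] = rotation
matrix[:3, 3] = ac - np.dot(rotation, ac)  # ac is center of rotation

# Applying the matrix to ac (in homogeneous coords) leaves ac fixed
print(np.allclose(matrix @ np.append(ac, 1.0), np.append(ac, 1.0)))  # True
```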
Next, the rotation matrix can be turned into a vtkMatrix4x4 and put into vtkImageReslice (note the vtkMatrix4x4 method for in-place inversion is Invert()):
resliceAxes = vtkMatrix4x4()
resliceAxes.DeepCopy(matrix.flatten())  # fill it from the 4x4 numpy matrix
resliceAxes.Invert()  # reslice axes are the inverse of the rotation matrix
reslice.SetResliceAxes(resliceAxes)  # 'reslice' is your vtkImageReslice
There are two reasons to use “SetResliceAxes()” instead of “SetResliceAxesDirectionCosines()”:
First, SetResliceAxes() allows you to define a center of rotation.
Second, SetResliceAxesDirectionCosines() actually sets the columns of the matrix instead of the rows, so it sets the transpose of the matrix (which is equivalent to the inverse for a rotation matrix, so you can see how this gets confusing).
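That transpose/inverse equivalence is easy to verify numerically; a quick sketch with a made-up rotation matrix:

```python
import numpy as np

# Any proper rotation matrix; this one rotates about the y axis
theta = 0.3
rotation = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                     [ 0.0,           1.0, 0.0          ],
                     [-np.sin(theta), 0.0, np.cos(theta)]])

# R^T == R^-1, so swapping rows for columns silently inverts the transform
print(np.allclose(rotation.T, np.linalg.inv(rotation)))  # True
```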
Since vtkImageReslice is using the inverse transform, you use the forward transform for the AC and PC:
transmat = vtkMatrix4x4()
transmat.DeepCopy(matrix.flatten())  # the forward (non-inverted) matrix
transform = vtkTransform()
transform.SetMatrix(transmat)
(worldx, worldy, worldz) = transform.TransformPoint(worldx, worldy, worldz)
Sorry for the confusion about where the “inverse” should be done. After reading your code for computing the rotation, I realized the inverse transform should be applied to vtkImageReslice, not to the world coords. Also, it wasn’t until I read the SetResliceAxesDirectionCosines() documentation that I understood that it transposes the matrix.
Also, accounting for the transposition vs. inverse issue, the only really major difference between my suggestions and your original code is the use of a center of rotation. So, to be honest, I think your original code should have worked well enough, but maybe something funny is going on with the vtkImageViewer2 class.