In Python, I’d like to use VTK to compute an inverse displacement field.

The numpy array has shape, for example, [100, 100, 1, 3], with the last axis holding the three displacement components. I want to copy this into a vtkImageData, and my current attempt looks like this:

from itertools import chain

import vtk
from vtk.util import numpy_support as vtk_numpy_support

extent = list(chain.from_iterable(zip([0, 0, 0], fwd_disp.shape[:-1])))
fwd_disp_flattened = fwd_disp.flatten()  # need to keep this in memory
vtk_data_array = vtk_numpy_support.numpy_to_vtk(fwd_disp_flattened)

# Generating the vtkImageData
fwd_disp_vtk = vtk.vtkImageData()
fwd_disp_vtk.SetOrigin(0, 0, 0)
fwd_disp_vtk.SetSpacing(1, 1, 1)
fwd_disp_vtk.SetDimensions(*fwd_disp.shape[:-1])
fwd_disp_vtk.AllocateScalars(vtk_numpy_support.get_vtk_array_type(fwd_disp.dtype), 3)
fwd_disp_vtk.SetExtent(extent)
fwd_disp_vtk.GetPointData().AddArray(vtk_data_array)

I’m fairly sure that this gives an image of the correct size, since I can convert back to numpy and get 100 ** 2 voxels back.

There are two problems here. The first is that SetDimensions(X, Y, Z) is just an alias for SetExtent(0, X-1, 0, Y-1, 0, Z-1). So calling SetExtent() after SetDimensions() is redundant, and in this case your extent is off by one (it becomes (0, 100, 0, 100, 0, 1) instead of (0, 99, 0, 99, 0, 0)).

Also, AllocateScalars() uses the Extent, so SetExtent() or SetDimensions() must be called before AllocateScalars(), not after. In other words, remove your SetExtent() call, since the SetDimensions() call is sufficient.

The reason the image is empty is that AddArray() does not replace the array that was allocated by AllocateScalars(), it adds another array. When you call GetArray(0), it returns the array that was allocated by AllocateScalars().

Use GetPointData().SetScalars() instead of AddArray().

For inverting displacement fields, B-splines can be used for maximum smoothness.

But SetNumberOfScalarComponents() accepts two arguments, the second being a vtkInformation object, and I’m not sure what that entails or how to generate one. Examples elsewhere only supply one argument (maybe that’s for VTK_MAJOR_VERSION <= 5?).

And vtkDataArray::SetArray() takes arguments (double *array, vtkIdType size, int save). Is the second argument vtk_numpy_support.get_vtk_array_type(fwd_disp.dtype)? I have no idea about the third argument.

And if this is done, it should in fact not be necessary to call AllocateScalars() at all.

The old SetNumberOfScalarComponents() method was removed from VTK because the number of components is now stored in the array object, not in the vtkImageData object. The signature that takes a vtkInformation object is for internal pipeline use only.

Unfortunately, if I use SetScalars, then fwd_disp_vtk.GetNumberOfScalarComponents() returns 1.

If I first do AllocateScalars(..,3), then GetNumberOfScalarComponents returns 3. But then after SetScalars, it switches back to 1 again. Presumably the set scalars replace the previously allocated scalars.

SetTensors doesn’t work either; it produces a warning at the C++ level: “Can not set attribute Tensors. Incorrect number of components.”

It looks like you flatten your numpy array before converting it to a VTK array. Thus, you get a VTK array where the NumberOfComponents is 1.

Do not completely flatten the array; you need an array with shape (n, 3). When you call GetNumberOfComponents() on the resulting VTK array, it should return 3.

Something like this should work for partially flattening the array:
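Presumably something along these lines, collapsing the spatial axes while keeping the component axis (fwd_disp is a stand-in for the field from earlier in the thread):

```python
import numpy as np

fwd_disp = np.zeros((100, 100, 1, 3))  # stand-in for the real field

# Collapse the spatial axes, keep the 3-vector axis: shape becomes (10000, 3)
fwd_disp_2d = fwd_disp.reshape(-1, fwd_disp.shape[-1])
```

Passing fwd_disp_2d to numpy_to_vtk then yields an array whose GetNumberOfComponents() is 3.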

I really appreciate the help! That seems to have solved all the errors. I’m still getting a warning about no convergence after the default 500 iterations, and I notice the documentation says that although 500 is the default, normally only 2-5 iterations are required.

I wonder if this suggests that I need to permute my data when switching from numpy to VTK? I’ve seen it done elsewhere for scalar data. Since this is tensor data, if I were to swap the H, W, D axes, I’d also need to swap the order of the tensor components.

The code is here. I feel fairly confident in the inverse implementation, since you’ve helped me through it in C++ in the past, so either there’s a problem in the numpy-to-VTK conversion, or perhaps the data isn’t invertible (it’s a randomly generated displacement, so there are no guarantees of invertibility).

VTK expects the tensor/vector components to be ordered (x,y,z). And the vtkBSplineTransform and vtkGridTransform expect the tensors/vectors to be in voxel units (i.e. a displacement of (1,0,0) means a displacement of 1 voxel in the +x direction).
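So a field stored in physical units (say millimetres) would need dividing by the voxel spacing, component-wise, before being handed to the transform. A minimal numpy sketch, with a hypothetical spacing:

```python
import numpy as np

disp_mm = np.ones((100, 100, 1, 3))   # stand-in field in physical units (mm)
spacing = np.array([2.0, 2.0, 2.0])   # hypothetical (x, y, z) voxel spacing in mm

# The last axis is ordered (x, y, z), matching spacing, so this broadcasts.
disp_voxels = disp_mm / spacing
```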

The VTK image data is ordered (z,y,x) in memory, though the APIs for indexing the data use (x,y,z) or (x,y,z,c) index ordering for convenience. In general, I think most people use a (z,y,x) memory layout for their images in numpy, too (nibabel makes a mess of this by keeping the conventional z,y,x memory order but using “.T” to transpose the indexing via the strides). Therefore, in general no array permutation is necessary when moving images from numpy to VTK; the only permutation needed is a reversal of the indices (e.g. for converting numpy “shape” to VTK “Dimensions” or “Extent”).
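In other words, only the index order needs reversing when translating a numpy shape into VTK dimensions or an extent; a sketch:

```python
shape = (1, 100, 100, 3)             # numpy shape: (z, y, x, components)
dims = tuple(reversed(shape[:-1]))   # VTK Dimensions: (x, y, z)
extent = (0, dims[0] - 1, 0, dims[1] - 1, 0, dims[2] - 1)
```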

If your deformation field is randomly generated, it will be non-invertible if the deformation vector at a voxel varies too much compared to the deformation vectors at neighboring voxels. An easy way to fix this is to regularize the deformation field by applying a Gaussian blur to it.
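A minimal numpy-only sketch of that regularization, blurring each component of a 2-D field with a separable Gaussian (the function names here are illustrative; on a vtkImageData, vtkImageGaussianSmooth would do the same job):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_field(disp, sigma=2.0):
    """Blur each component of an (H, W, 3) field along both spatial axes."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    out = disp.astype(float)
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out
```

Smoothing shrinks the voxel-to-voxel variation of the displacement vectors, which is exactly the condition for invertibility described above.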

Regarding the code, the numpy array’s axis order should be (D, H, W) to match VTK’s memory ordering; in this case it’s working only because D = 1. For SetDimensions, the argument order should be (W, H, D).