NIfTI to vtkImageData in memory

I have a 4D nifti_image, representing a displacement field image. I would like to get the inverse using vtkGridTransform, which implies that I need to convert my nifti_image to a vtkImageData.

Is there functionality to do this, or is it something I’ll have to do myself? I see that there is a vtkNIFTIImageHeader class, but it seems to be used only for reading and writing.

Just in case someone suggests - I am aware that I could save to disk and then read back in, but this is something I’d rather avoid.

thanks

Hi Richard, welcome to the forum.

I’m a bit confused by your question. You say that you have a nifti_image, but typically a “nifti image” means a .nii (or .nii.gz) file stored somewhere on disk. So please clarify: what do you mean by nifti_image? Are you using nibabel? Some other nifti library?

Thanks for the prompt reply. I’m working in C++ and am using the struct as defined in nifti1_io.h.

I’ve stumbled across the class vtkImageImport; I guess that’s the one I want? I suppose I’d have to be careful about endianness, right?

The vtkNIFTIImageHeader class is based on the nifti_1_header struct from niftilib. I’m not sure if it’s 100% compatible with nifti1_io.h since I’ve never used the latter.

But using vtkNIFTIImageHeader isn’t going to be of use here, because vtkNIFTIImageReader itself cannot read from memory, only from disk.

The vtkImageImport class can, indeed, be used. You can assume that the endianness of your image-in-memory is already correct, since native endianness is used almost universally for images in memory and is only switched for input/output.

For vtkImageImport, use either SetImportVoidPointer() or CopyImportVoidPointer() to import your vector data. You’ll also need
SetDataScalarTypeToFloat() or SetDataScalarTypeToDouble(), depending on which you are using, plus SetDataExtent() (with a matching SetWholeExtent()), SetDataSpacing(), and SetNumberOfScalarComponents(3).

E.g. for a 256x256x128 volume, use SetDataExtent(0, 255, 0, 255, 0, 127).

Also, vtkImageImport expects vector data to be packed, e.g. XYZXYZXYZ… (not XXX… YYY… ZZZ…)
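To make the packing concrete, here is a small numpy sketch (hypothetical data, not part of the C++ workflow in this thread) that converts planar per-component storage into the interleaved layout vtkImageImport expects:

```python
import numpy as np

# Hypothetical 2x2x2 displacement field stored planar: all X, then all Y, then all Z
npoints = 2 * 2 * 2
planar = np.concatenate([np.full(npoints, 1.0),   # X components
                         np.full(npoints, 2.0),   # Y components
                         np.full(npoints, 3.0)])  # Z components

# Reshape to (3, npoints), then transpose to (npoints, 3) and flatten:
# the result is packed XYZXYZXYZ..., the layout vtkImageImport expects
packed = planar.reshape(3, npoints).T.ravel()

print(packed[:6])  # -> [1. 2. 3. 1. 2. 3.]
```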

Thanks, just gave it a go. Think I’m pretty close - my current workflow is to convert nifti to vtkImageData, use the vtkGridTransform and then fill back to nifti at the end.

However, GetScalarComponentAsFloat at the end is causing a segfault (for the first index 0,0,0,0). Does this mean I haven’t called Update correctly? Sorry if it’s a basic question, not sure how to debug this one.

Thanks

    const int *dims = this->dim;
    const float *spacing = this->pixdim;

    using vtkImage = vtkSmartPointer<vtkImageData>;
    using vtkTransform = vtkSmartPointer<vtkGridTransform>;
    
    vtkImage image_data_sptr = vtkImage::New();
    image_data_sptr->SetDimensions(dims[1],dims[2],dims[3]);
    image_data_sptr->SetSpacing(spacing[1],spacing[2],spacing[3]);
    image_data_sptr->AllocateScalars(VTK_FLOAT,dims[5]);

    int idx[7] = {0, 0, 0, 0, 0, 0, 0};
    for (idx[0]=0; idx[0]<dims[1]; ++idx[0])
        for (idx[1]=0; idx[1]<dims[2]; ++idx[1])
            for (idx[2]=0; idx[2]<dims[3]; ++idx[2])
                for (idx[4]=0; idx[4]<dims[5]; ++idx[4]) // t is 3, skip to 4 for tensor component
                    image_data_sptr->SetScalarComponentFromFloat(
                                idx[0],idx[1],idx[2],idx[4],(*this)(idx));

    vtkTransform transform_sptr = vtkTransform::New();
    transform_sptr->SetDisplacementGridData(image_data_sptr);
    transform_sptr->Update();
    vtkTransform inverse_transform_sptr =
            vtkGridTransform::SafeDownCast(transform_sptr->GetInverse());

    vtkImage inverse_image = inverse_transform_sptr->GetDisplacementGrid();

    NiftiImageData3DDisplacement<float> output = *this->clone();
    for (idx[0]=0; idx[0]<dims[1]; ++idx[0])
        for (idx[1]=0; idx[1]<dims[2]; ++idx[1])
            for (idx[2]=0; idx[2]<dims[3]; ++idx[2])
                for (idx[4]=0; idx[4]<dims[5]; ++idx[4]) {// t is 3, skip to 4 for tensor component
                    // this line segfaults at 0,0,0,0
                    inverse_image->GetScalarComponentAsFloat(idx[0],idx[1],idx[2],idx[4]);
                    // output(idx) = inverse_image->GetScalarComponentAsFloat(idx[0],idx[1],idx[2],idx[4]);
                }

The GetInverse() method doesn’t actually modify the vectors in the grid. It simply sets a flag so that, internally, the InverseTransformPoint() method is used instead of ForwardTransformPoint(). The details of this can be seen in the implementation of vtkWarpTransform, which is the base class of vtkGridTransform.

Actually creating a new grid that is the inverse of the original grid requires the use of vtkTransformToGrid. This class will take any input transform (linear, b-spline, thin-plate-spline, or grid) and will sample that transform onto a grid. So what you want to do is sample the inverse of the original grid transform onto a new grid of the same size.

Also, when doing this, it’s very important to call SetInterpolationModeToCubic() on the original vtkGridTransform to ensure a good result.

Edit: the inversion can be further improved by using b-spline interpolation rather than cubic interpolation, but this requires an additional step:

// create b-spline coefficients for the displacement grid image_data_sptr
auto bspline_filter = vtkSmartPointer<vtkImageBSplineCoefficients>::New();
bspline_filter->SetInputData(image_data_sptr);
bspline_filter->Update();

// use these b-spline coefficients to create a transform
auto bspline_transform = vtkSmartPointer<vtkBSplineTransform>::New();
bspline_transform->SetCoefficientData(bspline_filter->GetOutput());

// invert the b-spline transform onto a new grid
auto grid_maker = vtkSmartPointer<vtkTransformToGrid>::New();
grid_maker->SetInput(bspline_transform->GetInverse());
grid_maker->SetGridOrigin(image_data_sptr->GetOrigin());
grid_maker->SetGridSpacing(image_data_sptr->GetSpacing());
grid_maker->SetGridExtent(image_data_sptr->GetExtent());
grid_maker->SetGridScalarTypeToFloat();
grid_maker->Update();

// grid_maker->GetOutput() is the new displacement grid, as a vtkImageData

Thanks for the great help. I’ve subbed in your suggested code, and am unfortunately still segfaulting at the end, e.g.:

auto inverse_image = grid_maker->GetOutput();
inverse_image->GetScalarComponentAsFloat(0,0,0,0);

and also

inverse_image->Print(std::cout);

The segfault is strange, I do not see a segfault locally when I call GetScalarComponentAsFloat(0,0,0,0) on the output of vtkTransformToGrid.

What happens if you call grid_maker->Print(std::cout)?

Not sure what was going wrong, but I did a complete rebuild when switching between release and debug mode, and the segfault seems to have disappeared and all is working as expected. Thanks again.

I was trying to emulate the inversion of displacement fields from Slicer (this post said Slicer used VTK rather than ITK for this purpose). When inverting with Slicer, all of my displacement fields are successfully inverted, yet the inversion method we’ve discussed above only works for some cases; others give the following warning (for VTK 7, 8, and 9):

Warning: In /build/vtk7-w4DzBd/vtk7-7.1.1+dfsg1/Filters/Hybrid/vtkBSplineTransform.cxx, line 745
vtkBSplineTransform (0x555a24b6c720): InverseTransformPoint: no convergence (385.958, 365.096, 121.875) error = 3.29089e+08 after 500 iterations.

Do you think this could be solved by following your first suggestion (vtkTransformToGrid directly), instead of going through the b-spline coefficients?

When you invert the displacement field in Slicer, are you just using the inverted field in Slicer itself, or are you actually saving the inverted field out to a new nifti file? The reason I ask is that, internally, Slicer just “symbolically” inverts the transformations by setting an InverseFlag to indicate the direction in which each transform is to be applied (forward or reverse).

For inversion, the vtkGridTransform is even more likely to suffer convergence issues than the vtkBSplineTransform because the continuous displacement field it produces is less smooth. Convergence failure during inversion is most often caused by the Jacobian of the displacement field being singular or near-singular. When I implemented the inversion code, I did not include any regularization (but I’m not aware of any regularization of displacement fields in Slicer, either).

Well, now that you mention it, as I went to export the inverted displacement fields in Slicer, it threw up an error. At the time I assumed it was an unrelated bug, but now I’m not so sure.

However, you can visualise the motion fields in Slicer, which presumably means that it has resampled the inverted transform back onto a grid, right?

For inversion, the vtkGridTransform is even more likely to suffer convergence issues than the vtkBSplineTransform because the continuous displacement field it produces is less smooth.

Makes sense, I’ll leave it as is in that case.

For visualization, I’m not sure how densely Slicer would need to sample the transform. It might not be hitting the points that don’t converge. I don’t suppose that you can upload one of these displacement fields so that I can run some tests?

Sure, data here. Link is live for 24 hours with password “VTKVTK”. Data contained:

  • two motion fields: mf_g1.nii, mf_g2.nii
  • their inverse, as calculated as we discussed above: mf_i_g1.nii, mf_i_g2.nii

The first gate (g1) was happily inverted by VTK, the second (g2) threw up the warnings, but the result is pretty close to what I was expecting.

Might be worth noting that the images are fairly large (whole body), but motion is only cardiac, so the displacements are zero outside of this bounding box.

Thanks, I’ve grabbed them and will check to see exactly where in the field the inversion is failing.

Here’s what I found when looking at the displacement fields.

The displacement field mf_g1.nii had one slice with several displacement values with a magnitude of 1e6 or higher. I’m guessing that these values were errors of some kind? They were far from the “cardiac box”, so I set them to zero and then the inversion worked fine.
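That clean-up step can be sketched in a few lines of numpy (hypothetical array shapes and values; the 1e6 threshold simply matches the magnitudes mentioned above):

```python
import numpy as np

# Hypothetical displacement field: shape (nx, ny, nz, 3), mostly small values
field = np.zeros((4, 4, 4, 3))
field[1, 1, 1] = [2.5, -1.0, 0.5]     # a plausible displacement, in mm
field[2, 3, 0] = [3.2e6, 0.0, 0.0]    # a corrupt value, like those in mf_g1.nii

# Zero out any vector whose magnitude is implausibly large
magnitude = np.linalg.norm(field, axis=-1)
field[magnitude > 1e6] = 0.0

print(field[2, 3, 0])  # -> [0. 0. 0.]
print(field[1, 1, 1])  # unchanged
```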

The displacement field mf_g2.nii was more interesting. At one edge of the “cardiac box,” the Jacobian determinant was negative. Essentially, the displacement field folds over itself at this edge and becomes non-invertible. More on that later, but first here is the Python code that I used for the check:

# use vtkImagePointIterator to visit every sample point of the displacement field
import vtk

image_iter = vtk.vtkImagePointIterator(image_data_sptr)
# double jac[3][3]
jac = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
# double p2[3]
p2 = [0.0, 0.0, 0.0]
# iterate through all sample points in the image data
while not image_iter.IsAtEnd():
    # get current position in image (in mm)
    p1 = image_iter.GetPosition()
    # evaluate the Jacobian of bspline_transform at this point
    bspline_transform.InternalTransformDerivative(p1, p2, jac)
    # determinant of the 3x3 Jacobian
    det = (jac[0][0]*jac[1][1]*jac[2][2] +
           jac[1][0]*jac[2][1]*jac[0][2] +
           jac[2][0]*jac[0][1]*jac[1][2] -
           jac[0][0]*jac[2][1]*jac[1][2] -
           jac[1][0]*jac[0][1]*jac[2][2] -
           jac[2][0]*jac[1][1]*jac[0][2])
    if det < 0.0:
        idx = image_iter.GetIndex()
        print(idx, p1, det)
    image_iter.Next()

So more about the fold. Everywhere outside of the “cardiac box”, the displacements are set to zero. The problem is this: at the edge of the box there are outward displacements on the order of 3 mm, but the sample spacing is only 2 mm. Since the displacement drops from 3 mm to 0 mm over a distance of 2 mm, there is a fold and therefore the one-to-one mapping between the original coordinate space and the deformed coordinate space is lost.
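The fold can be illustrated with a toy 1-D example (illustrative numbers only, loosely matching the 3 mm / 2 mm figures above):

```python
# A 1-D sketch of the fold described above (illustrative numbers only).
# Sample points 2 mm apart; the displacement drops from 3 mm to 0 mm
# between two neighbouring samples at the edge of the "cardiac box".
x = [0.0, 2.0, 4.0, 6.0]       # sample positions, mm
d = [3.0, 3.0, 0.0, 0.0]       # outward displacement at each sample, mm

# Deformed positions: x + d(x)
deformed = [xi + di for xi, di in zip(x, d)]
print(deformed)  # -> [3.0, 5.0, 4.0, 6.0]

# 5.0 > 4.0: the mapping is no longer monotonic, so two source points map
# past each other -- the field folds over and is not invertible there.
assert deformed[1] > deformed[2]
```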

It looks to me as though the “cardiac box” was too small when the measurements were made, resulting in displacements that are too large at the edge of the box. If you modify the data so that the displacements decay towards zero more gradually, the folding can be eliminated.

If the lack of invertibility at the edge doesn’t bother you, then you can just turn off the warnings when you invert the transform:

vtkObject::GlobalWarningDisplayOff();
// compute inverse grid here
vtkObject::GlobalWarningDisplayOn();

Wow, this is great info, thanks. I’ll check with the team as to how the displacements were generated. Do you know what result the inversion gives for the problematic voxels where the folding occurs? At a guess, does VTK do something like interpolating the surrounding voxels that were successfully inverted?

No, the average of the neighbors is not used when the inverse cannot be computed for a voxel. That would be tricky, because at least some of those neighbors are likely to be problematic as well.

When VTK inverts a grid transform (or bspline transform), it uses the method described in
Gobbi and Peters 2003. Briefly, it uses Newton’s method with a variable step size, where the step size is adjusted so that each successive iteration is never further from the solution than the previous ones. Inside a fold, this method stalls (the step size becomes minuscule), and the result stays close to the initial approximation, which is that the inverse displacement is simply the negative of the forward displacement. In practice, this gives reasonable results.
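As a rough 1-D sketch of that scheme (not the actual VTK implementation; the displacement function and all the numbers are made up for illustration): invert y = x + d(x) by starting from the negated forward displacement and taking damped Newton steps.

```python
import math

def forward(x):
    # A smooth, invertible 1-D deformation: y = x + d(x)
    return x + 0.5 * math.sin(x)

def invert(y, tol=1e-10, max_iter=50):
    # Initial approximation: the inverse displacement is minus the forward one
    x = y - (forward(y) - y)
    err = abs(forward(x) - y)
    for _ in range(max_iter):
        # Newton step on f(x) = forward(x) - y, derivative by central difference
        h = 1e-6
        deriv = (forward(x + h) - forward(x - h)) / (2 * h)
        step = -(forward(x) - y) / deriv
        # Damp the step: halve it until the residual no longer grows
        factor = 1.0
        while abs(forward(x + factor * step) - y) > err and factor > 1e-8:
            factor *= 0.5
        x += factor * step
        err = abs(forward(x) - y)
        if err < tol:
            break
    return x

# Round-trip check: invert the forward mapping of a known point
x = invert(forward(1.3))
print(abs(x - 1.3) < 1e-8)  # -> True
```

Inside a fold the damping loop is what stalls: every step size makes the residual worse, so the factor shrinks towards zero and x barely moves from the initial approximation.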

Really, though, the best approach is to detect the folds in the original displacement grid and smooth them out before performing the inverse transform (before you ask, there is no code in VTK to do this).