How to correctly feed SetHandlePosition for vtkSplineRepresentation

How can I correctly feed the SetHandlePosition method of vtkSplineRepresentation?

Let's say I have just 3 handles, and a vtkVolume with the following dimensions: x = 465, y = 367, z = 189.

I have tried:

	vtkSplineRepresentation* pSplineRepresentation = vtkSplineRepresentation::SafeDownCast(m_pSpline->GetRepresentation());
	pSplineRepresentation->SetNumberOfHandles(3);
	pSplineRepresentation->SetHandlePosition(0, 0, 0, 0);
	double d[] = { 20, 20, 20 };
	pSplineRepresentation->SetHandlePosition(1, d[0], d[1], d[2]);
	d[0] = 50;
	d[1] = 45;
	d[2] = 40;
	pSplineRepresentation->SetHandlePosition(2, d[0], d[1], d[2]);

but it seems that these points are not positioned as expected:

The spline is obviously outside the volume box, even though I set the handle position values well below the volume dimensions (45 < 367, etc.). I have searched through several examples of how to feed these values, but I didn't find anything suitable.

You specify the widget's 3D position in the rendering (physical) coordinate system, which is not the same as the volume's voxel coordinate system (you need to translate, rotate, and scale the axes).

Placement and editing of spline curves, panoramic X-ray reconstruction from curve, nerve segmentation by curve placement, DICOM import/export, segmentation, quantification, registration, surgical planning and guidance, etc. are all readily available in 3D Slicer, for free, without restrictions, even for commercial use. It is unnecessary to redevelop all these low-level features from scratch.

@lassoan Thank you, it is a good start. I hope to find some example that shows me how to do these operations on my 2D points!

If it helps, here is a complete example of a computation in 3D Slicer’s VTK/Python environment: https://www.slicer.org/wiki/Documentation/Nightly/ScriptRepository#Get_markup_fiducial_RAS_coordinates_from_volume_voxel_coordinates

This code even works if the volume is non-linearly transformed (warped to a panoramic X-ray image, or registered to a previous image of the same patient, etc.).


I have searched that link (thank you @lassoan) and other forums for a way to convert a 2D point into a 3D point. I found something close to my task: http://vtk.1045678.n5.nabble.com/Locating-a-3D-position-in-3D-scheme-from-a-XYZ-coordinates-td1241187.html#a1241189

And I have tried this code:

	int ijk[3] = { 0, 0, 0 };
	double x[3] = { 20, 30, 40 };
	double pcoords[3];
	TRACE("+++%d\n", pImage->ComputeStructuredCoordinates(x, ijk, pcoords));

and this one:

	int ijk[3] = { 0, 0, 0 };
	double x[3] = { 200, 300, 400 };
	double pcoords[3];
	TRACE("+++%d\n", pImage->ComputeStructuredCoordinates(x, ijk, pcoords));

where pImage is

	vtkImageData* pImage = pDoc->m_pDICOMReader->GetOutput();

All of these trials tell me that my input values are outside of the volume bounds (+++0).

From VTK:

	/**
	 * Convenience function computes the structured coordinates for a point x[3].
	 * The voxel is specified by the array ijk[3], and the parametric coordinates
	 * in the cell are specified with pcoords[3]. The function returns a 0 if the
	 * point x is outside of the volume, and a 1 if inside the volume.
	 */
	virtual int ComputeStructuredCoordinates(
	  const double x[3], int ijk[3], double pcoords[3]);

	static int ComputeStructuredCoordinates(
	  const double x[3], int ijk[3], double pcoords[3],
	  const int* extent, const double* spacing,
	  const double* origin, const double* bounds);

Can you take a bit of your time and point me in the right direction?

This ComputeStructuredCoordinates surely cannot be used in general, because it has no input for axis directions.

To get a 3D point position from a 2D click position you need point picking, not just a voxel<->physical coordinate system transformation.

If you find that it takes too much time to implement point placement yourself then you can use 3D Slicer. Everything is nicely and conveniently implemented in it, so you don’t need to deal with such low-level details (and if anything is not clear then you can get step-by-step instructions and examples on the Slicer forum). If you prefer to work it out yourself then you can have a look at how this is implemented in existing VTK-based open-source applications, such as ParaView, 3D Slicer, MITK Workbench, etc.

In my struggle to convert a 2D point into a 3D point, I wrote this code (my volume has dimensions 465 × 367 × 189):

	pSplineRepresentation->SetHandlePosition(0, 0, 0, 0);

	{
		int i[3]{ 232, 183, 96 };
		double coords[3];
		vtkIdType cellNum = pImage->ComputeCellId(i);
		int nSubId = pImage->GetCell(cellNum)->GetParametricCenter(coords);
		double x[3];
		double weights[4];
		pImage->GetCell(cellNum)->EvaluateLocation(nSubId, coords, x, weights);
		pSplineRepresentation->SetHandlePosition(1, x);
	}

	{
		int i[3]{ 465, 367, 189 };
		double coords[3];
		vtkIdType cellNum = pImage->ComputeCellId(i);
		int nSubId = pImage->GetCell(cellNum)->GetParametricCenter(coords);
		double x[3];
		double weights[4];
		pImage->GetCell(cellNum)->EvaluateLocation(nSubId, coords, x, weights);
		pSplineRepresentation->SetHandlePosition(2, x);
	}

wanting the spline to cross 3 points of my volume (on the diagonal). The problem is that the app crashes: Run-Time Check Failure #2 - Stack around the variable 'weights' was corrupted.

Is there any simple method to do that?

I found the mistake: I should allocate not 4 but 8 elements for weights (weights[8]).

I don't think it is quite correct yet, but it is a start.