I want to get the coordinate result of a ray intersection with a polydata/actor.
I tried vtkOBBTree and vtkModifiedBSPTree; both give the correct intersection coordinate, but they take too long (~2 seconds) to finish. (My polydata is fairly large, and I want to keep updating the result while the polydata and the ray are moving.)
Is there any way to intersect the ray with the actor instead of with the polydata?
And is there a way to get the intersection result faster than this?
You say the polydata is “moving”. Do you mean that you are constantly changing the positions of all of the points? If so, then the locator (vtkOBBTree or vtkModifiedBSPTree) has to re-build every time you “move” the data.
In other words, the 2s is the time taken to re-build the locator. Actually doing the intersection should only take a few milliseconds.
Ray intersection can be done on an actor with classes like vtkCellPicker (for this to work efficiently, you must first build a locator and then call vtkCellPicker’s AddLocator(locator) method). The picker, however, always picks with a ray along the view direction. In other words, the ray is locked to the camera’s position and orientation.
Yes, by the polydata “moving” I mean I’m using vtkTransformPolyDataFilter to transform the polydata all the time. I checked the time cost earlier today, and as you said, building the locator accounts for most of the total time, while the intersection itself takes almost no time.
And you’re right about the picker, too. I originally planned to use pickers for the intersection, because then I wouldn’t need to transform my polydata; at first I only needed to move my actors via SetUserMatrix, so I’d prefer to intersect with actors. Then I looked at vtkCellPicker, but found that vtkCellPicker::Pick() doesn’t let me set the ray direction, so I gave up on pickers.
My needs can be stated like this: I have one actor and one ray, both of which keep moving (via SetUserMatrix), and I want to keep tracking the ray’s intersection on the actor, as fast as possible. (I transformed the polydata only because I couldn’t intersect with an actor using a settable ray direction, so I had to transform the polydata synchronously with the actor.)
So far I’ve only found one way to speed things up a bit: reduce the size of the polydata by roughly cutting away most of the interior and keeping only the important surface part. This significantly reduces the vtkOBBTree locator build time. But I’m not sure whether there’s a better solution; any suggestion would be appreciated.
Here are the time costs before/after cutting the polydata, in milliseconds.
There’s definitely a conceptual way to make it fast, but I’m not sure whether VTK will allow it to happen easily:
- Use the actor to transform your polydata, so that the locator can be built once and the picking will be fast.
- Transform your pick ray to object coordinates to perform the pick, i.e. using the same transform you set on the actor.
Can anyone comment if they’ve done this before?
I agree with Aron’s concept, but I might change the methodology a bit.
1. The polydata must be static, and the actor must be used to perform the motion.
2. The locator must be built from the static polydata (though I guess this is obvious).
3. The locator->IntersectWithLine() call must use a ray that has been transformed back to the static space.
So if you have a transform “M” that you apply to the data (via the actor), and if you have a ray defined by endpoints (p1, p2) that you want to intersect with the data, then you must apply the inverse of M to (p1, p2) before calling locator->IntersectWithLine().
I.e. instead of calling IntersectWithLine(p1, p2), you need to first compute these:
q1 = inverse(M) * p1
q2 = inverse(M) * p2
Then call IntersectWithLine(q1, q2).
Edit: I missed the final step!
4. After you get the intersection point, you must transform it by “M” to move it from the “static” space to the “actor” space.
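The steps above can be sketched in plain NumPy. The plane intersection below is a trivial stand-in for the locator call, and the matrix names are illustrative; in VTK you would take “M” from actor->GetUserMatrix() (a vtkMatrix4x4, which can be inverted with vtkMatrix4x4::Invert) and the real hit would come from locator->IntersectWithLine().

```python
import numpy as np

def apply(M, p):
    """Apply a 4x4 homogeneous matrix to a 3D point."""
    q = M @ np.array([p[0], p[1], p[2], 1.0])
    return q[:3] / q[3]

# Hypothetical actor transform M: translate by (10, 0, 0).
# This stands in for the matrix set via actor->SetUserMatrix().
M = np.eye(4)
M[0, 3] = 10.0
Minv = np.linalg.inv(M)

# Ray endpoints (p1, p2) in world coordinates.
p1 = np.array([10.0, 0.0, 5.0])
p2 = np.array([10.0, 0.0, -5.0])

# Steps 1-3: transform the ray back into the static (object) space,
# where the locator was built once.
q1 = apply(Minv, p1)
q2 = apply(Minv, p2)

# Stand-in "locator": intersect the segment (q1, q2) with the plane
# z = 0 (in VTK this would be locator->IntersectWithLine(q1, q2, ...)).
t = q1[2] / (q1[2] - q2[2])
hit_static = q1 + t * (q2 - q1)

# Step 4: transform the hit point forward by M, from the static space
# back into the world/actor space.
hit_world = apply(M, hit_static)
print(hit_world)  # [10. 0. 0.]
```

Since M only moves the actor (it never touches the points of the polydata), the locator stays valid for the lifetime of the mesh, and each query is just two matrix multiplies plus one (fast) intersection.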
Awesome, it worked perfectly by following David’s steps exactly.
Without rebuilding the locator, the intersection on my complete polydata takes 0–25 ms (depending on the current ray direction).
This helped me a lot; thanks to Aron and David.
Moreover, I’d like to express my gratitude to David: the [vtk-dicom] module also helped me a lot in dealing with a troublesome DICOM series earlier.
Best wishes to you.
I think that’s the right way to do it. You don’t need to transform the polydata each time.