Hi everyone. Is there a filter to calculate the smallest distance between two polydata objects?

Best,

Peter

To be more specific, I don’t want the distance simply between the points of polydata A and polydata B, but between the elements formed by the points (triangles, line segments, …).

I don’t know of such a filter.

It’s certainly possible to create a threaded algorithm using a cell locator (e.g., vtkStaticCellLocator) and the methods FindClosestPoint() or FindClosestPointWithinRadius(). These work pretty well if the query point is reasonably close to the cells contained in the locator. I’m not sure how well the method works if the query point is some distance away from the cell locator.

vtkDistancePolyDataFilter measures the closest distance between two surfaces. It is not the distance between mesh points, but the distance from each mesh point of one polydata to the closest point on any cell of the other polydata.

Andras, thanks for pointing this out. The filter could be made >10x faster, and maybe much more, for large data, if threading and use of a locator were added.


Thanks for your replies! I will check out the cell locator approach.

The vtkDistancePolyDataFilter apparently doesn’t work with my input data, since only one input is a triangle mesh and the other consists only of lines, not triangles.

Alternatively, a brute-force approach would be to upsample my lines to a reasonable accuracy for my purpose and then use vtkImplicitPolyDataDistance on every point of the upsampled lines to calculate the distance to the triangle mesh. Can you tell me if vtkImplicitPolyDataDistance can be executed in parallel, so that I can multithread over the line points?

Do you need to know the distance of line points from the surface mesh, or the distance of surface mesh points from the lines? If you are not sure, you can attach a screenshot and describe your overall goal.

@will.schroeder indicated above that there is plenty of room for performance improvement.

Here you go with a screenshot of the situation. I have a polydata modelling a bunch of nerve fibers in the brain (the colored lines) and a triangle mesh modelling a tumor (the white blob). Now I want to calculate the shortest distance between tumor and nerves.

If there are enough points on the lines, it would be OK to calculate the distance from each point on each nerve fiber model to the tumor mesh (your first case). The most accurate way, though, would be to take the individual line segments (so basically two points in 3D) and find the point on any of the line segments that is closest to any point on the triangle mesh. These two points do not necessarily coincide with any of the points defining the nerve fiber model or the tumor model.

This accuracy is not really needed, but I was wondering if it might be faster to compute than simply upsampling the fiber lines to the desired accuracy, because we potentially have a lot of lines (thousands to millions) and consequently a lot more points.
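The segment-level alternative rests on one primitive: the exact distance from a point to a 3D line segment, evaluated per segment instead of per upsampled point. A plain-Python sketch (VTK's vtkLine class offers comparable static helpers):

```python
def point_segment_distance(p, a, b):
    """Exact distance from point p to the segment from a to b (3-tuples)."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    # Clamp the projection parameter to [0, 1] so the closest point
    # stays on the segment, not on the infinite line.
    t = 0.0 if denom == 0.0 else max(
        0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return sum((p[i] - closest[i]) ** 2 for i in range(3)) ** 0.5

# The closest point falls between the endpoints, which point
# sampling would only approximate.
d = point_segment_distance((0.5, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```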

This is very helpful.

Do you have any time constraints for the computation? What is the ideal and maximum acceptable computation time? In an interventional application you probably need to check distances at a few critical positions, not on the entire tumor surface, which could make the computation 1000 times faster. If you do a retrospective study or automated analysis, computation time might not matter much, so it may be cheaper to buy a good computer and wait longer for the results than to spend time optimizing the computation.

Anyway, for the simple overall shape of the nerve bundles, the number of fiber tracts is excessive, so making the computation several orders of magnitude faster should be feasible by choosing optimal data structures and a sampling strategy.

What is the typical distance between fiber line points?

Can you get a density volume of the fibers (a volume with higher voxel values where more fibers are nearby)? Such a volume would not just make the distance computation trivial and feasible in a fraction of a second, it would also make it much more robust. If you simply compute the minimum distance, the result can be jeopardized by a few stray fibers; if you work with densities, you can much more easily suppress the effect of outliers.
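A toy version of the density-volume idea in NumPy (assumed available); the voxel values, spacing, and tumor point below are made up to show how a density threshold suppresses a stray fiber before taking the minimum distance.

```python
import numpy as np

# Toy density volume: each voxel counts nearby fibers (1 mm grid).
density = np.zeros((10, 10, 10))
density[2, 3, 4] = 5.0   # a dense fiber bundle
density[8, 8, 8] = 1.0   # a single stray fiber

spacing = np.array([1.0, 1.0, 1.0])        # mm per voxel
tumor_point = np.array([8.0, 8.0, 6.0])    # a tumor surface point, in mm

def min_distance(threshold):
    # Keep only voxels with more fibers than `threshold`,
    # then take the closest one to the tumor point.
    voxels = np.argwhere(density > threshold) * spacing
    return float(np.min(np.linalg.norm(voxels - tumor_point, axis=1)))

naive = min_distance(0.0)    # dominated by the stray fiber
robust = min_distance(1.0)   # outlier suppressed, bundle distance remains
```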

Do you have any time constraints for the computation? What is the ideal and maximum acceptable computation time?

Probably no real time scenario, so no strict time constraints.

What is the typical distance between fiber line points?

Varying, since the lines are often compressed, i.e. fewer points on straight fiber sections and more on highly curved ones. As a guess, maybe one point every 0.5 mm. But resampling is of course possible.

Can you get a density volume of the fibers (volume that has higher voxel value if there are more fibers nearby)?

Yes, this is possible. But in our scenario the tracts are already filtered. Nevertheless, adding an optional parameter that lets the user specify a threshold (with default 0) would probably be good. In any case, using such a map or a corresponding surface mesh would probably be fine. I think I will go for this.

Thank you for your input!