Make nanoflann a third-party library used for point locators

nanoflann, forked from flann (Fast Library for Approximate Nearest Neighbors), is a very powerful and fast point search library that I’ve been using for years on point clouds. There are a lot of different indexing schemes (see page 9 of the manual).

The main difference between flann and nanoflann is that flann has run-time overheads that are squeezed out by nanoflann, which is header only.

I think it would be worth depending on this library for point locators, replacing the guts of vtkKdTreePointLocator with nanoflann, and potentially implementing new locators with the other indexing strategies directly available through nanoflann. The metric used to build the index is also customizable through a functor, which could be a cool feature to add to VTK’s locators.
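For the curious, here is a rough sketch of what a nanoflann-backed index looks like, based on nanoflann’s documented adaptor interface (the `PointCloud` struct and its fields are placeholders for illustration, not a proposed VTK class). The metric is the first template parameter, which is where a custom distance functor would plug in:

```cpp
#include <nanoflann.hpp>

#include <cstddef>
#include <vector>

// Placeholder dataset adaptor: nanoflann only needs these three members
// to build an index over any point container.
struct PointCloud
{
  struct Point { double x, y, z; };
  std::vector<Point> pts;

  // Number of points in the dataset.
  std::size_t kdtree_get_point_count() const { return pts.size(); }

  // dim'th component of the idx'th point.
  double kdtree_get_pt(std::size_t idx, std::size_t dim) const
  {
    if (dim == 0) return pts[idx].x;
    if (dim == 1) return pts[idx].y;
    return pts[idx].z;
  }

  // Return false so nanoflann computes the bounding box itself.
  template <class BBOX>
  bool kdtree_get_bbox(BBOX&) const { return false; }
};

// The metric (here plain L2) is a template parameter, so swapping in a
// custom distance functor only means changing this first argument.
using KDTree = nanoflann::KDTreeSingleIndexAdaptor<
  nanoflann::L2_Simple_Adaptor<double, PointCloud>,
  PointCloud,
  3 /* dimensionality */,
  std::size_t /* index type */>;
```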

Any filter that does a lot of point searches would benefit from this change; the new vtkOverlappingCellsDetector filter, for instance.

nanoflann works with CUDA and is thread safe, and I’ve seen an mpi directory in the flann git repository, though I haven’t looked too much into it.

Any opinions?

How hard is it to put together a quick performance comparison of build/lookup times for vtkKdTreePointLocator and nanoflann? If there’s a dramatic performance improvement, it’d be a very easy case to make.

How are we going to manage training? Models?

I’m not sure I follow what you mean by that. Ideally the user wouldn’t be aware of the internal mechanics. All you want to do is feed points and query neighborhoods.
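To make that concrete, with the `PointCloud` / `KDTree` sketch from my first post (still placeholder names; the exact constructor behavior and default index type depend on the nanoflann version), feeding points and querying a neighborhood boils down to roughly this:

```cpp
// Fill a cloud with some arbitrary (non-degenerate) points.
PointCloud cloud;
for (std::size_t i = 0; i < 1000; ++i)
{
  cloud.pts.push_back({ 0.001 * i, 0.5 * (i % 7), 0.25 * (i % 13) });
}

// Build the index (leaf size 10 is nanoflann's default). Recent versions
// already build in the constructor; calling buildIndex() keeps older
// versions happy too.
KDTree index(3, cloud, nanoflann::KDTreeSingleIndexAdaptorParams(10));
index.buildIndex();

// Query the 10 nearest neighbors of a point.
const double query[3] = { 0.5, 0.5, 0.5 };
const std::size_t k = 10;
std::vector<std::size_t> neighborIds(k);
std::vector<double> squaredDists(k);

// Returns the number of neighbors actually found (k, unless the cloud is smaller).
const std::size_t found = index.knnSearch(query, k, neighborIds.data(), squaredDists.data());
```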

Yes, but there’s some pre-trained network behind the scenes, right? How will we manage it? The training materials should be in the source repo too.

I guess it wouldn’t be too hard to do. I’d just have to build a big point cloud and compare performance.

Some people have already looked into this kind of benchmark (not with VTK’s version, which does not seem to be commonly used).

As far as I have looked, for kd-trees used on 2D / 3D points where the tree is built statically, you won’t find better than nanoflann. There seems to be a debate on whether flann, nanoflann, or other libraries are better for more exotic search spaces.

I don’t think there are pre-trained networks behind the scenes. A kd-tree does not need training to learn how to build itself. The only limitation of this kind of structure is that if you have a lot of points aligned in the same plane in 3D / line in 2D, you lose a lot of efficiency because the space gets split in a degenerate way. The user should know what kind of structure they want depending on their input data.

Oh, did you mix up flann (Fast Library for Approximate Nearest Neighbors) with FLANN (Functional Link Artificial Neural Network)?

Ah, yes I did. Normal third party import details apply. Mangling, import, etc. :slight_smile:

I did a quick test. To do so, I tweaked this example from nanoflann a little.

I used both nanoflann and VTK (Release build) to build a kd-tree from one million random points, and then ran one million random queries for the 10 nearest neighbors. I used rand() as the random number generator, with the same 3D extent in both cases.
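For reference, the VTK side of the test looks essentially like the following (a minimal sketch rather than the exact code I ran; the nanoflann side is the tweaked example linked above):

```cpp
#include <vtkIdList.h>
#include <vtkKdTreePointLocator.h>
#include <vtkNew.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>

#include <chrono>
#include <cstdlib>
#include <iostream>

int main()
{
  constexpr vtkIdType NumberOfPoints = 1000000;
  constexpr int K = 10;

  auto random01 = []() { return static_cast<double>(std::rand()) / RAND_MAX; };

  // One million random points in the unit cube.
  vtkNew<vtkPoints> points;
  points->SetNumberOfPoints(NumberOfPoints);
  for (vtkIdType id = 0; id < NumberOfPoints; ++id)
  {
    points->SetPoint(id, random01(), random01(), random01());
  }
  vtkNew<vtkPolyData> polyData;
  polyData->SetPoints(points);

  // Build time: construct the kd-tree over the data set.
  vtkNew<vtkKdTreePointLocator> locator;
  locator->SetDataSet(polyData);
  auto t0 = std::chrono::steady_clock::now();
  locator->BuildLocator();
  auto t1 = std::chrono::steady_clock::now();

  // Search time: one million random 10-nearest-neighbor queries.
  vtkNew<vtkIdList> neighbors;
  for (vtkIdType i = 0; i < NumberOfPoints; ++i)
  {
    double query[3] = { random01(), random01(), random01() };
    locator->FindClosestNPoints(K, query, neighbors);
  }
  auto t2 = std::chrono::steady_clock::now();

  using ms = std::chrono::milliseconds;
  std::cout << "build:  " << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n";
  std::cout << "search: " << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n";
  return 0;
}
```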

Results are:

  • VTK - kd-tree build time is 134 ms, search is 7,319 ms
  • nanoflann - kd-tree build time is 278 ms, search is 554 ms

On a debug build, the search with VTK takes 54 seconds. I don’t know if we can keep nanoflann’s performance in a VTK debug build if it becomes a third party. Can we, @ben.boeckel?