Hi Charles. Thanks for the answer, but it doesn't work. The points are clustered over the metaphysis and sparse around the diaphysis of the bone using the script I posted first with these options:
That result looks good, although I think I'll have to wait for Slicer to update its VTK version before I can try it. The idea is to launch the registration script inside Slicer or in Slicer's Python interactor.
As the image above shows (and I experienced this outside of VTK, too), surface-weighted random sampling produces a highly non-uniform point distribution on surface meshes: points are often clumped together in small groups, with huge empty areas between these clumps.
Can someone explain this? Is it just because the number of samples is relatively small (and the distribution would eventually become uniform when you reach hundreds of thousands of samples)?
Is there a solution in VTK that would provide a more uniform distribution for a few hundred or a few thousand points? Is there a point sampler or a post-processing filter that tries to maximize the distance between sample points on the surface?
The current Slicer version uses VTK 9. The last stable release was still built with VTK 8.
You can use vtkPoissonDiskSampler if you want evenly spaced points in your output. This filter guarantees that no two points in the output are closer than Radius to each other.
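For reference, a minimal usage sketch (the file name and radius here are placeholders of mine, not from this thread):

import vtk

# Hypothetical input surface; any vtkPointSet-derived dataset should work
reader = vtk.vtkPolyDataReader()
reader.SetFileName("surface.vtk")
reader.Update()

sampler = vtk.vtkPoissonDiskSampler()
sampler.SetInputConnection(reader.GetOutputPort())
sampler.SetRadius(2.0)  # minimum allowed spacing between output points
sampler.Update()

print(sampler.GetOutput().GetNumberOfPoints())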
BTW, I just discovered a bug that can make the filter crash… I'll push a fix.
The algorithm is pretty simple. Some people call the process "dart throwing".
Shuffle the points from the input. For each point taken in the shuffled order, if any point already in the output is closer than Radius to the candidate, you throw it away. Otherwise, you keep it in the output.
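A minimal sketch of that dart-throwing loop in plain Python/NumPy (an illustration of the idea, not the filter's actual code; the VTK implementation uses a point locator instead of this brute-force neighbor check):

import numpy as np

def dart_throwing(points, radius, seed=0):
    # points: (N, 3) array of candidate positions
    rng = np.random.default_rng(seed)
    kept = []
    for i in rng.permutation(len(points)):
        # Reject the candidate if any already-kept point lies within radius
        if all(np.linalg.norm(points[j] - points[i]) >= radius for j in kept):
            kept.append(i)
    return points[kept]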
Will this algorithm work for meshes with around 2-3 million points?
Currently the code is crashing, and I am checking whether reducing the mesh size will make it work.
If it only crashes when you have millions of points, then the problem might be that you are running out of memory. Please run your executable in a debugger (or get a stack trace) to see where the crash happens and what the error is.
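If the reproducer is a Python script, one low-effort way to see where it dies (a general suggestion, not specific to this filter) is the standard-library faulthandler module:

import faulthandler

# Put this at the very top of the script: it dumps the Python traceback
# when the process receives a fatal signal such as SIGSEGV, showing
# which call was executing at the time of the crash.
faulthandler.enable()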
OK, I will do that. But I wanted to point out that size is not the problem: it crashes for a small mesh as well.
I also wanted to confirm whether the input should be a mesh or a point set. According to the documentation it should take a vtkPointSet, but I checked and the filter works for a polydata sphere; however, if you convert the polydata to a vtkPointSet, it crashes for the same input sphere.
Here is example code for demonstration:
import vtk

def readvtk(filename):
    # Helper for loading a polydata mesh from a legacy .vtk file
    a = vtk.vtkPolyDataReader()
    a.SetFileName(filename)
    a.Update()
    m1 = a.GetOutput()
    return m1

# Build a test sphere as polydata
sphere = vtk.vtkSphereSource()
sphere.SetThetaResolution(200)
sphere.SetPhiResolution(100)
sphere.SetRadius(1.0)
sphere.Update()
sphere = sphere.GetOutput()

# Copy the sphere's points into a bare vtkPointSet (points only, no cells)
points = sphere.GetPoints()
pointset = vtk.vtkPointSet()
pointset.SetPoints(points)
print(pointset.GetNumberOfPoints())

f = vtk.vtkPoissonDiskSampler()
if 1:
    print('Using pointset')
    # pointset as input will fail
    f.SetInputData(pointset)
else:
    print('Using polydata')
    # polydata as input works
    f.SetInputData(sphere)
f.SetRadius(0.1)
f.Update()

output = f.GetOutput()
print(output.GetNumberOfPoints())
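Until the fix lands, a possible workaround (my assumption, not something confirmed in this thread) is to wrap the bare points in a vtkPolyData with one vertex cell per point, e.g. via vtkVertexGlyphFilter, so the sampler receives the dataset type that is known to work:

# Hypothetical workaround, reusing the pointset from the script above:
# convert the bare point set into polydata with vertex cells first.
glyph = vtk.vtkVertexGlyphFilter()
glyph.SetInputData(pointset)
glyph.Update()

f2 = vtk.vtkPoissonDiskSampler()
f2.SetInputData(glyph.GetOutput())
f2.SetRadius(0.1)
f2.Update()
print(f2.GetOutput().GetNumberOfPoints())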