Decreasing and increasing the number of points in a .vtp file

Hello,
I am preparing data for a neural network. The input of my network must be a constant number of points, each with its coordinates and a scalar value. The problem is that my .vtp files have different numbers of points. I tried increasing the count by inserting points between neighbors that share the same scalar value, but that is very expensive; even decreasing the count by replacing every two points with their mean is expensive. I found that VTK provides a filter to increase the number of triangles in a mesh. Does it also provide an efficient way to decrease or increase the number of points in polydata, or is there any other way to get a fixed number of points without losing information from the file?
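For illustration, my current (slow) downsampling looks roughly like this (a sketch, not my exact code; names are illustrative):

import vtk

def halvePoints(poly):
    # replace every two consecutive points by their mean (slow pure Python)
    oldPoints = poly.GetPoints()
    oldScalars = poly.GetPointData().GetScalars()
    newPoints = vtk.vtkPoints()
    newScalars = vtk.vtkDoubleArray()
    for i in range(0, oldPoints.GetNumberOfPoints() - 1, 2):
        x0, y0, z0 = oldPoints.GetPoint(i)
        x1, y1, z1 = oldPoints.GetPoint(i + 1)
        newPoints.InsertNextPoint((x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2)
        newScalars.InsertNextValue(
            (oldScalars.GetTuple1(i) + oldScalars.GetTuple1(i + 1)) / 2)
    result = vtk.vtkPolyData()
    result.SetPoints(newPoints)
    result.GetPointData().SetScalars(newScalars)
    return result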
Thanks in advance for the help

Hello, wal, welcome!

Have you considered writing code to produce a file with that requirement? It is easy to loop over the points of your model, get the XYZ coordinates and the scalar from each, and write them to a file. What is the class of your model? Is it a vtkUnstructuredGrid? Are you using C++?
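In Python, for example, such a loop could look like this (a minimal sketch; model and the output file name are placeholders, and it assumes the scalar you need is the active point-data array):

from vtk.util.numpy_support import vtk_to_numpy

# 'model' is your loaded data set (vtkPolyData, vtkUnstructuredGrid, ...)
points = vtk_to_numpy(model.GetPoints().GetData())         # (N, 3) XYZ
scalars = vtk_to_numpy(model.GetPointData().GetScalars())  # (N,) active scalar

with open("points.txt", "w") as f:
    for (x, y, z), s in zip(points, scalars):
        f.write(f"{x} {y} {z} {s}\n")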

cheers,

Paulo

Yes, I considered that. The problem, as I said, is the time needed, since I have to loop over all the points.
No, as I said, I am using vtkPolyData, and I am working in Python.

Sorry, but you didn’t specifically say you were using the vtkPolyData class. Anyway, you can use Numba to get near-native performance in Python; I’ve used it to write fast processing code in pure Python. Just import it and annotate performance-critical functions with @njit. Example of use:

import numpy as np
from numba import njit

# StructureType, EllipsoidParameters and GridParameters are classes from
# my own code; the loop bounds and bodies are omitted here
@njit
def makeVariographicSurface(structureType,
                            ellipsoidParameters,
                            contribution,
                            gridParameters) -> np.ndarray:
    for i in range(...):          # loop over the grid's X direction
        for j in range(...):      # ... Y direction
            for k in range(...):  # ... Z direction
                ...

Sorry, my fault. Thank you for your reply; could you please explain it a bit more? I didn’t quite understand how this can help.
Regards

Script parsing is slow. Numba enables something called just-in-time compilation (JIT for short). With JIT, your code runs slowly once, gets compiled, and then runs fast on every subsequent call until you change it, which triggers one parse-and-compile step again. You can find some benchmarking here: https://murillogroupmsu.com/numba-versus-c/ . Your code will still be about 10x slower than wizard-made native C++ compiled with -O2 optimization, but about 300x faster than pure Python code. That’s quite an improvement given the little extra effort it takes.
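Applied to your case, the pair-mean downsampling you described could look like this (a minimal sketch, assuming you have already pulled the points and scalars out of the vtkPolyData into NumPy arrays, e.g. with vtk.util.numpy_support; the function name is mine):

import numpy as np
from numba import njit

@njit
def downsampleByPairMean(points, scalars):
    # points: (N, 3) float array; scalars: (N,) float array
    n = points.shape[0] // 2
    outPoints = np.empty((n, 3))
    outScalars = np.empty(n)
    for i in range(n):
        for j in range(3):
            outPoints[i, j] = 0.5 * (points[2 * i, j] + points[2 * i + 1, j])
        outScalars[i] = 0.5 * (scalars[2 * i] + scalars[2 * i + 1])
    return outPoints, outScalars

The first call triggers compilation; every call after that runs at near-native speed.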
