Geometry / Triangle Filter not using enough Memory.

Not something I ever thought I’d have an issue with in regards to programming.

I have a relatively complex lattice structure (one that will get more complex with time) that I want to turn into a mesh for export. It can end up pretty large depending on the input model it is clipped to. I use the implicitmodeller → contourfilter to create the main structure of the lattice, and then use either the trianglefilter or the geometryfilter to create a mesh for export as an .stl. The trianglefilter seems to be faster (FastMode seems not to work for me even after compiling the latest source). However, after a certain sample size with the implicitmodeller, RAM usage seems to cap out at 10 GB of 256, only 16-ish of my 64 threads get used, and the program exits without an error message.

So I guess the question here is: why? I'm assuming either an improper workflow or a limitation of Windows. I plan on eventually converting to C and wrapping it in Python for integration into a different program; would doing that sooner fix the issue? Any help is appreciated.

    print("clipping Start")
    clipper = vtk.vtkClipPolyData()

    edges = vtk.vtkExtractEdges()

    boundaryStrips = vtk.vtkStripper()

    clipAppend = vtk.vtkAppendPolyData()

    print("Implicit Start")
    imp = vtk.vtkImplicitModeller()
    imp.SetSampleDimensions(LatticeLength[0] * 48, LatticeLength[1] * 48, LatticeLength[2] * 48)

    print("Contour Start")
    contour = vtk.vtkFlyingEdges3D()
    contour.SetValue(0, 0.3)

    tac = time.perf_counter()
    print(f"countour calc in {tac-tic:0.4f} seconds")

    print("Geometry Conversion Start")
    triangle = vtk.vtkTriangleFilter()

    print("Clean Start")
    Clean = vtk.vtkCleanPolyData()

    # print("Writing File")
    # writerBinary = vtk.vtkSTLWriter()
    # writerBinary.SetFileName("C:\\Users\\AustinPeppel\\Desktop\\core.stl")
    # writerBinary.SetInputData(Clean.GetOutput())
    # writerBinary.SetFileType(2)
    # writerBinary.Write()
    # print("DONE!")

Why are you using the triangle filter / geometry filter? The isocontouring outputs a triangle mesh, so I don't see why these are needed.

Also, what are your SampleDimensions in the implicit modeler? This is likely where much of your memory is being chewed up.
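For a rough sense of why the sample dimensions dominate memory: the implicit modeller produces a scalar volume, so the footprint grows with the cube of the resolution. A back-of-the-envelope sketch (assuming one 4-byte float scalar per voxel; real pipelines hold extra working copies on top of this):

```python
def volume_bytes(nx, ny, nz, bytes_per_scalar=4):
    """Rough memory footprint of a scalar volume (assumes a single
    4-byte float per voxel; intermediate filters add more on top)."""
    return nx * ny * nz * bytes_per_scalar

# e.g. a 960^3 float volume:
gb = volume_bytes(960, 960, 960) / 1024**3
print(f"{gb:.1f} GiB")  # ~3.3 GiB for the raw volume alone
```

Doubling the sample dimensions multiplies that by eight, which is why the resolution knob is usually the first thing to check.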

Ah, yes. Thanks for mentioning that; I was using SetInputData and wondering why it wasn't working. Now it works with SetInputConnection. I'm pretty new at this, sorry.

Now that I'm writing it directly, it might not be an issue anymore; I shall test it.
Basically, I need each lattice section to have some minimum resolution no matter what the cell size is. With small objects like the one above (60 mm in length), that's fine. But when something ends up two feet long and I need the same cell size, it will just need to take up more memory. That's fine; it just wasn't doing it, and wouldn't display the contour either. Probably because I was making a mesh from a mesh.

Thanks for the reply!

If you start seeing volume dimensions approaching 2056^3 you will hit a wall even using flying edges. And BTW, hopefully you have threading enabled (preferably TBB; see this blog post), as FE will execute much faster…

I'm using OpenMP since I had some CMake issues with TBB. But it ends up only using 16 of my 64 threads. I have a feeling there's something I need to update or change in how the Threadripper splits up the cores, since it's using exactly one die's worth of threads.
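For anyone else hitting the same thread cap: with the OpenMP backend, the thread count can usually be pinned from the environment before VTK is loaded. The variable names below are my understanding of what VTK's SMP tools and the OpenMP runtime respect; treat them as assumptions to verify against your build:

```python
import os

# These must be set before importing vtk, since the SMP backend reads
# them at initialization (names assumed from OpenMP conventions and
# VTK's SMP tools; check against your build's documentation).
os.environ["OMP_NUM_THREADS"] = "64"                # OpenMP runtime cap
os.environ["VTK_SMP_MAX_NUMBER_OF_THREADS"] = "64"  # VTK SMP tools cap

# import vtk  # import only after the environment is configured
print(os.environ["OMP_NUM_THREADS"])
```

If the cap persists after this, the limit is likely coming from processor-group/NUMA handling rather than VTK itself.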

Time generally isn't too big of a deal for my needs, but saving an .stl is, since it will eventually be thrown into a Formlabs printer. Otherwise I'd try my hand at slicing and creating image masks for a DLP…

After more testing, though: whenever the implicitmodeller samples are set above 960^3, it refuses to write an .stl and just exits the program. RAM now seems to hit 20 GB out of 256 GB, so I know there is still room on that front. I'm just not really sure what the limitation would be here…
Maybe it needs to write in chunks? Or is there a more appropriate format to write to that I can import and translate in another program?

To be fair, the model I'm using now to test with is a sphere 160 mm in diameter with a 2 mm cell size, and it outputs around 35 million triangles. I've had to work with much higher counts than that before, and to clean this model up I'd need to go a bit above that count as well…

It does at least fail faster now that I’m not meshing a mesh.

That’s pretty cool looking.

I'm leaning towards a memory problem. It shouldn't be too hard to track down; a brute-force way is to start by performing a manual Update() on each filter in the order of data flow, making sure each filter finishes completely. Find the filter in which the exit occurs and then track down the reason. There is also a possibility that an integral type is overflowing. A quick look at the code indicates vtkIdType is used at critical points; are you using 64-bit id types?
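The brute-force tracing suggested above can be sketched as a small helper: update each filter in pipeline order and log before and after, so whichever stage dies is the last "updating" line printed. The helper is generic; it's shown here with a stand-in object exposing the same Update() interface as a VTK algorithm, since any real filter list would work the same way:

```python
def trace_updates(stages):
    """Call Update() on each (name, filter) pair in pipeline order,
    printing progress so the crashing stage is the last name logged."""
    for name, filt in stages:
        print(f"updating {name} ...", flush=True)
        filt.Update()  # every VTK algorithm exposes Update()
        print(f"  {name} done")

# Stand-in with the same Update() interface as a VTK filter:
class Dummy:
    def __init__(self):
        self.updated = False
    def Update(self):
        self.updated = True

imp, contour = Dummy(), Dummy()
trace_updates([("implicit modeller", imp), ("flying edges", contour)])
```

With real filters, the flush=True matters: it guarantees the last log line reaches the console even when the process exits abruptly.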

There are ways to chunk the problem if it comes to that.

I'm going to start running through that today and see what exception handling I can throw in to start narrowing it down.

Yeah, that was set to true.

I was debating whether or not I needed to start writing a different .stl exporter that actively wrote to disk, but that is probably presumptuous of me since there are likely other ways within VTK to handle it.
And if it is a memory issue within a filter, then that would be wasted effort.

Thanks again for the help. After looking at the VTK docs and seeing there was a Python wrapper, I thought, “I can knock this out pretty quick and then spend the time to optimize after.” I don't know why that crossed my mind, since that's never been the case.

So I eventually found that an array size larger than a 32-bit int did cause the exit, even though I built with 64-bit checked.
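For what it's worth, the numbers line up with a 32-bit index overflow right around the observed 960^3 threshold: the voxel count itself still fits in a signed 32-bit int, but a three-component array (normals or gradients, say) over that many tuples indexes past the limit. The three-component interpretation is my assumption about where the overflow bites; the arithmetic is just:

```python
INT32_MAX = 2**31 - 1  # 2,147,483,647

voxels = 960**3          # 884,736,000 -> still fits in a 32-bit int
components = voxels * 3  # 2,654,208,000 -> overflows a 32-bit index
print(voxels <= INT32_MAX)      # True
print(components <= INT32_MAX)  # False
```

Which would explain why the failure appears at that resolution rather than at the raw 2^31 voxel count.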

Uninstalled Conda, reinstalled, rebuilt, and now it's all good. I think there might have been some other module that came with VTK and there was some crossover there.

Thanks again!