Controlling performance for vtkLODActor

Hi all.

System:

  • Linux (CentOS 7), gcc 10, C++14
  • VTK 7.1 (various reasons for not updating)
  • No GPU available (running in VNC with OpenGL2)
  • I also use Mesa3D (the OpenSWR driver), but you can ignore that for now; it is probably irrelevant to my question.

Flow:
vtkTensorGlyph to vtkDataSetMapper to vtkLODActor
vtkUnstructuredGrid to vtkDataSetMapper to vtkLODActor
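
Roughly, the wiring is as follows. This is only a sketch to show the structure, not my actual code; the sphere glyph source and the way the grid is loaded are stand-ins:

    #include <vtkSmartPointer.h>
    #include <vtkSphereSource.h>
    #include <vtkTensorGlyph.h>
    #include <vtkDataSetMapper.h>
    #include <vtkLODActor.h>
    #include <vtkUnstructuredGrid.h>

    // Branch 1: tensor glyphs derived from the grid
    // (assuming the tensors live on the grid's point data).
    vtkSmartPointer<vtkLODActor> MakeGlyphActor(vtkUnstructuredGrid* grid)
    {
      auto sphere = vtkSmartPointer<vtkSphereSource>::New();   // glyph shape (stand-in)

      auto glyphs = vtkSmartPointer<vtkTensorGlyph>::New();
      glyphs->SetInputData(grid);
      glyphs->SetSourceConnection(sphere->GetOutputPort());

      auto mapper = vtkSmartPointer<vtkDataSetMapper>::New();
      mapper->SetInputConnection(glyphs->GetOutputPort());

      auto actor = vtkSmartPointer<vtkLODActor>::New();
      actor->SetMapper(mapper);
      return actor;
    }

    // Branch 2: the unstructured grid rendered directly.
    vtkSmartPointer<vtkLODActor> MakeGridActor(vtkUnstructuredGrid* grid)
    {
      auto mapper = vtkSmartPointer<vtkDataSetMapper>::New();
      mapper->SetInputData(grid);

      auto actor = vtkSmartPointer<vtkLODActor>::New();
      actor->SetMapper(mapper);
      return actor;
    }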

Problem:
To my understanding, using vtkLODActor together with vtkRenderWindowInteractor::SetDesiredUpdateRate() should make the actor switch between its levels of detail so that the requested frame rate is met (the argument is in frames per second, i.e. passing 10 means 10 fps).
Well, in my case SetDesiredUpdateRate() seems to have no effect whatsoever.
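
For completeness, this is the kind of call I mean (a sketch with example values, not my exact code):

    #include <vtkRenderWindowInteractor.h>

    // Ask for ~10 fps while interacting; the argument to
    // SetDesiredUpdateRate() is in frames per second.
    void ConfigureUpdateRates(vtkRenderWindowInteractor* iren)
    {
      iren->SetDesiredUpdateRate(10.0);
      iren->SetStillUpdateRate(0.0001);   // very low rate => full-quality render when idle
    }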

Note that I deal with very large data sets (millions of points) that do not change (other than opacity and colors), and I end up with unacceptable rendering times of 1 fps or less.

Another note: vtkRenderer::SetAllocatedRenderTime() does not seem to have any effect either, but how is it related to SetDesiredUpdateRate()?

Thanks in advance

Dimitris

Hello,

First, software-emulated rendering will always give a much lower FPS than GPU rendering.

As for the LOD actor, can you please post the code you’re using to define the levels of detail and configure your vtkLODActor? I mean, vtkLODActor gives only three “free” LODs (see the sketch right after this list):

  • the top level is the whole data;
  • the mid level is a cloud of points randomly sampled from the data, with the point count capped by vtkLODActor::SetNumberOfCloudPoints(int); and
  • the bottom level is just the bounding box of the data.
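
In code, the out-of-the-box setup is roughly this (just a sketch; the cloud-point count is an illustrative value):

    #include <vtkSmartPointer.h>
    #include <vtkDataSetMapper.h>
    #include <vtkLODActor.h>

    // Only the "free" LODs: full data on top, a capped random point cloud in
    // the middle, and the bounding box at the bottom (needs no configuration).
    vtkSmartPointer<vtkLODActor> MakeDefaultLODActor(vtkDataSetMapper* fullResMapper)
    {
      auto actor = vtkSmartPointer<vtkLODActor>::New();
      actor->SetMapper(fullResMapper);         // top level: the whole data
      actor->SetNumberOfCloudPoints(100000);   // mid level: random point cloud cap
      return actor;
    }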

If you want a different set of LODs, you have to code them yourself.
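
By “code” I mean registering your own, pre-reduced mappers through vtkLODActor::AddLODMapper(). A rough sketch, where the surface extraction, vtkQuadricClustering and the division counts are only one possible choice of reduction:

    #include <vtkSmartPointer.h>
    #include <vtkDataSetSurfaceFilter.h>
    #include <vtkQuadricClustering.h>
    #include <vtkPolyDataMapper.h>
    #include <vtkLODActor.h>
    #include <vtkUnstructuredGrid.h>

    // Add one extra LOD: the grid's surface, coarsened by spatial clustering.
    void AddDecimatedLOD(vtkLODActor* actor, vtkUnstructuredGrid* grid)
    {
      auto surface = vtkSmartPointer<vtkDataSetSurfaceFilter>::New();
      surface->SetInputData(grid);              // the clustering filter wants polydata

      auto coarsen = vtkSmartPointer<vtkQuadricClustering>::New();
      coarsen->SetInputConnection(surface->GetOutputPort());
      coarsen->SetNumberOfXDivisions(32);       // bin counts control how coarse it gets
      coarsen->SetNumberOfYDivisions(32);
      coarsen->SetNumberOfZDivisions(32);

      auto lodMapper = vtkSmartPointer<vtkPolyDataMapper>::New();
      lodMapper->SetInputConnection(coarsen->GetOutputPort());

      actor->AddLODMapper(lodMapper);           // becomes an extra level of detail
    }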

best,

Paulo