Measuring rendering performance for remote Python-based visualisation

Dear All,
I need to measure the FPS performance of a Python-based server for remote visualisation. I have run several tests from C++ and Python code on an Nvidia RTX 2080 and obtained quite different results for mesh visualisation (VTK 9.0.1 stable was used):

The scene is the result of marching cubes applied to an implicit function and contains approximately 4 million triangles.

The Python and C++ code is attached (rendering part only). The results I get with two different approaches to FPS measurement (the first with vtkRenderer::GetLastRenderTimeInSeconds() and the second via an averaged vtkTimerLog value) are below:

Python

fps 0.04378676414489746
fps 0.0001468658447265625
fps 0.00013589859008789062
fps 0.00013709068298339844
fps 0.00019478797912597656
fps 0.0002949237823486328
fps 0.00013780593872070312
fps 0.0001480579376220703
fps 0.0002181529998779297
fps 0.0002989768981933594
fps 0.00024509429931640625
fps 0.00032210350036621094
fps 0.0003600120544433594
fps 0.0003380775451660156
fps 0.00037598609924316406
fps 0.00032591819763183594
fps 0.0003628730773925781
fps 0.00032520294189453125
fps 0.00034999847412109375
fps 0.00032901763916015625
Total FPS: 4116.198527357683
AVERAGE FRAME RATE: 60.049966426570116

C++

3.50475e-05
9.05991e-06
7.15256e-06
7.15256e-06
Total FPS: 147236
AVERAGE FRAME RATE: 102217 fps

If the attached code is correct (no bugs), then I have a number of questions about these results:

  1. Are the C++ results that I get (147236-102217 fps) actually possible for 4 million triangles on an RTX 2080 in a basic scene (no complex shading)?
    For the C++ code, if I understand it correctly, it takes approximately 0.035 milliseconds to render the first frame and around 0.007-0.009 milliseconds afterwards. If I understood https://on-demand.gputechconf.com/gtc/2016/presentation/s6193-robert-maynard-visualization-toolkit.pdf correctly,
    5 million triangles were rendered in 0.0025333 s with VTK 7.0. Are the results that I get some kind of error?
  2. If the C++ code works correctly, why do I get such a big difference between Python and C++ performance? My understanding is that the Python layer is only used to call the C++ procedures, so I expected some difference, but not such a big one.
  3. The https://on-demand.gputechconf.com/gtc/2016/presentation/s6193-robert-maynard-visualization-toolkit.pdf presentation discusses some rendering enhancements. Where can I find more information on this work (papers, references in the code, etc.), just to understand how it all works?
  4. Do I understand correctly that for remote visualisation, where the server is Python-based, it is correct to measure the performance from Python code?

Thank you very much in advance,
Best regards,
Evgeniya

main.cpp (4.3 KB) test.py (2.7 KB)


The units of “frames per second” are not “seconds”, so this is incorrect:

fps=renderer.GetLastRenderTimeInSeconds()

GetLastRenderTimeInSeconds() returns a time, not a rate.

Edit:

In your C++ code you call renderer->Render(), but in your Python code you call renderWindow.Render(). These are not equivalent! Calling renderer->Render() does not perform all of the tasks that are needed to refresh the window, so renderWindow->Render() should be called for benchmarking. If you switch to renderWindow->Render(), your C++ results won’t look quite so crazy.
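
For illustration, a minimal self-contained Python sketch of the per-frame timing (a low-resolution sphere stands in for the real mesh; the names here are illustrative, not taken from the attached files):

# Sketch: drive each frame with renderWindow.Render() and convert the
# per-frame time (seconds) into a rate (frames per second).
import vtk

sphere = vtk.vtkSphereSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(sphere.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)

renderWindow.Render()  # full render, including the window refresh
sec = renderer.GetLastRenderTimeInSeconds()  # a time, not a rate
fps = 1.0 / sec if sec > 0 else float("inf")
print("seconds per frame:", sec, " fps:", fps)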

Also note that by default OpenGL drivers will often limit the FPS to the refresh rate of the monitor.

Dear David Gobbi,

Thank you very much.

The units of “frames per second” are not “seconds”, so this is incorrect:

fps=renderer.GetLastRenderTimeInSeconds()

Oh yes, sorry for the confusion, the variable naming here is wrong. I was actually printing out seconds (the PDF I referenced also reports its measurements in seconds) and computing the average FPS below as 1.0/renderer.GetLastRenderTimeInSeconds(). Switching to renderWindow->Render() changed the results slightly, although overall the situation is the same:

Python

seconds per frame 0.04394817352294922
seconds per frame 0.00014400482177734375
seconds per frame 0.0001430511474609375
seconds per frame 0.0003490447998046875
seconds per frame 0.0001652240753173828
seconds per frame 0.0001850128173828125

Total FPS: 3406.4299667781806
AVERAGE FRAME RATE: 60.03015602558754

C++

Cells number 4674431
seconds 2.28882e-05
seconds 9.05991e-06
seconds 6.91414e-06
seconds 6.91414e-06
…
seconds 6.91414e-06
seconds 7.15256e-06
seconds 0.0002141
seconds 1.00136e-05
seconds 6.91414e-05

Total FPS: 139565
AVERAGE FRAME RATE: 93902.3 fps

Please note that I am measuring FPS with two different approaches, both of which average the obtained values.
First approach:

 for (int i=0;i<endCount;i++)
  {
 
 double max[3]={bounds[1]+vtkMath::Random(0,10),bounds[3]+vtkMath::Random(0,10),bounds[5]+vtkMath::Random(0,10)};
  double min[3]={bounds[0]+vtkMath::Random(0,10),bounds[2]+vtkMath::Random(0,10),bounds[4]+vtkMath::Random(0,10)};
 

    
    renderer->ResetCamera(min[0],max[0],min[1],max[1],min[2],max[2]);
    renderWindow->Render();
    double sec=renderer->GetLastRenderTimeInSeconds();
  time+=double(1.0)/(sec);
  std::cout<<"seconds "<< sec<<std::endl;
  
  }
  std::cout<< "Total FPS: " << time/double(endCount) << std::endl;

Gives in C++:
Total FPS: 139565

Second approach:

 vtkSmartPointer<vtkTimerLog> clock =
    vtkSmartPointer<vtkTimerLog>::New();

  clock->StartTimer();
  for (int i = 0; i < endCount; i++)
    {
double max[3]={bounds[1]+vtkMath::Random(0,10),bounds[3]+vtkMath::Random(0,10),bounds[5]+vtkMath::Random(0,10)};
  double min[3]={bounds[0]+vtkMath::Random(0,10),bounds[2]+vtkMath::Random(0,10),bounds[4]+vtkMath::Random(0,10)};
 

    
    renderer->ResetCamera(min[0],max[0],min[1],max[1],min[2],max[2]);
    renderWindow->Render();
    }
  clock->StopTimer();
 double frameRate = (double)endCount / clock->GetElapsedTime();
  std::cout << "AVERAGE FRAME RATE: " << frameRate << " fps" << std::endl;

That gives:
AVERAGE FRAME RATE: 93902.3 fps

The updated code is attached:
main.cpp (4.4 KB) test.py (2.7 KB)

Thank you very much in advance,
Best regards,
Evgeniya

You gotta clean up your code. Fix the indentation. Rename variables so that you don’t have nonsense statements like time+=double(1.0)/(sec). Remove superfluous whitespace like the three blank lines before the “ResetCamera” line.

Well-formatted code is much easier to review.

main.cpp (4.4 KB)

A small update. I have just tested this C++ code (only VTK_MODULE_INIT(vtkRenderingOpenGL2); was added at the top) with VTK 8.2, and it gave me output that seems correct:

C++ output

seconds 0.000812054
seconds 0.000779867
seconds 0.000838995
seconds 0.000784159
seconds 0.000291109
seconds 0.000329971
seconds 0.000452042
seconds 0.000813961
seconds 0.000782967
seconds 0.000813007
seconds 0.0007689
Total FPS: 130.022
AVERAGE FRAME RATE: 59.9952 fps

Applying the same fix to the version compiled with VTK 9.0.1 had no effect.

Also, the 9.0.1 version was compiled and tested offscreen, while 8.2 used X11 (Linux system).

In the code, you can check to make sure that vtkRenderer::New() is giving you a vtkOpenGLRenderer:

  std::cout << renderer->GetClassName() << std::endl;

For rendering benchmarks, you can also check the driver information, especially if you are benchmarking in different environments or between different builds of VTK. This must be called after you have called Render(), not before:

  std::cout << renderWindow->ReportCapabilities() << std::endl;

The 60 fps that you are getting is just the frame rate of your monitor. The GPU is capable of rendering much faster than that, but the OpenGL driver is synchronizing to the monitor.
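
If you want to benchmark the unthrottled rate, one possible workaround (an assumption on my part, and entirely driver-dependent) is to disable vertical sync through environment variables before the render window is created; on Linux, vblank_mode applies to Mesa drivers and __GL_SYNC_TO_VBLANK to the NVIDIA proprietary driver:

# Sketch: ask the OpenGL driver not to synchronize to the monitor refresh.
# These variables are driver-specific and must be set before the OpenGL
# context (i.e. the render window) is created.
import os
os.environ["vblank_mode"] = "0"          # Mesa drivers (Linux)
os.environ["__GL_SYNC_TO_VBLANK"] = "0"  # NVIDIA proprietary driver (Linux)

import vtk
renderWindow = vtk.vtkRenderWindow()
# ... build the pipeline and run the benchmark as before ...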

OK, it seems that I had not reloaded the code before compilation.

  1. VTK_MODULE_INIT(vtkRenderingOpenGL2)
  2. renderer->GetClassName() gives vtkOpenGLRenderer

The above adjustments DID FIX the frame rate issue in VTK 9.0.1.

What was happening before these adjustments? The VTK 8.2 code simply would not run unless I added them, whereas VTK 9.0 ran silently without any problem (no warnings or anything else) but gave me these strange frame rates.

Oh, and what about the Python code? The final application deals with remote rendering, so rendering happens from Python. Is it more appropriate to make FPS measurements from Python or from C++?
(It seems that the Python code was using vtkOpenGLRenderer from the beginning.)

Best regards,
Evgeniya

In VTK 8.2, vtkRenderer::DeviceRender is declared as a pure virtual method, so rendering will crash unless vtkRenderer is replaced by its override from vtkRenderingOpenGL2:

  /**
   * Create an image. Subclasses of vtkRenderer must implement this method.
   */
  virtual void DeviceRender() =0;

In VTK 9, this method will “do nothing” instead of crashing:

  /**
   * Create an image. Subclasses of vtkRenderer must implement this method.
   */
  virtual void DeviceRender(){};

In Python, when you do “import vtk”, everything is imported – so you get vtkRenderingOpenGL2 automatically. It is possible in Python to load only vtkRenderingCore:

from vtkmodules.vtkRenderingCore import *
# without the following line, rendering will not work:
#from vtkmodules.vtkRenderingOpenGL2 import *
# with the following line, vtkRenderingOpenGL2 is imported automatically:
#from vtk import *
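
As a quick check of which renderer class you end up with from Python, something along these lines should work (a sketch, assuming the VTK 9 vtkmodules layout):

# Sketch: importing vtkmodules.vtkRenderingOpenGL2 registers the OpenGL
# factory overrides, so the factory gives back the OpenGL subclass.
import vtkmodules.vtkRenderingOpenGL2  # noqa: F401  (side-effect import)
from vtkmodules.vtkRenderingCore import vtkRenderer

renderer = vtkRenderer()
print(renderer.GetClassName())  # expected: vtkOpenGLRenderer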

It is equally valid to measure the FPS from C++ or Python, and the results should be very similar (as long as the rendering is working).
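
For example, a self-contained Python sketch of the vtkTimerLog approach (a dense sphere stands in for the marching-cubes mesh, and endCount is an arbitrary choice):

# Sketch: average FPS over many renders with vtkTimerLog, mirroring the
# second C++ approach shown above.
import vtk

sphere = vtk.vtkSphereSource()
sphere.SetThetaResolution(500)  # a reasonably heavy test mesh
sphere.SetPhiResolution(500)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(sphere.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindow.SetSize(1024, 1024)

renderWindow.Render()  # the first render uploads data and compiles shaders

endCount = 100
clock = vtk.vtkTimerLog()
clock.StartTimer()
for i in range(endCount):
    renderer.GetActiveCamera().Azimuth(1)  # change the view each frame
    renderWindow.Render()
clock.StopTimer()

print("AVERAGE FRAME RATE:", endCount / clock.GetElapsedTime(), "fps")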

Dear David Gobbi,

thank you very much. Do I understand correctly that a (less flexible) alternative to the VTK_MODULE_INIT macro, which initialises the platform-specific vtkRenderingOpenGL2 module at runtime, would be, for example, to use vtkOpenGLRenderer directly instead of vtkRenderer?

Best regards,
Evgeniya

Hi Evgeniya,

It is not recommended to call vtkOpenGLRenderer::New() yourself, because vtkRenderingOpenGL2 contains the necessary OpenGL versions of many other classes as well, for example vtkOpenGLPolyDataMapper, vtkOpenGLActor, vtkOpenGLCamera, vtkOpenGLLight, and vtkOpenGLFramebufferObject.

Many of these classes (like Camera and Light) are created by other classes, and without VTK_MODULE_INIT(vtkRenderingOpenGL2) the wrong class will be instantiated.

David

Ah, I see. Thank you.

Best regards,
Evgeniya