I need to measure the FPS performance of a Python-based server for remote visualisation. I have run several tests from C++ and Python code on an NVIDIA RTX 2080 and obtained quite different results for mesh visualisation (VTK 9.0.1 stable was used):
The scene is the result of marching cubes applied to an implicit function and contains approximately 4 million triangles.
The Python and C++ code (rendering part) is attached. The results I get are below, using two different approaches to FPS measurement (the first via vtkRenderer::GetLastRenderTimeInSeconds() and the second via an averaged vtkTimerLog value):
Python:
Total FPS: 4116.198527357683
AVERAGE FRAME RATE: 60.049966426570116

C++:
Total FPS: 147236
AVERAGE FRAME RATE: 102217 fps
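
In case the attachments are not visible, the Python measurement is essentially the following (a dense vtkSphereSource stands in for my marching-cubes mesh here, and the loop details may differ slightly from the attached script):

```python
import vtk

# Stand-in scene: a dense sphere (~2 million triangles; my real
# marching-cubes mesh has ~4 million).
sphere = vtk.vtkSphereSource()
sphere.SetThetaResolution(1000)
sphere.SetPhiResolution(1000)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(sphere.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

ren = vtk.vtkRenderer()
ren.AddActor(actor)
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren)
renWin.SetSize(800, 800)
renWin.Render()  # first frame: shader compilation, VBO upload, etc.

n_frames = 100
total_render_time = 0.0   # approach 1: sum of GetLastRenderTimeInSeconds()
timer = vtk.vtkTimerLog() # approach 2: wall-clock time of the whole loop
timer.StartTimer()
for _ in range(n_frames):
    ren.GetActiveCamera().Azimuth(1)  # move the camera so every frame redraws
    renWin.Render()
    total_render_time += ren.GetLastRenderTimeInSeconds()
timer.StopTimer()

print("Total FPS:", n_frames / total_render_time)
print("AVERAGE FRAME RATE:", n_frames / timer.GetElapsedTime())
```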
If the attached code is correct (no bugs), then I have a number of questions about these results:
- Are the C++ results I get (147236-102217 fps) actually possible for 4 million triangles on an RTX 2080 in a basic scene (no complex shading)?
For the C++ code, if I understand it correctly, it takes approximately 0.035 milliseconds to render the first frame and around 0.007-0.009 milliseconds afterwards. If I read https://on-demand.gputechconf.com/gtc/2016/presentation/s6193-robert-maynard-visualization-toolkit.pdf correctly, 5 million triangles were rendered in 0.0025333 s per frame with VTK 7.0, i.e. roughly 395 fps, whereas my C++ numbers (147236 fps corresponds to about 0.0068 ms per frame) are more than two orders of magnitude faster than that. Are the results that I get some kind of error?
- If the C++ code works correctly, why do I get such a big difference between Python and C++ performance? In my understanding the interpreted Python layer is only a thin wrapper that calls into C++ procedures, so I expected some difference, but not such a big one (see the wrapper-overhead check below the questions).
- The https://on-demand.gputechconf.com/gtc/2016/presentation/s6193-robert-maynard-visualization-toolkit.pdf presentation discusses some rendering enhancements. Where can I find more information on this work (papers, references in the code, etc.), just to understand how it all works?
- Do I understand correctly that for remote visualisation, where the server is Python-based, it would be correct to measure the performance from Python code (a sketch of what I mean follows below)?
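
Regarding the Python/C++ difference: a quick way to check my assumption that the wrapper overhead is negligible would be to time a trivial wrapped call directly (the microsecond scale mentioned in the comment is my expectation, not a measurement):

```python
import timeit
import vtk

ren = vtk.vtkRenderer()

# Cost of a single wrapped C++ call from Python. I expect this to be on
# the order of a microsecond, i.e. negligible next to a 4M-triangle frame.
n = 100_000
t = timeit.timeit(ren.GetLastRenderTimeInSeconds, number=n)
print("per-call wrapper overhead: %.3g s" % (t / n))
```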
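
And to make the last question concrete, this is roughly what I mean by measuring on the Python server side: offscreen rendering, with renWin.WaitForCompletion() as my assumption for forcing a GPU sync before stopping the timer (please correct me if that call is the wrong way to do it):

```python
import vtk

ren = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.SetOffScreenRendering(1)  # no window needed on the server
renWin.AddRenderer(ren)
renWin.SetSize(800, 800)
# ... add the marching-cubes actor here, exactly as in the attached code ...

renWin.Render()  # warm-up frame (shader compilation, buffer uploads)

timer = vtk.vtkTimerLog()
n_frames = 100
timer.StartTimer()
for _ in range(n_frames):
    ren.GetActiveCamera().Azimuth(1)  # perturb the camera so every frame redraws
    renWin.Render()
# Assumption: WaitForCompletion() blocks until the GPU has finished, so the
# elapsed time below covers actual rendering, not just command submission.
renWin.WaitForCompletion()
timer.StopTimer()

print("server-side fps:", n_frames / timer.GetElapsedTime())
```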
Thank you very much in advance,