Dear All,
I need to measure the FPS performance of a Python-based server for remote visualisation. I have run several tests from C++ and Python code on an Nvidia RTX 2080 and obtained quite different results for mesh visualisation (VTK 9.0.1 stable was used):
The scene is the result of marching cubes applied to an implicit function and contains approximately 4 million triangles.
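For context, here is a minimal sketch of how such a scene can be set up in Python (the implicit function, grid resolution and window size below are placeholders for illustration, not my actual data):

import vtk

# Placeholder scene: marching cubes over a sampled implicit function
sphere = vtk.vtkSphere()
sample = vtk.vtkSampleFunction()
sample.SetImplicitFunction(sphere)
sample.SetModelBounds(-1, 1, -1, 1, -1, 1)
sample.SetSampleDimensions(300, 300, 300)   # illustrative resolution only

mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(sample.GetOutputPort())
mc.SetValue(0, 0.0)                         # iso-surface of the implicit function

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(mc.GetOutputPort())
mapper.ScalarVisibilityOff()

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

ren_win = vtk.vtkRenderWindow()
ren_win.AddRenderer(renderer)
ren_win.SetSize(800, 800)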
The Python and C++ codes are attached (rendering part only). The results I get are below, for two different approaches to FPS measurement (the first using vtkRenderer::GetLastRenderTimeInSeconds(), the second via an averaged vtkTimerLog value); a simplified sketch of the Python timing loop follows the results.
Python
fps 0.04378676414489746
fps 0.0001468658447265625
fps 0.00013589859008789062
fps 0.00013709068298339844
fps 0.00019478797912597656
fps 0.0002949237823486328
fps 0.00013780593872070312
fps 0.0001480579376220703
fps 0.0002181529998779297
fps 0.0002989768981933594
fps 0.00024509429931640625
fps 0.00032210350036621094
fps 0.0003600120544433594
fps 0.0003380775451660156
fps 0.00037598609924316406
fps 0.00032591819763183594
fps 0.0003628730773925781
fps 0.00032520294189453125
fps 0.00034999847412109375
fps 0.00032901763916015625
Total FPS: 4116.198527357683
AVERAGE FRAME RATE: 60.049966426570116
C++
3.50475e-05
9.05991e-06
7.15256e-06
7.15256e-06
Total FPS: 147236
AVERAGE FRAME RATE: 102217 fps
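For reference, a simplified sketch of my Python timing loop (continuing the setup sketch above; the attached code follows the same idea, the number of frames and the camera motion here are illustrative). It shows both measurements: the per-frame value from vtkRenderer::GetLastRenderTimeInSeconds() and the average over N frames via vtkTimerLog:

ren_win.Render()   # first render: shader compilation and buffer upload happen here

n_frames = 20
timer = vtk.vtkTimerLog()
timer.StartTimer()
for i in range(n_frames):
    renderer.GetActiveCamera().Azimuth(1.0)   # render a slightly different view each frame
    ren_win.Render()
    # approach 1: per-frame time reported by the renderer
    print("last render time (s):", renderer.GetLastRenderTimeInSeconds())
timer.StopTimer()
# approach 2: wall-clock average over n_frames renders
elapsed = timer.GetElapsedTime()
print("average frame rate: %.1f fps" % (n_frames / elapsed))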
If the attached codes are correct (no bugs), then I have a number of questions about these results:
- Are the C++ results that I get actually possible (147236 / 102217 fps) for 4 million triangles on an RTX 2080 in a basic scene (no complex shading)?
If I understood the C++ output correctly, it takes approximately 0.035 milliseconds to render the first frame and around 0.007-0.009 milliseconds afterwards. According to https://on-demand.gputechconf.com/gtc/2016/presentation/s6193-robert-maynard-visualization-toolkit.pdf, 5 million triangles were rendered in 0.0025333 s with VTK 7.0. Are the results that I get some kind of error?
- If the C++ code works correctly, why do I get such a big difference between the Python and C++ performance? In my understanding, the Python layer is only used to call the C++ procedures, so I expected some difference, but not such a big one.
- The presentation https://on-demand.gputechconf.com/gtc/2016/presentation/s6193-robert-maynard-visualization-toolkit.pdf discusses some rendering enhancements. Where can I find more information on this work (papers, references in the code, etc.), just to understand how it all works?
- Do I understand correctly that, for remote visualisation where the server is Python-based, it is appropriate to measure the performance from the Python code?
Thank you very much in advance,
Best regards,
Evgeniya