Recently I have been running some experiments with CPU rendering of polygonal objects in a cloud environment, and the performance has been quite low, although the CPUs are fairly old. On a VM with 12 cores I get 5 FPS for 385140 cells and 4.1 FPS for 612088 cells. Mesa 20.3.0 with SWR was used. mpstat shows that all 12 cores are involved in the rendering. I am quite new to the CPU rendering area and have the following questions:
- Do I understand correctly that CPU rendering parallelisation works out of the box with vtkOpenGLRenderer in VTK 9.0.1?
- Are there any further possibilities to speed up CPU rendering in VTK?
Thank you in advance,
Getting 4-5 FPS in software (CPU) rendering for that number of cells is quite good in my opinion. 12 cores can't compare to the thousands of cores a modern graphics card makes available for hardware rendering. The performance gap is so large that no one uses software rendering for serious interactive applications today.
Now, if you do have a reason to use software rendering, you have to impose very restrictive limits on the complexity of the scenes you need to render. Either that, or buy many more CPUs from your cloud provider and implement cluster-level parallelisation in your program (OpenMP, MPI, etc.). Still, it would be a lot cheaper, easier and more efficient to just use GPUs like everyone else does.
Thank you very much for your reply.
- Yes, I have no option but to use CPUs, so I have to find a solution to the rendering performance within this configuration.
- Regarding cluster-level parallelisation (OpenMP, MPI, etc.): I am interested only in the performance of the rendering part, not the execution of the entire VTK pipeline. I am using Mesa for rendering, which aims to provide OpenGL support on nodes without a GPU. It has several software renderers (llvmpipe, OpenSWR). Do I understand correctly that Mesa already handles parallel rendering inside VTK this way?
- Just in case: given that the above renderers are rasterisers, can I expect any improvement from some kind of CPU ray tracing?
Thank you in advance,
The only real-world software rendering application I know of is offline rendering (e.g. movie production). A typical Pixar movie, for example, takes months on vast server farms to finish rendering. However, those are not interactive animations. If you want an interactive 3D application with high FPS, you either go to GPUs or impose restrictions on scene complexity. There is a reason the industry invests billions in hardware-accelerated rendering.
You may use a level-of-detail actor (vtkLODActor), and also use a geometry filter to extract only the outside surface of a bulk mesh, so interior cells are never rendered.
Using Mesa with llvmpipe gives quite good results. Please try the Gallium driver for that.
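Mesa selects its software driver and thread count from environment variables read at context creation, so they can be set at the top of the script, before VTK opens a render window. A hedged sketch (variable values are examples; match `LP_NUM_THREADS` to your core count):

```python
import os

# Must be set before Mesa creates the OpenGL context,
# i.e. before the first render window appears.
os.environ["LIBGL_ALWAYS_SOFTWARE"] = "1"  # force software rendering
os.environ["GALLIUM_DRIVER"] = "llvmpipe"  # pick llvmpipe among Mesa's software drivers
os.environ["LP_NUM_THREADS"] = "12"        # llvmpipe worker threads
```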
Also notice that setting your actors to double-sided rendering is slower than rendering them with only one orientation.
Another thing that might be useful, if you haven't tried it already, is culling. Back-face culling quickly discards polygons facing away from the camera, but it cannot be combined with double-sided rendering.
Ray tracing improves the quality of the results, but it is typically slower than traditional rasterisation. Again, high-end graphics cards have dedicated ray tracing/casting hardware on them.
Depending on the original data, you may find ray casting to produce faster results. You will need a hierarchy or a quick lookup method, otherwise it will be really slow. For instance, ray casting a point cloud with an octree hierarchy could be faster than rasterisation through Mesa if the point cloud is massive enough.
Still, you will need to do some research to assess this correctly. Sometimes a few rendering tweaks can make everything really fast. For instance, VTK does a lot of linear per-actor processing, so if your number of actors is high (thousands), per-frame updates may be slow; joining all the data into a single actor can speed things up considerably. At least, that was my experience in the past.