Multiple views of the same scene

Hi,
I’ve seen a similar question here, but the details are slightly different, so I thought I’d ask again.
I need a way to visualise multiple views of the same scene, that is, to draw the scene from a different camera perspective in a second vtkRenderer.
The reason is that I’m visualising a big point cloud, say 500MB in memory, and creating a new vtkRenderer and adding the same points to it would copy the cloud and double the RAM usage. That rules out simply recreating the scene with new actors/points and syncing their positions/transformations via vtkAssembly or something like that.
I thought of calling GetOffScreenFrameBuffer and setting the resulting frame buffer object on the other render window. Unfortunately, when I tried this, the render window showed visual glitches and OpenGL warnings were printed to the console.
Alternatively, I tried using vtkWindowToImageFilter to capture a “screenshot” of the render window and display it in an Actor2D object in the other render window. This proved too slow for real-time refresh rates when moving things in the scene.
So my first question is:
Is there a way to apply a vtkFrameBufferObject to another render window without glitches, perhaps by doing a deep copy of it?
And the second question:
Is there any other way to create multiple visualisers without the memory overhead of each of them loading the data into RAM?

Thanks in advance.

Hi, Arijan,

The naïve way, that is, shoving all the points into the display, will cause you problems such as low FPS or high memory consumption. That’s not VTK’s fault. Consider that if you have, say, 4 views of the same point cloud, that means four different projections of the cloud, each with its own screenX, screenY values. Hence, you will have a manifold memory footprint.

Displaying large data sets requires some smart technique. Take a look at this: https://towardsdatascience.com/how-to-automate-lidar-point-cloud-processing-with-python-a027454a536c . Depending on what you’re displaying, a simple technique like random sampling at, say, 1 in every 1000 points (0.1%) may suffice. That works very well for natural structures like terrain, geological data, etc. If your cloud represents enclosed spaces like the factory model in that article, frustum culling may be a better technique.
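For what it’s worth, the random-sampling idea is only a few lines of plain Python (the function name and point format here are illustrative, not from any VTK API):

```python
import random

def downsample(points, keep_ratio=0.001, seed=None):
    """Randomly keep roughly keep_ratio of the input points.

    points: a sequence of (x, y, z) tuples.
    keep_ratio=0.001 keeps about 1 in every 1000 points (0.1%).
    A fixed seed makes the sampling reproducible.
    """
    rng = random.Random(seed)
    return [p for p in points if rng.random() < keep_ratio]

# Example: thin a synthetic 100k-point cloud down to roughly 0.1%.
cloud = [(float(i), 0.0, 0.0) for i in range(100_000)]
sparse = downsample(cloud, keep_ratio=0.001, seed=42)
```

You would feed `sparse` (instead of the full cloud) to whatever builds your vtkPoints, so every extra view only ever sees the thinned data.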

take care,

Paulo

You can have multiple vtkRenderers in a single vtkRenderWindow, with each renderer rendering to a subset of the full window. See https://kitware.github.io/vtk-examples/site/Cxx/Rendering/Model/ for an example. That example assigns different actors to each renderer, but AFAIK you can add the same actors to multiple renderers as long as they share the underlying OpenGL context. This avoids duplicating the point data and any kind of image-buffer copying. Each renderer has its own camera, so you can show a different view of the same data in each one.