Hi,
Are there any ways to render a live video inside a vtkWindow?
This can probably be done with vtkVideoSource/vtkWin32VideoSource? But how do I set up the pipeline? Can someone share some examples for the two classes?
Thanks in advance!
You can display video the same way as static images. Make sure you don’t change the pipeline or create/delete objects at each frame update - just update the image buffer content and re-render.
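Here is a minimal sketch of that pattern in Python-wrapped VTK (illustrative only; the random-noise generator stands in for whatever grabs your real frames, e.g., vtkWin32VideoSource output or an OpenCV capture). One vtkImageData buffer is created once, and a repeating timer overwrites its contents in place and re-renders:

```python
# Minimal sketch: live video display by reusing one vtkImageData buffer.
# The random frame generator is a stand-in for a real frame grabber.
import vtk
import numpy as np
from vtk.util import numpy_support

WIDTH, HEIGHT = 640, 480

# Create the image buffer once; only its contents change per frame.
image = vtk.vtkImageData()
image.SetDimensions(WIDTH, HEIGHT, 1)
image.AllocateScalars(vtk.VTK_UNSIGNED_CHAR, 3)  # RGB

actor = vtk.vtkImageActor()
actor.GetMapper().SetInputData(image)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
renderer.ResetCamera()

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

# Numpy view on the image's scalar array, so frames can be copied in place.
buffer_view = numpy_support.vtk_to_numpy(image.GetPointData().GetScalars())
buffer_view[:] = 0  # start from a black frame

def on_timer(caller, event):
    # Replace this with your real frame grab.
    frame = np.random.randint(0, 256, (HEIGHT, WIDTH, 3), dtype=np.uint8)
    buffer_view[:] = frame.reshape(-1, 3)         # in-place buffer update
    image.GetPointData().GetScalars().Modified()  # flag the array as changed
    image.Modified()                              # no pipeline/object changes
    window.Render()

interactor.Initialize()
interactor.AddObserver("TimerEvent", on_timer)
interactor.CreateRepeatingTimer(33)  # ~30 fps
window.Render()
interactor.Start()
```

Nothing is created or destroyed per frame: the pipeline (image, actor, renderer) is fixed, and only the scalar buffer content and the render call happen in the timer callback.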
If you work on medical applications, you may find 3D Slicer’s 4D data infrastructure useful: it supports live display/recording/replay/compression/decompression/processing of RGB or RGBD image streams (and it actually works not just with images but with transforms, meshes, point sets, curves, etc., so you can record/replay entire procedures). A few examples: live reconstruction of a 3D ultrasound volume from tracked 2D frames, and synchronized RGB+D image acquisition and display.
In this vein, what do you do if the image buffer is a large data volume (on the order of a gigabyte, ~50M points) and you want to view the video as a smooth animation, rather than with the ~5s of latency it takes to read the new values into the buffer at every timestep?
A ~1GB image (e.g., 400x400x300 voxels) is not much larger than average (e.g., 256x256x256). VTK should have no problem visualizing changing images of this size at hundreds of frames per second.
A polydata containing 50M points is 50-100x larger than average, but you can still replay an animation at hundreds of frames per second if you have a reasonably good GPU and the data is already on the GPU, by just toggling the visibility of time points (changing each actor’s visibility, opacity, color mapping, etc.).
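As an illustration of the visibility-toggling approach (a sketch, not from the thread; vtkSphereSource stands in for real per-timepoint polydata): every time point gets its own actor, each actor’s geometry is uploaded to the GPU only once (the first time it is shown), and playback just flips visibility flags:

```python
# Sketch: animate by toggling per-timepoint actor visibility only.
# vtkSphereSource output stands in for real per-timepoint polydata.
import vtk

N_FRAMES = 20
renderer = vtk.vtkRenderer()
actors = []
for i in range(N_FRAMES):
    sphere = vtk.vtkSphereSource()
    sphere.SetCenter(0.1 * i, 0.0, 0.0)
    sphere.Update()
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputData(sphere.GetOutput())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)
    actor.VisibilityOff()  # geometry uploads on first show, then is reused
    renderer.AddActor(actor)
    actors.append(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

current = [0]
actors[0].VisibilityOn()

def on_timer(caller, event):
    actors[current[0]].VisibilityOff()
    current[0] = (current[0] + 1) % N_FRAMES
    actors[current[0]].VisibilityOn()  # no data re-upload, just visibility
    window.Render()

interactor.Initialize()
interactor.AddObserver("TimerEvent", on_timer)
interactor.CreateRepeatingTimer(50)
window.Render()
interactor.Start()
```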
If every point of the entire data set changes at each time point in real time, then uploading to the GPU may be a bottleneck, but there are many ways to address that. If the goal is visualization for human operators, you can probably crop and resample the data: a human will most likely not notice downsampling from 50M points to 1M points when everything is moving, and if you zoom into a small part of the data set you can drop everything outside the field of view.
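A quick sketch of the resampling idea using vtkMaskPoints (illustrative; vtkPointSource stands in for the real point cloud, and the on-ratio of 50 matches the 50M-to-1M reduction mentioned above):

```python
# Sketch: randomly keep ~1 in 50 points before rendering.
import vtk

# Stand-in for the real 50M-point cloud (kept small here).
source = vtk.vtkPointSource()
source.SetNumberOfPoints(500000)

mask = vtk.vtkMaskPoints()
mask.SetInputConnection(source.GetOutputPort())
mask.SetOnRatio(50)          # keep every 50th point on average
mask.RandomModeOn()          # sample randomly rather than regularly
mask.GenerateVerticesOn()    # emit vertex cells so the points render
mask.Update()

print(source.GetOutput().GetNumberOfPoints(), "->",
      mask.GetOutput().GetNumberOfPoints())
```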
If only a small fraction of the data changes at each time point, then it can make sense to partition the data set into blocks and update only those blocks that contain modified points.
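A sketch of that partitioning idea (illustrative names and a toy eight-block decomposition): each block gets its own polydata and actor, so overwriting one block’s points invalidates and re-uploads only that block’s GPU buffers:

```python
# Sketch: per-block actors so only modified blocks are re-uploaded.
import vtk
import numpy as np
from vtk.util import numpy_support

def make_block(points_np):
    """Wrap an (N, 3) numpy array in its own polydata + actor."""
    pts = vtk.vtkPoints()
    pts.SetData(numpy_support.numpy_to_vtk(points_np, deep=True))
    poly = vtk.vtkPolyData()
    poly.SetPoints(pts)
    glyph = vtk.vtkVertexGlyphFilter()  # vertex cells so the points render
    glyph.SetInputData(poly)
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(glyph.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)
    return poly, actor

# Eight blocks of 10k points each, offset along x (a toy partition).
blocks = [make_block(np.random.rand(10000, 3) + [float(i), 0.0, 0.0])
          for i in range(8)]

def update_block(k, new_points_np):
    """Overwrite block k's points; only this block's buffers re-upload."""
    poly, _ = blocks[k]
    poly.GetPoints().SetData(
        numpy_support.numpy_to_vtk(new_points_np, deep=True))
    poly.Modified()
```

Adding each block’s actor to the renderer and calling update_block(k, ...) from the acquisition callback leaves the other blocks’ GPU buffers untouched, so per-frame upload cost scales with the amount of changed data rather than with the whole data set.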