If you are following rendering research, you have certainly heard about 3D Gaussian splatting, a technique presented at SIGGRAPH 2023 for reconstructing realistic 3D scenes from a video or a set of images.
I did some investigation into what would be required to visualize these 3D Gaussian scenes in F3D, which uses VTK as its visualization backend.
Basically, the scenes are just a large number of points, each associated with:
- An opacity value
- A 3D scaling vector
- A rotation (quaternion)
- A view-dependent RGB color
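For reference, the scaling vector and quaternion together define each splat's anisotropic shape. Here is a minimal NumPy sketch (function names are mine, not from VTK) of the standard covariance construction used in Gaussian splatting, Σ = R S Sᵀ Rᵀ:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a (w, x, y, z) quaternion, assumed normalized."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def splat_covariance(scale, quat):
    """3D covariance of one splat: Sigma = R * S * S^T * R^T."""
    R = quat_to_rot(quat)
    S = np.diag(scale)
    return R @ S @ S.T @ R.T
```

With the identity quaternion `(1, 0, 0, 0)` and scale `(1, 2, 3)`, this yields the diagonal covariance `diag(1, 4, 9)`, i.e. an axis-aligned ellipsoid.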
I managed to make it work with solid performance:
I’m planning to open several MRs to gradually bring support into VTK:
- Improve the current point Gaussian shader implementation (merged)
- Add support for 3D scaling/rotation (merged)
- Add compute shader support in VTK + bitonic sort for depth reordering
- Add support for view-dependent colors (spherical harmonics)
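For the view-dependent color step, evaluating the first two spherical-harmonic bands looks roughly like this NumPy sketch (the sign conventions and the 0.5 DC offset follow the original 3DGS reference code, so treat them as an assumption rather than the VTK implementation):

```python
import numpy as np

# Real spherical-harmonic basis constants for degrees 0 and 1
C0 = 0.28209479177387814
C1 = 0.4886025119029199

def sh_to_rgb(coeffs, view_dir):
    """coeffs: (4, 3) array, one row per SH basis function, 3 color channels.
    view_dir: unit vector from the camera to the splat center."""
    x, y, z = view_dir
    rgb = C0 * coeffs[0]
    rgb = rgb - C1 * y * coeffs[1] + C1 * z * coeffs[2] - C1 * x * coeffs[3]
    return rgb + 0.5  # DC offset used by common splatting implementations
```

Higher bands (degrees 2 and 3, 16 coefficients total) follow the same pattern with more basis terms.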
I just wanted to share these early results, and I’m happy to discuss them if there’s any interest.
This is very important work, it’s great to see this capability going into VTK!
Very cool! It looks very realistic near the trees, and shadows/lighting seem more natural with this method. So the only things you upload from host to device (GPU) are the point positions and those four attributes? That’s very neat!
Recently, I’ve had to wrestle with a lot of hairy issues where the OpenGL polydata mapper would refuse to work in WebAssembly because it had some non-portable code. Since all of your work is in its early stages, do you plan for your code to run in WebAssembly as well? Are you making an active effort to ensure your OpenGL contributions to VTK are compatible with the GLES 3.0 spec? I’m asking because we are now taking WebAssembly more seriously as a supported platform than before.
Geometry shaders are not supported in WebGL 2, so it would be nice to avoid them.
I’m currently using an extended version of the point Gaussian mapper.
Unfortunately, the current mapper on VTK master uses a geometry shader, but it should be fairly easy to migrate to an instancing approach (which could be more performant too).
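To sketch the instancing idea (names are illustrative, not VTK API): instead of a geometry shader emitting a quad per point, each point becomes one instance, and the vertex shader picks one of four fixed corner offsets by vertex ID. The same logic in plain Python:

```python
# Triangle-strip corner offsets, indexed by the per-instance vertex ID
# (what gl_VertexID would provide in the shader).
CORNERS = [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0), (1.0, 1.0)]

def billboard_vertex(center, radius, vertex_id):
    """Expand one splat center into one corner of its screen-space quad."""
    ox, oy = CORNERS[vertex_id]
    return (center[0] + ox * radius, center[1] + oy * radius, center[2])
```

Drawing then becomes a single instanced call with 4 vertices per instance, avoiding the geometry-shader stage entirely.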
Other than that, I’m also planning to use a compute shader to sort the index buffer by depth.
It’s much more performant to do this on the GPU, modifying the IBO in place and avoiding a re-upload.
Alternatively, this step could be done on the CPU (and there’s already a
vtkDepthSortPolyData for that), but for scenes with several million points (there are 6M+ points in the video) that’s not an acceptable solution.
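For context, a bitonic sort maps well to a compute shader because every compare-and-swap within a pass is independent and can run in parallel. A CPU-side Python sketch of the network (assuming the count is a power of two; real data would be padded):

```python
def bitonic_argsort_by_depth(depths):
    """Return indices that sort `depths` ascending using the bitonic network.
    Each inner `for i` pass is data-parallel, which is what makes this
    pattern suitable for a GPU compute-shader dispatch per pass."""
    n = len(depths)  # assumed to be a power of two
    idx = list(range(n))
    k = 2
    while k <= n:
        j = k // 2
        while j >= 1:
            for i in range(n):          # one parallel pass on the GPU
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (depths[idx[i]] > depths[idx[partner]]) == ascending:
                        idx[i], idx[partner] = idx[partner], idx[i]
            j //= 2
        k *= 2
    return idx
```

The sorted index list is exactly what would be written back into the IBO so that splats are blended back-to-front.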
Sounds like you’re on a good path. As for the compute shader, it can be an alternate code path that runs when the OpenGL version is at least 4.3 core or 3.1 ES (the versions that introduced compute shaders).