Differentiable rendering module with PyTorch integration for VTK

Summary

I would like to propose the addition of a differentiable rendering module to VTK — tentatively VTK::RenderingDifferentiable — that exposes gradients through VTK’s rendering pipeline and bridges them to PyTorch’s autograd system. The goal is to enable gradient-based optimization of scene parameters (camera pose, lighting, geometry) directly within VTK’s pipeline model.
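To make the intended workflow concrete, here is the kind of optimization loop the module would enable. This is a sketch only: `differentiable_render` is a hypothetical name for the proposed render node, not an existing VTK API, and a toy linear map stands in for the real renderer so the snippet is self-contained.

```python
import torch

# Toy stand-in for the proposed differentiable render pass (hypothetical).
differentiable_render = lambda pose: pose * 2.0
target_image = torch.tensor([1.0, -0.5, 0.25, 0.0, 2.0, 1.5]) * 2.0

camera_pose = torch.zeros(6, requires_grad=True)  # rx, ry, rz, tx, ty, tz
optimizer = torch.optim.Adam([camera_pose], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    image = differentiable_render(camera_pose)     # forward render
    loss = (image - target_image).pow(2).mean()    # image-space objective
    loss.backward()                                # gradients flow to the pose
    optimizer.step()

print(loss.item())  # loss approaches 0 as the pose converges
```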

I’m writing here first to validate the approach, gather feedback on architectural fit, and understand whether this belongs upstream or is better maintained as an out-of-tree module initially.

Motivation

Differentiable rendering has become a core tool in scientific computing, inverse problems, and 3D machine learning — enabling tasks like:

  • Camera calibration and pose estimation from rendered images

  • Geometry reconstruction from 2D observations

  • Physics-based parameter fitting (material properties, lighting conditions)

  • Integration of 3D rendering into neural network training loops

Existing differentiable renderers (PyTorch3D, nvdiffrast, Mitsuba 3) are powerful but disconnected from VTK’s rich data pipeline — filters, readers, meshing, and visualization infrastructure. A researcher using VTK for preprocessing and visualization today has to serialize their scene out of VTK, re-import it into a separate renderer, and lose the pipeline context. This proposal aims to close that gap.

Proposed architecture

The module would consist of three layers, with PyTorch as an optional dependency (similar to how RenderingRayTracing treats OSPRay):

  • C++ core: vtkSceneParameterMap for declaring differentiable parameters, vtkGradientBuffer for storing ∂L/∂param, and a new vtkRenderPass subclass implementing soft rasterization

  • CUDA backend: GPU-accelerated gradient computation with OpenGL–CUDA interop to avoid device round-trips

  • Python bridge: A torch.autograd.Function wrapper exposing a render call as a differentiable node in PyTorch’s autograd graph
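To illustrate the bridge layer, here is a minimal sketch of how the torch.autograd.Function wrapper could look. In the real module, forward and backward would dispatch into the C++ core and vtkGradientBuffer; here they are replaced by toy NumPy stand-ins (`toy_render` and `toy_render_grad` are hypothetical names) so the shape of the interface is visible.

```python
import numpy as np
import torch

def toy_render(params):
    # Stand-in for a VTK render call: parameters in, image pixels out.
    # Any computation opaque to autograd works; here each "pixel" is param**2.
    return params ** 2

def toy_render_grad(params, grad_output):
    # Analytic dL/dparams given dL/dpixels, as vtkGradientBuffer would supply.
    return 2.0 * params * grad_output

class DifferentiableRender(torch.autograd.Function):
    """Sketch of the proposed bridge: the render happens outside autograd
    (in VTK/CUDA); backward pulls gradients back from the C++ side."""

    @staticmethod
    def forward(ctx, params):
        np_params = params.detach().cpu().numpy()
        image = toy_render(np_params)            # opaque to autograd
        ctx.save_for_backward(params)
        return torch.from_numpy(image).to(params.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (params,) = ctx.saved_tensors
        grad_np = toy_render_grad(params.detach().cpu().numpy(),
                                  grad_output.detach().cpu().numpy())
        return torch.from_numpy(grad_np).to(params.dtype)

params = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
image = DifferentiableRender.apply(params)
image.sum().backward()
print(params.grad)  # tensor([2., 4., 6.])
```

The key property is that the render itself never needs to be traced by autograd; only the parameter-to-image Jacobian contraction has to be supplied by the backend.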

Dependency strategy

PyTorch and CUDA would be strictly optional. A CPU fallback using finite differences would be provided so the module is functional without a GPU and without PyTorch installed.
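As a sketch of that fallback, a central-difference gradient over scene parameters needs nothing beyond plain Python; `render` and `image_loss` below are toy stand-ins. Note the cost is two full renders per parameter, which is why this would be a fallback rather than the primary path.

```python
def finite_difference_grad(render, loss, params, eps=1e-4):
    """Central-difference gradient of loss(render(params)) with respect to
    each scene parameter: the proposed CPU fallback when no CUDA backward
    pass is available. Costs two full renders per parameter."""
    grads = []
    for i in range(len(params)):
        bumped_up = [p + eps if j == i else p for j, p in enumerate(params)]
        bumped_dn = [p - eps if j == i else p for j, p in enumerate(params)]
        grads.append((loss(render(bumped_up)) - loss(render(bumped_dn))) / (2 * eps))
    return grads

# Toy stand-ins (hypothetical; a real render would produce image pixels):
render = lambda p: [x * x for x in p]   # "pixels" from parameters
image_loss = lambda img: sum(img)       # scalar objective on the image
print(finite_difference_grad(render, image_loss, [1.0, 2.0, 3.0]))
# ≈ [2.0, 4.0, 6.0]
```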

Questions for the community

  • Is Rendering/Differentiable (i.e. VTK::RenderingDifferentiable, following VTK’s directory convention) the right home, or would a standalone module be more appropriate?

  • Is there appetite for a PyTorch dependency (even optional) in the VTK tree, or should the Python bridge live in a separate package like vtk-torch?

  • Are there existing gradient buffer or parameter tracking abstractions in VTK I should build on rather than introduce new ones?

  • Has anything like this been attempted before?