Show VDB with vtkOpenVDBReader in jupyterlab

Hello all,
I am new to VTK and am trying to use the Python vtkmodules' vtkOpenVDBReader to read .vdb files and render them in JupyterLab.
I followed the SimpleCone example in Kitware/trame, replacing vtkConeSource() with a vtkOpenVDBReader pointed at my file path, as below; the other lines remain the same.

reader = vtkOpenVDBReader()
reader.SetFileName(file_path)  # path to my .vdb file
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(reader.GetOutputPort())

But in JupyterLab, the view only shows a blank black background without my VDB actor.
Does anyone have suggestions or experience with this?
(I use this *kitware/paraview:ci-superbuild-fedora38-20230810* as my Docker base image.)


Replace VtkLocalView with VtkRemoteView, then update the mapper, and maybe add a filter to properly convert the vtkPartitionedDataSetCollection into something that a mapper can process.

Also, are you trying to do volume rendering? If so, your VTK pipeline needs more changes than the ones I mentioned.

Hello @Sebastien_Jourdain
Thanks for your response! :smile:
After replacing VtkLocalView with VtkRemoteView and adding a vtkImageDataGeometryFilter before the mapper, I can render it successfully.
My follow-up question: .GetNumberOfPartitionedDataSets() shows there is more than one (8) partitioned data set in my collection.
Does that mean I need to iterate over all the data sets, creating a mapper, actor, and renderer for each, then calling AddRenderer on the vtkRenderWindow for each of them?

Yes, my eventual goal is volume rendering.
To render the volume I use vtkSmartVolumeMapper, but I get this error: ERR| vtkSmartVolumeMapper (0x558a0c0ff150): Could not find the requested vtkDataArray! 0, 0, -1

reader = vtkOpenVDBReader()
reader.SetFileName(file_path)
reader.Update()

partitioned_data = vtk.vtkPartitionedDataSetCollection.SafeDownCast(reader.GetOutputDataObject(0))
number_of_partitions = partitioned_data.GetNumberOfPartitionedDataSets()

for i in range(number_of_partitions):
    partition_image = vtk.vtkImageData.SafeDownCast(partitioned_data.GetPartitionAsDataObject(i, 0))

    # Convert the image to polydata
    imageDataGeometryFilter = vtk.vtkImageDataGeometryFilter()
    imageDataGeometryFilter.SetInputData(partition_image)

    # Create mapper
    mapper = vtk.vtkSmartVolumeMapper()

So no, you should not iterate over the partitions. That is done automatically unless the filter understands the composite dataset directly. Also, by converting your data into a surface mesh (vtkImageDataGeometryFilter) you can no longer render it as a volume, since that information is lost along the way.

You should really look at volume rendering examples: