We are currently using the VolumeMapper to render medical images and it works well (we use it for 2D images too, so a 512x512 CT becomes a 512x512x1 texture). For larger images (large X-ray or mammography images, where dimensions are often larger than 2048), it throws an error, which I guess is expected.
We thought we could use 2D textures in the VolumeMapper, but apparently that is only for the WebGL 1.0 rendering route, which seems likely to be deprecated soon. I believe the ImageMapper uses/supports 2D textures; however, we would prefer not to go in that direction, since the rest of our pipeline uses the VolumeMapper for rendering, segmentation, segmentation outlines, etc.
I would appreciate any comments on how to make this work more efficiently, or on whether we are missing something.
The 2D texture used in the WebGL 1 path stores a 3D volume (as patches, like a quilt). The code still expects a 3D volume as input and interpolates on the 2D quilt as if it were 3D. If it doesn't crash with a 512x512x1 input, that is more by luck than by design.
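To illustrate the quilt idea, here is a minimal sketch (not the actual vtk.js source, and the function names are hypothetical) of how a 3D volume can be packed into one 2D texture, one z-slice per tile, and how a voxel coordinate maps back to a texel in that quilt:

```javascript
// Compute a quilt layout: how many slice tiles fit per row of the 2D
// texture, and the resulting texture dimensions. Hypothetical helper.
function quiltLayout(width, height, depth, maxTexSize) {
  // Tiles per row: as many slices as fit side by side in the texture.
  const tilesPerRow = Math.max(1, Math.floor(maxTexSize / width));
  const rows = Math.ceil(depth / tilesPerRow);
  return {
    tilesPerRow,
    texWidth: tilesPerRow * width,
    texHeight: rows * height,
  };
}

// Map a voxel (x, y, z) to its texel (u, v) in the quilt: slice z lands
// in tile (z % tilesPerRow, floor(z / tilesPerRow)).
function voxelToQuilt(x, y, z, width, height, tilesPerRow) {
  const tileX = z % tilesPerRow;
  const tileY = Math.floor(z / tilesPerRow);
  return { u: tileX * width + x, v: tileY * height + y };
}
```

For example, a 512x512x4 volume with a 1024-texel limit packs into a 1024x1024 quilt of 2x2 tiles. The shader-side trilinear interpolation then has to sample two neighboring tiles and blend between them, which is where the "as if it was 3D" complexity comes from.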
Image rendering differs from volume rendering in ways that make stuffing the two into one mapper a bit messy. It is possible, but it would make for a big vtkVolumeOrImageMapper class that can handle both and carries properties for both. Not a bad idea, but not trivial.
Why not split the full slice into 512x512 patches and append them along the 3rd dimension? You could then set up your own shader to achieve the visualization you need by transforming the subsequent slices.
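A rough sketch of that suggestion, purely as an illustration (the function name and the choice of Float32Array are my own assumptions): re-tile one large 2D slice into a z-stack of small patches so the data fits a 3D texture with modest x/y dimensions. A custom shader would then have to reassemble the patches on screen.

```javascript
// Re-tile a width x height 2D image into a stack of tile x tile patches
// along z, scanning patches left-to-right, top-to-bottom. Edge patches
// are zero-padded. Hypothetical helper, not a vtk.js API.
function tileToStack(pixels, width, height, tile = 512) {
  const cols = Math.ceil(width / tile);
  const rows = Math.ceil(height / tile);
  const depth = cols * rows;
  const out = new Float32Array(tile * tile * depth);
  for (let z = 0; z < depth; z++) {
    const x0 = (z % cols) * tile;       // patch origin in the source image
    const y0 = Math.floor(z / cols) * tile;
    const h = Math.min(tile, height - y0);
    const w = Math.min(tile, width - x0);
    for (let y = 0; y < h; y++) {
      for (let x = 0; x < w; x++) {
        out[z * tile * tile + y * tile + x] = pixels[(y0 + y) * width + (x0 + x)];
      }
    }
  }
  return { data: out, dims: [tile, tile, depth] };
}
```

A 4096x4096 mammogram would become a 512x512x64 volume this way, which stays well under typical per-dimension texture limits; the cost is that you lose straightforward sampling across patch boundaries, which the custom shader has to handle.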
There might not be more than one slice, though, and I'm not sure you can use a 3D texture with a third dimension of 1.
But honestly, I'm not 100% sure we are talking about the same thing. I think the original question was: why can't I use the volume mapper for rendering images? The answer is that it isn't really designed for that, but if it is working for you, then great, and it should keep working after we deprecate WebGL 1. Almost nothing uses WebGL 1 anymore, mainly just iOS 14.
Yes this was my question and you answered that clearly, thanks a lot!