However, when I change the user matrix of the vtkVolume object, the shading seems to be computed before the user matrix is applied to move the 3D volume in space (see image attached). If I understand things correctly, the user matrix of the volume should be taken into account before the shading is computed, but that does not seem to be the case.
Image attached (see below): Top line: on the left, I have set a camera “front” light, and the shading of the object appears as I expect. On the right, I applied a 180° rotation to the object (the camera stays still), and the light seems to come from the back side, which is unexpected. Bottom line: when rendering vtkPolyData objects, however, the direction of a “camera” light works: the user matrix is applied before the light direction, and after the 180° rotation the light still seems to come from the front side (the expected behavior).
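To picture the symptom, here is a minimal Lambert-shading sketch (plain Python, no VTK; the normal and light vectors are made up for illustration): when the 180° rotation is applied to the geometry but not to the normals (or gradients) used for shading, the lit side flips to the opposite side, exactly as if the light had moved behind the object.

```python
import math

def rot_z_180(v):
    # 180-degree rotation about Z: (x, y, z) -> (-x, -y, z)
    x, y, z = v
    return (-x, -y, z)

def lambert(normal, to_light):
    # Diffuse term: max(0, n . l), both vectors unit length
    return max(0.0, sum(a * b for a, b in zip(normal, to_light)))

normal = (1.0, 0.0, 0.0)          # surface normal on the object's +X side
s = 1.0 / math.sqrt(2.0)
to_light = (s, 0.0, s)            # fixed "front-left" light direction

# Correct: the normal is rotated together with the geometry -> that side goes dark
correct = lambert(rot_z_180(normal), to_light)

# Buggy: geometry is rotated, but shading still uses the unrotated normal -> side stays lit
buggy = lambert(normal, to_light)

print(correct, buggy)             # correct = 0.0, buggy ~ 0.707
```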
It is a known limitation (probably not documented well enough) that user transforms on the camera may cause various issues, including incorrect lighting computation. Maybe actors have similar problems?
Based on a discussion with @ken-martin, my understanding is that it would be very difficult to take user matrices on cameras into account everywhere in VTK; they are rarely used and there are alternative solutions, so most likely vtkCamera’s UserViewTransform will be removed from VTK in the future.
Andras, could you explain the alternatives a bit more? Our models are hierarchical, with each piece being positioned relative to its container. For example, we’ve got the world containing a body containing a central torso containing a thoracic vertebra containing a ligament insertion landmark. For each of these, we simply specify the transform relative to the container, thing.actor.SetUserMatrix(xform), and the model is assembled. If you move the body, everything “inside” moves with it. How would we accomplish this without the user matrix?
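For what it’s worth, the hierarchy described above boils down to concatenating the per-container matrices, outermost first. A plain-Python sketch (the translations are made-up placeholders for the body/torso/vertebra/landmark transforms):

```python
def mat_mul(a, b):
    # 4x4 matrix product (row-major lists of lists)
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def apply(m, p):
    # Transform a 3D point by a 4x4 matrix
    v = [p[0], p[1], p[2], 1.0]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

# Hypothetical local transforms, each relative to its container
body     = translate(10, 0, 0)    # body in world
torso    = translate(0, 5, 0)     # torso in body
vertebra = translate(0, 0, 2)     # vertebra in torso
landmark = translate(1, 1, 0)     # landmark in vertebra

# The landmark's world matrix is the product of its ancestors' matrices
world = mat_mul(mat_mul(mat_mul(body, torso), vertebra), landmark)

print(apply(world, (0, 0, 0)))    # -> (11.0, 6.0, 2.0)
```

Moving the body just means changing `body`; everything downstream inherits it through the product, which is what SetUserMatrix was providing per prop.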
You can use SetPosition and SetOrientation methods to position your actors. Does lighting work correctly then?
You may find vtkAssembly useful for positioning a hierarchy of actors.
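Conceptually, vtkAssembly does that ancestor concatenation for you: each part stores only its local transform, and the effective world matrix is computed by walking up the parent chain. A tiny stdlib-Python model of the idea (`Node`, `add_part`, and `world_matrix` are made-up names, not VTK API):

```python
class Node:
    """Toy stand-in for an assembly part: local 4x4 matrix plus a parent link."""
    def __init__(self, local):
        self.local = local
        self.parent = None

    def add_part(self, child):
        child.parent = self
        return child

    def world_matrix(self):
        # World matrix = parent's world matrix * own local matrix
        if self.parent is None:
            return self.local
        p = self.parent.world_matrix()
        return [[sum(p[i][k] * self.local[k][j] for k in range(4))
                 for j in range(4)] for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

body = Node(translate(10, 0, 0))
torso = body.add_part(Node(translate(0, 5, 0)))
landmark = torso.add_part(Node(translate(1, 0, 0)))

# Moving the body moves everything attached to it
w = landmark.world_matrix()
print(w[0][3], w[1][3])   # 11 5
```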
Deprecating user transform may be planned only for vtkCamera’s UserViewTransform and not for actors (the discussions were a while ago and I don’t remember all the details). Maybe user transform/matrix should work for actors. I hope others who are more familiar with what this part of VTK should be capable of doing will join this discussion.
Many thanks, Eric and Andras, for your feedback.
In the picture above, I did not use hierarchical models (nor did I move the camera). I instantiated 2 vtkActors and 2 vtkVolumes, which I inserted into the renderer (this->Renderer->AddActor(actor) and this->Renderer->AddVolume(volume)).
vtkActor: when SetOrientation is used (actor->SetOrientation(0, 0, 180);), lighting works correctly.
vtkVolume: when SetOrientation is used (volume->SetOrientation(0, 0, 180);), lighting does not work correctly.
By the way, if your goal is to develop software that people actually use (not just for your own fun, or for the learning experience), then don’t even think about implementing a new desktop application from scratch. On the web or in mobile apps you can still get away with small, limited applications, but on the desktop there are already several very powerful, open-source, free, unrestricted applications that pack a lot of features and can be customized to your specific workflow. By building on a platform, you can focus on the 0.1-1% of the work that is specific to your field.
3D Slicer has a significant digital morphometry and paleontology user base. Many features that you will need are already implemented, and there are well-funded projects to improve the infrastructure even further (see SlicerMorph, SlicerSALT, etc.). You may also have a look at MITK. There are many others, too: see a few hundred potentially relevant applications and toolkits on idoimaging.
Here I have interacted with more than 200 vtkPolyData objects (the small cylinders) and placed them interactively in order to be able to 3D print the model. It was done quite easily (a couple of hours).
Also, I built a few custom windows and interaction modes to easily 3D tag (color) a 3D surface and place many flags (landmarks with captions) on it, such as below. Maybe I am wrong, but it might have been hard to do that with a 3D Slicer module or a ParaView-derived application.
Slicer should be able to handle your models without problems. The SPL brain atlas has over 300 structures, and you can visualize and manipulate them easily. The new markups widgets can display/edit thousands of labels (the latest stable version slowed down after a few hundred labels, but markups have been completely reworked in the nightly versions).
You can still use VTK, either from C++ or Python, but you get development and maintenance of tons of features for free. You can explore models, dissect/assemble skeletons in virtual reality (just install the VR extension with a few clicks), do morphometric computations, process models along with corresponding volumetric images, simulate soft-tissue deformation, run data analysis using Jupyter notebooks, etc. Extensions that you contribute to the Slicer app store are built automatically on Windows/Linux/Mac and made available to interested users in a few clicks. You can also create a custom branded application, with your extensions bundled and all other extensions and modules that you don’t need disabled (see for example SlicerCMF, SlicerSALT, and a number of commercial applications).
Thanks a lot, I will have a look at the SPL brain atlas. I am also now in touch with the SlicerMorph developers (I developed a Geometric Morphometrics VTK-based application around 10 years ago, so this is where I could most easily start to contribute, and also learn how to code a module in Slicer).