I’m curious about scaling a rendering scene for a feature in an open-source, VTK-based project I have been contributing to: vtki. Overall, vtki provides an intuitive, Pythonic means of interfacing with VTK datasets and rendering them with a Matplotlib-similar syntax.
I’d like to exaggerate an axis in a VTK rendering window. This is very common (almost necessary) for many geoscientific applications: often we’re dealing with 10s-100s of square kilometers in the lateral directions (the XY-plane) while looking at only 10s-100s of meters in the Z-direction. These proportions make visualization rather difficult, and exaggeration would help significantly.
Lately, I’ve been working toward making it easy for users to scale a scene, and I have implemented a way for a user to call set_scale(1, 1, 10) to scale the rendering scene by a factor of ten along the Z-axis. On the vtki backend, I iterate over every actor in the scene and call SetScale, which scales all the datasets in that scene. Here’s the code that does it:
for name, actor in self._actors.items():
    # Only actors that implement SetScale can be resized
    if hasattr(actor, 'SetScale'):
        actor.SetScale(xscale, yscale, zscale)
However, whenever this loop reaches a vtkCubeAxesActor object, the SetScale call appears to do nothing. For an explanation, please see akaszynski/vtki#39 and the following:
Scaling a scene would be done by modifying the renderer’s camera to modify the view of the coordinate system itself, rather than resizing each object in the scene. Modifying each actor in a scene changes the size of the actor’s geometry, but leaves the coordinate system untouched.
So in this case, the CubeAxesActor is doing what it is expected to do – marking the bounds of the actor in the current coordinate system.
You’ll also see issues with the actor approach once actors are no longer positioned at the origin – the positions are not scaled, only the dimensions of the geometry. This will cause them to clump up or spread apart as the scale changes.
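A plain-Python sketch (no VTK required, and with made-up numbers) illustrates that clumping: an actor’s Position is applied after its geometry is scaled, so off-origin actors end up overlapping rather than spreading with the coordinate system.

```python
def rendered_z_extent(position_z, geom_zmin, geom_zmax, zscale):
    # Actor scaling stretches the geometry about the actor's own origin,
    # but the actor's Position is applied afterwards, unscaled -- this
    # models the behavior described above.
    return (position_z + geom_zmin * zscale,
            position_z + geom_zmax * zscale)

# Two actors stacked along Z: one at z=0, one positioned at z=10,
# each 10 units tall, with a z-exaggeration of 10.
a = rendered_z_extent(0.0, 0.0, 10.0, zscale=10)   # (0.0, 100.0)
b = rendered_z_extent(10.0, 0.0, 10.0, zscale=10)  # (10.0, 110.0)
# In a properly scaled coordinate system, actor b would start at z=100,
# where actor a ends. Its unscaled Position leaves it at z=10 instead,
# so the two actors now overlap -- the "clumping" described above.
```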
Instead of modifying the actors, use the GetActiveCamera method on the renderer to get the camera, and modify that to update the entire scene uniformly.
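One way to sketch the camera approach is with a homogeneous scaling matrix applied through the camera’s model transform. The matrix construction below is plain Python and self-contained; the commented VTK calls (GetActiveCamera, vtkMatrix4x4.DeepCopy, SetModelTransformMatrix) are real VTK API names, but the renderer variable and overall wiring are assumptions, not vtki’s actual implementation.

```python
def scale_matrix(xscale, yscale, zscale):
    # 4x4 homogeneous scaling matrix, flattened row-major -- the layout
    # that vtkMatrix4x4.DeepCopy accepts in Python.
    return [xscale, 0.0,    0.0,    0.0,
            0.0,    yscale, 0.0,    0.0,
            0.0,    0.0,    zscale, 0.0,
            0.0,    0.0,    0.0,    1.0]

# With VTK available, the matrix would be applied to the renderer's
# camera rather than to each actor (hypothetical wiring):
#
#   camera = renderer.GetActiveCamera()
#   m = vtk.vtkMatrix4x4()
#   m.DeepCopy(scale_matrix(1, 1, 10))
#   camera.SetModelTransformMatrix(m)
```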
If you really want to resize the actors and just have the cube axes show the original, unscaled bounds, you can set the label ranges using the SetXAxisRange, SetYAxisRange, and SetZAxisRange methods on the vtkCubeAxesActor.
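As a minimal sketch of that label-range idea: VTK reports bounds as a flat 6-tuple, which needs splitting into the three (min, max) pairs that the Set[XYZ]AxisRange methods expect. The helper below is plain Python; the commented usage with a cube_axes variable is hypothetical.

```python
def axis_ranges(bounds):
    # VTK bounds come as (xmin, xmax, ymin, ymax, zmin, zmax);
    # split them into the three (min, max) pairs that
    # Set[XYZ]AxisRange expect.
    xmin, xmax, ymin, ymax, zmin, zmax = bounds
    return (xmin, xmax), (ymin, ymax), (zmin, zmax)

# Hypothetical usage with a vtkCubeAxesActor (cube_axes) and the
# original, unscaled actor bounds:
#
#   xr, yr, zr = axis_ranges(actor.GetBounds())
#   cube_axes.SetXAxisRange(*xr)
#   cube_axes.SetYAxisRange(*yr)
#   cube_axes.SetZAxisRange(*zr)
```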
There are a few other ways that might fix that. I’m not sure which is best for your use case, but these are the things I’d try next:
vtkCubeAxesActor.SetUse2DMode(1). IIRC, this should render the labels in the overlay plane, which won’t be affected by the camera’s transforms. (Right now it’s rendering them as 3D geometry, so they get affected by transforms).
Go back to scaling actors (also repositioning if needed), and tell the vtkCubeAxesActor to use the original bounds of the actor as labels (vtkActor.GetBounds() and vtkCubeAxesActor.Set[XYZ]AxisRange).
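The “also repositioning if needed” part of the second option can be sketched like this: each actor’s Position must be multiplied by the same per-axis factors as its geometry, or off-origin actors drift out of alignment. The helper is plain Python; the commented loop over actors with SetScale/SetPosition/GetPosition (all real vtkProp3D methods) is an assumed wiring, not vtki’s code.

```python
def scaled_position(position, xscale, yscale, zscale):
    # When actors sit away from the origin, their Position must be
    # scaled along with their geometry so the scene stays consistent.
    px, py, pz = position
    return (px * xscale, py * yscale, pz * zscale)

# Hypothetical loop combining both steps of the second option above:
#
#   for actor in actors:
#       actor.SetScale(xscale, yscale, zscale)
#       actor.SetPosition(*scaled_position(actor.GetPosition(),
#                                          xscale, yscale, zscale))
```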
I’m not exactly sure why SetScale would be ignored by this actor, but I suspect it’s because the cube axes are usually tightly coupled to an explicit bounding box, and scaling the actor would have the same effect as scaling that bounding box directly.
Either way, scaling the cube axes wouldn’t be quite right – it might get the labels back to the correct aspect ratio, but it’d resize the whole cube axes so that it wouldn’t line up with the dataset anymore.
I need the ability to show legible axis labels while scaling the scene to create exported screenshots for papers and such, so I’ll hack away at this and see if I can’t find a way to scale those internal actors.
Thanks for all your help figuring out how to properly scale the whole scene!!
Hi Bane, your scale legend looks great. I need to use a scale legend too, but the built-in legendScaleActor I use is black and white and doesn’t look good on my white background. How did you make yours?