vtk.js complex user interaction on 3D polyData

Dear vtk js,

I’m about to start building a tool for segmentation of mesh data, but I’m not sure what the best way forward would be. Could you perhaps point me towards a (high-level) approach? Also, if it would turn out that vtk.js isn’t the recommended library for this use-case, feel free to let me know - I’m happy to explore alternatives.

The tool should ideally support:

  • rendering 3D mesh data (polydata) [this already works]
  • adding balls on the mesh by clicking with the mouse [this already works; using a cellPicker]
  • dragging existing balls around on top of the mesh. Some subquestions arise here:
    ** How can I detect hovering over a 3D ball? I could use the cellPicker again, but that seems like a heavy operation to run on every mouse move. I did notice that (after setting the actor of the widgetRepresentation to pickable) the mouse pointer already changes automatically when hovering over it. Is there a way to leverage that underlying logic directly?
    ** Is there a way to constrain the movement of the balls to the mesh surface? I could of course move them in a plane (parallel to the screen) and re-project them onto the mesh after each (infinitesimal) movement, but directly forcing the balls to stay on the mesh would be even better. I found a similar question (How to move objects with mouse? · Issue #657 · Kitware/vtk-js · GitHub) but there isn't much information on it yet. Also, I can't seem to find the vtkConstrainedPointHandleRepresentation recommended in the final comment.
  • doing some mathematical operations on mesh data, such as
    ** computing shortest paths between two mesh vertices
    ** getting all vertices within a closed contour on the mesh
    ** removing/splitting part of the mesh (enclosed by a contour)
    ** collision detection between polydata objects

To give a better idea of the end goal, I'd like to refer to an existing tool that allows placing and moving balls around on a mesh surface: video (3:06-3:26)

I know this is a lot of questions at once, but the goal is mainly to get an understanding of how realistic and straightforward this is in vtk.js. No need to provide a detailed solution, of course 🙂

Thanks a lot for your help and for building this great software.
Siebe

All of this is definitely doable, but it might be tricky to explain here on a forum. At a high level, you would use the widget infrastructure to grab and move spheres around. At release time, you would also want to compute a depth map of the scene without the spheres, so you can use it to position your sphere on the surface you care about.

I'll let @Forrest expand on that if he wants.

  • Dragging balls on top of a mesh. There are two parts to this: using a widget with a vtkSphereHandleRepresentation, and using a manipulator to figure out where to put the sphere.
    1. Using a widget (example: vtkPolyLineWidget) gets you efficient placing and selecting of spheres. Internally, widget handles are picked using a fast color buffer.
    2. Constraining a ball to the surface of a mesh will require a custom widget manipulator. Manipulators translate 2D mouse coordinates into an appropriate 3D coordinate (e.g. the PlaneManipulator keeps points on a plane). What you want is a manipulator that finds the first actor intersecting the mouse position. That roughly looks like the following:
      1. use displayToWorld to convert 2D mouse coords to a point in 3D
      2. create a ray starting from the camera position to the 3D coord from step 1
      3. use a vtk.js picker to intersect with geometry. If there is an intersection, get the coords and use that as the new position of the point.
  • math on the mesh: we do not provide those operations, though if the VTK C++ library does, then we have a path for porting (e.g. vtkCollisionDetectionFilter exists in VTK C++, but not yet in vtk.js…)
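As an illustration of step 3 above, the per-triangle test a picker performs can be sketched with the standard Möller–Trumbore algorithm. This is a standalone sketch in plain JavaScript, not the vtk.js picker API; in practice vtkCellPicker (ideally combined with a spatial locator over the mesh cells) does this for you:

```javascript
// Möller–Trumbore ray/triangle intersection.
// Standalone sketch of what a picker does per cell; not the vtk.js API.
function intersectRayTriangle(origin, dir, v0, v1, v2) {
  const EPS = 1e-9;
  const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
  const cross = (a, b) => [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
  const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

  const e1 = sub(v1, v0);
  const e2 = sub(v2, v0);
  const p = cross(dir, e2);
  const det = dot(e1, p);
  if (Math.abs(det) < EPS) return null; // ray parallel to triangle
  const invDet = 1 / det;
  const t0 = sub(origin, v0);
  const u = dot(t0, p) * invDet;
  if (u < 0 || u > 1) return null; // outside edge v0-v1
  const q = cross(t0, e1);
  const v = dot(dir, q) * invDet;
  if (v < 0 || u + v > 1) return null; // outside the triangle
  const t = dot(e2, q) * invDet;
  if (t < EPS) return null; // intersection behind the ray origin
  // world coordinate of the hit: origin + t * dir
  return [origin[0] + t * dir[0], origin[1] + t * dir[1], origin[2] + t * dir[2]];
}
```

A custom manipulator would run this (or let the picker run it) over the mesh cells and return the nearest hit as the new handle position.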

The hardware selector can be used to create a depth map of just your mesh, so you could do displayToWorld the way @Forrest explained, but using a cached buffer of your rendered geometry. That way the operation can be very fast and does not require any complex math that might not be available in JS.

You can do much better than that software. VTK has lots of tools for surface mesh manipulation, path search between control points, etc., so for example you don't need to specify a dense set of control points (which are very tedious to edit).

In 3D Slicer we added very powerful widgets for curve editing, with options to constrain the curve to a surface, insert control points, resample control points, etc. We also added a Dynamic Modeler module that lets you cut surface patches out with closed curves. See a short demo here of what you can do without any programming (and you can customize everything, simplify the workflow, the GUI, etc., using Python scripting):

If you must do everything in a web browser, then you can set up a cloud server to run the computation, or port the missing features to vtk.js. Some of the VTK algorithms are probably already there, but you might need to invest time into getting sophisticated widgets.

Thanks everyone for the quick replies and very useful information.

I managed to move points around on the mesh efficiently using the HardwareSelector (as mentioned by @Sebastien_Jourdain). However, I didn't find out how to use the depth map of only the mesh. Currently it uses the complete renderWindow, so if a ball lies on top of the mesh, the HardwareSelector returns a wrong coordinate. Using the CellPicker solves this issue (since I can specify which actors to pick from, ignoring the sphere actors), but it is indeed noticeably slower. Am I missing something, or is this just a downside of using the HardwareSelector?

A small additional question: is there an easy way to do the inverse operation, i.e. getting the 2D screen position of a 3D point on the mesh?

As for the math on the mesh, I’m able to do these calculations myself if needed.

@lassoan thanks for the suggestions. This software indeed looks very promising. I am however looking for a web-based approach and porting all algorithms to vtk.js isn’t feasible at this point.

When you use the hardware selector to capture your scene depth, you need to edit your scene by hiding the spheres. Then, ideally, everything after that point is only a lookup rather than [render + lookup].
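The hide / capture / restore pattern described above could be wrapped like this. This is a sketch: `captureDepth` is a hypothetical stand-in for whatever hardware-selector capture call you use, not a vtk.js API, while `setVisibility`/`getVisibility` are the regular vtk.js actor (vtkProp) methods:

```javascript
// Capture a depth snapshot with the sphere actors hidden, then restore
// their visibility. `captureDepth` is a stand-in for your actual
// hardware-selector capture call.
function captureWithoutSpheres(sphereActors, captureDepth) {
  const previous = sphereActors.map((a) => a.getVisibility());
  sphereActors.forEach((a) => a.setVisibility(false));
  try {
    return captureDepth(); // the buffer now contains only the mesh
  } finally {
    // restore each actor's original visibility even if capture throws
    sphereActors.forEach((a, i) => a.setVisibility(previous[i]));
  }
}
```

After this capture, subsequent mouse moves can be resolved against the cached buffer without re-rendering.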

Yes, the camera should be able to go from 3D to 2D.
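The underlying math for that 3D-to-2D step is just a projection of the world point through the camera's combined view-projection matrix, followed by the viewport mapping. A standalone sketch in plain JavaScript (not the vtk.js API; vtk.js has helpers on the camera/renderer that do this for you, e.g. the worldToDisplay-style utilities used by the widget code):

```javascript
// Project a 3D world point to 2D screen pixels, given a combined
// view-projection matrix (row-major 4x4 as a flat array) and the
// viewport size. This is the math behind a worldToDisplay call.
function worldToDisplay(point, viewProj, width, height) {
  const [x, y, z] = point;
  // homogeneous multiply: clip = M * [x, y, z, 1]
  const clip = [0, 1, 2, 3].map(
    (r) =>
      viewProj[4 * r] * x +
      viewProj[4 * r + 1] * y +
      viewProj[4 * r + 2] * z +
      viewProj[4 * r + 3]
  );
  const w = clip[3] || 1; // perspective divide
  const ndcX = clip[0] / w; // normalized device coords in [-1, 1]
  const ndcY = clip[1] / w;
  return [
    (ndcX + 1) * 0.5 * width, // pixels from the left
    (1 - (ndcY + 1) * 0.5) * height, // pixels from the top (y flipped)
  ];
}
```

With an identity matrix, the world origin lands in the center of the viewport, which is a quick sanity check for the viewport mapping.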

Thanks, I will look into your suggestion.

Meanwhile, I stumbled upon another problem: somehow, my custom widgetRepresentation (which displays a polyData ball following the mouse) isn't updated immediately when the mouse moves. I have the following setup:

  • (in my widget): publicAPI.handleMouseMove → computes 3D position on mesh (using color buffer) and sets the coordinates in the widgetState
  • (my custom representation): publicAPI.requestData → reads the position from the widgetState and generates a polyData sphere at that position

While the computation of the 3D coordinate is really fast (a few milliseconds, and it updates regularly), the requestData of my representation is only called when the mouse stops moving. As long as I keep moving the mouse, nothing updates (it seems that vtk delays or blocks the update). Any ideas on how to fix this?

Some additional information:

  • I already tried forcing an update using publicAPI.shouldUpdate on the representation (without success, same issue)
  • I also tried forcing rerender using model.apiSpecificRenderWindow.render() or model.interactor.render() (doesn’t fix it)
  • I noticed that when I make the actor of my representation pickable, it does update frequently. However, I don't want it to be pickable (and making it pickable produces strange artefacts: the mouse pointer continuously switches between 'default' and 'pointer').

I see. I don't remember the specifics, but indeed your pipeline does not execute when it should. By disabling the picking part, you may be tagging the object as "not changing". Without seeing how you set up your pipeline/widget, it is hard to tell. The pointer change is there to let the user know that a handle can be grabbed. Anyhow, @Forrest might be able to help if you provide more information.

Also, your representation should update the actor directly (its center) rather than recomputing the polydata of the sphere.
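One way to read that suggestion: keep a single sphere actor and translate it on each move instead of regenerating its polydata. A minimal sketch, assuming the actor exposes the usual vtkProp3D setPosition method (which vtkActor inherits in vtk.js):

```javascript
// Sketch: move an existing handle actor instead of rebuilding the
// sphere source on every mouse move. setPosition comes from vtkProp3D.
function makeHandleMover(actor) {
  return (worldCoords) => {
    const [x, y, z] = worldCoords;
    actor.setPosition(x, y, z); // marks the actor modified; the next render picks it up
  };
}
```

This avoids re-executing the source/mapper pipeline entirely, so the stale-requestData issue never comes into play for simple dragging.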

Thanks for the reply. Hopefully @Forrest can point me in the right direction, or ask for more information if needed. I'm connecting the pipeline within the representation using connectPipeline (from the vtk WidgetRepresentation), like so:

model.pipelines = {
  handle: {
    source: publicAPI,
    mapper: vtkMapper.newInstance(),
    actor: vtkActor.newInstance({ pickable: false }),
  },
};
vtkWidgetRepresentation.connectPipeline(model.pipelines.handle);
publicAPI.addActor(model.pipelines.handle.actor);

Do you call model.interactor.requestAnimation(publicAPI) in your handleMouseMove?

I don't. I just tried adding that, but unfortunately it doesn't seem to solve the problem.