VTK on the Oculus Quest 2: Ongoing

So I created this thread to document any progress I make on getting VTK visualization to run on the Oculus Quest 2, a standalone VR headset. Currently the VR example found in the vtk.js project doesn’t work in the native Oculus browser; it does, however, work in the Firefox Reality browser.

There are a few issues with the visualization, nothing major though:

  • The visualization is a tad rough, anti-aliasing will likely need to be enabled.
  • There’s no in-VR menu, controller models or interaction.
  • Hand-tracking isn’t currently supported by the browser.
  • The camera light is too narrow.
  • There’s no floor for reference.

Initial work will be on getting a proper exporter working in my existing VTK program to export a complete scene. A macro for ParaView exists here that can do this; however, last time I checked it uses some classes not exposed to VTK (e.g. ‘from paraview import simple’). If I can figure out a VTK-native method, I’ll then likely merge sections of the vtk.js SceneExplorer and VR examples to get something usable for fast visualization.
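For what it’s worth, VTK itself ships a `vtkJSONSceneExporter` (in its IO/Export module) that writes a scene directory the vtk.js SceneExplorer can load, which may be the VTK-native route. A minimal sketch, assuming the VTK Python bindings are installed (the lazy import just keeps the sketch loadable without them; the function name is mine):

```python
def export_scene_for_vtkjs(render_window, out_dir="scene"):
    """Sketch: write the scene in `render_window` as a vtk.js scene directory.

    Assumes the VTK Python package is available. The output directory
    (an index.json plus data files) is the layout vtk.js scene loaders
    such as the SceneExplorer expect.
    """
    import vtk  # imported lazily so the sketch parses without VTK installed

    exporter = vtk.vtkJSONSceneExporter()
    exporter.SetRenderWindow(render_window)
    exporter.SetFileName(out_dir)  # note: a directory name, not a single file
    exporter.Write()
    return out_dir
```

The directory can then be served over HTTP and pointed at from the vtk.js side.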

I’ll then see if I can implement some interaction using the controllers; I have some existing controller models I created for a separate project that should work. Initial work will likely cover button presses and maybe joystick interaction; ideally raycasting will be added eventually.
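Whatever API ends up exposing the controller state (WebXR, for instance, polls gamepad button booleans once per frame), the button-press layer mostly reduces to edge-detecting those polled snapshots into press/release events. A language-agnostic sketch of that logic (all names here are mine, none of this is vtk.js or WebXR API):

```python
def button_events(prev_state, curr_state):
    """Turn two per-frame button snapshots into press/release events.

    `prev_state` / `curr_state` map a button name to a bool (pressed?).
    Returns a list of (button, event) tuples in snapshot order.
    """
    events = []
    for button, pressed in curr_state.items():
        was_pressed = prev_state.get(button, False)
        if pressed and not was_pressed:
            events.append((button, "press"))    # rising edge
        elif was_pressed and not pressed:
            events.append((button, "release"))  # falling edge
    return events

# Example: trigger just pressed, grip just released
# button_events({"trigger": False, "grip": True},
#               {"trigger": True, "grip": False})
# → [("trigger", "press"), ("grip", "release")]
```

The events can then be dispatched to whatever callbacks the scene registers, keeping the frame loop itself trivial.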

User interaction will likely be limited to the non-VR mode until I get something usable; thankfully it’s fairly easy to switch between the two.

I have no JavaScript experience, so this might take a while; any help would be appreciated.

Are you working with vtk.js or VTK? The Oculus Quest 2 already seems to be compatible with VTK’s virtual reality interface, as there is a “yes” in both the SteamVR and PC columns here.

My current work uses VTK; I’ll likely be doing the brunt of the work in that and then using vtk.js just for the VR render. I’d prefer to keep the Quest standalone rather than use it as a PC headset. I know VTK supports VR when compiled specifically to support it, but I would prefer to keep the two processes separate to avoid hassle for the end user.

To add some clarification: the Oculus Quest 1 & 2 are VR headsets that can be tethered to a PC for SteamVR, but they also have their own ARM-based CPU which allows them to run without a PC. The OS is a heavily customised version of Android, so compiling the Android build might not work; there are also added complications w.r.t. the control scheme. These headsets can run a variety of Android apps, including browsers that support WebVR and vtk.js’ VR implementation. My plan is to use my existing implementations of VTK’s functions to compose a scene and then export it; afterwards I can import it into a vtk.js application to view in VR.

VTK virtual reality works really well, including direct volume rendering of 4D data sets (e.g., a beating heart), so it is a good solution for professional applications and full-quality data sets. However, I agree that it is a major inconvenience that you need to use a gaming laptop (which is heavy, hot, and has abysmal battery life) or a desktop PC (large and not portable). For mass deployment, classroom use, etc., quality and performance are not the highest priority, and the computational power of the standalone headsets suffices (and these devices are cheap and portable).

We keep pushing top-quality virtual reality for professional use (medical training and surgical planning), but we are also involved in creating content (such as https://www.openanatomy.org/) that would be suitable for inexpensive virtual reality visualization for education and patient communication.

What kind of data sets do you work with? What viewing features do you have in mind?
There seem to be tons of JavaScript 3D viewers that support virtual reality, too. You can export the scene to glTF and load it into these viewers. Have you tried them? How do they compare to the capabilities of vtk.js?

I’m currently doing a PhD in Nuclear Physics so the kinds of data I work with span a pretty wide spectrum:

  • Most of my data is discrete polydata spread across a germanium crystal volume as voxels (e.g. risetime maps, PSA deviation) in which case the glyph3D filter works really well.

  • I do electromagnetic simulations of charge distributions to create signal responses, so I use the vtkImageData class a lot to check how the fields look within complex geometry, combined with line glyphs to show particle tracks. I also slice these fields into 2D and perform contouring to check for discontinuities in the fields.

  • The core of my PhD is in the development of efficient search algorithms using topological data analysis so it’s useful to visualize several hundred thousand glyphs with weighted and directed edges which VTK does incredibly well. By linking the point IDs I can also see how the response of certain geometric positions is expressed in a multidimensional (~100D) response space.

  • I use VTK for visualizing the geometry of experimental setups for real-time control (e.g. it’s useful to see what part of the crystal is being hit by the gamma-beam without visual inspection). This is possible using VTK’s STL reader.

  • I also use it for the validation of geometry for nuclear simulation in GEANT4 and for combining the produced data with CAD models (GEANT4 has an amazingly poor renderer). I wrote my own importer & exporter class for GDML files, but VTK is used to combine the geometry on render.

I also have gotten involved in a few other projects around the lab which utilize some of the methods described earlier:

  • I wrote a 3D reconstruction software package for Iodine-131 Thyroid imaging for cancer diagnosis that combines CT and gamma-camera data for automatic segmentation.
  • I wrote a visualisation package for a Compton Camera that allows for the combination of LIDAR SLAM data with Compton backprojection cones and CAD models to determine the presence of nuclear waste contaminants in places like nuclear power plants.

The main feature I had in mind for this work is a roomscale 6-DoF display of a scene at 1:1 scale that allows the user to walk around and inspect certain actors. For example, if the lab’s Compton camera were used to scan unusual activity in a concrete wall, it would be useful for an RPS to be able to inspect the activity projection on a SLAM reconstruction of the environment before visiting the contaminant, lowering their exposure to harmful radiation.

I currently have an example of this working in VTK, it combines several versions of poly and image data in the same scene.

I’ve had a cursory look at some other JavaScript viewers online. A lot of them have specific use cases (e.g. point clouds, model viewing) but seem pretty capable at their specific jobs; VTK and vtk.js have the benefit of supporting multiple different data types at once. The only software I’ve found online that does the same is the ParaViewWeb suite; however, AFAIK it has no VR support at the moment. Notable examples include VRMol for molecular visualization and Potree for point clouds (which states it works in VR, though I had no success).

A-Frame has glTF support and probably the most VR-friendly interface, so I could write a viewer in that and use the glTF exporter class like you suggest. However, AFAIK things like 3D image data aren’t supported by the exporter class, and A-Frame doesn’t really have the functionality to do more complicated operations like contouring & slicing.
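For the A-Frame route, the exporter class in question is VTK’s `vtkGLTFExporter`, which writes the surface (polydata) actors of a render window to a .gltf; as noted, image/volume data won’t survive the trip. A hedged sketch, assuming the VTK Python bindings (the function name is mine):

```python
def export_scene_to_gltf(render_window, path="scene.gltf"):
    """Sketch: write the polydata actors of a render window to glTF.

    Assumes the VTK Python package is available. Only surface geometry
    is exported; image/volume data is dropped.
    """
    import vtk  # imported lazily so the sketch parses without VTK installed

    exporter = vtk.vtkGLTFExporter()
    exporter.SetRenderWindow(render_window)
    exporter.SetFileName(path)
    exporter.InlineDataOn()  # embed buffers so the output is a single file
    exporter.Write()
    return path
```

Any operations like contouring or slicing would have to be baked into polydata on the VTK side before export, since the viewer only sees the resulting meshes.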

The vtk.js example is, by comparison, a little simple; however, as it’s mainly there to demonstrate the VR camera renderer, I don’t think that’s too much of an issue. I’m pretty confident that the features exposed in vtk.js should be enough to get something workable, though the scene will likely be static until I can figure out an interaction scheme.

Raycasting from the controllers would go a long way towards producing a viable interaction scheme; VRMol effectively dedicates an entire virtual wall of 3D text to work as a menu. This should be relatively simple to implement with a prop picker and callbacks.
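The picking itself is just ray geometry: cast a ray from the controller pose and test it against the actors. A self-contained sketch of the cheap broad-phase test (ray vs. bounding sphere), independent of any VTK/vtk.js picker API:

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the ray to a bounding sphere, or None.

    `direction` must be normalized. Standard quadratic ray/sphere test:
    |o + t*d - c|^2 = r^2  →  t^2 + 2 t (d·oc) + |oc|^2 - r^2 = 0,
    where oc = o - c.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c_term = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c_term
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearest intersection
    return t if t >= 0 else None  # sphere behind the controller: no hit

# e.g. a controller at the origin pointing down +z towards an actor
# bounded by a unit sphere at (0, 0, 5) hits it at distance 4:
# ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0) → 4.0
```

In practice the picker would do this against the actual geometry; the sphere test just culls actors before the precise (and more expensive) per-cell intersection.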

I’ve had my Oculus Quest 2 for less than a day, so my findings might be a little immature; part of my motivation for posting in this forum is to establish whether there’s anyone else out there interested in developments like this.

There is a path where you can use ParaViewWeb to dynamically generate the scene and feed a vtk.js render window. That way you should be able to do local VR using vtk.js while having a dynamic scene.