VTK on the Oculus Quest 2: Ongoing

So I created this thread to document any progress I make on getting VTK visualization to run on the Oculus Quest 2, a standalone VR headset. Currently the VR example found in the VTK.js project doesn’t work within the native Oculus browser; it does, however, work in the Firefox Reality browser.

There are a few issues with the visualization, nothing major though:

  • The visualization is a tad rough; anti-aliasing will likely need to be enabled.
  • There’s no in-VR menu, controller models or interaction.
  • Hand-tracking isn’t currently supported by the browser.
  • The camera light is too narrow.
  • There’s no floor for reference.

Initial work will be on getting a proper exporter working in my existing VTK program to export a complete scene. A macro for ParaView exists here that can do this; however, last time I checked it uses some classes not exposed to VTK (e.g. ‘from paraview import simple’). If I can figure out a VTK-native method, I’ll then likely merge sections of the VTK.js SceneExplorer and VR examples to get something usable for fast visualization.
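
For reference, here is a minimal sketch of what a VTK-native export could look like, assuming a VTK build recent enough (8.2+) to ship vtkJSONSceneExporter, which writes the same scene layout the SceneExplorer example reads; the names and paths are placeholders and I still need to confirm it covers everything in my scenes:

```python
# Minimal sketch (assumes VTK >= 8.2 with vtkJSONSceneExporter available):
# export the current render window's scene to the vtk.js scene layout
# (an index.json plus data files) that SceneExplorer / vtkHttpSceneLoader
# can consume. Paths are placeholders.
import vtk

def export_scene(render_window, output_path="exported-scene"):
    exporter = vtk.vtkJSONSceneExporter()
    exporter.SetRenderWindow(render_window)
    exporter.SetFileName(output_path)  # written out as a directory
    exporter.Write()

# Typical usage after the scene has been composed and rendered once:
# export_scene(renWin, "my-scene")
# The output can then be zipped into a .vtkjs archive if preferred.
```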

I’ll then see if I can implement some interaction using the controllers; I have some existing controller models I created for a separate project that should work. Initial work will likely be button presses and maybe joystick interaction; ideally raycasting will be added eventually.

User interaction will likely be limited to the non-VR mode until I get something usable; thankfully it’s fairly easy to switch.

I have no JavaScript experience so this might take a while; any help would be appreciated.

Are you working with vtk.js or VTK? The Oculus Quest 2 seems to be already compatible with the virtual reality interface of VTK, as there is a “yes” in both the SteamVR and PC columns here.

My current work uses VTK; I’ll likely be doing the brunt of the work in that and then using vtk.js just for the VR render. I’d prefer to keep the Quest as a standalone device rather than using it as a PC headset. I know VTK supports VR when compiled specifically to support it, but I would prefer to keep the two processes separate to avoid hassle for the end user.

To add some clarification: the Oculus Quest 1 & 2 are VR headsets that can be tethered to a PC for SteamVR, but they also have their own ARM-based CPU which allows them to run without the need for a PC. The OS is a heavily customised version of Android, so compiling the Android build might not work, and there are added complications w.r.t. the control scheme. These headsets can run a variety of Android apps, including browsers that support WebVR and vtk.js’ VR implementation. My plan is to use my existing implementations of VTK’s functions to compose a scene and then export it; afterwards I can import it into a vtk.js application to view in VR.

VTK virtual reality works really well, including direct volume rendering of 4D data sets (e.g., a beating heart), so it is a good solution for professional applications and full-quality data sets. However, I agree that it is a major inconvenience that you need to use a gaming laptop (which is heavy, hot, and has abysmal battery life) or a desktop PC (large and not portable). For mass deployment, classroom use, etc., quality and performance are not the highest priority and the computational power of the standalone headsets suffices (and these devices are cheap and portable).

We keep pushing top-quality virtual reality for professional use (medical training and surgical planning), but we are also involved in creating content (such as https://www.openanatomy.org/) that would be suitable for inexpensive virtual reality visualization for education and patient communication.

What kind of data sets do you work with? What viewing features do you have in mind?
There seem to be tons of JavaScript 3D viewers that support virtual reality, too. You can export the scene to glTF and load it into these viewers. Have you tried them? How do they compare to the capabilities of vtk.js?

I’m currently doing a PhD in Nuclear Physics so the kinds of data I work with span a pretty wide spectrum:

  • Most of my data is discrete polydata spread across a germanium crystal volume as voxels (e.g. risetime maps, PSA deviation), in which case the glyph3D filter works really well (a rough sketch of this approach is below this list).

  • I do electromagnetic simulations of charge distributions to create signal responses, so I utilize the vtkImageData class a lot to check how the fields look within complex geometry, combined with line glyphs to show particle tracks. I also slice these fields into 2D and perform contouring to check for discontinuities in the fields.

  • The core of my PhD is in the development of efficient search algorithms using topological data analysis, so it’s useful to visualize several hundred thousand glyphs with weighted and directed edges, which VTK does incredibly well. By linking the point IDs I can also see how the response of certain geometric positions is expressed in a multidimensional (~100D) response space.

  • I use VTK for visualizing the geometry of experimental setups for real-time control (e.g. it’s useful to see what part of the crystal is being hit by the gamma-beam without visual inspection). This is possible using VTK’s STL reader.

  • I also use it for the validation of geometry for nuclear simulation in GEANT4 and for combining the produced data with CAD models (GEANT4 has an amazingly poor renderer). I wrote my own importer & exporter class for GDML files, but VTK is used to combine the geometry on render.
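
As a rough illustration of the voxel/glyph case in the first bullet above, here is a toy sketch (made-up data, not my actual analysis code) of cube glyphs coloured by a scalar array:

```python
# Toy sketch of the voxel-style glyphing described above: cube glyphs
# placed on a point grid and coloured by a made-up "risetime" scalar.
import vtk

points = vtk.vtkPoints()
risetime = vtk.vtkFloatArray()
risetime.SetName("risetime")
for i in range(10):
    for j in range(10):
        points.InsertNextPoint(i, j, 0.0)
        risetime.InsertNextValue(float(i + j))

grid = vtk.vtkPolyData()
grid.SetPoints(points)
grid.GetPointData().SetScalars(risetime)

cube = vtk.vtkCubeSource()
cube.SetXLength(0.9)
cube.SetYLength(0.9)
cube.SetZLength(0.9)

glyph = vtk.vtkGlyph3D()
glyph.SetInputData(grid)
glyph.SetSourceConnection(cube.GetOutputPort())
glyph.SetColorModeToColorByScalar()
glyph.SetScaleModeToDataScalingOff()  # keep every voxel the same size

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(glyph.GetOutputPort())
mapper.SetScalarRange(risetime.GetRange())

actor = vtk.vtkActor()
actor.SetMapper(mapper)
```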

I also have gotten involved in a few other projects around the lab which utilize some of the methods described earlier:

  • I wrote a 3D reconstruction software package for Iodine-131 Thyroid imaging for cancer diagnosis that combines CT and gamma-camera data for automatic segmentation.
  • I wrote a visualisation package for a Compton Camera that allows for the combination of LIDAR SLAM data with Compton backprojection cones and CAD models to determine the presence of nuclear waste contaminants in places like nuclear power plants.

The main feature I had in mind for this work would be a room-scale 6-DoF display of a scene at 1:1 scale that would allow the user to walk around the scene and inspect certain actors. For example, if the lab Compton Camera were utilized to scan unusual activity in a concrete wall, it’d be useful for an RPS to be able to inspect the activity projection on a SLAM reconstruction of the environment before visiting the contaminant, lowering their exposure to harmful radiation.

I currently have an example of this working in VTK; it combines several types of poly and image data in the same scene.

I’ve had a cursory look at some other JavaScript viewers online. A lot of them have specific use cases (e.g. point clouds, model viewing) but seem pretty capable at their specific jobs; VTK and vtk.js have the benefit of supporting multiple different data types at once. The only software I’ve found online that does that would be the ParaView Web suite, however AFAIK there’s no VR support at the moment. Notable examples include VRMol for molecular visualization and Potree for point clouds (which states it works in VR, however I had no success).

A-Frame has glTF support and probably the most VR-friendly interface, so I could write a viewer in that and use the glTF exporter class like you suggest; however, AFAIK things like 3D image data aren’t supported by the exporter class, and A-Frame doesn’t really have the functionality to do more complicated operations like contouring & slicing.
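
For completeness, my understanding is that the glTF route from plain VTK would look roughly like this (assuming a VTK build that ships vtkGLTFExporter, 8.2+); it only captures the rendered polygonal geometry, which is exactly the image-data limitation mentioned above:

```python
# Rough sketch (assumes vtkGLTFExporter is available, VTK >= 8.2):
# export the rendered polygonal scene to a single glTF file that could
# be dropped into A-Frame or another glTF viewer. Image/volume data is
# not carried across.
import vtk

def export_gltf(render_window, filename="scene.gltf"):
    exporter = vtk.vtkGLTFExporter()
    exporter.SetRenderWindow(render_window)
    exporter.SetFileName(filename)
    exporter.InlineDataOn()  # embed buffers so one file is produced
    exporter.Write()
```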

The vtk.js example by comparison is a little simple; however, as it’s mainly there to demonstrate the VR camera renderer, I don’t think that’s too much of an issue. I’m pretty confident that the features exposed in vtk.js should be enough to get something workable, however the scene will likely be static until I can figure out an interaction scheme.

Raycasting from the controllers would go a long way towards producing a viable solution for interaction; VRMol effectively dedicates an entire virtual wall of 3D text to work as a menu. This should be relatively simple to implement with a prop picker and callbacks.
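
For the record, the picking pattern I have in mind looks roughly like this (shown in desktop VTK Python since that’s what I know; in VR the click would come from a controller button and the ray from the controller pose, but the structure should be similar):

```python
# Sketch of pick-on-click using vtkPropPicker and an observer callback.
import vtk

picker = vtk.vtkPropPicker()

def on_left_click(interactor, event):
    x, y = interactor.GetEventPosition()
    renderer = interactor.GetRenderWindow().GetRenderers().GetFirstRenderer()
    if picker.Pick(x, y, 0, renderer):
        actor = picker.GetActor()
        if actor is not None:
            # React to the picked actor, e.g. highlight it.
            actor.GetProperty().SetColor(1.0, 0.0, 0.0)
            interactor.GetRenderWindow().Render()

# 'interactor' is assumed to be an existing vtkRenderWindowInteractor:
# interactor.AddObserver("LeftButtonPressEvent", on_left_click)
```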

I’ve had my Oculus Quest 2 for less than a day, so my findings might be a little immature; part of my motivation for posting in this forum is to establish whether there’s anyone else out there interested in developments like this.

There is a path where you can use ParaViewWeb to dynamically generate the scene and feed a vtk.js render window. That way you should be able to do local VR using vtk.js while having a dynamic scene.

Just as a small update, I’ve gotten some of the VTK.js examples to work on the Quest; progress is slow as I’m only working on this on weekends. Here are some initial findings:

The Quest 2 is surprisingly capable at rendering; the XR2 chipset and 6 GB of RAM give the HMD reasonable performance. Volume rendering is a struggle but remains at least somewhat usable; I’d estimate that it’s rendering at around 30 FPS for complex DICOM data like what’s shown below. That’d probably be fine for normal screens but gets a bit nauseating in VR. Scenes without volume data render at the native 90 FPS.

If you’re wondering, it’s completely possible to stick your head directly into the chest; there’s no noticeable near-clipping, so you can have a pretty freaky experience.

To use room-scale VR you have to use a secure connection, so HTTPS is an absolute necessity. The current implementation of the web server used with the VTK.js examples is HTTP-only by default but can be converted by modifying the server config file. Additionally, to actually use VR the render window must invoke a VR event after the webpage has finished rendering; the easiest way to do this is to use a JavaScript function tied to an HTML button.
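
For anyone who would rather not touch the dev-server config, one quick-and-dirty alternative would be to build the example and serve the static output over HTTPS using Python’s standard library; this assumes a self-signed certificate pair (cert.pem / key.pem, e.g. generated with openssl), which the headset’s browser will warn about:

```python
# Quick-and-dirty HTTPS static server for a built vtk.js example.
# Assumes a self-signed cert/key pair and that the built files sit in
# the current directory.
import http.server
import ssl

server = http.server.HTTPServer(("0.0.0.0", 4443),
                                http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
server.socket = context.wrap_socket(server.socket, server_side=True)
print("Serving on https://0.0.0.0:4443 (self-signed, expect a warning)")
server.serve_forever()
```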

Because of the secure connection, using functions that import local data has proven problematic (read: impossible at the moment). I believe it’s something to do with how the HTTP proxy works, however it’s a challenge to debug. Loading external datasets that are hosted over HTTPS seems to work fine.

The controllers have no input once within the VR render; the joystick is used for zooming when in a traditional render, so presumably there’s still some functionality there. The controllers are only visible in the gif due to the recording method; in normal usage they’re completely absent.

I still haven’t been able to convert the ParaView export script into something usable by VTK, so if anyone has a workable function for exporting all the actors in a scene into something that can be easily read by VTK.js, that would be appreciated.
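
In the meantime, a crude stop-gap might be to walk the renderer’s actors and dump each mapper’s polydata to its own .vtp file, which vtk.js can read back individually (colours and properties wouldn’t survive this way); a rough, untested sketch:

```python
# Rough stop-gap: write every actor's polydata to its own .vtp file.
# Colours/properties are not carried across, and image/volume data
# would need a different writer.
import vtk

def dump_actors(renderer, prefix="actor"):
    actors = renderer.GetActors()
    actors.InitTraversal()
    for i in range(actors.GetNumberOfItems()):
        actor = actors.GetNextActor()
        mapper = actor.GetMapper()
        if mapper is None:
            continue
        mapper.Update()
        data = mapper.GetInput()
        if not isinstance(data, vtk.vtkPolyData):
            continue
        writer = vtk.vtkXMLPolyDataWriter()
        writer.SetFileName(f"{prefix}_{i}.vtp")
        writer.SetInputData(data)
        writer.Write()
```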

It turns out that ParaView Glance has a VR rendering option that works in a very similar manner to the vtk.js method; I didn’t notice it tucked in the global settings as it only exposes itself when inside a VR-ready browser. About half of the examples work fine (namely the opaque geometry); volumes are painfully slow to render, and it looks like the eye spacing is set incorrectly. All the menu options seem to work well. The examples that failed presented the error dialogue “NS_ERROR_NOT_AVAILABLE” and didn’t render; I’m not entirely sure what that means.

All in all I’m pretty happy; it’s obviously not as good as a PC-based render using VTK with VR bindings, but for a standalone implementation with zero installation required it’s quite neat.

Hey,
I really hope you will succeed; I would really love to bring my confocal images into my Quest 2.
Cheers!

We had the same problem of nauseating volume rendering of very large volumes in 3D Slicer. We have found a solution that works very well: when the head translation or rotation speed reaches a certain threshold, we automatically lower the rendering quality (see the implementation here). When your head does not move quickly you tolerate much lower refresh rates, so when we detect that the head is not moving we switch back to rendering at full quality. It all feels very natural and intuitive; it is like stopping when you want to have a closer look at something.
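
To make the idea concrete, the pattern in plain VTK terms is roughly the following; this is a simplified sketch with made-up thresholds assuming a vtkGPUVolumeRayCastMapper, not the actual Slicer implementation (which also watches rotational speed and is linked above):

```python
# Simplified sketch of motion-adaptive volume rendering quality:
# compare the camera position between ticks and coarsen the ray
# sampling while the head is moving fast. Thresholds are made up.
import math
import vtk

SPEED_THRESHOLD = 0.05       # metres per tick, arbitrary
FAST_SAMPLE_DISTANCE = 2.0   # coarse sampling while moving
FINE_SAMPLE_DISTANCE = 0.25  # full quality when still

_last_pos = None

def adapt_quality(camera, volume_mapper):
    global _last_pos
    pos = camera.GetPosition()
    if _last_pos is not None:
        speed = math.sqrt(sum((a - b) ** 2 for a, b in zip(pos, _last_pos)))
        volume_mapper.SetAutoAdjustSampleDistances(False)
        if speed > SPEED_THRESHOLD:
            volume_mapper.SetSampleDistance(FAST_SAMPLE_DISTANCE)
        else:
            volume_mapper.SetSampleDistance(FINE_SAMPLE_DISTANCE)
    _last_pos = pos

# Hook adapt_quality(renderer.GetActiveCamera(), mapper) into a render
# or timer observer so it runs once per frame.
```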

That seems like a pretty intuitive and useful solution; I’ll be sure to look into using more dynamic-resolution rendering once I get the key functionality ironed out. I’ve been swamped with some pretty gnarly deadlines for my PhD (which unfortunately has little overlap with this work), so progress has been relatively slow. I have, however, been able to get remote debugging working on my Quest 2 and figured out how to extract the necessary information from the WebVR API to hopefully do what I want.

I was pointed towards VRPanManipulator as a possible interaction method for this work but haven’t really found much documentation on it (AFAIK it also thinks the Quest 2 is a trackpad), so I think I’ll likely go down the route of making my own interactor class for the time being.

I’m trying to keep my updates fairly minimal so this thread doesn’t hog the main page too much. I’ll likely post an update when I get some form of VR controller to be displayed (probably a cube at first); I have the controller positions & orientations, so it should be relatively simple to convert these to world coordinates and use some sort of callback function to update them when moved (again, JS newbie).
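
For my own notes, the conversion should just be position + unit quaternion to a 4×4 transform; in VTK Python terms (the same arithmetic applies in JavaScript, and the pose layout is assumed to be position [x, y, z] and orientation [qx, qy, qz, qw]) it would look roughly like this:

```python
# Sketch: turn a controller pose (position + unit quaternion, assumed
# here as [x, y, z] and [qx, qy, qz, qw]) into an actor transform.
import math
import vtk

def pose_to_transform(position, quaternion):
    qx, qy, qz, qw = quaternion
    transform = vtk.vtkTransform()
    transform.Translate(position)
    # Convert the quaternion to axis-angle for RotateWXYZ (degrees).
    angle = 2.0 * math.degrees(math.acos(max(-1.0, min(1.0, qw))))
    s = math.sqrt(max(0.0, 1.0 - qw * qw))
    if s > 1e-6:
        transform.RotateWXYZ(angle, qx / s, qy / s, qz / s)
    return transform

# controller_actor.SetUserTransform(pose_to_transform(pos, quat))
# would then be called from the per-frame callback that polls the poses.
```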

After that I’m planning on working out some form of raycast picker that acts in a similar way to a laser pointer; the Quest controllers have a good number of buttons which are exposed to the API, so it should be relatively simple to add in multiple functions without overlap.
