Multi-app: The next evolution in spatial computing

The tip of the catheter was beating in rhythm with the patient’s heartbeat. It was floating just inches in front of my face. As the catheter moved around, it built out a 3D map of the patient’s heart. It didn’t feel like I was inside the operating room observing the surgery. It felt like I was inside the patient, experiencing the surgery firsthand. Around me, I could see and hear virtual representations of the surgeon and their assistants as they remotely operated the robot performing the surgery. In the distance, I could see the familiar pink and purple sunset over the mountains of the default OpenVR compositor background. Instead of a single application driving this experience, it was actually a layering of several applications all running at the same time. The surgeon’s controller, the surgery visualization, and the people were each separate applications, all layered together. This virtual surgery is an early example of a transformation happening in the spatial computing industry.

The objective of this article is to help break down the different elements of this transformation with historical analogies, specific examples, and speculations to help you learn more and get involved.

In the early days of personal computing, users could only run a single application at a time. Want to run a different application? Quit the one you are running, then open another. This is very similar to where we have been with spatial computing. Whether it is VR or AR, you enter a single application, and that application provides everything needed to create the entire experience. If it is a multiplayer or social app, the developers have to build their own networking and avatar systems. Tools like Oculus’s Avatar SDK provide this functionality, but it is still bundled directly into each app. Looking at the PC ecosystem, these apps are still with us today; we just call them “fullscreen” apps. They are designed to take up the user’s full attention and are often centered around games and entertainment.

As personal computers evolved, graphical user interfaces were created that allowed users to navigate between different windows. Since each of these windows could be a separate application, users could now run more than one at a time. Simple applications like Notepad and Paint became incredibly valuable not for what they could do on their own, but for how they could be used alongside other applications. Features like copy and paste were added so users could move information between these different applications. Personal computers today can run dozens if not hundreds of applications and processes at the same time. Right now, my music player is keeping me in the zone while I write this Medium article in a word processor, browse the web for inspiration and references, and collaborate with a co-worker on Slack. Each of these applications was created by a different developer, yet together they form my single computing experience.

Spatial computing is starting to go through its own evolution toward multiple apps. OpenVR launched with support for overlay applications: primarily 2D panels that users can bring with them across different applications.
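To make the layering idea concrete, here is a toy sketch of what a compositor conceptually does when a fullscreen scene app and several overlay apps run at once. This is not the OpenVR API; every name here is hypothetical, and a real compositor works per-frame on GPU textures rather than on strings.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    app: str             # which application submitted this layer
    kind: str            # "scene" for the fullscreen app, "overlay" for overlay apps
    sort_order: int = 0  # higher-numbered overlays draw on top

class Compositor:
    """Toy compositor: one scene app plus any number of overlay apps."""

    def __init__(self):
        self.layers = []

    def submit(self, layer):
        self.layers.append(layer)

    def draw_order(self):
        # The scene app is drawn first, then overlays back-to-front.
        scene = [l for l in self.layers if l.kind == "scene"]
        overlays = sorted((l for l in self.layers if l.kind == "overlay"),
                          key=lambda l: l.sort_order)
        return [l.app for l in scene + overlays]

comp = Compositor()
comp.submit(Layer("Half-Life: Alyx", "scene"))
comp.submit(Layer("OVR Toolkit desktop panel", "overlay", sort_order=1))
comp.submit(Layer("Twitch chat overlay", "overlay", sort_order=2))
print(comp.draw_order())
# → ['Half-Life: Alyx', 'OVR Toolkit desktop panel', 'Twitch chat overlay']
```

The key property is that each layer comes from a separate process; no app needs to know the others exist for the user to see them all at once.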

Examples of 2D overlay use-cases:

VR Game Streamers:

While playing a VR game and streaming to their followers, streamers can use an overlay to monitor and respond to their Twitch chat feed.

How To See Twitch and YouTube Chat In Virtual Reality Games — Open VR Desktop Display Portal — Guide
GamerMuscleVideos https://www.youtube.com/watch?v=Xe-seYTX3HI

VR Artists:

A 2D canvas can be brought into the VR space, allowing artists to sketch and be inspired by their virtual surroundings.

Virtual plein air painting in Half-Life: Alyx, painted on a virtual canvas sitting in front of the Citadel: https://twitter.com/lizaledwards/status/1256959862216495105?s=20

Virtual Desktop Screens:

Users can be in whatever VR experience they want while still being able to see and interact with the 2D desktop applications running on their computer.

I love coding on VR in VR setup: * Lovr with lodr for auto-refresh on save * vscode * Valve Index for high enough res to read code in VR * OVR Toolkit to show desktop in VR on top of “game” (yes, still working on grabbing/moving things) https://twitter.com/nevyn/status/1301256618177441792?s=20

These 2D overlays have been extended even further to create 3D views. The apps that use these 3D overlays act in the same ways that AR apps do on phones and AR headsets. They are just augmenting virtual reality instead of actual reality.

Examples of 3D overlay use-cases:

Mixed Reality Recording

LIV provides tools and avatars to help with mixed reality recording of other applications. In this image, the avatar shown is a 3D mirror of the user.

How To Setup A Mixed Reality Avatar in VR Using LIV *Updated* https://www.youtube.com/watch?v=ZtKUxqlbzUc

Spatial Communications

Pluto creates a persistent social layer where people can see and hear each other regardless of what other applications they are running. In this image, Pluto avatars can be seen running alongside several Aardvark gadgets.

I had my mind blown after doing a @PlutoVR demo on Friday. They’re a telepresence app providing a persistent social layer across a multi-app ecosystem using OpenXR overlays to prototype AR experiences within VR. https://twitter.com/kentbye/status/1338333530938478592?s=20

WebXR Content

Metachromium is a browser made for OpenVR that supports WebXR content. This content can run alongside any other native applications in OpenVR.

https://store.steampowered.com/app/685110/Metachromium/

OpenVR provided the starting point for developers to experiment and explore these multi-app concepts. The OpenXR standard is now poised to become the foundation for the future of spatial multi-apps. Beyond creating a more unified way of connecting headsets and runtimes together, extensions are being added to facilitate these overlays and multi-app capabilities.

https://www.khronos.org/openxr/

As more of these tools exist and seamlessly run alongside each other, new opportunities arise for developers to create experiences that leverage them. This gets us back to the virtual surgery I started with.

In this new multi-app ecosystem, companies like Dopl Technologies can focus on developing the applications that enable their remote robotic surgery use-case. Instead of having to build out everything themselves, they are able to rely on other applications like Pluto to provide the communication layer. Separating each of these functions lets developers focus on their specific use-case instead of having to reinvent the wheel every time they build a new app.

“DOPL’s mission is to give patients access to advanced surgical care, anywhere. We’re developing technology that enables doctors to remotely control surgical robots so they can deliver treatment to patients around the world. Through our research, we learned that spatial technology gave doctors a better way to operate, but removing them from the operating room disrupted team dynamics. We wanted the best of both worlds — the benefits of a spatial procedure and the benefits of in-person communication — so we developed the concept for a virtual operating room that brought everyone back together. Around that time, we learned about Pluto.

Pluto’s service provides in-person spatial communication. It runs alongside our virtual operating room, enabling us to focus on medical tools while Pluto focuses on people. This paradigm shift in spatial development saves us a tremendous amount of time, effort and money. More importantly, it gives medical teams an effective way to communicate while they deliver treatment during complex telerobotic procedures. We believe this has significant implications for the future of spatial computing.

Imagine this: A doctor wakes up, puts on a headset, and goes to work in a virtual hospital. They view their patients’ images, consult with their team, and perform a few telerobotic procedures before they’re done for the day. We believe futures like this will be made possible by multiple spatial applications running alongside one another. We couldn’t be more excited to be part of the ecosystem that’s developing this future.”- Ryan James, Ph.D, Co-founder and CTO of DOPL Technologies

Having only experienced a taste of what the spatial multi-app ecosystem can provide, I can already say that it will be one of the most important driving factors in the adoption of spatial computing. Not only does it dramatically reduce upfront development costs for spatial software by breaking people, places, and things into discrete elements, it also opens up the opportunity for novel cross-application behaviors. A simple pen app doesn’t do a whole lot on its own in a single-app world. Running alongside other applications, however, it provides immense value.

Much like how copy and paste helped bridge the gap between windowed applications, there will be new and novel ways for spatial applications to share data. Imagine that the pen mentioned above can interact with a color picker app. I, as the user, may want to run my favorite pen app alongside my favorite color picker app, even if the pen comes with its own color picker. Each of these applications can be made by a different developer, yet each provides an important part of the user’s experience.
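One way the pen/color-picker pairing could work is a shared, clipboard-like bus where apps publish and subscribe to typed values. This is purely a hypothetical sketch to illustrate the idea; no such API exists today, and all of the names below are made up. The important property is that neither app knows the other exists; they only agree on a data type.

```python
class SpatialBus:
    """Toy shared bus: apps exchange typed values without knowing each other."""

    def __init__(self):
        self.values = {}       # data type -> latest published value
        self.subscribers = {}  # data type -> list of callbacks

    def publish(self, data_type, value):
        self.values[data_type] = value
        for callback in self.subscribers.get(data_type, []):
            callback(value)

    def subscribe(self, data_type, callback):
        self.subscribers.setdefault(data_type, []).append(callback)

class PenApp:
    """Draws strokes using whatever color is currently shared on the bus."""

    def __init__(self, bus):
        self.color = (0, 0, 0, 255)  # default: opaque black
        bus.subscribe("color/rgba", self.set_color)

    def set_color(self, rgba):
        self.color = rgba

class ColorPickerApp:
    """Publishes the user's chosen color for any interested app."""

    def __init__(self, bus):
        self.bus = bus

    def pick(self, rgba):
        self.bus.publish("color/rgba", rgba)

bus = SpatialBus()
pen = PenApp(bus)
picker = ColorPickerApp(bus)
picker.pick((255, 0, 128, 255))  # user picks a pink in the picker app
print(pen.color)                 # the pen app now draws in that color
# → (255, 0, 128, 255)
```

If the user later swaps in a different color picker from a different developer, the pen app keeps working unchanged, which is the whole point of the shared data type.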

New forms of physical interaction will exist between these apps. A ball app can bounce off other apps. A measuring tape can determine the length of other models and apps. A flashlight app can add light to another app or scene. A magnifying glass app could let you get a closer look at something. A telescope app could let you see at a distance more easily. None of these apps does much on its own, but paired with an ecosystem of other apps they become powerful tools.
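The measuring tape example hints at the minimum such interactions require: each running app would have to expose some spatial state, like an anchor position, to the shared space. A toy sketch of that idea, with entirely hypothetical app names and a plain dictionary standing in for whatever registry a real runtime would provide:

```python
import math

# Hypothetical shared registry: each running app exposes an anchor
# position (x, y, z, in meters) to the common coordinate space.
apps = {
    "globe_model": (0.0, 1.5, -2.0),
    "lamp_gadget": (3.0, 5.5, -2.0),
}

def measure(space, app_a, app_b):
    """What a measuring-tape app would do: distance between two anchors."""
    return math.dist(space[app_a], space[app_b])

print(measure(apps, "globe_model", "lamp_gadget"))
# → 5.0
```

A ball bouncing off another app or a flashlight illuminating one would need richer shared state (bounding volumes, materials), but the pattern is the same: apps interact through spatial state they expose, not through direct knowledge of each other.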

We are just at the beginning of this spatial multi-app computing transformation. Many of the engineering hurdles and design challenges still need smart people to solve them. Questions around business models and application distribution need to be answered. The opportunities are nearly endless on a long enough timeline, but the small subset of use-cases viable today still needs to be discovered.

If you are interested in shaping the next generation of computing and don’t know where to start, feel free to reach out directly. I’m happy to chat or point you in a direction based on what you are interested in. My email: forest (at) plutovr.com

Additional Resources:

Voices of VR Podcast

My cofounder Jared and I talk with Kent Bye about Pluto’s history and the power of multi-app for spatial computing

Pluto Developer Toolkit

The earliest release of Pluto’s developer documentation and tools for more easily creating multi-user multi-apps

Aardvark

A forward-looking framework for developing AR apps in VR.

MetaChromium

A web browser that supports WebXR on OpenVR

Cofounder of Pluto VR. Exploring #SpatialComputing #WebXR #OpenXR #Robotics
