What’s missing for XR to become mainstream?

As we enter the second quarter of 2023, many are predicting a ‘VR winter’, and there are concerns about the future of augmented reality (AR) and virtual reality (VR) - particularly in comparison with the new hype around AI sparked by ChatGPT and its ilk, and with the term ‘metaverse’ adding to the confusion. Will AR and VR ever go mainstream, or is extended reality (XR) just a big waste of money with no actual market? We believe it has mainstreaming potential, and this article explores how the real blockers to XR’s democratization can be removed. It might even change your mind!

First off, despite these negative predictions, the market has been flooded since the beginning of the year with exciting announcements of new XR devices like the PlayStation VR2, Xiaomi Technology AR Glass Discovery, OPPO Air Glass 2, ZTE Corporation Nubia Neovision Glass headset, and, of course, Apple's upcoming headset, plus many more. It's clear that the big tech players are taking XR’s potential very seriously (as demonstrated and well documented by Alex Heath from The Verge on the Meta VR roadmap and Google's 'Project Iris').

After the initial hype and subsequent disillusionment surrounding virtual reality and the unrealistic expectations of a virtual world resembling Ready Player One, the market is now shifting its focus again to a more practical and pragmatic approach. Augmented/mixed reality seems to be where most efforts are now concentrated - pure virtual reality platforms without passthrough or mixed reality features are becoming increasingly rare.

Apple’s Tim Cook shared a similar sentiment at the end of last year:

“AR is the biggest technological promise for the future for years [to come]”
“We are really going to look back and think about how we once lived without AR" compared to VR, which he sees as suitable “for set periods, but not [as] a way to communicate well.”

It’s also important to note that venture capitalists and funds continue to invest in XR. As Alfred Lin from Sequoia Capital stated in the company’s Seed Fund V recruitment campaign in January 2023:

“AR will lead a revolution in the physical world; VR will change the way we work and play. Potentially the next consumer platform to drive wide-scale innovation, we may enter a world where we work and play through the “lens” of AR/VR glasses.”

Despite this, the mass adoption of XR has yet to happen, and the “aha” moment for AR and VR is still on the horizon. So, what’s missing? Why isn’t that bright and promising future already here?

Beyond the hardware and mass market use cases, we believe that one critical aspect is the need for new and innovative interfaces that can enhance the user experience and enable XR to go mainstream. In this article, we'll explore some of the ideas put forth by experts in the field, such as Ben Evans and Josh Wolfe of Lux Capital, on how new interfaces can help bridge the gap between XR's potential and its current limitations. This is as important as the need to improve the hardware or reinvent use cases for AR and VR. Let’s dig deeper into the blockers to XR mass adoption together!

1. Hardware challenges still need to be addressed

Although there have been significant advancements in VR technology since the release of Nintendo’s Virtual Boy in 1995, there are still some limitations to current augmented reality (AR) and virtual reality (VR) headsets. While we can now wear these devices for several hours without motion sickness, certain challenges still need to be addressed for this tech to reach its full potential.

Nintendo Virtual Boy - the first ever virtual reality headset, released in 1995.

In particular, AR and VR devices require lightweight, high-resolution displays that are comfortable to wear for extended periods of time. Current AR headsets use small displays that either project images onto the user's retina or present them on a screen before the user's eyes. However, these displays often have a limited field of view and resolution, which can hamper the effectiveness of AR experiences. While the initial iterations of AR headsets were based on birdbath and bug-eye lens designs, which were inexpensive to scale, the form factors were too cumbersome and dark for widespread adoption. The emergence of a new generation of form factors based on optical waveguide designs, employing geometric or diffractive lenses, has enabled the development of thinner, lighter, more promising 'air' AR glasses. (For further information, refer to this comprehensive article by VR Expert.) VR headsets, on the other hand, still offer a limited field of view that induces a sense of tunnel vision, reduces immersion and causes motion sickness.

In addition, AR devices need to operate for extended periods without requiring frequent recharging, a vital consideration for mobile AR applications. Battery technology and power management improvements will be critical for facilitating the widespread adoption of AR.

Both AR and VR devices also need to address comfort and ergonomics concerns when worn for long durations.

Karl Guttag's comical graph from 2019 provides a succinct summary of the hardware status quo for AR, and it's still very relevant. Although the killer form factor for mass adoption is yet to emerge, he has reviewed some promising options, like the latest Lumus Ltd. glasses showcased at CES 2023.

Karl Guttag's AR Expectations, published in 2019 on @KGOnTech and www.kguttag.com

We can expect hardware improvements to be forthcoming. Alfred Lin from Sequoia Capital affirmed that

“The hardware is getting better and better: brick headsets to portable headsets, and lightweight glasses will be coming soon. One day, those glasses will be replaced with daily contacts. Imagine having all the power of your smartphone piped through your eyes constantly. Imagine combining that power with AI algorithms to surface relevant information, suggest creative ideas and options, improve your every decision, and make your life more fulfilled, fun, and entertaining.”

Although hardware is still an obstacle for now, it seems that’s about to change. So, if we have a clear plan and the ecosystem is aligned with improving it, what other challenges are hindering the widespread adoption of XR technology?

2. Mass market use cases are yet to be completely defined.

When it comes to mass adoption, the main question is: why wear a smartphone in front of our eyes? When would we need instant access to digital information in the case of AR, or immersivity in a digital world for VR?

A. There are already mature use cases for AR and VR

For AR, there are proven benefits in the industrial and professional sectors. As Ori Inbar, founder of the AWE (Augmented World Expo), mentioned in a podcast, Google Glass, which many thought disappeared in 2015, was still being used daily by thousands of employees up until this year. This is backed up by Vuzix Corporation, a leading professional AR company, which recently reported its best earnings quarter in 10 years and has been joining forces with health, logistics and manufacturing partners. Furthermore, Vuzix CEO Paul Travers believes that a mainstream AR device will emerge soon, not only for professional applications but for consumers as well.

“When you wear a display that gives you the bulk of what you really need off your phone without taking your phone out, that's a pretty powerful toolset,” he said.

Travers compares the evolution of AR to the iPhone, which started with just three applications and has since evolved into a complex device. He expects AR experiences to get better and more capable over time, and for new B2B and B2C use cases to come.

For VR, gaming and entertainment are, of course, the mature use cases, as highlighted by the number of VR demos at leading industry events like SXSW 23, from major IPs like Red Bull partnering with @Varjo to the craziest innovations, as tested by Antony Vitillo, also known as @SkarredGhost, with this Vitruvian VR device. As this topic has already been widely discussed, and as mentioned in the intro, we believe that AR will require a more profound societal shift than VR. We won’t be doing a deep dive here - just have a look at the lengthy conversations on Twitter about VR gaming or the PlayStation VR2 for an understanding of how much gaming and entertainment have already been reinvented by VR!

B. What’s needed for new AR and VR use cases? It’s not about immersivity, but more about interfaces.

The first thing that comes to mind when we think of VR is immersivity, and the ability to experience the digital world for long periods of time. However, as we mentioned in the introduction, we should move away from this clichéd notion. While immersivity is certainly important for gaming and leisure activities, we don't believe that it should replace real-world interactions. This sentiment is echoed by experts in the field, such as Josh Wolfe from Lux Capital, who believes that XR technology should act as an extension and enhancement of our natural senses, rather than replace them entirely. The focus should be on providing access to the digital layer faster, in a way that’s more private and accessible for everyone, rather than solely on immersivity.

To this end, we believe that XR devices should be looked at like smartwatches. No one could have predicted the market for connected watches (110M+ units sold last year!), yet they’ve become a must-have item for many people because they provide faster, more private and more convenient access to digital notifications. We believe that AR devices will offer similar benefits, making it easier for users to access digital information seamlessly in their daily lives.

  • Regarding speed: as digital takes up a growing share of our lives, higher-speed access to information becomes a priority. The first massively adopted smart glasses will probably be like smartwatches for the eyes, providing instant access to simple yet important notifications like GPS instructions, message previews and so on. They’ll further reduce the latency between the useful information you need, hosted online, and your brain.
  • Regarding privacy: AR and VR are highly private ecosystems, as the visual display is only visible to the user. Unlike smartphones, voice assistants or smartwatches, people around you can’t sneak a peek at your screen and read your notifications. Moreover, with AR or VR headsets, your activities remain private, and no one can see what you’re doing. While some concerns may arise, which may necessitate product modifications to respect individuals' privacy (for example, having a visual indicator when you start recording your surroundings using your smart glasses, as is the case with Snapchat Spectacles, to notify people around you), the privacy-by-design element of these devices has countless benefits. It not only ensures the security of your data but also makes the technology less intrusive, so it doesn’t interfere with our real-life interactions. Imagine being at a café with a friend, having to look down at your phone’s screen to see when your Uber is arriving, or having to abruptly interrupt your conversation to ask Google Assistant or Siri to change the song. It's frustrating and disrupts the conversation’s flow. The privacy that mixed reality provides enables people to enjoy the present moment while using technology as an enabler rather than a source of disturbance, hence better merging the digital and real layers of our lives.
  • Regarding accessibility: AR glasses provide a more accessible way to interact with the digital world compared to smartphones, computers or voice assistants. With AR glasses, you don't need to physically handle or constantly check a handheld device, making them ideal for situations where you’re physically busy, such as biking or working in professional contexts that require both hands - moments of ‘situational’ disability - as well as improving access for people with physical disabilities. This accessibility is further increased as they reduce the need to move or use our arms, making them suitable for a wider range of users, based on their abilities or context of usage.

Mixed reality devices offer three key benefits: speed, privacy and accessibility, which will undoubtedly create new usage opportunities in the future. However, while the advantages of displaying information are already being leveraged, it’s surprising that these factors are not yet being applied to the interaction aspects of these devices. Despite having faster, more private and more convenient access to information right before our eyes, we’re still relying on the same controls and interfaces that have been around for the past 30 years: touch, buttons, clicks, and sometimes voice. This creates a gap in the value proposition that, in our view, hinders the full experience and potential of mixed reality from being realized and taking off.

3. To support the upcoming AR and VR revolution, a new generation of human-computer interfaces needs to emerge.

AR and VR are not just new entertainment options, but a new way of interacting with digital technology. Current interfaces like websites and apps are designed for screens, mice and keyboards, and are intended for use in a static position. AR/VR completely changes this by providing dynamic, 3D experiences that can be used on the go. This expands the possibilities of how we can engage with digital content and information. A major success factor for AR/VR will be their adaptation to users’ mobility and wearability needs. If we accept wearing glasses and headsets tomorrow, it’s because they’ll allow us to access the digital layer in many more situations than we can today: while moving and on the go. If we look at the last 30 years, the main updates to our daily-life technologies have been increases in their mobility. If the hardware is going to take this path, the interface must follow the same direction and offer better controls while we’re mobile. Without intuitive, always-accessible interfaces adapted to new user needs, XR will remain a niche technology that can only be accessed by a small group of enthusiasts. But with the right interfaces, it has the potential to become a truly transformative technology that changes the way we live, work and play.

The controls we developed in the past 30 years are outdated.

In a conversation with experts from Andreessen Horowitz, Neal Stephenson, former Chief Futurist at Magic Leap and the well-known sci-fi writer who coined the term 'metaverse', expressed his surprise that we’re still relying on 'primitive' controls like keyboards, mice and touchpads - which he refers to as 'Victorian technologies' - to navigate mixed reality ecosystems. These controllers demand constant access to our hands, limiting their accessibility and mobility, and aren’t fast enough to support the full potential of mixed reality. Even the remote control model suffers from speed limitations.

Over the past decade, we've seen a few other control methods attempt to keep up with the high-speed, private and accessible expectations of AR glasses. But let's be real, they all fall short.

  • Voice control is hands-free and fast, but we all know our innermost thoughts aren’t meant to be shared with our AR glasses.
  • As for hand tracking, it's a step in the right direction, but let's face it, waving our hands around all day is just asking for a workout. We're trying to enhance our reality, not become Olympic athletes.

So, it's time for some fresh, innovative interfaces that can keep up with the demands of mixed reality. We need something that's fast, accessible and private. It's time to unlock the full potential of AR and VR, folks.

Neural interfaces will be key to developing a new mix of human-computer interfaces that’ll enable this AR revolution.

Picture a world where your digital reality blends so seamlessly with your physical one that you start wondering if you're living in a simulation. Directions, notifications and translations all projected onto the world around you, making your daily life more convenient and personalized. It's like having a personal assistant on steroids, without the need for awkward water cooler conversations.

But how do you control all this information without looking like you're doing an interpretive dance routine? Enter neural interfaces, the controllers of the future. With electrodes placed in our earphones or smart glasses, we can control our devices with facial muscles, eye movements and even brain activity. No more fumbling with touchpads or struggling with hand-tracking methods that leave you waving at thin air.
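To make the idea concrete, here's a deliberately simplified toy sketch of how such a control signal might work: rectify a raw muscle-activity stream, smooth it into an envelope, and fire a "click" when the envelope crosses a threshold. All values and names here are hypothetical illustrations, not how any real neural interface product is implemented.

```python
# Toy sketch: turning a noisy EMG-like signal into a discrete "click" event.
# All thresholds, window sizes and the sample signal below are made up
# for illustration - real neural interfaces are far more sophisticated.

def detect_clicks(samples, threshold=0.5, window=5):
    """Rectify the signal, smooth it with a moving average, and return
    the indices where activity first crosses the threshold."""
    rectified = [abs(s) for s in samples]
    clicks = []
    active = False  # simple latch: one event per burst of activity
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1): i + 1]
        envelope = sum(chunk) / len(chunk)
        if envelope > threshold and not active:
            clicks.append(i)
            active = True
        elif envelope <= threshold:
            active = False
    return clicks

# A quiet baseline with one simulated muscle burst (e.g. a jaw clench).
signal = [0.02, -0.03, 0.01, 0.04, -0.02, 0.01, 0.03, -0.01,
          0.9, -1.1, 1.0, -0.8, 0.95,
          0.02, -0.01, 0.03, 0.01, -0.02]
print(detect_clicks(signal))
```

The point of the sketch is the interaction model: a tiny, involuntary-looking muscle contraction becomes a private, hands-free input event, with no screen to glance at and no gesture for bystanders to see.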

And don't worry about privacy, because with neural interfaces, your controls are your own. No more worrying about your colleagues overhearing you voice-sending that love message to your partner.

As AR glasses become powered by the next generation of generative AI and ambient computing, you won't even need to formulate complex commands. Personalized suggestions will be generated, making navigation between tasks even simpler. It's like having a personal assistant that knows you better than you know yourself.

In short, with neural interfaces, AR and XR become a game-changer. They're high-speed, totally private and accessible to almost all humans. Go on, control your digital world with the power of your mind, or at least your facial muscles. And who knows, with neural interfaces becoming more mainstream, maybe one day we'll finally understand what our cats are thinking.

4. Conclusion: what does the road to AR/VR look like, and most importantly, who’s going to be building it?

In the mature use cases we’ve identified (Enterprise AR, Gaming VR), hardware improvement is the key factor for mass adoption, and this change is ongoing.

In the upcoming use cases, we foresee different steps:

  • In the short term, mixed reality will be leveraged as an extended display, similar to what some brands like Nreal are offering.
  • In the mid term, augmented reality devices will be used like smartwatches in front of our eyes, with redesigned experiences and interfaces focusing on speed, privacy and accessibility.
  • In the long term, augmented reality will not only be a smartwatch offering a 2D display of information; it’ll also be able to showcase spatially anchored 3D visuals - like Pokémon jumping around, or a virtual hologram of a colleague seated on the real chair in front of you, right before your eyes. And this whole new layer of information - whose breadth and richness we can’t yet imagine - will require an even bigger improvement in our interfaces and seamless interactions!

As Josh Wolfe asked in the podcast quoted earlier, where will the solutions come from? Major players like Meta investing in and buying startups and tech companies like CTRL-labs? Apple finally reinventing AR? Or perhaps another small player disrupting the approach?

To conclude, we’d like to quote Ben Evans in his super comprehensive article about XR ecosystems:

“Going back to the mobile internet in 2002, many of us knew that this would be big, almost no-one thought it would replace PCs, and only a crazy person would have said that the telcos, Nokia and Microsoft would play no role at all and a has-been PC company in Cupertino and a weird little ‘search engine’ would build the new platforms. So be careful building castles in the sky.”

We’re at the very beginning of this road, but the journey is bound to be highly disruptive and there’s plenty of room for startups to bring lots of value to the table. That’s why Wisear was created, why we’re strongly convinced that AR and VR are promising fields, and why we’re happy to wake up every morning to shape this industry!