The concept of the metaverse has catapulted into the public consciousness. But in a sea of emerging technologies, it is nascent and still not all that well defined.

No matter the final scope and texture of the metaverse, one thing is clear: the ultimate metaverse will need to depict real experiences as accurately and naturally as possible. Reality and society can’t glitch. Our immersive experience can’t stop because of a third-party code update or a spike in latency from a service provider. The metaverse requires ‘infinite nines’ of availability and globally intelligent redundancy, re-routing, and failover to deliver an always-on experience.

But one can only begin to imagine the network and connectivity that will be required to support such a world. And with so much competition and money in the space, the pressure to get things right the first time is intense.

Image: Data centers forcing houses out of a city, digital art – DCD/DALL·E 2

So, with this in mind, what changes might the metaverse bring to the fields of application development and digital experience?

The need to be truly global and redundant

The world is already in many ways moving to real-time or near real-time processing. For example, in the financial world, straight-through processing means transactions occur as they are initiated rather than in day-end batches. Mid-sized to larger organizations are also implementing large event-streaming platforms and ingesting these streams into huge analytics engines to understand customer needs and system responses to real-world conditions on the fly.
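To make the shift concrete, the sketch below shows what ingesting such an event stream might look like in practice. It is a minimal illustration only, assuming a Kafka-compatible platform accessed via the kafkajs library; the broker address, topic name, and analytics step are hypothetical placeholders rather than a reference to any specific deployment.

```typescript
// Minimal event-streaming consumer sketch using kafkajs (assumed library choice).
// Broker address, topic name, and the analytics step are illustrative placeholders.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "realtime-analytics", brokers: ["broker-1:9092"] });
const consumer = kafka.consumer({ groupId: "analytics-ingest" });

async function forwardToAnalytics(event: unknown): Promise<void> {
  // Placeholder for the real ingestion call (e.g. an HTTP POST to an analytics engine).
  console.log("ingested", event);
}

async function run(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: "transactions", fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      // Each transaction is processed as it arrives, rather than in a day-end batch.
      const event = JSON.parse(message.value?.toString() ?? "{}");
      await forwardToAnalytics(event);
    },
  });
}

run().catch(console.error);
```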

The metaverse takes this to another level, as it’s expected to be essentially real-time and persistent, with no pause button, continuing to exist and function even after users have left. This means the center of gravity is not the individual user, but the virtual world itself.

That level of functionality will demand more performance from what is already a best-effort Internet infrastructure, while incorporating virtual reality (VR) technologies, haptics, virtual identity and online-only currencies. Despite this, it will need to keep people in the metaverse experience, regardless of connection adversity or ambient traffic conditions.

Web-based code characteristics

As a result, any software application coded for the metaverse will need to be designed with the underlying network - and, specifically, any constraints posed by that network - in mind. That is very different from how many applications are currently designed, often with only passing consideration given to how they will perform on networks with varying latency and performance characteristics.
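As a rough illustration of what network-aware design can mean, the sketch below measures round-trip latency before deciding how rich an asset set to load. The probe URL, thresholds, and quality tiers are illustrative assumptions, not a prescribed approach.

```typescript
// Sketch of a client that adapts to measured network conditions rather than assuming
// an ideal connection. The probe URL and latency thresholds are illustrative.
type AssetQuality = "high" | "medium" | "low";

async function measureLatencyMs(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url, { method: "HEAD", cache: "no-store" });
  return performance.now() - start;
}

function chooseQuality(latencyMs: number): AssetQuality {
  // Degrade gracefully instead of stalling the experience on a slow link.
  if (latencyMs < 50) return "high";
  if (latencyMs < 150) return "medium";
  return "low";
}

async function loadScene(): Promise<void> {
  const latency = await measureLatencyMs("https://assets.example.com/ping");
  const quality = chooseQuality(latency);
  console.log(`Loading ${quality}-quality assets (measured RTT ~${latency.toFixed(0)} ms)`);
}

loadScene().catch(console.error);
```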

Today’s web-based applications are already heavily reliant on a large number of dependencies and interdependent systems and services in order to function. A break or vulnerability in that chain can already cause degradation or loss of service. The metaverse, and the applications coded for it, are likely to be made up of even more tightly integrated dependencies. At a basic level, it’s still going to be an application, or set of applications, distributed across a cloud or hyperscale data center infrastructure and relying on the Internet and cloud or private connectivity to perform. There will likely be a heavy reliance on the performance of APIs to deliver an integrated experience, alongside technologies like blockchain and payment processing, as well as potentially edge computing, taking processing power closer to the user.

In the event that one part of the experience fails to render, the whole experience will be materially impacted. Part of the metaverse may simply fail to appear in front of us. In an immersive virtual experience in which we are active participants, having a piece of reality fail before us won't cut it.
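One mitigation is to isolate each dependency so that a failure degrades, rather than erases, its part of the world. The sketch below shows one possible pattern: a short timeout on a dependent service call with a local fallback. The avatar service URL, timeout, and fallback asset are hypothetical.

```typescript
// Sketch of isolating one failing dependency so the rest of the scene still renders.
// The avatar service URL, timeout, and fallback asset are hypothetical.
async function fetchWithTimeout(url: string, timeoutMs: number): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}

async function loadAvatar(userId: string): Promise<string> {
  try {
    const res = await fetchWithTimeout(`https://avatars.example.com/${userId}`, 500);
    if (!res.ok) throw new Error(`avatar service returned ${res.status}`);
    return await res.text();
  } catch {
    // One broken dependency should degrade, not erase, this part of the world.
    return "placeholder-avatar";
  }
}
```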

There’s already evidence that metaverse developers won’t tolerate these kinds of experience glitches. Many are currently developing on multiple metaverse platforms, effectively hedging their bets as to which might first gain traction or offer a higher level of experience or resiliency. Metaverses that are unreliable, whether because of the way they are coded or because of underlying operating system and infrastructure constraints, may not get a second chance with either developer or user ecosystems.

Monitoring the metaverse

It isn’t just code quality and network resiliency that will determine whether a metaverse succeeds or fails. Delivery of metaverse experiences will also require new types of monitoring, testing, and insight.

Developers will likely have access to some open telemetry, courtesy of their metaverse platform of choice. However, they may also wish to instrument different parts of the end-to-end experience independently to verify that each is functioning and responding as intended. A collective intelligence approach will help to ensure that developers have access to the right combination of metrics to judge the health and performance of the metaverse, and of their specific contribution to it.
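As a simple illustration, the sketch below instruments a single slice of the experience using the OpenTelemetry API, one plausible choice for this kind of independent instrumentation; the exporter and SDK setup are omitted, and the span and attribute names are invented for the example.

```typescript
// Sketch of instrumenting one slice of a metaverse experience with the OpenTelemetry
// API. Exporter/SDK setup is omitted; span and attribute names are illustrative.
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("metaverse-scene");

async function loadChunkAssets(chunkId: string): Promise<void> {
  // Placeholder for the actual work being measured.
}

export async function renderSceneChunk(chunkId: string): Promise<void> {
  await tracer.startActiveSpan("render-scene-chunk", async (span) => {
    span.setAttribute("scene.chunk_id", chunkId);
    try {
      await loadChunkAssets(chunkId);
      span.setStatus({ code: SpanStatusCode.OK });
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      // Ending the span emits its timing to whichever backend is configured.
      span.end();
    }
  });
}
```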

This will include end-to-end visibility across the entire digital supply chain and the cloud and Internet networks that deliver the digital experience of the metaverse. It will be crucial to see, detect, and resolve performance issues before they cause users to experience abrupt outages or glitchy interactions. New methods will be required to fit this new reality, far beyond traditional monitoring.
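One plausible building block is synthetic probing of each link in that delivery chain, timing dependencies on a schedule so degradation is spotted before users feel it. The endpoint list, check interval, and latency threshold in the sketch below are illustrative assumptions.

```typescript
// Sketch of a synthetic probe that times each dependency in the delivery chain.
// The endpoint list, check interval, and latency threshold are illustrative.
const endpoints = [
  "https://api.example-metaverse.com/health",
  "https://assets.example-cdn.com/health",
  "https://payments.example.com/health",
];

async function probe(url: string): Promise<void> {
  const start = performance.now();
  try {
    const res = await fetch(url, { cache: "no-store" });
    const elapsed = performance.now() - start;
    // Flag slow or failing hops before users hit a glitch in the experience.
    if (!res.ok || elapsed > 300) {
      console.warn(`DEGRADED ${url}: status=${res.status}, latency=${elapsed.toFixed(0)} ms`);
    }
  } catch (err) {
    console.error(`DOWN ${url}:`, err);
  }
}

// Run the full set of probes once a minute.
setInterval(() => endpoints.forEach((url) => void probe(url)), 60_000);
```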

The metaverse, and the opportunities this new immersive reality will bring, are certainly exciting. But like all new frontiers in today’s Internet-centric environments, glitchy, error-prone experiences are not an option. In the metaverse, concepts like ‘downtime’ and ‘uptime’ cease to exist - because the only acceptable availability level for reality is ‘all of the time.’
