I/O Adventure Google Cloud architecture

Since 2020, many conferences have moved online – either fully or partially – as organizers, presenters, and attendees all reimagine how we approach our lives and our work. Google I/O Adventure is a virtual conference experience that brings some of the best parts of real-world events into the digital world.

Inside I/O Adventure, event attendees can see product demos, chat with Googlers and other attendees, earn virtual swag, engage with the developer community, create a personal avatar, and look for easter eggs.

This post details how we’re using Google Cloud to power the I/O Adventure experience.

The frontend consists of static assets that would be sufficient for the attendees to enjoy the experience solo, in a sort of “offline mode”.

The graphics and animations in the browser are rendered using popular libraries: React, PixiJS, GSAP, and Spine.

If the experience offered nothing more than this “offline mode” containing only static assets and links to external resources, then a minimal web server would be sufficient for the backend.

Of course it’s more fun to be immersed in the same world as other attendees, and to interact with them by text chat and voice chat! 

For this multiplayer online experience, we needed a more sophisticated backend, with game servers deployed as stateful pods in Google Kubernetes Engine (GKE).

The conference world map is large, with 12 different zones.

Each zone of the map is powered by a different, independent pod.

This means that any given attendee is connected to a single game server, depending on their zone (their location in the virtual world).
When there is high traffic, attendees are dispatched to one of several shards for each zone.

For I/O ’22, Google decided to overprovision by launching many shard servers before the event started, ready to be used immediately when needed. The scalability strategy was simply to fill a shard with attendees until the capacity threshold was reached, and then to start using the next, empty shard.
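
As a rough illustration of that fill-first strategy, here is a minimal Go sketch of how a router might pick a shard for a new attendee. The capacity threshold and shard counts are invented for the example; the actual values and routing code used for the event are not published.

```go
package main

import "fmt"

// capacityThreshold is an illustrative per-shard limit, not the real one.
const capacityThreshold = 100

// pickShard implements the simple fill-first strategy described above:
// keep filling the current shard until it reaches the threshold, then
// start using the next (pre-provisioned, empty) shard for that zone.
func pickShard(attendeeCounts []int) (int, error) {
	for shard, count := range attendeeCounts {
		if count < capacityThreshold {
			return shard, nil
		}
	}
	return 0, fmt.Errorf("all shards for this zone are full")
}

func main() {
	// Attendee counts per shard for one zone; shards 0 and 1 are full.
	counts := []int{100, 100, 37, 0, 0}
	shard, err := pickShard(counts)
	if err != nil {
		panic(err)
	}
	fmt.Println("route new attendee to shard", shard) // shard 2
}
```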

Shard servers are stateful. Each of them powers its own small world autonomously, requiring minimal communication with the other shards. The state of the shard (such as the current position of its attendees) is maintained in memory by the shard server executable, which is written in Go.
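
The shard server’s internals aren’t published, but the core idea can be sketched in a few lines of Go: an in-memory, mutex-guarded map of attendee positions owned by the shard process. All type and field names below are invented for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// Position is an attendee's location within the shard's zone.
type Position struct {
	X, Y float64
}

// ShardState holds one shard's world state entirely in memory, guarded by
// a mutex because many connection handlers update it concurrently.
type ShardState struct {
	mu        sync.RWMutex
	positions map[string]Position // keyed by attendee ID
}

func NewShardState() *ShardState {
	return &ShardState{positions: make(map[string]Position)}
}

// Move records an attendee's latest position.
func (s *ShardState) Move(attendeeID string, p Position) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.positions[attendeeID] = p
}

// Count reports how many attendees are currently in this shard.
func (s *ShardState) Count() int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return len(s.positions)
}

func main() {
	state := NewShardState()
	state.Move("attendee-123", Position{X: 10, Y: 4})
	fmt.Println("attendees in shard:", state.Count())
}
```

Keeping the state inside the process keeps each shard autonomous, which matches the minimal cross-shard communication described above.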

The shard servers share some information (for example, the number of attendees connected to a given shard) with a central server, which is responsible for routing new attendees to a shard. This information is maintained in a global Memorystore for Redis instance.
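
A hedged sketch of that reporting path is shown below, assuming the go-redis client and an invented key layout; the post doesn’t say which Redis client library or key names the real servers use.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// Memorystore for Redis is reached over a private IP; the address and
	// the key layout below are illustrative.
	rdb := redis.NewClient(&redis.Options{Addr: "10.0.0.3:6379"})

	// A shard periodically writes its attendee count under a well-known key.
	// A short TTL means a crashed shard simply disappears from the counts.
	shardKey := "shard:zone-keynote:2:count"
	if err := rdb.Set(ctx, shardKey, 37, 30*time.Second).Err(); err != nil {
		panic(err)
	}

	// The central routing server reads the counts back when assigning a
	// newly connected attendee to a shard.
	count, err := rdb.Get(ctx, shardKey).Int()
	if err != nil {
		panic(err)
	}
	fmt.Println("current attendees on shard 2:", count)
}
```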

Once an attendee’s client browser has been assigned to a shard, it establishes a WebSocket connection with the shard server and communicates bidirectionally throughout the experience, sending attendee actions and receiving environment state updates.
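
A minimal sketch of such a shard-side WebSocket handler follows, assuming the gorilla/websocket package and an invented message format; the actual protocol exchanged between the browser and the shard server isn’t described in the post.

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	// A real deployment would restrict origins; allow everything for brevity.
	CheckOrigin: func(r *http.Request) bool { return true },
}

// handleAttendee upgrades the HTTP request to a WebSocket, then reads
// attendee actions and writes environment updates on the same connection.
func handleAttendee(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()

	for {
		// Receive an attendee action, e.g. a movement or a chat message.
		_, action, err := conn.ReadMessage()
		if err != nil {
			return
		}
		// Reply with an environment state update; a real shard server would
		// broadcast updates derived from every attendee in the zone.
		update := append([]byte("ack:"), action...)
		if err := conn.WriteMessage(websocket.TextMessage, update); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/ws", handleAttendee)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```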

Each GKE Node has a local Redis instance used to communicate with a Voice server.
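
The post doesn’t detail how the shard and Voice server talk over that local Redis instance; one plausible mechanism is Redis pub/sub, sketched below with the go-redis client and invented channel names.

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// The node-local Redis instance; address and channel name are illustrative.
	local := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// The Voice server side: subscribe to voice-related events for a zone.
	sub := local.Subscribe(ctx, "voice:zone-keynote")
	defer sub.Close()

	// Wait until the subscription is confirmed before publishing, otherwise
	// the message below could be missed.
	if _, err := sub.Receive(ctx); err != nil {
		panic(err)
	}

	// The shard server side: publish an event, e.g. an attendee joining a
	// voice chat area.
	if err := local.Publish(ctx, "voice:zone-keynote", "attendee-123 joined").Err(); err != nil {
		panic(err)
	}

	msg, err := sub.ReceiveMessage(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("voice server received:", msg.Payload)
}
```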

To simplify the architecture, all of the servers are located in the same Google Cloud region (us-central1). This design choice provides low-latency communication among all of the server components. It also means that attendees in Europe, Africa, Asia, Oceania, and South America connect to distant overseas servers, which is acceptable: interactions like other attendees’ movements and text chat messages can tolerate up to several hundred milliseconds of latency.

To access I/O Adventure, attendees need to log in with a Google account. For this, we use the Firebase Authentication service. All avatars are customizable with hats, skin color, hand accessories, and so on. These avatar features, as well as game progress and completed quests, form an attendee profile that we store in a Firestore document database. Optionally, attendees can link their Google Developer Profile to their avatar, providing relevant information for their badge, which is visible to other attendees.
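
A small sketch of writing such a profile is shown below, assuming the cloud.google.com/go/firestore client; the project ID, collection name, and profile fields are invented, since the real schema isn’t published.

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/firestore"
)

func main() {
	ctx := context.Background()

	// Project ID, collection name, and fields are illustrative only.
	client, err := firestore.NewClient(ctx, "my-adventure-project")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Store an attendee profile keyed by the Firebase Authentication UID.
	uid := "firebase-uid-123"
	_, err = client.Collection("attendeeProfiles").Doc(uid).Set(ctx, map[string]interface{}{
		"hat":             "propeller",
		"skinColor":       "teal",
		"handAccessory":   "flag",
		"completedQuests": []string{"keynote-easter-egg"},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("profile saved for attendee", uid)
}
```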

In addition to I/O Adventure’s core servers and components, the conference experience also leverages a number of dynamic elements and integrations.

Most of these integrations are handled directly in I/O Adventure’s frontend (i.e. in the browser), decoupled from the core server architecture.

Conclusion

Building the I/O Adventure web experience was a huge effort led by the Googler Tom Greenaway and the Google I/O team, and built by the talented designers and developers from the Set Snail studio.

It was a success! The servers, all fully hosted on Google Cloud, handled the load gracefully, and the social media coverage was very positive. It turns out people love swag, even in virtual form!
