How Did Quest 3 Change Mixed Reality Game Design?
Quest 3's full-colour, high-resolution passthrough at 18 pixels per degree of arc transformed Mixed Reality from a gimmick into a genuine game design platform. Mixed Reality on Quest 2 was a curiosity — a black-and-white passthrough view that felt more like a tech demo than a compelling medium. Having shipped a title on Quest 3 and worked extensively with the platform's MR SDK, I want to share the design lessons that surprised me most.
- The player's real room is your level — design to embrace it, not work around it.
- Environment depth occlusion is the highest-impact single rendering feature in MR — enable it by default.
- Anchor game state to semantic room elements (floor, walls, desk), not absolute world coordinates.
- Design interactions within a 60cm standing reach radius for safety; movement-required mechanics need explicit boundary design.
- Adjust virtual ambient intensity dynamically from the passthrough camera's exposure data for lighting consistency.
- Invest heavily in the tutorial — confused players in MR are more disoriented than in flat games.
How Do You Use the Real World as a Level Designer in MR?
The most powerful — and most terrifying — aspect of MR game design is that you cannot control the environment your player is in. Your game must work in a tiny studio apartment and a sprawling living room. It must work with furniture, pets, children, and bad lighting. This is not a limitation to work around; it is the medium's defining characteristic, and the best MR experiences lean into it.
Quest 3's Scene Understanding API gives you access to the semantic map of the player's room: floors, walls, ceilings, desks, couches, and doors are all available as mesh data. Use this data aggressively. Spawn enemies on the couch. Make the floor a lava field. Attach game elements to walls. Players react with genuine delight when the game acknowledges their specific, real space rather than pretending it doesn't exist.
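As a concrete illustration, iterating the room's semantic anchors and spawning on a specific surface might look like the following Unity sketch. It assumes the older `OVRSceneManager` Scene API from the Meta XR Core SDK; newer MR Utility Kit versions rename most of these types, so treat the identifiers as illustrative rather than definitive.

```csharp
// Sketch: spawn an enemy on each couch found in the room scan.
using UnityEngine;

public class CouchSpawner : MonoBehaviour
{
    [SerializeField] private GameObject enemyPrefab;

    private void Start()
    {
        // OVRSceneAnchor components are created by OVRSceneManager
        // after the room scan loads.
        foreach (var anchor in FindObjectsOfType<OVRSceneAnchor>())
        {
            var semantics = anchor.GetComponent<OVRSemanticClassification>();
            if (semantics != null &&
                semantics.Contains(OVRSceneManager.Classification.Couch))
            {
                // Place the enemy slightly above the couch surface.
                Instantiate(enemyPrefab,
                            anchor.transform.position + Vector3.up * 0.1f,
                            Quaternion.identity);
            }
        }
    }
}
```

The same pattern works for walls, desks, and doors by swapping the classification label.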
Why Is Depth Occlusion Critical for Mixed Reality?
Nothing breaks MR immersion faster than a virtual object that floats in front of real-world geometry that should be occluding it. Quest 3's depth API provides per-pixel depth data from the passthrough cameras. Use it to implement environment depth occlusion for all your virtual objects. The technical implementation involves rendering the depth texture into Unity's depth buffer before drawing your scene geometry.
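Enabling occlusion in practice is mostly configuration. A minimal sketch, assuming the Meta XR Core SDK's `EnvironmentDepthManager` component (namespaces and enum names vary between SDK versions, and your materials must use occlusion-capable shader variants):

```csharp
// Sketch: turn on environment depth occlusion at startup.
using Meta.XR.EnvironmentDepth;
using UnityEngine;

public class OcclusionBootstrap : MonoBehaviour
{
    [SerializeField] private EnvironmentDepthManager depthManager;

    private void Start()
    {
        // Soft occlusion blends object edges against the depth map;
        // hard occlusion is cheaper but produces harsher silhouettes.
        EnvironmentDepthManager.OcclusionShadersMode =
            OcclusionShadersMode.SoftOcclusion;

        depthManager.enabled = true; // starts streaming per-pixel depth
    }
}
```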
Without occlusion, virtual objects feel like screen overlays. With it, they feel genuinely placed in the room. This is the single highest-impact rendering feature in MR development and should be enabled by default in every project.
How Should You Anchor Objects in Mixed Reality?
World-space positioning — "put this object at (2.3, 0, 1.1) in the Guardian boundary space" — is fragile. Players recentre their guardian, move furniture, or play in a different room. Anchor your game state to semantic anchors: attach things to the floor, to the desk surface, to the wall. When a player resumes a session, re-derive the world-space positions from the current room scan rather than storing absolute coordinates.
For persistent content (objects the player has placed that should return next session), use Meta's Spatial Anchors API to store a UUID-keyed transform relative to a persistent scene anchor. This survives guardian resets and gives you reliable MR persistence across sessions.
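A sketch of that save path, assuming the `OVRSpatialAnchor` component from the Meta XR Core SDK (the async method names have changed across SDK versions, so verify against your installed version):

```csharp
// Sketch: persist a player-placed object via a spatial anchor,
// storing its UUID so the object can be reloaded next session.
using UnityEngine;

public class PersistentPlacement : MonoBehaviour
{
    public async void SavePlacement(GameObject placed)
    {
        var anchor = placed.AddComponent<OVRSpatialAnchor>();

        // Wait until the runtime has created and localised the anchor.
        while (!anchor.Created)
        {
            await System.Threading.Tasks.Task.Yield();
        }

        var result = await anchor.SaveAnchorAsync();
        if (result.Success)
        {
            // Key your own game state (prefab id, etc.) by this UUID
            // in whatever local storage you use.
            PlayerPrefs.SetString("placed_" + placed.name,
                                  anchor.Uuid.ToString());
        }
    }
}
```

On the next launch, load the stored UUIDs back through the SDK's unbound-anchor loading call and re-bind them to fresh GameObjects.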
How Do You Design Safe Interactions in Mixed Reality?
Designing interactions for MR is different from pure VR because the player is physically moving around a real room while wearing the headset. Game mechanics that require large arm sweeps, spinning, or stepping backward carry real collision risk. Design your interaction radius conservatively: actions that can be performed while standing still and reaching within a 60cm sphere are universally safe. Anything requiring the player to move their feet requires explicit safety design — countdown timers before high-movement sections, boundary warnings, large guardian requirements.
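The reach rule is easy to enforce programmatically. A minimal sketch of a validity check for interactable spawn points; the chest estimate (head position minus 30cm) is my own assumption, not an SDK value:

```csharp
// Sketch: reject spawn points outside the 60cm standing-reach sphere.
using UnityEngine;

public static class ReachCheck
{
    public const float SafeReachMetres = 0.6f;

    public static bool IsWithinSafeReach(Vector3 target, Vector3 headPos)
    {
        // Approximate the chest as 30cm below the head position.
        Vector3 chest = headPos + Vector3.down * 0.3f;
        return Vector3.Distance(target, chest) <= SafeReachMetres;
    }
}
```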
The best MR experiences I've played keep the player relatively stationary while making them feel like the world is coming to them. Think tower defence, crafting, puzzle games — genres where staying in one spot is natural.
How Do You Match Virtual and Real-World Lighting in MR?
Real-world lighting changes moment to moment: sunlight shifts, lamps turn on, someone walks past a window. Virtual objects rendered with fixed lighting look increasingly wrong as real-world conditions change. Quest 3 exposes the passthrough camera's exposure data, which you can use to dynamically adjust your virtual scene's ambient intensity. This single technique dramatically improves lighting consistency throughout a play session.
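In Unity terms, the adjustment reduces to smoothly driving `RenderSettings.ambientIntensity` from an exposure estimate. How you obtain that estimate is SDK-specific; the `GetPassthroughExposure()` call below is a placeholder for whatever source your SDK version provides, and the mapping constants are illustrative:

```csharp
// Sketch: drive virtual ambient intensity from a passthrough
// exposure estimate, smoothed to avoid visible lighting pops.
using UnityEngine;

public class AmbientMatcher : MonoBehaviour
{
    [SerializeField] private float smoothing = 2f; // lerp rate per second

    private void Update()
    {
        float exposure = GetPassthroughExposure(); // placeholder, 0..1

        // High exposure implies a dark room, so dim the virtual ambient.
        float target = Mathf.Lerp(1.2f, 0.3f, exposure);

        RenderSettings.ambientIntensity = Mathf.Lerp(
            RenderSettings.ambientIntensity,
            target,
            smoothing * Time.deltaTime);
    }

    private float GetPassthroughExposure()
    {
        return 0.5f; // stub: wire up your SDK's exposure source here
    }
}
```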
For materials, prefer physically-based materials with low metallic and moderate roughness values. Highly specular virtual objects look immediately artificial because their highlights don't match the real-world light sources the player can see around them. When in doubt, go matte.
How Do You Teach Game Mechanics in Mixed Reality?
Teaching MR mechanics is harder than teaching VR mechanics because you can't rely on a controlled virtual environment. Your tutorial must work in the player's actual room, which you know nothing about. Design tutorials that are environment-agnostic: explain mechanics through diegetic in-world prompts, use the player's hands as anchors for UI panels, and avoid requiring specific room configurations to progress.
From playtesting: players who successfully complete the tutorial in their first session have dramatically higher retention. Invest heavily here. A confused player in MR is more disoriented than a confused player in flat games — the cognitive overhead of the mixed environment amplifies frustration.
How Do You Design UI in Mixed Reality?
UI in MR is one of the hardest unsolved design problems on the platform. Screen-space UI — the standard flat overlay that works perfectly in traditional games — looks wrong in MR because it floats at a fixed depth in front of everything, including real-world objects that should occlude it. World-space UI anchored to a fixed position in the room is better, but breaks down if the player moves or if the anchor becomes inconvenient.
The approach that has worked best for me: hand-anchored UI panels that follow the player's wrist. The non-dominant hand's wrist is a natural billboard that the player can glance at, then lower out of view. Implemented with Meta's hand tracking, this surface hosts your inventory, pause menu, and status information. It is always reachable and never in the way.
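A sketch of the wrist-follow behaviour, assuming your hand-tracking rig exposes a wrist joint Transform (for example via `OVRSkeleton`); the offset values are illustrative starting points:

```csharp
// Sketch: keep a UI panel floating just above the non-dominant
// wrist, billboarded towards the player's head.
using UnityEngine;

public class WristPanel : MonoBehaviour
{
    [SerializeField] private Transform wrist; // left-hand wrist joint
    [SerializeField] private Transform head;  // centre-eye camera
    [SerializeField] private Vector3 offset = new Vector3(0f, 0.08f, 0f);

    private void LateUpdate()
    {
        // Offset in the wrist's local frame so the panel rides the arm.
        transform.position = wrist.position + wrist.rotation * offset;

        // Face the player so the panel is readable at a glance.
        transform.rotation = Quaternion.LookRotation(
            transform.position - head.position, Vector3.up);
    }
}
```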
For in-world interactive elements — buttons, sliders, crafting interfaces — use depth-occluded world-space objects positioned within the 60cm comfortable reach radius. Ray-casting from the dominant hand with a visible pointer ray is universally understood after Quest's widespread adoption. Pinch-to-select (the hand tracking equivalent of a button press) is more precise than full hand grab for small UI targets.
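The ray-plus-pinch pattern can be sketched with `OVRHand`'s pinch API from the Meta XR Core SDK, using the SDK-provided `PointerPose` as the aim ray; the `OnSelect` message name is a hypothetical hook for your own UI targets:

```csharp
// Sketch: ray-cast from the dominant hand's pointer pose and
// select the hit target on the rising edge of an index pinch.
using UnityEngine;

public class PinchSelector : MonoBehaviour
{
    [SerializeField] private OVRHand hand; // dominant hand
    private bool wasPinching;

    private void Update()
    {
        bool pinching = hand.GetFingerIsPinching(OVRHand.HandFinger.Index);

        if (pinching && !wasPinching &&
            Physics.Raycast(hand.PointerPose.position,
                            hand.PointerPose.forward,
                            out RaycastHit hit, 2f))
        {
            hit.collider.SendMessage("OnSelect",
                SendMessageOptions.DontRequireReceiver);
        }

        wasPinching = pinching;
    }
}
```

Pair this with a visible line renderer along the pointer ray so the player always knows what they are aiming at.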
Whatever approach you choose: test with the depth occlusion layer enabled throughout. UI that clips through real walls or furniture looks immediately broken and destroys the MR illusion faster than almost any other issue.
How Do You Optimise Performance for Mixed Reality on Quest 3?
Quest 3 is a mobile platform running a Snapdragon XR2 Gen 2, and the MR passthrough compositing pipeline has a fixed cost on top of your scene rendering budget. Approach it with mobile performance assumptions from day one; desktop VR habits will blow the frame budget on device.
The specific constraints I design around for Quest 3 MR:
- Target 72Hz minimum, 90Hz preferred. The passthrough feed runs at 90Hz; a lower frame rate creates visible judder as virtual and real content blend at different rates. Dropping to 72Hz is tolerable; anything lower is noticeable and breaks immersion.
- Single-pass stereo rendering. Unity's XR plugin infrastructure supports single-pass instanced rendering, which cuts the draw call cost of stereo rendering roughly in half. Enable it from the XR Management settings and verify it is active in the GPU profiler.
- Limit dynamic shadows. Shadow rendering is expensive on mobile and particularly noticeable in MR where shadows cast by virtual objects onto real surfaces rarely match real-world lighting conditions anyway. Use baked lighting for static elements and reserve dynamic shadows for the one or two virtual objects that need them most.
- The Scene Understanding mesh has a cost. The room mesh you get from Scene Understanding is high-polygon and updates dynamically. Simplify it aggressively using a mesh decimation pass before use, and only update it when the player explicitly requests a room rescan, not continuously.
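The last point is mostly a matter of gating when the mesh reloads. A minimal sketch, assuming the older `OVRSceneManager` API (newer MR Utility Kit versions expose a different room-loading call):

```csharp
// Sketch: refresh the Scene Understanding mesh only when the
// player explicitly requests a rescan, never continuously.
using UnityEngine;

public class RescanOnRequest : MonoBehaviour
{
    [SerializeField] private OVRSceneManager sceneManager;

    // Bound to a menu button; this is the only path that
    // reloads the room mesh during play.
    public void OnPlayerRequestedRescan()
    {
        sceneManager.LoadSceneModel();
    }
}
```

Run your decimation pass on the resulting mesh in the load callback, then cache the simplified version for collision and occlusion queries.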
What Game Genres Work Best in Mixed Reality?
Not every genre translates to MR, and choosing the wrong genre is the fastest route to a frustrating development experience. From my work on the platform and extensive playtesting of commercial MR titles, these genres consistently play to MR's strengths:
Tower defence and base building are the natural fit. The player stands in a central position, places defences on real-world surfaces, and watches enemies enter through real walls and doors. The room becomes the game board. These experiences generate genuine surprise and delight when the game acknowledges specific room features — an enemy crawling under the player's real desk, a defence turret mounted to the real window frame.
Puzzle games benefit from the precision of hand tracking and the intimate scale that MR enables. Puzzles designed to sit on a desk surface — mechanical assemblies, logic circuits, miniature architecture — feel uniquely tactile in MR because the scale relationship between the player's real hands and the virtual objects is physically correct.
Social/multiplayer experiences are an underexplored space with significant potential. Two players sharing the same physical space while interacting with shared virtual content have a unique experience unavailable on any other platform. The technical challenges (shared spatial anchors, latency) are non-trivial but solvable with the current SDK.
Why Is Mixed Reality the Next Frontier for Game Developers?
Mixed Reality on Quest 3 is the most interesting design space I've worked in since the original consumer VR wave in 2016. The colour passthrough, the scene understanding, the depth occlusion — it all comes together into a medium that can do things no other platform can. The design language is still being invented. If you're a developer with VR experience looking for your next challenge, MR is where the frontier is right now.
Need help with VR/XR?
I'm a senior developer with 16+ years' experience, including AAA projects at Ubisoft. Let's discuss how I can help with your project.
Start a Conversation