Mixed reality (MR) combines elements of the physical world and digital content to create interactive environments where virtual objects coexist and interact with real-world surroundings. MR lies between augmented reality (AR) and virtual reality (VR): unlike AR, which overlays digital content on reality, and unlike VR, which fully replaces reality, MR anchors virtual objects within the physical space and enables two-way interaction.
Key characteristics of MR
- Spatial coherence
- Two-way interaction
- Multiuser synchronization
- Precise tracking and mapping
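The first and last of these characteristics can be illustrated with a minimal sketch: a virtual object is stored relative to a world-fixed spatial anchor, so it stays put as the device moves and tracking updates only the device pose. The class and function names below are illustrative, not any particular SDK's API.

```python
import numpy as np

def pose_matrix(rotation_deg: float, translation):
    """Build a 4x4 rigid transform: rotation about Z plus translation."""
    t = np.radians(rotation_deg)
    m = np.eye(4)
    m[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    m[:3, 3] = translation
    return m

class SpatialAnchor:
    """A world-fixed pose; virtual objects are stored relative to it."""
    def __init__(self, world_from_anchor):
        self.world_from_anchor = world_from_anchor

    def to_world(self, point_in_anchor):
        p = np.append(point_in_anchor, 1.0)
        return (self.world_from_anchor @ p)[:3]

# An anchor placed on a real table at (2, 0, 0.8) in world coordinates.
anchor = SpatialAnchor(pose_matrix(0.0, [2.0, 0.0, 0.8]))
# A virtual cube 10 cm above the anchor.
cube_world = anchor.to_world([0.0, 0.0, 0.1])
# The cube's world position is independent of where the headset moves:
# tracking updates the device pose, not the anchor.
```

The key design point is that content lives in anchor space, not screen space; this is what separates spatially coherent MR content from a screen-locked AR overlay.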
Definitions
- Augmented Reality (AR) overlays digital content onto the real world while keeping the physical environment primary. Typical devices: smartphones, tablets, AR glasses (e.g., smartphone AR apps, HoloLens in limited AR mode).
- Mixed Reality (MR) blends real and virtual elements more deeply, enabling interactive, spatially anchored virtual objects that respond to real-world geometry and user actions. Typical devices: advanced headsets with spatial mapping, hand tracking, and occlusion (e.g., Microsoft HoloLens, Magic Leap).
Key differences
- Integration with real world
- AR: Digital content typically appears on top of the camera view; limited interaction with physical objects. Occlusion, depth understanding, and physics simulation are often minimal.
- MR: Digital content is spatially anchored and can interact with and be occluded by real-world objects; uses environment understanding (depth sensing, SLAM) for realistic blending.
- Interaction and input
- AR: Touchscreen taps and gestures, plus basic device motion. Interaction is often 2D or constrained to the screen plane.
- MR: Richer inputs — hand tracking, gaze, voice, spatial gestures, and controllers; supports 3D manipulation aligned with real space.
- Spatial awareness
- AR: Limited spatial mapping; relies on marker-based or markerless tracking but often without persistent world understanding.
- MR: Robust spatial mapping, scene reconstruction, persistence across sessions, and multiuser spatial sharing capabilities.
- Visual realism and occlusion
- AR: Simpler compositing; virtual objects rarely cast accurate shadows or get occluded correctly by real objects.
- MR: Realistic lighting, shadows, and occlusion enabled by depth sensing and environmental understanding to increase presence.
- Use cases
- AR: Retail try-on apps, informational overlays, simple navigation, social filters, marketing experiences — low friction and wide reach via smartphones.
- MR: Industrial training, complex assembly guidance, architecture visualization, collaborative design, simulations where precise alignment and interaction with real objects matter.
- Hardware and computational demands
- AR: Lower barrier; works on commodity smartphones and tablets, with less demanding sensor and latency requirements.
- MR: Higher demands — dedicated headsets with depth sensors, IMUs, powerful CPUs/GPUs for real-time spatial computing.
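The occlusion difference described above comes down to a per-pixel depth test: an MR compositor compares the virtual object's depth against a sensed real-world depth map and shows the virtual pixel only where it is closer to the camera. A minimal NumPy sketch, with toy data standing in for real sensor output:

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel depth test: show virtual content only where it is
    closer to the camera than the sensed real surface."""
    virtual_wins = virt_depth < real_depth  # boolean mask, HxW
    out = real_rgb.copy()
    out[virtual_wins] = virt_rgb[virtual_wins]
    return out

# 2x2 toy frame: a real wall at 3 m; a virtual cube at 1 m in the
# top-left pixel and at 5 m (behind the wall) in the bottom-right pixel.
real_rgb = np.zeros((2, 2, 3), dtype=np.uint8)       # black wall
real_depth = np.full((2, 2), 3.0)
virt_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)   # white cube
virt_depth = np.full((2, 2), np.inf)                 # empty = infinitely far
virt_depth[0, 0] = 1.0   # in front of the wall -> visible
virt_depth[1, 1] = 5.0   # behind the wall -> occluded

frame = composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth)
# frame[0, 0] is white (virtual visible); frame[1, 1] stays black (occluded).
```

Simple AR compositing skips the depth test and always draws virtual content on top, which is why AR objects float in front of real obstacles instead of disappearing behind them.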
Overlap and spectrum
- AR and MR are not strictly separate; they lie on a continuum. Many systems labeled “AR” may include MR-like features (e.g., occlusion or limited spatial anchoring), and MR experiences often include AR-style overlays for accessibility on mobile devices.
Standards and terminology
- Terminology is not universally agreed upon. Industry often uses “AR” as an umbrella term; “MR” is typically used to denote a stronger blend of physical and virtual that supports interaction and environmental understanding.
Choosing between AR and MR
- Pick AR when you need broad accessibility, low friction deployment, and simple overlays or visual augmentation for many users.
- Pick MR when tasks require precise spatial alignment, realistic integration, interactive 3D objects, safety-critical workflows, or collaborative spatial scenarios.
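The two selection criteria above can be condensed into a small rule-of-thumb helper. This is a sketch of the decision logic in this section, not an authoritative rubric, and the parameter names are illustrative:

```python
def recommend_platform(needs_spatial_alignment: bool,
                       needs_3d_interaction: bool,
                       needs_wide_reach: bool) -> str:
    """Rule of thumb distilled from the criteria above: prefer MR when
    precise alignment or rich 3D interaction matters; otherwise prefer
    AR for reach and low-friction deployment."""
    if needs_spatial_alignment or needs_3d_interaction:
        return "MR"
    if needs_wide_reach:
        return "AR"
    return "AR"  # default to the lower-friction option

# Retail try-on: wide reach, no precise alignment  -> "AR"
# Assembly guidance: precise alignment required    -> "MR"
```

In practice the decision also weighs budget and device availability, but spatial alignment and 3D interaction are the requirements that most reliably push a project toward MR hardware.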
Conclusion
- Both technologies augment reality, but differ in depth of integration, interaction capability, and hardware requirements. AR emphasizes accessibility and lightweight overlays; MR emphasizes spatial understanding, interaction fidelity, and immersive blending of real and virtual for complex professional applications.