Extended reality

from Wikipedia

[Figure: types of extended reality. From top to bottom: augmented reality through a handheld device, augmented reality through a headset, and virtual reality.]

Extended reality (XR) is an umbrella term for augmented reality (AR), mixed reality (MR), and virtual reality (VR), including forms that interpolate between them and forms that extrapolate (extend) beyond them, for example by making sound waves, radio waves, and other otherwise invisible phenomena visible.[1][2][3] The technology is intended to combine or mirror the physical world with a "digital twin world" that can interact with it,[4][5] giving users an immersive experience of a virtual or augmented environment.

XR has grown rapidly beyond an academic discipline and now has real-world impact in medicine,[6][7] architecture,[8] education,[9] and industry,[10] with applications in areas such as entertainment, cinema, marketing, real estate, manufacturing,[11] maintenance[12] and remote work.[13] Extended reality can be used for collaboration in the workplace, training, educational purposes, therapeutic treatments, and data exploration and analysis.

Extended reality works by acquiring visual data, processing it either locally or over a network, and delivering it to the human senses. By responding to users in real time with virtual stimuli, XR devices create customized experiences. Advances in 5G and edge computing, a type of computing done "at or near the source of data", could improve data rates, increase user capacity, and reduce latency, and such advances are likely to expand extended reality applications in the future.
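To make the latency point concrete, the following minimal sketch (in Python, with purely hypothetical stage timings rather than measurements of any real device or network) totals a motion-to-photon budget for local versus edge-assisted rendering:

```python
# Toy motion-to-photon latency budget. All per-stage timings are
# hypothetical placeholders chosen for illustration only.

def total_latency_ms(stages: dict) -> float:
    """Sum per-stage delays (milliseconds) along the pipeline."""
    return sum(stages.values())

# Fully local rendering: everything happens on the headset.
local = {"tracking": 2.0, "render": 8.0, "display_scanout": 5.0}

# Remote rendering over a 5G/edge link adds network and codec stages,
# which is why physical proximity of the edge server matters.
edge = {
    "tracking": 2.0,
    "uplink_pose": 3.0,     # head pose sent to the edge server
    "render": 8.0,
    "encode": 4.0,
    "downlink_frame": 5.0,  # encoded frame streamed back
    "decode": 3.0,
    "display_scanout": 5.0,
}

for name, stages in (("local", local), ("edge-rendered", edge)):
    print(f"{name}: {total_latency_ms(stages):.0f} ms motion-to-photon")
```

Under these assumed numbers, edge rendering roughly doubles the budget, which is why reducing network latency is central to streaming XR.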

Extended reality can be applied not only to humans but also to technology: the subject, whether human or machine, has its sensory capacity extended by being placed in a closed feedback loop. This form of extended intelligence is called veillametrics.[14][15]

In 2018 the BBC launched a research project to capture and document the barriers present in extended reality environments.[16]

The International Institute of MetaNumismatics (INIMEN) studies the applications of extended reality technologies in numismatic research, with a dedicated department.[17]


from Grokipedia
Extended reality (XR) is an umbrella term encompassing technologies that blend the physical and digital worlds to create immersive experiences, including virtual reality (VR), which fully immerses users in a simulated environment; augmented reality (AR), which overlays digital information onto the real world; and mixed reality (MR), which allows interactive fusion of real and virtual elements.[1] This spectrum spans the reality-virtuality continuum, enabling users to interact with computer-generated content in ways that extend beyond traditional screens.[2]

The roots of XR trace back to the mid-20th century, with pioneering inventions like Morton Heilig's Sensorama device in 1956, an early multi-sensory simulator that combined visuals, sounds, vibrations, and smells to mimic real-world experiences.[3] Subsequent developments in the 1960s and 1970s, driven by military applications such as Ivan Sutherland's head-mounted display in 1968, laid the groundwork for modern VR systems.[4] By the 1990s, AR emerged with projects like Boeing's use of overlay systems for wire harness assembly,[5] while the 2010s saw widespread commercialization through devices like Oculus Rift and Microsoft HoloLens, fueled by advances in computing power, sensors, and graphics.[6]

XR's applications span diverse industries, transforming training, design, and interaction paradigms. In healthcare, it enables surgical simulations and patient rehabilitation through immersive VR environments, improving outcomes and reducing risks.[7] Manufacturing leverages AR for real-time assembly guidance and MR for collaborative prototyping, enhancing efficiency and minimizing errors.[8] In education and entertainment, XR facilitates interactive learning modules and virtual worlds, promoting engagement and accessibility.[9] These uses highlight XR's potential for economic impact, with projections estimating a market value exceeding $100 billion by 2026, though challenges like hardware costs and ethical concerns in data privacy persist.[10]

Definition and Scope

Definition

Extended reality (XR) is an umbrella term encompassing virtual reality (VR), augmented reality (AR), mixed reality (MR), and related immersive technologies that blend physical and digital environments to create mediated experiences.[11] These technologies enable users to interact with computer-generated content overlaid on or substituting the real world, fostering environments where digital elements enhance or replace sensory inputs in real time.[12]

The conceptual framework for the XR spectrum originates from Paul Milgram and Fumio Kishino's 1994 introduction of the reality-virtuality continuum, which positions XR displays along a scale from entirely real environments to fully virtual ones, with mixed reality occupying the intermediate space where real and virtual objects coexist and interact.[13] This continuum underscores XR's role in merging the physical and digital realms seamlessly, allowing for varying degrees of augmentation depending on the application.

Standards bodies, such as the XR Safety Initiative (XRSI), define XR as a fusion of realities comprising technology-mediated experiences via hardware and software that alter users' perception of reality through immersive, interactive simulations.[12] Key characteristics include immersion, where users feel present in the mediated environment; interactivity, enabling real-time manipulation of digital elements; and real-time simulation, ensuring dynamic responses to user actions that mimic or extend natural sensory perception.[14] These traits distinguish XR from traditional media by providing multisensory engagement that simulates three-dimensional or spatiotemporal realities.[15]

Components and Spectrum

Extended reality (XR) encompasses several primary subtypes that vary in the degree of immersion and integration between real and virtual elements. Virtual reality (VR) provides fully immersive digital worlds, where users are isolated from the physical environment and interact solely within a computer-generated simulation.[16] Augmented reality (AR) overlays digital information onto the real world, enhancing the user's perception of their physical surroundings without replacing them.[16] Mixed reality (MR) goes further by enabling interactive blending of real and digital elements, allowing virtual objects to coexist and respond to the physical environment in real time.[16]

These subtypes are positioned along the reality-virtuality continuum, a conceptual framework introduced by Milgram and Kishino in 1994 to classify displays that merge real and virtual worlds.[13] The continuum is visualized as a linear spectrum, with the real environment at one end (direct or video-captured views of the physical world) and the fully virtual environment at the other (entirely computer-generated simulations).[13] Intermediate points on this spectrum include augmented reality (closer to the real end, where virtual elements supplement reality) and augmented virtuality (closer to the virtual end, where real elements are incorporated into digital spaces), with mixed reality occupying the central blended region.[13] This diagram illustrates how XR technologies can occupy any position along the spectrum, emphasizing a gradual transition rather than discrete categories.[13]

User interactions differ significantly across these subtypes due to their placement on the continuum. In VR, physical reality is replaced entirely, requiring users to navigate and manipulate solely virtual objects through controllers or gestures, often producing a sense of presence in an alternate world.[16] AR enhances the real world by adding digital overlays that users view via devices like smartphones or glasses, allowing passive observation or simple interactions without altering the underlying environment.[16] MR supports bidirectional manipulation, where users can interact with and modify both real and virtual elements simultaneously, such as anchoring digital holograms to physical surfaces that respond to touch or movement.[16]

Hybrid forms like spatial computing represent evolving integrations within the XR spectrum, combining MR principles with advanced environmental sensing to enable seamless, context-aware interactions between digital content and physical space.[17] For instance, spatial computing allows virtual objects to be placed persistently in real-world coordinates, facilitating collaborative experiences where multiple users manipulate shared digital elements overlaid on their surroundings.[17]
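As a rough illustration of the continuum idea, the sketch below models virtuality as a normalized scale from 0 (entirely real) to 1 (entirely virtual). The band boundaries are arbitrary choices for the example, since the source framework describes a gradual spectrum rather than discrete categories:

```python
# Minimal sketch of the Milgram-Kishino reality-virtuality continuum,
# modeled as a normalized scale. The 0.5 split between AR and
# augmented virtuality is an illustrative simplification.

def classify(virtuality: float) -> str:
    """Label a point on the continuum. 0 = real, 1 = fully virtual."""
    if not 0.0 <= virtuality <= 1.0:
        raise ValueError("virtuality must lie in [0, 1]")
    if virtuality == 0.0:
        return "real environment"
    if virtuality == 1.0:
        return "virtual environment (VR)"
    # Everything strictly between the endpoints counts as mixed
    # reality in this framework; AR lies nearer the real end,
    # augmented virtuality (AV) nearer the virtual end.
    kind = "AR" if virtuality < 0.5 else "AV"
    return f"mixed reality ({kind})"

for v in (0.0, 0.2, 0.5, 0.8, 1.0):
    print(f"{v:.1f} -> {classify(v)}")
```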

Historical Development

Early Concepts and Inventions

The foundational concepts of extended reality (XR) trace back to the 19th century, with early efforts focused on creating illusions of depth and immersion through optical and sensory means. In 1838, British physicist Sir Charles Wheatstone invented the stereoscope, a device that presented slightly different images to each eye to exploit binocular vision and produce a three-dimensional perception from two-dimensional drawings or photographs.[18] This breakthrough in stereoscopy laid the groundwork for visual depth simulation in XR by demonstrating how the human brain could synthesize disparate visual inputs into a cohesive 3D scene, influencing subsequent display technologies.[19]

By the mid-20th century, inventors began integrating multisensory elements to enhance immersion. Between 1956 and 1962, cinematographer Morton Heilig developed the Sensorama, an electromechanical prototype that combined stereoscopic 3D film projection, stereo sound, wind, vibrations, and even scents to simulate experiences like a motorcycle ride through New York City.[20] Patented in 1962 as U.S. Patent 3,050,870, the Sensorama represented one of the earliest attempts at sensory immersion in virtual environments, emphasizing the role of non-visual cues in creating a holistic perceptual illusion.[21] Heilig's follow-up invention, the Telesphere Mask in 1960, introduced a head-mounted stereoscopic viewer with wide-field vision and audio, further advancing portable immersive displays but without motion tracking.[22]

The 1960s marked a pivotal shift toward interactive and tracked XR systems, driven by computing and simulation needs in research and defense. In 1968, computer scientist Ivan Sutherland at the University of Utah created the first head-mounted display (HMD) system, dubbed the "Sword of Damocles" due to its ceiling-suspended mechanical arm, which featured real-time 3D wireframe graphics and head-tracking via a sensing mechanism to update the viewer's perspective.[23] This innovation introduced head-tracking as a core XR concept, allowing the displayed scene to respond dynamically to user movement and foreshadowing interactive virtual worlds.[24] Concurrently, military and space agencies explored simulation for training; NASA's early 1960s work on flight and spacewalk simulators incorporated visual and motion feedback, while the U.S. Air Force's 1966 Visually Coupled Airborne Systems Simulator (VCASS) by Thomas Furness integrated an HMD with head-tracking for pilot training.[25] Douglas Engelbart's 1968 "Mother of All Demos" at the Stanford Research Institute showcased interactive computing via a mouse, windows, and networked collaboration, establishing precursors to XR's user interfaces by demonstrating real-time human-computer symbiosis.[26] These developments collectively pioneered stereoscopy for depth, head-tracking for interaction, and sensory immersion for presence, setting the stage for XR's evolution.

Modern Advancements and Commercialization

In 1987, Jaron Lanier, founder of VPL Research, coined the term "Virtual Reality" to describe immersive computer-generated environments, marking a pivotal moment in conceptualizing XR technologies.[27] The 1990s witnessed an initial commercial boom in VR, driven by heightened public interest and early consumer products, but this period ended in a significant bust due to prohibitive development costs, limited hardware capabilities, and underwhelming user experiences. A notable example was Nintendo's Virtual Boy, launched in 1995 as an affordable VR-like console priced at $180 (equivalent to about $377 in 2025 dollars), which sold only around 770,000 units worldwide before being discontinued after less than a year, exemplifying the era's challenges in achieving mass-market viability.[28][29]

The 2010s heralded a revival of XR commercialization, fueled by crowdfunding and corporate investments that addressed prior technical and economic hurdles. The Oculus Rift's Kickstarter campaign in 2012 raised over $2.4 million, demonstrating strong consumer demand for accessible VR hardware and leading to its acquisition by Facebook in 2014 for $2 billion, which accelerated development and integration into social platforms.[30][31] Concurrently, Microsoft unveiled the HoloLens in 2015, an innovative mixed reality headset that blended digital overlays with the physical world, targeting enterprise applications and establishing MR as a distinct commercial segment.[32] A breakthrough for AR came with Pokémon GO in 2016, a mobile game that achieved over 500 million downloads within its first year, significantly boosting public adoption of AR by integrating geolocation and real-world interaction on smartphones.[33]

The 2020s have seen XR mature into a mainstream commercial force, with major tech firms driving hardware innovation and ecosystem expansion up to 2025. Apple's Vision Pro, announced in 2023 and launched on February 2, 2024, positioned itself as a premium spatial computing device starting at $3,499, emphasizing high-fidelity visionOS for immersive experiences and broadening XR's appeal beyond gaming.[34][35] Integration with 5G networks has enabled low-latency, high-bandwidth XR applications, while AI advancements have enhanced content generation, user interaction, and personalization, as outlined in 3GPP standards evolution for XR support.[36] Meta's 2021 rebranding from Facebook to emphasize the metaverse further propelled commercialization, with investments exceeding $10 billion annually in VR/AR infrastructure to create interconnected digital spaces.[37]

This era's market growth reflects XR's economic impact: valued at USD 142.39 billion in 2023, the global XR market is projected to reach USD 1,069.27 billion by 2030, expanding at a compound annual growth rate (CAGR) of 32.9%, driven by consumer devices, enterprise adoption, and cross-industry applications.[38]
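As a quick arithmetic check on the quoted market figures, the implied compound annual growth rate can be recomputed from the 2023 and 2030 values; the small gap from the quoted 32.9% is unsurprising, since published reports differ in base-year and rounding conventions:

```python
# Implied CAGR from the quoted 2023 and 2030 XR market values
# (USD billions). CAGR = (end / start) ** (1 / years) - 1.
v_2023, v_2030, years = 142.39, 1069.27, 7

cagr = (v_2030 / v_2023) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # ~33.4%, close to the quoted 32.9%
```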

Core Technologies

Hardware

Head-mounted displays (HMDs) serve as the primary output devices in extended reality (XR) systems, providing visual immersion through various form factors. Opaque HMDs, commonly used in virtual reality (VR), fully block external light to create immersive synthetic environments, often employing single display panels per eye with magnifying lenses for high-fidelity rendering.[39] In contrast, see-through HMDs for augmented reality (AR) and mixed reality (MR) overlay digital content onto the real world using lightguide-based near-eye displays (LNEDs) with input/output couplers to maintain transparency and spatial alignment.[39] Modern devices achieve resolutions approaching 4K per eye, such as the Varjo XR-4's 3840×3744 pixels, enabling sharp visuals with reduced screen-door effects, while field of view (FOV) reaches up to 120° horizontal in high-end models like the Varjo XR-4 to approximate human peripheral vision.[40][41]

Sensors and tracking technologies are essential for spatial awareness and user interaction in XR hardware. Inertial measurement units (IMUs), including accelerometers and gyroscopes, provide real-time orientation and motion data across devices like the Meta Quest 3 and Apple Vision Pro.[40] Eye-tracking systems, utilizing dedicated infrared cameras (such as the four in the Apple Vision Pro), enable foveated rendering and gaze-based interactions to optimize performance and reduce computational load.[40] Simultaneous Localization and Mapping (SLAM) algorithms, often powered by LiDAR scanners in devices like the Apple Vision Pro or inside-out cameras in the Meta Quest 3, facilitate six-degrees-of-freedom (6DoF) tracking by constructing real-time 3D maps of the environment for precise positioning without external anchors.[40]

Input devices enhance user control in XR by bridging physical actions with virtual responses. Handheld controllers, featured in systems like the Meta Quest 3, offer precise 6DoF manipulation with integrated buttons and joysticks for navigation and object interaction.[40] Haptic feedback mechanisms in these controllers simulate tactile sensations through vibrations or force resistance, improving immersion by confirming actions like grasping virtual objects.[42] Gesture recognition, enabled by cameras or LiDAR sensors, allows controller-free inputs such as pinching or pointing, as seen in the Apple Vision Pro's hand-tracking capabilities, which detect natural mid-air movements for intuitive control.[43][40]

Computing platforms underpin XR hardware by processing graphics, tracking, and rendering in real time. Standalone platforms, like the Meta Quest 3 powered by Snapdragon XR2 Gen 2 or the Apple Vision Pro with its M2 and R1 chips, integrate all computation on-device for untethered mobility and the sub-20 ms latencies essential for seamless experiences.[40] Tethered systems, such as the Varjo XR-4 connected to high-end PCs, leverage external processing for superior fidelity but require cables that limit freedom of movement.[40] Edge computing and 5G networks address latency challenges in distributed setups, with 5G Standalone trials demonstrating low-latency streaming via features like Uplink Configured Grant and Low Latency, Low Loss, Scalable Throughput (L4S) for remote rendering in AR glasses.[44]

By 2025, XR hardware emphasizes lightweight wearables. Prototypes like Meta's Orion AR glasses integrate holographic displays with a wide FOV in a compact, all-day-wearable form factor weighing less than typical HMDs, and recent consumer releases include Samsung's Galaxy XR headset (October 2025), with AI-native immersive features, and the updated Apple Vision Pro with an M5 chip for enhanced performance.[45][46][47] These advancements include specialized XR processors for higher pixel densities and sensor fusion, alongside neural interfaces like Orion's wristband for gesture input, paving the way for consumer-grade smart glasses.[40][45]
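A useful way to read the resolution and field-of-view numbers above together is angular resolution in pixels per degree (PPD). The back-of-envelope sketch below uses the Varjo XR-4 figures quoted in the text; it ignores lens distortion and non-uniform pixel distribution, so the result is only a rough comparison metric:

```python
# Rough angular resolution from per-eye horizontal pixels and FOV.
def pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
    return h_pixels / h_fov_deg

# Figures quoted above: 3840 horizontal pixels, ~120° horizontal FOV.
ppd = pixels_per_degree(3840, 120.0)
print(f"~{ppd:.0f} pixels per degree")  # ~32 PPD

# Human foveal acuity is commonly cited near 60 PPD, which is why
# even 4K-per-eye headsets still trade sharpness against FOV.
```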

Software and Interaction

Software in extended reality (XR) encompasses rendering engines that generate immersive 3D environments in real time, enabling seamless integration of virtual elements with the physical world. Unity and Unreal Engine are prominent rendering engines widely adopted for XR development due to their robust support for real-time 3D graphics. Unity facilitates cross-platform XR applications through its XR Interaction Toolkit, which handles spatial computing and multi-device compatibility. Unreal Engine, particularly version 5, excels in photorealistic rendering via technologies like Nanite for virtualized geometry and Lumen for dynamic global illumination, enhancing visual fidelity in VR and AR scenarios. Ray tracing integration in both engines allows for accurate simulation of light interactions, such as reflections and shadows, contributing to photorealism; for instance, Unity's High Definition Render Pipeline supports hardware-accelerated ray tracing for global illumination in real-time XR experiences.

Interaction paradigms in XR prioritize natural and intuitive user engagement to minimize cognitive load and enhance immersion. Gesture-based interactions, leveraging computer vision to detect hand poses and movements, enable controller-free manipulation of virtual objects, as seen in systems that track skeletal hand models for precise grabbing and pointing. Voice commands facilitate hands-free navigation and control, processed through speech recognition APIs that integrate with XR environments for contextual responses. Spatial audio provides directional sound cues, simulating real-world acoustics to guide user attention and improve situational awareness in mixed reality. Haptic feedback models deliver tactile sensations via vibrations or force feedback, with algorithms modeling contact forces to simulate textures and impacts, thereby reinforcing multisensory immersion.

Frameworks and software development kits (SDKs) form the backbone of XR application creation, standardizing access to device capabilities across ecosystems. Apple's ARKit offers motion tracking, plane detection, and light estimation for iOS devices, enabling developers to anchor virtual content to real-world surfaces with high accuracy. Google's ARCore provides similar functionality for Android, including environmental understanding through point cloud mapping and depth API integration for occlusion handling. The OpenXR standard, developed by the Khronos Group, promotes cross-platform compatibility by abstracting hardware-specific APIs, allowing a single XR application to run on diverse devices from multiple vendors without proprietary lock-in.

Cross-platform XR deployment is particularly valuable for enterprises, enabling immersive VR, AR, and MR experiences that operate across a wide range of devices (Meta Quest, Apple Vision Pro, Microsoft HoloLens, Pico, HTC Vive, Android XR devices, mobile phones, desktops, and web browsers) from a single codebase or build. This approach significantly reduces development costs, improves scalability, and accommodates mixed-device workforces in areas such as training, collaboration, remote maintenance, and simulations.

Among development engines, Unity, with its XR Interaction Toolkit and AR Foundation, dominates enterprise XR applications owing to its strong cross-platform support, encompassing devices like Meta Quest, Pico, HoloLens, Vision Pro, Android/iOS AR platforms, desktop, and WebXR. It is particularly suited to training simulations, field service applications, and lighter experiences, offering features such as one-button publishing and integrated multiplayer functionality. Unreal Engine excels in scenarios requiring high-fidelity visuals, intricate simulations, and digital twins, such as in engineering and medical fields; while it performs strongly on PC-based VR and high-end systems, it often requires additional optimization for mobile and standalone headsets.

The foundational standard for this portability is OpenXR, a royalty-free API developed by the Khronos Group. OpenXR provides unified access to XR runtimes, enabling "write once, run many" across various headsets, and is fully integrated into both Unity and Unreal Engine.

Specialized platforms cater specifically to enterprise needs. Frontline.io offers an AI-powered, no-code/low-code solution for converting CAD models into digital twins, with deployment across AR, VR, and MR devices including HoloLens, Quest, and Magic Leap, as well as PC, tablet, and mobile; it is widely used in manufacturing, aerospace, and maintenance. Strivr provides a comprehensive XR training platform supporting wireless deployment to thousands of headsets (such as Quest, Pico, Vive, and Vision Pro), complete with content management and MDM integration. VIROO enables multi-device, multi-cloud collaboration across Quest, Pico, Vive, and PC, with support for location-based tracking in team training scenarios. Platforms like EducationXR allow one-build publishing to various systems including Quest, iOS/Android, Windows/Mac, Pico, and SteamVR, with automatic multiplayer support.

For managing large-scale deployments, mobile device management (MDM) and unified endpoint management (UEM) solutions are essential. ManageXR (including HP ExtendXR) leads in secure VR/AR fleet management, offering remote updates, device lockdown, and support for large-scale rollouts. Microsoft Intune and Omnissa Workspace ONE integrate XR devices into broader enterprise UEM ecosystems, particularly for Android-based headsets, HoloLens, and mixed environments.

Alternative approaches include WebXR for installation-free, browser-based access and cloud streaming technologies, such as NVIDIA CloudXR on AWS, which render demanding applications in the cloud and stream them to client devices. Overall, enterprises prioritize OpenXR for long-term future-proofing, Unity for cost-effective broad reach, and robust MDM solutions for operational security and scalability. The landscape continues to evolve with advancements in Android XR and the introduction of new headset technologies.

Artificial intelligence (AI) integration enhances XR software by enabling dynamic content adaptation and perceptual realism. Machine learning models, such as convolutional neural networks, power object recognition by analyzing camera feeds to identify and segment real-world elements, facilitating context-aware augmentation in AR. Adaptive environments leverage AI to adjust virtual scenes based on user behavior, using reinforcement learning to optimize layouts for comfort and engagement. Neural rendering techniques, advanced by 2025, employ generative AI to synthesize photorealistic views from sparse inputs, reducing computational demands while maintaining high-fidelity visuals in resource-constrained XR devices.
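The "write once, run many" portability described above can be illustrated structurally with a short Python sketch. These are not real OpenXR bindings: the class and method names (XrRuntime, begin_frame, and so on) are invented for the illustration. The point is only that application code targets a single abstract interface while per-device back ends differ:

```python
from abc import ABC, abstractmethod

class XrRuntime(ABC):
    """Runtime-agnostic surface the application codes against."""
    @abstractmethod
    def begin_frame(self) -> None: ...
    @abstractmethod
    def head_pose(self) -> tuple:  # (x, y, z) position, simplified
        ...
    @abstractmethod
    def submit_frame(self, image_id: int) -> None: ...

class QuestRuntime(XrRuntime):      # hypothetical device back end
    def begin_frame(self): print("Quest: frame begun")
    def head_pose(self): return (0.0, 1.6, 0.0)
    def submit_frame(self, image_id): print(f"Quest: submitted {image_id}")

class DesktopRuntime(XrRuntime):    # hypothetical device back end
    def begin_frame(self): print("Desktop: frame begun")
    def head_pose(self): return (0.0, 1.7, 0.0)
    def submit_frame(self, image_id): print(f"Desktop: submitted {image_id}")

def render_once(rt: XrRuntime) -> None:
    # The app is identical regardless of which runtime backs it:
    # the "write once, run many" property in miniature.
    rt.begin_frame()
    _, y, _ = rt.head_pose()
    print(f"rendering from eye height {y} m")
    rt.submit_frame(image_id=1)

for runtime in (QuestRuntime(), DesktopRuntime()):
    render_once(runtime)
```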
Data processing in XR involves sensor fusion algorithms that combine inputs from cameras, IMUs, and depth sensors to achieve low-latency rendering essential for preventing motion sickness. These algorithms employ Kalman filters or deep learning-based methods to estimate pose and position in real time, synchronizing virtual overlays with physical movements. Targeting refresh rates of 90 Hz or higher ensures smooth visuals, with fusion pipelines minimizing end-to-end latency to under 20 milliseconds through predictive tracking and edge computing. This integration of sensor data supports stable, responsive XR experiences across varied environments.
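A minimal sketch of the fusion idea follows, using a complementary filter (a simpler relative of the Kalman filters mentioned above) to blend a fast but drifting gyroscope signal with a slow but absolute gravity reference. Real XR runtimes fuse far more inputs, with predictive tracking on top; the sampling rate and blend factor here are illustrative assumptions:

```python
import math

ALPHA = 0.98  # trust in the integrated gyro vs. the accelerometer

def fuse_pitch(prev_pitch_deg, gyro_rate_dps, accel_xyz, dt_s):
    ax, ay, az = accel_xyz
    # Absolute but noisy pitch estimate from the gravity direction.
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Fast but drifting pitch from integrating the angular rate.
    gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt_s
    # Blend: the accelerometer term slowly pulls out gyro drift.
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch

pitch = 0.0
dt = 1.0 / 1000.0  # assume a 1 kHz IMU sampling rate
for _ in range(1000):  # one simulated second, headset held level
    pitch = fuse_pitch(pitch, gyro_rate_dps=0.5,  # small gyro bias
                       accel_xyz=(0.0, 0.0, 9.81), dt_s=dt)
print(f"pitch after 1 s: {pitch:.2f} deg (gyro bias largely corrected)")
```

Pure integration of the biased gyro would drift by 0.5° per second here; the blended estimate stays near zero, which is the behavior headset tracking depends on.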

Applications

Extended reality (XR), encompassing augmented reality (AR), virtual reality (VR), and mixed reality (MR), plays a pivotal role in work, education, and entertainment. In professional settings, XR facilitates hybrid work environments through evolving metaverse concepts that enable immersive collaboration, virtual meetings, and enhanced team connectivity, thereby improving productivity in distributed teams.[48][49][50]

Entertainment and Media

Extended reality (XR) has revolutionized entertainment and media by enabling immersive experiences that blend digital and physical worlds, allowing users to engage with content in novel ways. In gaming, XR technologies facilitate deep immersion through virtual reality (VR) and augmented reality (AR), transforming passive play into interactive adventures. Films leverage XR for innovative storytelling, social platforms foster virtual communities, and production techniques streamline content creation.[51]

In gaming, VR titles exemplify XR's capacity for full environmental immersion. Half-Life: Alyx, released in 2020 by Valve, stands as a landmark VR game in which players manipulate objects and navigate alien worlds using motion-tracked controllers, enhancing spatial awareness and tension through head-mounted displays.[51] Complementing this, AR integrates digital elements into real-world settings via mobile devices; Pokémon GO, launched in 2016 by Niantic, overlays virtual creatures on users' surroundings, encouraging outdoor exploration, and has amassed over 650 million downloads worldwide.[52]

Film and storytelling have adopted XR to expand narrative possibilities beyond traditional screens. 360° video captures panoramic views, immersing viewers in interactive environments they can explore freely, as seen in VR films that shift from linear plots to participant-driven experiences.[53] Interactive narratives further empower audiences to influence outcomes, fostering branching stories in VR formats. Tools like Adobe Aero enable creators to build AR content without coding, allowing designers to place 3D models and animations in real spaces for enhanced media prototypes.[54][55]

Social VR platforms have emerged as hubs for virtual interaction, simulating face-to-face encounters in customizable digital realms. VRChat, launched in early access on Steam in 2017 by VRChat Inc., supports avatar-based meetups where users socialize, attend events, and collaborate in user-generated worlds accessible via VR headsets or desktops.[56] Metaverse environments extend this to large-scale gatherings; platforms like Roblox host XR-enhanced concerts, such as virtual music performances in 2025 that draw global audiences for synchronized, avatar-driven experiences.[57]

In media production, XR facilitates virtual sets that reduce post-production needs and enable real-time visualization. The Mandalorian, debuting in 2019 on Disney+, pioneered LED walls in its StageCraft system, in which a 270-degree curved video array displays dynamic backgrounds, allowing actors to interact with lit environments on set without green screens.[58] This technique, developed by Industrial Light & Magic, has since proliferated, cutting filming times and enhancing creative control in subsequent seasons and other productions.[59]

The XR gaming segment underscores the commercial impact, projected to reach $29.21 billion in market value in 2025, driven by hardware adoption and content innovation that captivates millions.[60]

Education and Training

Extended reality (XR) technologies have revolutionized learning environments by creating immersive virtual classrooms that support remote and collaborative education. Platforms like ENGAGE XR enable students and educators to conduct interactive sessions in fully virtual spaces, fostering real-time discussions, group projects, and experiential activities across VR, AR, and mixed reality devices. This approach has proven particularly effective at bridging geographical barriers, allowing diverse learners to participate as if co-located.[61]

Augmented reality (AR) further enhances traditional educational materials by integrating interactive digital overlays into physical textbooks and resources. For instance, McGraw Hill AR allows students to scan pages with mobile devices to visualize abstract concepts, such as geometric shapes in everyday environments or dynamic simulations of scientific processes, promoting deeper conceptual understanding. These tools transform static content into dynamic, explorable experiences that encourage active engagement without requiring specialized hardware beyond smartphones.[62]

In vocational and skill-based training, XR provides safe, repeatable simulations of complex procedures, accelerating proficiency development. Boeing's Virtual Airplane Procedures Trainer, introduced in November 2025, leverages VR integrated with Microsoft Flight Simulator to let pilots rehearse cockpit operations and emergency protocols in a high-fidelity 3D environment accessible via standard devices. This application reduces training costs and risks while allowing practice anytime, anywhere, based on authentic aircraft data.[63]

XR also advances accessibility in education by offering customized supports for students with disabilities, particularly through AR-based visual aids that simplify complex information. AR systems deliver real-time overlays, such as simplified diagrams or step-by-step guides, to assist learners with neurodevelopmental disabilities in processing visual and spatial content more effectively. Similarly, VR environments create low-pressure, personalized simulations that accommodate diverse needs, including sensory sensitivities, thereby promoting inclusive participation in mainstream curricula.[64][65]

Empirical studies underscore XR's pedagogical benefits, particularly for retention and motivation. A PwC report found that VR learners retain information with 75% greater effectiveness than those using traditional classroom methods, attributed to heightened emotional connection and immersion. A 2025 study involving 317 students in El Salvador reported a 35.2% increase in knowledge retention for VR users compared to a 2.6% gain in traditional groups, alongside sustained motivation levels. These findings highlight XR's role in improving long-term recall through multisensory engagement.[66][67]

The COVID-19 pandemic catalyzed widespread adoption of XR in edtech, shifting focus toward hybrid and remote immersive tools to maintain educational continuity. Post-2020, the XR education market expanded rapidly, growing from $4.40 billion in 2023 to a projected $28.70 billion by 2030 at a 30.7% CAGR, with over 77% of educators reporting increased student curiosity and engagement via XR platforms. This surge reflects institutional investments in scalable solutions for K-12 and higher education, integrating XR into curricula for enhanced interactivity.[66][68]

Healthcare and Industry

Extended reality (XR) technologies have transformed healthcare by enhancing precision in surgical procedures through augmented reality (AR) overlays. For instance, the Microsoft HoloLens has been employed to project 3D anatomical models onto patients during operations, allowing surgeons to visualize internal structures without invasive incisions.[69] In spine surgery, AR systems facilitate accurate pedicle screw placement by integrating preoperative imaging with real-time views, reducing misalignment risks in thoracic and lumbar regions.[70] These applications demonstrate AR's role in improving surgical accuracy and efficiency.

Virtual reality (VR) has gained prominence in mental health treatment, particularly for post-traumatic stress disorder (PTSD). In February 2025, XRHealth announced VR-based therapeutic programs targeting PTSD symptoms through immersive exposure scenarios, enabling controlled reenactments of traumatic events to facilitate emotional processing.[71] Similarly, Freespira, an FDA-approved digital therapeutic, uses VR to address panic and PTSD by guiding users through breathing exercises in simulated environments, showing reductions in symptom severity.[72]

In rehabilitation, mixed reality (MR) supports physical therapy by blending virtual exercises with real-world movements. Systems like MRehab utilize MR headsets and real tools to simulate daily activities, aiding recovery of speech and hand functions post-stroke or injury.[73] MR-based programs for older adults with sarcopenia have demonstrated improvements in muscle thickness and quality of life compared to traditional methods.[74] XR avatars further advance telemedicine by creating shared virtual spaces where clinicians interact with patient representations, enabling remote assessments and guided exercises.[75] The global XR healthcare market, encompassing AR, VR, and MR applications, reached approximately $3.05 billion in 2025, driven by adoption in therapy and diagnostics.[76] Studies indicate XR training reduces clinical errors by up to 40% in simulated scenarios, establishing its impact on safety.[77]

In industry, XR enables digital twins (virtual replicas of physical assets) for optimizing manufacturing processes. Siemens integrates XR with digital twins in its Xcelerator platform to simulate factory operations, allowing engineers to test designs and predict failures in immersive environments.[78] Collaborations like Siemens and Sony's XR headset facilitate real-time visualization of product twins, streamlining engineering from concept to production.[79]

AR overlays support remote maintenance by providing technicians with step-by-step guidance superimposed on equipment. In manufacturing, AR apps like those from Taqtile Manifest enable experts to annotate live video feeds, reducing downtime for complex machinery repairs.[80] This approach has been applied in sectors such as energy, where field workers receive holographic instructions to troubleshoot turbines without on-site specialists.[81]

Aerospace and automotive industries leverage XR for training in hazardous settings. NASA's XR simulations immerse astronauts in virtual spacewalks and vehicle maneuvers, replicating microgravity conditions to enhance mission preparedness.[82] In automotive assembly, VR-based digital twins allow workers to practice high-risk tasks, such as engine installations, in safe simulated factories.[83]

Challenges and Future Directions

Technical and User Challenges

One major technical challenge in extended reality (XR) systems is latency, the delay between user input and system response, which typically ranges from 10 to 50 milliseconds depending on the setup and can induce motion sickness or cybersickness. This visually induced motion sickness (VIMS) arises from sensory conflicts between visual cues and vestibular inputs, exacerbated by frame drops or refresh rate instability in head-mounted displays (HMDs). Latencies above 60 ms can cause significant discomfort, while delays below 20 ms are generally imperceptible and comfortable for users.[84][85][86]

Mitigations for latency and associated sickness include foveated rendering, which prioritizes high-resolution rendering in the user's central field of view (the fovea) while reducing peripheral detail to lower computational load and maintain frame rates. This technique, supported by eye-tracking hardware, can significantly reduce rendering demands without noticeable artifacts if latency remains under 20 ms, thereby alleviating VIMS symptoms like nausea and disorientation. Other approaches involve adaptive field-of-view adjustments and biofeedback systems that dynamically tweak parameters based on user physiological signals.[87][88][89]

Hardware limitations further hinder XR adoption, particularly battery life, cost, and accessibility. Standalone XR devices in 2025 typically offer 2-3 hours of continuous use before requiring a recharge, constrained by power-intensive components like high-resolution displays and sensors, which limits prolonged sessions in mobile scenarios.[90] Pricing ranges from approximately $500 for entry-level models like the Meta Quest 3 to $3,500 for premium devices such as the Apple Vision Pro, creating barriers to widespread consumer access and enterprise scaling. Accessibility issues affect diverse users, including those with disabilities, as many HMDs lack inclusive design features like adjustable straps for varying head sizes, support for assistive inputs (e.g., voice or gesture alternatives to hand-tracking), or accommodations for visual and motor impairments, often resulting in exclusionary experiences.[91][92][93]

Interoperability challenges stem from fragmented ecosystems, such as differing standards between Android XR and iOS (visionOS): Android emphasizes open architectures like OpenXR for cross-platform portability, while Apple's closed ecosystem restricts content and hardware compatibility. This leads to incompatible spatial anchors and world representations, preventing seamless sharing of AR experiences or multi-device interactions across platforms and complicating developer efforts to build universal applications.[94][95]

User adoption faces barriers from steep learning curves and physical fatigue, with studies indicating dropout rates of 15-20% due to discomfort during initial sessions. New users often struggle with intuitive controls and spatial navigation, compounded by fatigue from prolonged wear, such as neck strain or eye fatigue, leading to reduced engagement in training or entertainment applications.[96][97][98]

Scalability for multi-user XR environments is limited by high bandwidth demands, with 4K streaming at 60 fps requiring up to several hundred Mbps per user for immersive video, straining current networks in dense scenarios. While 5G partially addresses this by supporting 5-10 simultaneous XR users per cell through edge computing and low-latency slicing, challenges persist in wide-area deployments due to capacity trade-offs and potential congestion, preventing full scalability for large-scale collaborative experiences.[99][100]
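To see why foveated rendering relieves these constraints, the sketch below estimates the remaining pixel-shading cost when only a central foveal circle is rendered at full resolution. The 100° field of view, 20° foveal region, and half-resolution periphery are illustrative assumptions, not parameters of any shipping headset:

```python
# Fraction of full-resolution shading work left after foveation:
# full detail inside a foveal circle, uniformly reduced detail outside.
def foveated_fraction(fov_deg: float, fovea_deg: float,
                      peripheral_scale: float) -> float:
    full = (fov_deg / 2) ** 2           # circular area ~ radius**2
    fovea = (fovea_deg / 2) ** 2
    periphery = full - fovea
    return (fovea + periphery * peripheral_scale ** 2) / full

# Assumed: 100-degree circular FOV, full detail in the central
# 20 degrees, half linear resolution (a quarter of the pixels) elsewhere.
remaining = foveated_fraction(100.0, 20.0, 0.5)
print(f"shading cost: {remaining:.0%} of full-resolution rendering")
# ~28%: roughly 3.5x less work, headroom that helps hold frame rates
# and keep motion-to-photon latency within comfortable limits.
```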

Ethical and Societal Implications

Extended reality (XR) technologies raise significant privacy concerns due to the extensive data collection involved in eye-tracking and biometric monitoring. Commercial XR devices capture gaze patterns and physiological data that can be analyzed to infer sensitive user activities, such as reading private text or identifying emotional states, potentially leading to unauthorized surveillance.[101] Users often underestimate the risks associated with camera and eye-tracking data compared to microphone inputs, exacerbating vulnerabilities in immersive applications.[102] Cybersecurity threats further compound these issues, as XR hardware is susceptible to hacking that exposes location, identity, and physical details.[103]

Equity and access challenges in XR amplify existing societal divides, with the digital divide limiting adoption among low-income and rural populations due to high costs and infrastructure barriers.[104] This uneven distribution risks deepening inequalities, as marginalized groups face reduced opportunities in education and employment reliant on XR.[105] Additionally, gender and racial biases embedded in XR content and algorithms perpetuate discrimination, such as stereotypical representations that exclude diverse identities or amplify harassment in virtual spaces.[106][107]

Psychological effects of XR, particularly in social virtual reality (VR), include potential addiction driven by immersive and interactive features that mimic real-world engagement.[108] High involvement in social VR can exacerbate depression and social isolation among users with low self-esteem, as excessive use displaces real-world interactions and fosters dependency.[109] While XR offers therapeutic benefits like reduced anxiety through exposure, prolonged exposure risks long-term disorientation and diminished interpersonal skills in physical environments.[110]

Regulatory responses are emerging to address these implications, with the European Union applying the General Data Protection Regulation (GDPR) to XR data flows and implementing the Data Act in September 2025 to enforce fair data sharing and portability in immersive technologies.[111][112] The IEEE Global Initiative on Ethics of Extended Reality provides guidelines emphasizing privacy protection, equitable access, and mitigation of biases through ethical assessments of XR systems.[113][114]

On the societal benefits side, XR has enhanced remote collaboration since the 2020 shift to hybrid work, with metaverse concepts evolving XR technologies into integrated hybrid work environments that blend virtual and physical interactions through AR, VR, and MR. These immersive virtual spaces, often featuring avatars and digital twins of physical workspaces, enable meetings that convey non-verbal cues, foster a sense of presence, and improve team cohesion across distances, while supporting scalable training and flexible office setups.[115][116][117] The global XR market is projected to reach approximately $1.07 trillion by 2030, fostering economic shifts through job creation in immersive tech sectors and transforming industries like manufacturing and services.[38]
