from Wikipedia

visionOS
[Image: Home View in visionOS]
Developer: Apple
OS family: Unix-like, based on Darwin (BSD) and iOS; mostly based on iPadOS
Working state: Current
Initial release: February 2, 2024
Latest release: 26.0.1 (September 29, 2025)[1]
Latest preview: 26.1 Beta 4 (October 13, 2025)[2]
Marketing target: Mixed reality headsets, Apple Vision Pro
Supported platforms: ARMv8-A
Kernel type: Hybrid (XNU)
License: Proprietary software with open-source components
Official website: developer.apple.com/visionos/
Support status: Supported

visionOS is a mixed reality operating system derived primarily from iPadOS and its core frameworks (including UIKit, SwiftUI, ARKit and RealityKit), along with MR-specific frameworks for foveated rendering and real-time interaction.[3][4] It was developed by Apple exclusively for its Apple Vision Pro mixed reality headset. It was unveiled on June 5, 2023, at Apple's WWDC23 event alongside the reveal of the Apple Vision Pro.[5] The software was released on February 2, 2024, shipping with the Apple Vision Pro.[6]
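As an illustration of how these shared frameworks surface to developers, the sketch below shows a minimal visionOS app written with SwiftUI, the same declarative UI framework used on iPadOS. The app and view names are illustrative, not from Apple sample code; building it requires the visionOS SDK in Xcode.

```swift
import SwiftUI

// Hypothetical minimal visionOS app: one floating window the user can
// place anywhere in the room.
@main
struct HelloSpatialApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Hello, visionOS")
                .padding(40)
                // The translucent "glass" material visionOS windows use.
                .glassBackgroundEffect()
        }
    }
}
```

Because the scene types and modifiers are shared with iPadOS, much of an existing iPad app's SwiftUI code carries over unchanged.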

History

Apple reportedly worked on and conceptualized visionOS throughout the 2010s and early 2020s.[7] Internally codenamed Borealis,[8] it was officially revealed to the public at Apple's WWDC23 event alongside the Vision Pro, with Apple stating that the two would be released in early 2024.[9] App Store guidelines for the operating system state that developers should refer to visionOS software as "spatial computing experiences" or "vision apps", and avoid terms such as "augmented reality" and "mixed reality".[10][11]

During the event, The Walt Disney Company announced plans to develop spatial computing apps for visionOS.[12] Disney+ currently offers features such as streaming of selected titles in stereoscopic 3D, as well as 3D environments based on the El Capitan Theatre and locations from Disney-owned franchises.[13]

The operating system was initially planned to be released as xrOS before the name was changed reportedly days before its announcement, after the WWDC23 keynote and developer sessions had already been filmed.[14] References to xrOS are still present throughout visionOS development materials. On January 8, 2024, Apple announced that the Vision Pro with visionOS would be available in the US on February 2, 2024, with pre-orders beginning on January 19.[15]

Developer tools

From June 5–12, 2023, Apple released 35 free virtual sessions covering visionOS development as part of WWDC23. On June 21, 2023, Apple released Xcode 15 Beta 2, which was the first Xcode beta to include a software development kit for visionOS and Reality Composer Pro, a tool to create 3D content for visionOS.[16] Xcode 15 launched without these features, which were eventually added in Xcode 15.2 on January 8, 2024,[17][18] the same day that Apple started accepting submissions for the visionOS App Store.[18]

During the WWDC23 keynote, Apple revealed it was working with Unity Technologies to support the Unity engine on visionOS,[19][20] and that Unity would be releasing development tools for 3D games on Vision Pro, which launched in beta on July 19, 2023.[21]

Features

visionOS uses a 3D user interface navigated via finger tracking, eye tracking, and speech recognition. For example, the user can click an element by looking at it and pinching two fingers together, move the element by moving their pinched fingers, and scroll by flicking their wrist. Apps are displayed in floating windows that can be arranged in 3D space. Text input is via a virtual keyboard, the Siri virtual assistant, and external Bluetooth peripherals including Magic Keyboard, Magic Trackpad, and gamepads.[22][23]
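On the developer side, this gaze-plus-pinch input is delivered to apps as ordinary gestures, so standard tap and drag handlers keep working. The following is a sketch under that assumption; the view and state names are illustrative, and it requires the visionOS SDK.

```swift
import SwiftUI

// A card the user can select by looking at it and pinching, and
// reposition by pinching and dragging.
struct MovableCard: View {
    @State private var offset: CGSize = .zero

    var body: some View {
        Text("Pinch to select, pinch-and-drag to move")
            .padding()
            .glassBackgroundEffect()
            // Fires when the user gazes at the card and pinches.
            .onTapGesture { print("selected") }
            // Pinch-and-drag moves the card, then it snaps back.
            .gesture(
                DragGesture()
                    .onChanged { offset = $0.translation }
                    .onEnded { _ in offset = .zero }
            )
            .offset(offset)
    }
}
```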

During visionOS setup, a user can create a digital persona by taking off the headset and scanning their face with it. The persona can be used during FaceTime calls, and users can see other participants' personas if they are also on visionOS.[24] visionOS 1.1 added the ability to see personas in 3D, moving around and interacting with apps, dubbed 'Spatial Personas'.[25]

The Photos app supports presenting spatial video, depth-mapped 3D video recorded on Vision Pro or iPhone 15 Pro, by adapting the video's depth to the user's head movements.[26]

At launch, visionOS shipped with 13 pre-installed animated 3D 360° background environments with accompanying ambient sounds, including optional day and night scenes of Yosemite National Park; Haleakalā National Park; Joshua Tree National Park; Mount Hood National Forest; Lake Vrangla near Drammen, Norway; and the Moon.[27]

The Apple TV app offers 3D films; the company announced that over 150 films would be available in 3D at launch, at no additional charge to users who rent or purchase the films on the iTunes Store.[28]

WebXR, an API for mixed reality experiences through web browsers, is supported in Safari.[29]

visionOS users can also access their Mac through a resizable 4K virtual display, named 'Mac Virtual Display'. Wide and ultra-wide curved 8K display options were added with the release of visionOS 2.2 in December 2024, along with the ability to route Mac audio to Vision Pro.[30][31]

Supported apps

visionOS is backwards compatible with existing iOS and iPadOS apps, which are rendered in windows within the user environment and automatically work with visionOS's input system.[15][32] These apps are available on visionOS by default, though their developers can opt out of visionOS compatibility. Apple claims that over 1 million iOS and iPadOS apps are available on visionOS.[15] In addition, according to Apple, over 600 native visionOS apps had been developed specifically for the platform at launch.[33] Apple also claimed that 100 Apple Arcade games would be compatible with visionOS at launch, the majority of which can make use of visionOS's gamepad support.[23][34][35]

Many popular productivity apps have released fully optimized visionOS versions, including Microsoft 365 apps (Teams, Word, Excel, PowerPoint, Outlook, etc.), Adobe Lightroom, Slack, Zoom, and Webex.[36] Many streaming apps are also optimized for the platform, such as Max, Disney+, Prime Video, Paramount+, and ESPN.[37]

Some of the most popular entertainment apps for iOS and iPadOS, including Netflix, Spotify, and YouTube, have not made their iOS or iPadOS apps available to run on visionOS, recommending that users instead use their respective websites in Safari.[38] YouTube has announced that a visionOS app is "on [the] roadmap."[39]

Apple-made apps that come preinstalled on visionOS include Mail, Messages, Mindfulness, Apple Music, Apple TV, Notes, Photos, and Freeform.

Reception

Pre-launch

Before the February 2024 launch, Apple gave controlled demos of the Apple Vision Pro to many technology journalists, some of whom praised its multitasking capabilities and input methods while lamenting that the software was only available on expensive hardware.[40] Other reviewers focused on the software's spatial-video feature and high resolution while questioning what its main use case would be.[41] Journalists also pointed to ongoing disputes over the commissions Apple charges for in-app purchases in App Store apps as a potential reason why many developers were not building visionOS-native versions.[42]

Release

Many reviews praised visionOS even while criticizing the hardware. Some highlighted software capabilities, such as eye and hand tracking, as more impressive than the Apple Vision Pro hardware itself.[43] Almost all major reviews praised the window mechanics and overall experience.[43][44]

Version history

Overview

Version      Initial release     Latest version          Build     Latest release      Device end-of-life
visionOS 1   February 2, 2024    1.3 (unsupported)       21O771    July 29, 2024
visionOS 2   September 16, 2024  2.6 (unsupported)       22O785    July 29, 2025
visionOS 26  September 15, 2025  26.0.1 (latest)         23M341    September 29, 2025  TBA
                                 26.1 Beta 4 (preview)   23N5042a  October 20, 2025

visionOS 1

visionOS 1 is the first major version of visionOS. It was announced at WWDC 2023 on June 5, 2023,[45] and was released on February 2, 2024, pre-installed on the Apple Vision Pro at the device's launch.[46] The first publicly available build, version 1.0.1, was released on January 23, 2024, ahead of the hardware. The last update, visionOS 1.3, was released on July 29, 2024.

Overview of visionOS 1 versions (all unsupported)
Version    Build    Release date       Features
1.0[47]    21N307   February 2, 2024   Hand gestures and typing
1.0.1[48]  21N311   January 23, 2024   Bug fixes and security updates
1.0.2[49]  21N323   January 31, 2024   Bug fixes and security updates[50]
1.0.3[51]  21N333   February 12, 2024  Bug fixes; passcode reset option
1.1[52]    21O211   March 7, 2024
1.1.1[53]  21O224   March 21, 2024     Bug fixes and security updates[54]
1.1.2[55]  21O231   April 9, 2024      Bug fixes and security updates[56]
1.2[57]    21O589   June 10, 2024      Bug fixes; security updates; region and language support[58]
1.3[59]    21O771   July 29, 2024      Bug fixes and security updates[60]

visionOS 2

visionOS 2 is the second major version of visionOS. It was announced at WWDC 2024 on June 10, 2024, and was released on September 16, 2024, alongside iOS 18, iPadOS 18, macOS Sequoia, tvOS 18, and watchOS 11.[61][62][63]

Overview of visionOS 2 versions (all unsupported)
Version    Build    Release date        Features
2.0[64]    22N320   September 16, 2024  Photos options added
2.0.1[65]  22N342   October 3, 2024     Bug fixes and security updates
2.1[66]    22N581   October 28, 2024    Bug fixes and security updates
2.1.1[67]  22N591   November 19, 2024   Bug fixes and security updates
2.2[68]    22N842   December 11, 2024
2.3[69]    22N896   January 27, 2025
2.3.1[70]  22N900   February 10, 2025
2.3.2[71]  22N906   March 11, 2025
2.4[72]    22O238   March 31, 2025
2.4.1[73]  22O251   April 16, 2025
2.5[74]    22O473   May 12, 2025
2.6[75]    22O785   July 29, 2025

visionOS 26

visionOS 26 is the third major version of visionOS. It was announced at WWDC 2025 on June 9, 2025, and was released on September 15, 2025, alongside iOS 26, iPadOS 26, macOS Tahoe, tvOS 26, and watchOS 26.

Overview of visionOS 26 versions
Version                    Build     Release date        Features
26.0[76]                   23M336    September 15, 2025
26.0.1 (latest)[77]        23M341    September 29, 2025  Bug fixes
26.1 Beta 4 (preview)[78]  23N5042a  October 20, 2025

from Grokipedia
visionOS is a spatial operating system developed by Apple Inc. for the Apple Vision Pro headset, marking the company's entry into mixed reality computing. Announced on June 5, 2023, as what Apple described as the first operating system designed specifically for spatial experiences, visionOS blends digital content with the physical world to enable immersive interactions. It powers the Apple Vision Pro, a wearable device that launched in the United States on February 2, 2024, and has since expanded to additional markets. Built on the foundations of macOS, iOS, and iPadOS, visionOS allows developers to create applications using familiar Apple frameworks and tools, supporting deployment across the company's ecosystem. Users navigate the interface through natural inputs including eye tracking for selection and scrolling, hand gestures for manipulation tracked at up to 90 Hz, and voice commands via Siri. Key features include spatial widgets that anchor in the user's environment, 3D browsing for immersive web experiences, and the ability to transform photos into spatial memories shareable via SharePlay. The system integrates with other Apple devices, for example unlocking an iPhone with a glance, and incorporates Apple Intelligence for enhanced communication, writing, and productivity tools starting with visionOS 2.4 in March 2025. As of visionOS 26, released in September 2025, the platform introduces advanced spatial experiences like anchored folders in the Home View, improved accessibility options including Live Recognition for object identification, and support for the upgraded Apple Vision Pro with the M5 chip announced in October 2025. These updates emphasize visionOS's role in redefining computing by providing an infinite canvas for apps, games, and productivity, fostering an ecosystem with over 600 native applications at launch and ongoing expansions.

History and development

Announcement and initial development

Apple's development of visionOS stemmed from its broader augmented reality (AR) and virtual reality (VR) initiatives, which began in earnest in the mid-2010s under the Technology Development Group led by Mike Rockwell. The operating system evolved from prototypes like the N301 mixed-reality headset and N421 lightweight AR glasses, concepts tied to early explorations of "Apple Glass" as a future AR platform. These efforts ramped up significantly by 2017, with the project initially codenamed xrOS internally before its rebranding. By 2020, approximately 1,000 engineers were dedicated to AR and VR projects, focusing on hardware-software integration to create immersive experiences. visionOS drew foundational influences from macOS and iPadOS, incorporating shared frameworks such as SwiftUI and RealityKit to enable seamless app development across Apple's ecosystem. The operating system was publicly announced on June 5, 2023, during Apple's Worldwide Developers Conference (WWDC 2023) keynote, positioned as the software foundation for the Apple Vision Pro spatial computer. The reveal highlighted visionOS's role in blending digital content with the physical world. Development emphasized co-design between software and the Vision Pro's hardware: an M2 chip running visionOS, paired with dual 4K micro-OLED displays delivering over 23 million pixels, and eye tracking via four infrared cameras for precise gaze-based interactions. At the event, Apple showcased visionOS through live demos of spatial applications, including immersive 3D calls with participant avatars and a Photos app rendering volumetric memories, alongside dynamic environments such as a serene Yosemite scene or the Moon to enhance user immersion.

Developer previews and betas

At the 2023 Worldwide Developers Conference (WWDC), Apple introduced updates to the Xcode 15 beta, including the visionOS software development kit (SDK) to enable developers to build applications for Apple Vision Pro. This release featured extensions to RealityKit for rendering 3D content and immersive experiences, as well as enhancements to SwiftUI for creating adaptive user interfaces that respond to spatial environments. These tools allowed developers to prototype and preview apps using familiar Apple frameworks without immediate access to physical hardware. The first developer beta of visionOS 1.0 was made available on June 21, 2023, bundled with Xcode 15 Beta 2, marking the start of pre-launch testing for enrolled Apple Developer Program members. Subsequent iterative betas followed throughout 2023, including beta 2 on July 25, beta 3 in August, and up to beta 7 in December, with each update focusing on improving system stability, refining input handling, and resolving performance issues in simulated environments. Unlike other Apple operating systems, no public beta was released for visionOS 1.0 in 2023 due to the limited availability of Vision Pro hardware for testing. Developers faced key challenges in testing spatial features, which were addressed through the visionOS simulator integrated into Xcode, enabling simulation of 3D spatial environments and various room layouts on Macs. The simulator also handled emulation of passthrough camera feeds, allowing developers to preview mixed reality overlays without real-time hardware access. Specific APIs, such as extensions to ARKit tailored for visionOS, provided tools for anchoring virtual content to the physical world and integrating spatial tracking. In late 2023, Apple transitioned to broader enterprise testing via the AppleSeed for IT program, inviting select organizations to evaluate beta software in controlled work environments ahead of the February 2024 launch. This shift supported validation of visionOS in professional settings, including compatibility with managed devices and security protocols.

Launch and post-launch updates

visionOS was officially launched on February 2, 2024, alongside the availability of the Apple Vision Pro headset in U.S. retail locations and online, marking Apple's consumer entry into spatial computing. The operating system powered the device's immersive experiences from day one, with core features like spatial multitasking and eye-and-hand tracking integrated seamlessly into the hardware. Initial adoption was strong, with pre-orders reportedly surpassing 200,000 units within the first 10 days after they opened on January 19, 2024, indicating robust early demand despite the device's high price. Software updates for visionOS follow an over-the-air (OTA) delivery model akin to iOS and other Apple platforms, allowing users to install patches and enhancements directly on the Vision Pro without needing external hardware. Post-launch stability improvements began promptly, with visionOS 1.0.3 released on February 12, 2024, providing essential bug fixes and introducing a passcode reset option. While specific eye-tracking challenges were reported by early users, subsequent updates like visionOS 1.1 in March 2024 addressed related inaccuracies, such as drift after Persona capture, improving tracking reliability for daily interactions. These patches were crucial for refining the headset's core input methods in real-world scenarios. International expansion accelerated in mid-2024, with Vision Pro and visionOS rolling out to additional markets starting June 28, including mainland China, Hong Kong, Japan, and Singapore, supported by localized language and region-specific updates. Further rollouts continued to other regions throughout the year, aligning software enhancements with global hardware availability. visionOS releases are closely synchronized with Vision Pro firmware updates, ensuring that sensor calibrations for components like cameras and inertial measurement units remain optimized across software versions for consistent performance. This integrated approach minimizes compatibility issues and supports ongoing refinements to spatial tracking and environmental passthrough features. In 2025, visionOS continued to evolve with the release of version 2.4 in March, introducing Apple Intelligence features for enhanced communication, writing, and productivity. This was followed by visionOS 26 in September, which added anchored folders in the Home View, improved accessibility features including Live Recognition, and support for advanced spatial experiences. October saw the announcement of an upgraded Vision Pro model featuring the M5 chip and a Dual Knit Band, further advancing the platform's hardware-software integration.

Technical architecture

Core framework and hardware integration

visionOS is built upon a variant of Apple's established operating system foundation, sharing the Darwin core with iOS and macOS and using the XNU hybrid kernel to manage system resources and hardware abstraction. This architecture enables visionOS to leverage familiar development tools and APIs while extending them for spatial computing, ensuring efficient multitasking and resource allocation in a mixed-reality environment. The system supports Metal 3, Apple's graphics and compute API, which facilitates high-performance rendering of 3D content and visual effects directly on the device's GPU. At the hardware level, visionOS is tightly integrated with the Apple Vision Pro's components, including dual micro-OLED displays that collectively deliver over 23 million pixels for immersive visuals exceeding 4K resolution per eye. The operating system processes input from 12 outward-facing cameras to enable high-fidelity color passthrough that blends the real world with digital overlays in real time. This integration initially relied on the Apple M2 chip, featuring an 8-core CPU, 10-core GPU, and 16-core Neural Engine, paired with 16 GB of unified memory to handle complex spatial computations without compromising responsiveness. In October 2025, Apple released an upgraded Apple Vision Pro with the M5 chip, featuring a 10-core CPU (4 performance cores and 6 efficiency cores), a 10-core GPU with hardware-accelerated ray tracing, a 16-core Neural Engine, and 16 GB of unified memory, enhancing performance for AI-driven spatial experiences in a wearable form factor. The spatial rendering engine in visionOS combines RealityKit for authoring and rendering 3D content with AVFoundation for handling mixed-reality video streams, allowing developers to create seamless immersive experiences.
RealityKit manages entity-based 3D scenes, physics simulations, and animations, while AVFoundation supports immersive video playback in formats like MV-HEVC, enabling spatial audio and video integration. Battery life is managed through the chip's efficiency cores and the external battery pack, providing around 2 hours of untethered use or extended sessions when connected, balancing performance against thermal constraints in a wearable device; the M5 upgrade offers improved efficiency. System-level APIs further enhance hardware synergy: SceneKit provides tools for importing, manipulating, and rendering 3D assets in spatial contexts, though RealityKit is recommended for new visionOS development. Core Motion fuses data from the device's sensors, complementing head and eye tracking via cameras and LEDs, to deliver precise motion estimates for intuitive interactions and scene stabilization. These APIs ensure low-latency tracking, with the Neural Engine accelerating sensor-fusion algorithms to maintain fluid rendering even during rapid movements.
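RealityKit's entity-based model described above can be sketched as follows: a RealityView hosts a hierarchy of entities, each combining a mesh, materials, and components. The view name and asset values are illustrative assumptions; compiling this requires the visionOS SDK.

```swift
import SwiftUI
import RealityKit

// A RealityView hosting a single entity: a 10 cm metallic sphere.
struct SphereView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.05),           // meters
                materials: [SimpleMaterial(color: .white, isMetallic: true)]
            )
            sphere.position = [0, 0, 0]   // centered in the view's space
            content.add(sphere)           // attach to the scene graph
        }
    }
}
```

Physics, animation, and input handling are layered onto the same entities via components, which is what makes the entity-component design composable.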

Security and privacy features

visionOS incorporates security and privacy features tailored to its mixed-reality environment, emphasizing on-device processing and user consent to protect sensitive data such as eye and hand movements. These mechanisms build on Apple's established security model while addressing unique challenges like real-time environmental awareness and biometric interactions. Eye and hand tracking data is processed entirely on-device to prevent leakage, with the Secure Enclave ensuring isolation of biometric information. Gaze data remains private until a user explicitly selects content via a gesture such as a finger tap, and it is not otherwise shared with apps or Apple. Hand setup data is likewise stored locally and only shared with apps upon explicit permission, with no automatic upload without user consent. This on-device approach minimizes risks in spatial interactions. App sandboxing in visionOS extends iOS protections by isolating applications to prevent interference with the system or other apps, using unique home directories and runtime defenses like address space layout randomization (ASLR). For spatial elements, apps running in the Shared Space cannot access surroundings data or head orientation, while those in a Full Space require user permission to access environmental data limited to a 5-meter radius, enforcing spatial boundaries that restrict apps from intruding on the user's physical environment. Entitlements further limit access to system resources, ensuring apps cannot escalate privileges. Optic ID provides biometric authentication through iris scanning, using infrared cameras and LEDs to create encrypted mathematical representations of the iris stored exclusively in the Secure Enclave. This data never leaves the device or backs up to iCloud, and apps receive only a success or failure signal without accessing raw biometric data. Integrated with passkeys for passwordless sign-ins, Optic ID secures device unlocking, purchases, and app access, leveraging Apple silicon's hardware isolation for protection.
Passthrough security relies on on-device mapping of surroundings via the cameras and depth sensors, with the resulting scene mesh encrypted using the device passcode to safeguard real-time AR overlays. Apps must obtain explicit permission to access camera or microphone feeds for passthrough features, and visual indicators (green for camera or microphone use, orange for the microphone alone) alert users to active access, preventing unauthorized environmental scanning. This framework protects against unintended data exposure in mixed-reality scenarios. Update mechanisms in visionOS use digitally signed firmware and software to verify integrity and prevent tampering during installation. Apple issues regular updates addressing vulnerabilities, such as the 2024 GAZEploit exploit in eye tracking that enabled keystroke inference (CVE-2024-40865), which was patched in visionOS 1.3 by suspending the user's Persona while the virtual keyboard is active. Automatic update options ensure devices receive these protections promptly, with incident responses including rapid CVE disclosures and mitigations. The Secure Enclave's chip-level isolation further bolsters overall hardware security by segregating sensitive operations.
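The success-or-failure contract described for Optic ID is visible in the LocalAuthentication API, where the same biometric policy maps to Optic ID on Vision Pro (and to Face ID or Touch ID elsewhere). The function name and reason string below are illustrative; this is a sketch, not Apple's reference implementation.

```swift
import LocalAuthentication

// Request biometric authentication; the app only ever sees a Boolean.
func unlockSensitiveContent(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Check whether biometrics (Optic ID on Vision Pro) are available.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false)   // unavailable; the app should offer a fallback
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your protected notes") { success, _ in
        completion(success) // success/failure only; no iris data is exposed
    }
}
```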

Compatibility with other Apple platforms

visionOS integrates with other Apple platforms, enabling users to leverage apps, data, and features across devices such as iPhone, iPad, and Mac. This interoperability is facilitated through compatibility layers, shared services, and Continuity features, allowing for a unified experience without requiring users to switch devices frequently. iOS and iPadOS apps can run on visionOS via a dedicated compatibility layer that simulates an iPad-like environment, rendering the app in a window adapted to the spatial interface. Developers build these compatible versions by linking against the SDK in Xcode 15 or later and enabling the "Designed for iPad" runtime destination, making the apps automatically available in the visionOS App Store upon updating the Apple Developer Program agreement. However, certain features like Core Motion sensors, location services, HealthKit access, and camera functionality are unavailable in this mode, and apps must use adaptive layouts to handle the spatial display. For macOS productivity, visionOS supports Mac Virtual Display, which virtualizes a 4K external display for any compatible Mac running macOS Sonoma or later, allowing users to interact with full macOS apps in a spatial environment alongside native visionOS content. This feature streams the Mac's screen wirelessly, supporting multiple virtual displays for enhanced multitasking, though it requires both devices to be signed into the same iCloud account. Continuity features extend this integration by enabling fluid transitions between devices. Handoff allows starting tasks like calls on Vision Pro and continuing them on iPhone, iPad, or Mac, provided the devices meet minimum software requirements. Universal Clipboard supports copying text, images, or files on one device and pasting them on another, including into spatial views on visionOS. Additional capabilities include Instant Hotspot for tethering to an iPhone's cellular data (iOS 8.1 or later) and text message forwarding from iPhone to visionOS (iOS 8.1 or later).
Mirror My View uses AirPlay to project the visionOS interface onto a Mac screen for demonstrations. Data synchronization across platforms is handled primarily through iCloud, which ensures seamless sharing of photos, notes, files, and even spatial annotations created in visionOS apps. For instance, passwords and passkeys sync via iCloud Keychain, keeping credentials updated across iPhone, iPad, Mac, and Vision Pro when enabled in settings. AirDrop facilitates quick transfers of 3D assets, photos, videos, and documents between visionOS and nearby Apple devices, operating over Bluetooth and Wi-Fi for proximity-based sharing. Cross-device APIs further enhance interoperability, with HealthKit providing shared access to fitness and health data stored on iPhone, iPad, or Apple Watch, allowing visionOS apps to read and write metrics like workouts or state of mind in a spatial context while maintaining privacy controls. Despite these integrations, limitations exist: there is no direct access to the iOS App Store on visionOS; instead, compatible apps appear in the visionOS store, and developers must opt in to create native visionOS versions for full spatial optimization. Background execution modes, such as background location, are restricted in compatible apps, and core kernel similarities with other platforms do not extend to unrestricted feature parity.
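When a developer recompiles a shared codebase natively for visionOS rather than relying on the compatibility layer, platform differences are typically handled with conditional compilation. The sketch below illustrates that pattern; the view name is a hypothetical example, and building the visionOS branch requires the visionOS SDK.

```swift
import SwiftUI

// One shared view, two platform-specific appearances.
struct PlayerControls: View {
    var body: some View {
        HStack {
            Button("Play") { }
            Button("Pause") { }
        }
        #if os(visionOS)
        // Native visionOS build: adopt the system glass material.
        .glassBackgroundEffect()
        #else
        // iOS/iPadOS build (also what the compatibility layer runs).
        .background(.thinMaterial)
        #endif
    }
}
```

The same `#if os(visionOS)` check is commonly used to stub out APIs the compatibility mode lacks, such as Core Motion access.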

User interface and experience

Spatial computing paradigms

Spatial computing in visionOS is a paradigm in which digital content integrates seamlessly into the physical environment, enabling users to interact with virtual elements as if they were part of the real world. This is achieved through techniques such as spatial anchors, which fix virtual objects to specific locations in the user's physical space for persistent placement, and occlusion, where real-world objects realistically obscure virtual ones to maintain depth cues and immersion. These mechanisms, powered by the RealityKit and ARKit frameworks, allow developers to create experiences that blend augmented reality (AR) overlays with the user's surroundings without disrupting spatial awareness. Central to this paradigm are volumetric environments, where windows and volumes serve as flexible containers for content that can be resized, rotated, and pinned to real-world surfaces or locations. Unlike traditional flat interfaces, these volumetric windows enable viewing 3D content from multiple angles and support dynamic scaling to fit user needs, fostering an infinite workspace free from the bezels and screen edges that constrain traditional displays. Users can arrange multiple windows in a boundless 3D canvas, positioning them at varying depths and orientations to enhance productivity and immersion. As of visionOS 26 (September 2025), spatial widgets integrate into the user's environment, anchoring in place with customizable frame thickness, color, and depth for persistent access to information such as clocks. The Home View supports anchored folders for organizing apps, and the Control Center has been redesigned for quicker management of settings such as Guest User and Focus modes.
visionOS supports a spectrum of reality modes to cater to different levels of immersion: full virtual reality (VR) through Immersive Spaces that envelop the user in entirely digital environments; mixed reality (MR) via passthrough in Shared or Full Spaces, where high-fidelity camera feeds overlay AR elements with depth mapping for realistic integration; and lighter AR overlays that augment the physical world without full occlusion. Depth mapping ensures virtual objects respect real-world geometry, such as planes and surfaces detected by the device's sensors. Enhancing these visual paradigms is spatial audio, which creates immersive 3D soundscapes by simulating directional and distance-based audio cues using head-related transfer functions (HRTFs) for binaural rendering. The system employs a six-microphone array with directional beamforming to capture and process environmental sounds in real time, allowing audio to adapt dynamically to the user's head movements and room acoustics via audio ray tracing. This integration ensures that sound sources appear anchored in the spatial environment, complementing visual elements for a cohesive experience. To deliver smooth interactions, visionOS targets a 90 Hz refresh rate as the baseline for low-latency rendering, with support for higher rates such as 96 Hz and 100 Hz depending on content. Eye-tracking capabilities enable foveated rendering, which prioritizes high-resolution detail in the user's gaze direction while optimizing peripheral areas, reducing computational load and enhancing performance without perceptible lag.
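The three container types discussed above (windows, volumes, and Immersive Spaces) correspond directly to SwiftUI scene declarations. The following sketch, with illustrative identifiers and an assumed "Globe" asset in the app bundle, shows how one app can declare all three; it requires the visionOS SDK.

```swift
import SwiftUI
import RealityKit

@main
struct SpatialContainersApp: App {
    var body: some Scene {
        // A conventional floating 2D window.
        WindowGroup(id: "main") {
            Text("Controls")
        }

        // A bounded 3D volume, viewable from multiple angles.
        WindowGroup(id: "globe") {
            Model3D(named: "Globe")   // assumes a "Globe" asset exists
        }
        .windowStyle(.volumetric)

        // An unbounded space that can surround the user.
        ImmersiveSpace(id: "orbit") {
            RealityView { content in
                content.add(ModelEntity(mesh: .generateSphere(radius: 0.2)))
            }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed, .full)
    }
}
```

The `.mixed` style keeps passthrough visible around the content, while `.full` replaces the surroundings entirely, matching the MR and VR ends of the spectrum described above.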

Gestures and input methods

visionOS primarily facilitates user interaction through a combination of eye tracking, hand gestures, and voice commands, enabling hands-free and intuitive navigation in spatial environments without the need for physical controllers. Users target elements by gazing at them and confirm actions via subtle hand movements, creating a seamless blend of natural inputs that prioritize comfort and precision. This input paradigm supports both indirect manipulation, where users interact with virtual objects from a distance, and direct touch interactions in closer proximity, as outlined in Apple's Human Interface Guidelines. Eye tracking serves as the foundational selection mechanism in visionOS, allowing users to direct their gaze toward app icons, buttons, or content to highlight and prepare them for activation. Upon initial setup, users calibrate eye tracking by following on-screen instructions in the Settings app under Eyes & Hands, which involves looking at specific points to map gaze accurately; this process can be redone at any time for optimal performance. Selection occurs through gaze combined with a hand gesture rather than dwell time alone, ensuring deliberate interactions while maintaining responsiveness in dynamic spatial scenes. As of visionOS 26, the Look to Scroll feature enables customizable eye-based scrolling for navigation. Hand gestures provide the primary means for confirming selections and manipulating content, tracked using the device's outward-facing cameras and the ARKit framework without requiring handheld controllers. Core gestures include pinching the thumb and index finger together to select or tap an item after gazing at it, pinching and dragging to reposition windows or objects, and flicking the wrist after a pinch to scroll through lists or dismiss panels. Additional gestures support zooming via a two-handed pinch and drag, rotation through circular pinch motions, and contextual menus by pinching and holding, all designed for ergonomic use in mixed reality. 
These inputs leverage machine learning models within ARKit for robust 3D hand-pose estimation, achieving fluid tracking at up to 90 Hz in immersive applications for enhanced responsiveness. Starting with visionOS 26, the platform supports additional input methods, including PlayStation VR2 Sense controllers, which provide six-degrees-of-freedom (6DoF) motion tracking, finger touch detection, and haptic feedback, and the Logitech Muse accessory for precise pointing in collaboration scenarios. Voice integration enhances hands-free control through Siri, which responds to natural language commands tailored for spatial computing, such as opening apps, sending messages, or adjusting environments— for instance, users can say "Siri, open the Notes app" or "Siri, remind me to leave in 20 minutes" to perform tasks without manual input. Hands-free activation occurs by saying "Siri" followed by the request, with support for setting up certain features via voice prompts. This system integrates seamlessly with gesture-based navigation, allowing verbal overrides for complex or repetitive actions in shared or multitasking scenarios. For text input and precise cursor control, visionOS supports virtual keyboards that users can type on directly using pinch gestures to tap keys, alongside compatibility with Bluetooth peripherals like the Magic Keyboard, Magic Trackpad, or third-party mice. Users connect these devices via Settings, enabling traditional input methods for productivity tasks while the headset provides adaptive visibility of keyboards in immersive spaces through features like Keyboard Awareness. In accessibility contexts, fallbacks such as Switch Control allow navigation via head movements or switches when standard eye and hand inputs are unavailable, promoting inclusive use. Input accuracy benefits from ARKit's hand tracking, which delivers sub-centimeter precision in joint detection through on-device machine learning, though environmental factors such as lighting can influence performance.
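In the SDK, the gaze-plus-pinch pattern described above reaches apps as ordinary gesture events: the system resolves which entity the user is looking at and delivers the pinch as a tap. A minimal RealityKit sketch (the cube setup and the rotate-on-tap behavior are illustrative, not from any Apple sample):

```swift
import SwiftUI
import RealityKit

struct TapToSpinView: View {
    var body: some View {
        RealityView { content in
            // A simple cube entity; an InputTargetComponent and a collision
            // shape are required for it to receive gaze-and-pinch input.
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.2),
                materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
            )
            cube.components.set(InputTargetComponent())
            cube.generateCollisionShapes(recursive: false)
            content.add(cube)
        }
        // The system determines the gaze target; the pinch confirms it.
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Rotate the tapped entity 45° around the vertical axis.
                    value.entity.transform.rotation *= simd_quatf(
                        angle: .pi / 4, axis: [0, 1, 0]
                    )
                }
        )
    }
}
```

Because the hover highlight and gaze resolution happen out-of-process, the app never sees where the user is looking until a pinch occurs, which is part of the platform's privacy model.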

Accessibility and customization

visionOS incorporates robust accessibility features tailored to spatial computing, enabling users with visual, motor, and other impairments to navigate and interact with 3D environments effectively. VoiceOver, the built-in screen reader, delivers audible descriptions of elements in the user's view, including battery status, incoming calls, open applications, and spatial layouts such as app windows and surroundings. When users open apps or shift perspectives in 3D space, VoiceOver plays directional audio cues to indicate changes in the environment, facilitating orientation without visual reliance. This gesture-based system supports single-hand interactions like pinching and swiping to select and explore items, with adjustable speaking rates and pitches for personalized audio feedback. Introduced in visionOS 26, Live Recognition uses the device's cameras and on-device machine learning to provide real-time descriptions of surroundings, identify and locate objects (e.g., keys or doors), read printed text, and detect faces for VoiceOver users. For users with low vision, visionOS offers dynamic zoom and magnification tools that scale spatial elements in real time. The Zoom feature provides full-screen magnification or a resizable window lens, adjustable via the Digital Crown, with options to include apps, passthrough video of surroundings, or both, and to lock depth to hand interactions for stable focus in 3D. Maximum zoom levels and border colors can be customized in settings, allowing seamless enlargement of virtual content or real-world views captured by the device's cameras. Complementing this, color filters enable adjustments for visual impairments by altering hues, intensity, and contrast across the interface and passthrough video; options include grayscale, inverted colors, or specific tints to reduce strain or enhance legibility. Features like Reduce White Point and Reduce Transparency further tone down bright elements and solidify overlays, promoting clearer perception in mixed-reality contexts. 
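For developers, VoiceOver support in visionOS apps relies on the same SwiftUI accessibility modifiers used on Apple's other platforms; spatial elements without them are announced poorly or not at all. A small hypothetical example (the view and its state are invented for illustration):

```swift
import SwiftUI

// A status indicator that VoiceOver can describe meaningfully.
struct DeviceStatusView: View {
    var batteryLevel: Int = 80   // hypothetical app state

    var body: some View {
        Image(systemName: "battery.75")
            // Without a label, VoiceOver would fall back to the symbol name.
            .accessibilityLabel("Battery")
            .accessibilityValue("\(batteryLevel) percent")
            .accessibilityAddTraits(.updatesFrequently)
    }
}
```

The same labels feed other assistive technologies, such as Switch Control's item scanning, so one annotation pass benefits several of the features described above.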
Customization in visionOS extends to personalizing interfaces and environments for comfort and efficiency. Users can select light or dark modes for compatible apps, adjusting text size, boldness, and window scaling to suit preferences, with temporary two-handed resizing for dynamic layouts. Widgets, which float in the user's personal spatial volume, allow customization of frame thickness, color, and depth to blend with surroundings, enabling persistent placement in defined areas like desks or walls for quick access to information. Environment themes adapt to these modes, supporting immersive VR spaces with toggled lighting to match user-selected appearances. Assistive technologies in visionOS prioritize inclusive input methods for motor challenges. Switch Control facilitates navigation through item or point scanning, where users trigger selections via external switches or built-in options like head tracking, which moves the pointer based on subtle head movements for precise control in spatial interfaces. This extends to brain-computer interfaces for severe mobility limitations, integrating with the system's gesture adaptations. For tactile feedback, braille display integration works seamlessly with VoiceOver over Bluetooth, supporting international braille tables, contracted and uncontracted output, and Nemeth Code for math; users can input commands, edit text with auto-conversion, and access note-taking tools directly on the display. In visionOS 26, users can save eye and hand calibration data, vision prescriptions, and accessibility settings to a paired iPhone for easy transfer to shared Vision Pro devices, and unlock an iPhone while wearing the headset. To accommodate shared use, visionOS supports multiple user profiles through Guest User mode, where temporary setups capture eye and hand data for sessions without overwriting the owner's configuration. 
Guests create their own Personas—digital avatars—for interactions, but access to the owner's Optic ID, passwords, and other sensitive data is restricted, with customizable app permissions to maintain privacy. Owners can monitor sessions via AirPlay mirroring and end them remotely, ensuring controlled shared experiences in collaborative or family settings. As of visionOS 26, Personas feature volumetric rendering for more realistic depth, full side profiles, and over 1,000 variations of glasses and accessories. The system also supports shared spatial experiences, allowing multiple Vision Pro users to collaborate in the same immersive environment via SharePlay, whether in-room or remote.

Key features

Immersive environments and multitasking

visionOS provides users with a library of pre-built immersive environments designed to enhance focus and ambiance during use. These include scenic worlds such as Yosemite, Joshua Tree, and the surface of the Moon, which can be selected to create a calming backdrop while interacting with apps. Additionally, the Memories environment leverages spatial photos and videos from the user's photo library to generate personalized, photo-based scenes that relive captured moments in three dimensions. For customization, developers and users can import bespoke environments created with Reality Composer Pro, Apple's tool for building interactive 3D content that integrates seamlessly into the system. Multitasking in visionOS revolves around an infinite spatial canvas that supports unlimited floating windows, allowing users to position and scale apps freely beyond physical room constraints. Windows exhibit spatial snapping, automatically aligning to nearby surfaces or other elements for organized layouts, which extends the system's underlying windowing paradigms by enabling intuitive placement in mixed reality. For focused experiences, Immersive Spaces provide a theater-like mode where a single app expands to fill the user's field of view, minimizing distractions while maintaining environmental context. Window management emphasizes natural hand gestures for control. Users resize windows by pinching the corner handles and dragging, with the system dynamically scaling content to ensure legibility regardless of distance or size. Arrays of windows can be arranged across the infinite canvas, supporting complex workflows with multiple apps visible simultaneously; while direct linking for synced scrolling across windows is not natively implemented, users can manually align related content for efficient reference. This approach fosters productivity by treating the user's surroundings as an extensible desktop. 
To heighten immersion across setups, visionOS incorporates shared spatial audio, where sound from multiple apps emanates from their respective window positions, creating a layered auditory environment that enhances spatial awareness. Occlusion handling ensures realistic overlaps, with virtual windows and objects respecting depth to appear behind closer elements, including passthrough real-world items via materials like OcclusionMaterial. Performance remains consistent even with demanding multitasking, as the system targets 90 frames per second and employs dynamic scaling to adapt resolution based on content proximity and thermal conditions, supporting fluid operation across more than ten open windows without perceptible lag.
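RealityKit's `OcclusionMaterial` is the mechanism behind the passthrough occlusion described above: an entity rendered with it is itself invisible but hides any virtual content behind it, so a real table or wall represented by such an occluder appears to block virtual objects. A minimal sketch (the fixed box size is illustrative; in practice the occluder would be fitted to geometry detected via ARKit scene reconstruction):

```swift
import RealityKit

// Creates an invisible box that occludes virtual content behind it.
func makeOccluder(size: SIMD3<Float>) -> ModelEntity {
    ModelEntity(
        mesh: .generateBox(size: size),
        materials: [OcclusionMaterial()]   // renders nothing, writes depth
    )
}
```

Positioning such occluders over reconstructed real-world surfaces is what lets passthrough objects convincingly "cut off" virtual windows at the correct depth.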

Personas and collaboration tools

Personas in visionOS serve as digital 3D avatars that represent users during remote interactions, generated on-device through a brief scan using the headset. These avatars capture a user's expressions and hand movements in real time via the device's sensors, enabling natural and expressive communication in virtual spaces. Users can customize their Personas, including adjustments to lighting and backgrounds, to enhance personalization during calls or shared sessions. Spatial Personas extend this functionality by rendering avatars as life-sized, volumetric figures within the user's physical environment, facilitating immersive eye contact and spatial awareness during interactions. Introduced in visionOS 1.1 and refined in subsequent updates, Spatial Personas appear opt-in during compatible sessions, allowing participants to move freely and engage as if co-located. In visionOS 26, enhancements to hair, eyelashes, complexion, and side-profile views further improve realism and expressivity. The visionOS 26.1 update in November 2025 includes minor refinements to these features along with bug fixes. FaceTime on visionOS integrates Personas for life-sized video calls, where participants using Vision Pro headsets can view each other as Spatial Personas, simulating direct eye contact through gaze correction algorithms. This setup supports SharePlay, enabling synchronized activities such as co-watching movies, playing games, or browsing media together in a shared virtual space. For non-Vision Pro users, traditional 2D video feeds are displayed alongside spatial elements, ensuring compatibility while prioritizing immersive experiences for headset wearers. Collaboration tools in visionOS emphasize shared spatial canvases for creative and professional workflows. The Freeform app functions as a 3D whiteboard, allowing multiple users to draw, add notes, and manipulate infinite boards in real time via SharePlay or nearby connections. 
Participants can import 3D objects, photos, or files, fostering collaborative brainstorming where annotations appear volumetrically in the shared environment. In the Photos app, SharePlay sessions enable joint viewing of spatial photos and videos, with users able to navigate and discuss content together, enhancing remote storytelling and review processes. Remote access features provide secure ways to share the device and experiences. Guest User mode allows controlled access, where the owner selects permitted apps and limits session duration, with guests completing a temporary eye and hand setup upon donning the headset. In visionOS 2.4 and later, integration with nearby iPhones or iPads streamlines guest setup, returning the device to the owner's configuration automatically after use. For enterprise scenarios, visionOS supports virtual meetings through apps like Zoom, which leverage spatial audio, Personas, and diagramming tools to create immersive sessions with 3D annotations and shared diagrams. Enterprise APIs further enable customized spatial experiences, such as enhanced camera and sensor access for professional diagramming in collaborative environments. Privacy measures in social features prioritize user control and data minimization. Persona creation and processing occur entirely on-device, preventing transmission of raw biometric data. Sharing Spatial Personas requires explicit opt-in during calls, with users able to toggle between 2D and 3D representations. During FaceTime or SharePlay sessions, surroundings captured via passthrough cameras are not shared externally; instead, the system blurs or isolates the user's environment to protect bystanders and maintain focus on the avatar. Settings allow granular control over features like microphone, camera, and Persona access for individual apps, ensuring opt-in consent for all social interactions.

Health and productivity integrations

visionOS integrates with Apple's HealthKit framework starting with version 2, enabling developers to build spatial health and fitness applications that leverage the device's expansive canvas for immersive experiences, such as guided meditations or wellness visualizations. This integration allows seamless synchronization of health data across the Apple ecosystem, including metrics like heart rate captured by an Apple Watch during sessions, which can be accessed and displayed in visionOS apps for real-time monitoring without leaving the spatial environment. While visionOS does not natively implement eye wellness reminders based on headset usage, its built-in eye-tracking system—powered by high-precision infrared cameras—supports accessibility features that promote comfortable viewing, and developers can incorporate HealthKit data to create custom alerts for prolonged sessions in third-party wellness apps. The Mindfulness app, native to visionOS, further supports behavioral health by offering spatial breathing exercises and reflections to build emotional resilience, integrating with HealthKit for mood logging and progress tracking. For productivity, Safari on visionOS introduces spatial browsing, which transforms standard web pages into interactive 3D environments, allowing users to explore articles, videos, and content with depth and immersion for enhanced focus during research or reading tasks. The Notes app facilitates spatial sketches through gesture-based handwriting and drawing in the user's physical space, with Apple Intelligence features like Image Wand converting rough sketches into polished images or diagrams directly within the app. Focus modes in visionOS adapt notifications to minimize interruptions during immersive work, with options to silence non-essential alerts, summarize grouped notifications, and schedule sessions for deep concentration, such as allowing only work-related communications while engaging in spatial multitasking. 
Although visionOS lacks dedicated per-session time tracking beyond system analytics of overall usage patterns, Focus integration enables automated logging of focused periods to help users review productivity over time. The headset employs advanced sensor fusion, combining data from its LiDAR scanner, cameras, and inertial measurement units to enable precise environmental mapping and hand-eye tracking, which supports apps in creating responsive experiences like adaptive workouts. Introduced in later updates, developer tools allow for fatigue-aware features through motion and gaze analysis, though no built-in EEG-like monitoring exists; instead, apps can use sensor data for indirect detection of user strain. Enterprise tools in visionOS include robust VPN support for secure spatial access to corporate networks, configurable via device management profiles to enable per-app tunneling and remote workflows in mixed reality. The LiDAR scanner facilitates document scanning and AR measurements natively through the Notes app for capturing and annotating physical documents in 3D space, while ARKit APIs power precise measurements of objects and rooms for professional applications like interior design or inventory management.
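As a sketch of the HealthKit integration described above, the snippet below reads the most recent heart-rate sample (for example, one recorded by a paired Apple Watch) so it could be shown inside a spatial wellness app. It assumes the app has the HealthKit entitlement and usage descriptions configured; the function name is invented for illustration:

```swift
import HealthKit

let store = HKHealthStore()
let heartRateType = HKQuantityType(.heartRate)

// Requests read access, then fetches the newest heart-rate sample.
func readLatestHeartRate() {
    store.requestAuthorization(toShare: [], read: [heartRateType]) { granted, _ in
        guard granted else { return }
        let newestFirst = NSSortDescriptor(
            key: HKSampleSortIdentifierStartDate, ascending: false
        )
        let query = HKSampleQuery(
            sampleType: heartRateType, predicate: nil,
            limit: 1, sortDescriptors: [newestFirst]
        ) { _, samples, _ in
            if let sample = samples?.first as? HKQuantitySample {
                let bpm = sample.quantity.doubleValue(
                    for: HKUnit(from: "count/min")
                )
                print("Latest heart rate: \(bpm) bpm")
            }
        }
        store.execute(query)
    }
}
```

The same query pattern applies to any HealthKit metric a spatial fitness app surfaces, with the 3D canvas used only for presentation.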

Software ecosystem

Native and third-party apps

visionOS includes a suite of native applications optimized for spatial computing, enhancing user interaction within mixed reality environments. The Photos app supports spatial Memories, allowing users to relive captured moments in 3D by transforming standard photos into immersive scenes with depth and effects, introduced in visionOS 2. Apple TV+ delivers immersive video content through 180-degree 8K recordings with Spatial Audio, enabling viewers to experience films and series as if positioned inside the scene, with series spanning genres like documentary and music premiering regularly. The Fitness app integrates guided workouts with spatial elements, overlaying instructional visuals and progress tracking in the user's physical space to support a range of activities. Third-party developers have contributed notable applications that leverage visionOS capabilities for productivity and creativity. Microsoft offers apps such as OneNote and Teams, enabling 3D spatial layouts for note-taking and collaboration, where users can arrange multiple documents or meeting participants in volumetric windows around their environment. Adobe provides native tools like Lightroom with Firefly AI integration, allowing spatial editing of photos through generative features that extend into 3D canvases, though full creative suites like Photoshop remain limited to compatible modes. The visionOS ecosystem spans diverse app categories, fostering engagement across entertainment, education, and utilities. In gaming, spatial puzzles such as Loóna and Puzzling Places immerse players in 3D dioramas and photorealistic assemblies, using hand gestures to manipulate pieces in shared or full immersion modes. Entertainment options include VR concerts via AmazeVR, delivering 8K interactive performances from popular artists, where users engage with holographic stages and effects in personal spatial audio. 
Utility apps feature spatial calculators like Spatial Calculation, which project volumetric math interfaces for multi-step computations floating in the user's view, enhancing precision for engineering tasks. Educational applications offer AR dissections through tools like Visible Body, providing interactive 3D anatomical models for anatomy studies, compatible with visionOS for layered overlays on real-world references. The App Store for visionOS launched with over 600 native spatial apps in February 2024, growing to more than 2,000 by mid-2024 and reaching more than 3,000 native titles as of October 2025, reflecting robust developer adoption across categories like productivity and gaming. To optimize performance and battery life, many apps incorporate foveated rendering, dynamically adjusting resolution based on eye gaze to reduce computational load while maintaining high-fidelity visuals in the user's focal area.

App development tools

App development for visionOS primarily utilizes Xcode 15 and later versions, which integrate the visionOS SDK to enable building, testing, and deployment of spatial applications. This SDK provides comprehensive support for creating immersive experiences, including tools for integrating 3D content and handling spatial interactions directly within the Xcode environment. Developers can leverage Xcode's built-in editor, debugger, and asset management features to streamline the process of authoring apps that run on Apple Vision Pro. Apple offers extensive official resources on its developer website to support visionOS app development, including the Get Started guide, detailed documentation, introductory sample code projects, and featured examples. The Get Started guide covers creating projects in Xcode with SwiftUI and RealityKit. Introductory samples focus on fundamentals such as spatial UI with SwiftUI, gesture interactions, 3D content creation, and immersion through windows, volumes, and immersive spaces. Featured sample projects demonstrate advanced concepts, including Diorama for designing and previewing scenes using Reality Composer Pro, Petite Asteroids for building a volumetric game with RealityKit, interactive 3D models, and collaborative experiences using Group Activities. These resources, along with WWDC session videos and performance analysis documentation, assist developers in mastering spatial computing paradigms. For 3D content creation and import, Unity offers official support through its PolySpatial package, available in Unity 2022 LTS and later for Pro, Enterprise, and Industry subscribers, allowing developers to export projects to visionOS while utilizing Unity's authoring tools for games and interactive experiences. Similarly, Unreal Engine provides visionOS support starting from version 5.4 in experimental form, with fuller integration in 5.5 via C++ projects, enabling the import of high-fidelity 3D assets and rendering pipelines optimized for the platform. 
Key frameworks underpinning visionOS app development include ARKit, which facilitates anchoring virtual content to the real world through asynchronous data providers for scene understanding, object tracking, and environmental sensing in immersive spaces. RealityKit employs an Entity-Component-System (ECS) architecture, where entities serve as containers for components that define behaviors and properties, and systems process updates across multiple entities to simulate physics, animations, and interactions efficiently in 3D scenes. Reality Composer Pro complements RealityKit by providing a graphical tool for composing, editing, and previewing 3D scenes and assets. This ECS paradigm promotes modularity, allowing developers to build scalable simulations without tightly coupling data and logic. Testing visionOS apps occurs via the Simulator on macOS, which emulates spatial environments, room layouts, lighting conditions, and input methods like gaze and gestures to validate app behavior without hardware. For hardware validation, apps deploy directly to Vision Pro devices, supporting on-device debugging and real-time iteration. Performance analysis relies on Instruments, Apple's profiling tool, featuring the RealityKit Trace template to measure rendering efficiency, frame rates, and latency in spatial rendering pipelines, helping developers optimize for smooth 90 Hz experiences. Design guidelines for visionOS emphasize spatial principles outlined in Apple's Human Interface Guidelines, which recommend layouts that respect users' physical surroundings, intuitive gesture mappings, and depth-based content placement to create natural, immersive interfaces. The App Review process, governed by Apple's guidelines, prioritizes user privacy by requiring developers to disclose data collection practices, obtain explicit user consent for sensitive features like camera access, and avoid manipulative tracking to ensure apps align with platform security standards. 
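RealityKit's Entity-Component-System architecture can be illustrated with a small custom behavior: a component stores per-entity data, and a system queries every entity carrying that component each frame. The `SpinComponent`/`SpinSystem` names below are invented for illustration:

```swift
import RealityKit

// Per-entity data: how fast this entity should spin.
struct SpinComponent: Component {
    var radiansPerSecond: Float = .pi
}

// Frame-by-frame logic applied to all entities with a SpinComponent.
struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query,
                                       updatingSystemWhen: .rendering) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            let angle = spin.radiansPerSecond * Float(context.deltaTime)
            entity.transform.rotation *= simd_quatf(angle: angle, axis: [0, 1, 0])
        }
    }
}

// Registration, typically at app launch:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()
```

After registration, attaching a `SpinComponent` to any entity makes it rotate without per-entity update code, which is the decoupling of data from logic that the ECS model described above provides.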
In 2024, visionOS 2 introduced new volumetric APIs, enhancing SwiftUI with depth-aligned layouts and tools for capturing and rendering spatial video, enabling developers to integrate immersive media more seamlessly into apps. With visionOS 26, announced in June 2025, additional spatial APIs were added, supporting advanced experiences on the upgraded Vision Pro with the M5 chip. These updates, previewed in developer betas alongside corresponding OS betas since the initial SDK release, continue to evolve the toolkit for richer spatial content creation.

App Store and distribution

The visionOS App Store serves as an integrated spatial storefront within the operating system, allowing users to discover and download native visionOS applications designed for spatial experiences on Apple Vision Pro. It features immersive navigation options, including search initiated through gaze-based selection combined with hand gestures or voice commands via Siri, enabling seamless interaction in a three-dimensional environment. App previews incorporate 3D elements, such as volumetric representations of icons and interactive demonstrations of spatial content, to give users a sense of an app's immersive potential before purchase or download. Distribution models for visionOS apps mirror those of the broader App Store, encompassing free downloads, one-time paid purchases, recurring subscriptions, and in-app purchases tailored to spatial elements like custom environments or 3D assets. Developers can offer subscriptions for ongoing access to updated spatial content, while in-app purchases enable unlocking of premium features such as enhanced immersive scenes or virtual objects. These models are managed through App Store Connect, ensuring secure transactions and compliance with Apple's guidelines for spatial content. For enterprise use, visionOS supports the Volume Purchase Program (VPP) via Apple Business Manager, allowing organizations to bulk acquire and distribute apps at scale for business purposes without individual user accounts. Additionally, developers and testers can sideload apps directly onto registered Vision Pro devices using Xcode for development and beta testing, bypassing the public App Store while adhering to provisioning profiles and certificates. App curation in the visionOS App Store emphasizes featured sections highlighting exclusive spatial experiences optimized for extended reality (XR), such as volumetric apps and immersive productivity tools. 
Age ratings, updated in 2025 to include categories like 4+, 9+, 13+, 16+, and 18+, account for immersive content by evaluating factors including realistic violence, user-generated interactions, and psychological immersion, helping parents and users assess suitability in a spatial context. Developers gain access to App Analytics in App Store Connect, which provides over a dozen metrics specific to visionOS apps, covering user acquisition, engagement—such as session duration and interactions within spatial environments—and performance. These tools help optimize apps by revealing patterns in how users explore 3D spaces, though detailed spatial-specific data like precise dwell times may require integration with in-app tracking.

Version history

visionOS 1 (initial release)

visionOS 1.0, the inaugural version of Apple's spatial operating system, was released on February 2, 2024, alongside the launch of the Apple Vision Pro headset, which was the only compatible hardware at the time. The build number for this initial release was 21N305, derived from the final pre-release beta. Designed to blend digital content with the user's physical environment, visionOS 1.0 emphasized spatial computing principles, enabling seamless interactions through eye tracking, hand gestures, and voice commands without traditional controllers. Central to visionOS 1.0 were its core spatial user interface (UI) elements, which positioned windows and apps in a three-dimensional canvas around the user, adjustable via the Digital Crown for immersion levels. The system launched with over 600 native apps optimized for spatial experiences, including first-party titles alongside third-party offerings such as Disney+ adaptations that leveraged volumetric video and immersive environments. The initial system introduced Personas, photorealistic digital avatars scanned using the Vision Pro's TrueDepth camera during setup, to represent users in video calls and collaborative sessions while conveying their facial expressions. Post-launch updates rapidly addressed stability concerns. visionOS 1.0.1, released on January 23, 2024, provided early fixes for developers testing ahead of availability. This was followed by visionOS 1.0.2 on January 31, 2024, which patched a critical WebKit vulnerability potentially exploited in the wild and included general security enhancements. In March 2024, visionOS 1.1 introduced improvements to Persona rendering for better hair, makeup, and EyeSight display accuracy, alongside fixes for cursor positioning and Mac Virtual Display connectivity issues. It also added an option to reset the device passcode if forgotten, enhancing user recovery without external tools. 
visionOS 1 deprecated several legacy features to streamline spatial development, including gesture-based presentation in UISplitViewController and support for non-PNG/JPEG textures in USDZ models, limiting compatibility to standard formats. Older ARKit interfaces were unsupported, requiring developers to migrate to visionOS-native RealityKit and ARKit integrations for optimal performance. The system delivered stable performance benchmarks, supporting high-resolution rendering per eye at a 90 Hz refresh rate on the Vision Pro's micro-OLED displays, ensuring smooth rendering of spatial content with low-latency hand and eye tracking. Launch-period issues, such as intermittent app crashes, window glitching, and unresponsiveness after extended use, emerged in early user reports starting February 2024, often tied to high computational loads or initial software immaturity. These were systematically resolved across the 1.x update timeline through April 2024, with patches for third-party app stability and system-level optimizations, culminating in more reliable multitasking and reduced crash frequency.

visionOS 2 (major updates)

visionOS 2 was announced at Apple's Worldwide Developers Conference (WWDC) on June 10, 2024, and released to the public on September 16, 2024, as build 22N320. This major update built on the foundations of visionOS 1 by introducing enhanced capabilities and developer tools, expanding the platform's utility for both personal and professional use. A standout feature was the integration of spatial photos and videos captured natively on recent iPhone models, viewable in immersive 3D within the redesigned Photos app on Apple Vision Pro. Additionally, machine learning algorithms enabled the conversion of existing 2D photos from a user's library into spatial photos, adding depth and dimension for a more lifelike experience; these could be shared via SharePlay alongside spatial Personas. Another key addition was an enhanced Mac Virtual Display, which emulated an external monitor setup by expanding to an ultrawide view equivalent to two 4K displays side-by-side, with support for mouse input and visibility of the Magic Keyboard—though full implementation arrived later in 2024. Updates to core interactions included improved hand tracking accuracy through new intuitive gestures, such as pinching to access the Home View, Control Center, or volume controls, reducing reliance on hardware controls for common tasks. Developers gained new APIs tailored for enterprise augmented reality (AR) applications, including volumetric APIs for 3D object rendering, TabletopKit for shared interactive experiences, HealthKit integrations for wellness tracking, and Object Tracking for precise environmental awareness. A subsequent point release, visionOS 2.0.1 on October 3, 2024, addressed bugs such as YouTube video playback freezes in Safari and issues with Safari Web Extension data management. Performance enhancements allowed for smoother multitasking with support for additional simultaneous apps and windows, enabling more flexible arrangements in shared spaces. 
Travel Mode received an expansion to include train travel, incorporating stabilization algorithms to reduce motion discomfort by steadying the interface and Environments during movement, thus supporting productivity or entertainment on the go. These changes collectively improved the device's efficiency in dynamic scenarios, such as commuting. Subsequent updates in the 2.x series included visionOS 2.4, released on March 31, 2025, which introduced Apple Intelligence features such as Writing Tools, Image Playground, and enhanced communication and productivity tools. visionOS 2.5, released on May 12, 2025, brought improvements and bug fixes, including a new Vision tab in the Apple TV app for discovering immersive content.

visionOS 26 and later (announced developments)

At Apple's Worldwide Developers Conference (WWDC) on June 9, 2025, the company announced visionOS 26, the next major release, adopting Apple's new year-based version numbering across its operating systems. This update, available in beta for developers starting June 9, 2025, and released publicly on September 15, 2025, introduced enhanced spatial widgets that anchor into users' physical environments, allowing customization of elements like frame width, color, and depth for built-in apps such as Clock, Music, and Photos. These widgets enable seamless integration of information into mixed reality spaces, building on the immersive foundations of prior versions, while the release also emphasizes generative AI for creating lifelike spatial scenes from photos, viewable in Photos, Spatial Gallery, and Safari. visionOS 26 deepens AI integrations through expanded Apple Intelligence features, including updates to Image Playground for generating spatial content and support for additional languages such as French, German, Italian, Japanese, Korean, and variants of English and Spanish. Navigation advancements include Look to Scroll, which uses built-in eye tracking for hands-free scrolling with adjustable speeds, alongside enterprise-focused tools for secure data handling and support for the Logitech Muse spatial accessory in collaborative workflows. Developer enhancements feature new APIs for volumetric content creation, integration with 180° and 360° cameras from brands such as Insta360, GoPro, and Canon, and WidgetKit for custom spatial widgets, fostering broader app ecosystem growth. visionOS 26.1 was released on November 3, 2025, including improvements and bug fixes along with updates to Spatial Gallery. As of November 2025, visionOS 26.2 is in developer beta. Beyond these updates, Apple announced an upgraded Vision Pro with the M5 chip on October 15, 2025, available starting October 22, 2025, featuring improved rendering performance and a new Dual Knit Band for comfort.
Apple has outlined a roadmap emphasizing lighter hardware compatibility and enterprise expansions, with analyst reports indicating plans for multiple head-mounted devices starting in 2027, including smart glasses intended to drive adoption. Teased features point to advanced haptics and cross-device ecosystems, potentially converging with rumored Apple Glass wearables by 2026–2027. These developments aim to address adoption barriers such as high pricing, with no new headsets anticipated in 2026 to allow focus on software maturity and content ecosystem refinement.

Reception and legacy

Critical reviews and awards

Upon its launch in February 2024, visionOS received widespread acclaim from critics for its innovative and immersive capabilities, though tempered by concerns over hardware integration. The Verge's Nilay Patel described the experience as "magic, until it's not," praising the headset's exceptional video passthrough and hand-eye tracking for creating a seamless blend of digital and physical worlds, but criticizing the device's weight and short battery life as barriers to prolonged use. Similarly, CNET's Scott Stein highlighted the UI's groundbreaking innovations, such as intuitive window management and volumetric content display, calling it a "mind-blowing look at an unfinished future," while noting that the system's reliance on existing iPad app foundations limited its standalone potential. Prior to its launch, at CES 2024, the Vision Pro was honored as a Best of Innovation honoree for its pioneering mixed-reality interface. Additionally, Apple's 2024 Design Awards featured multiple native visionOS apps in the Visual Excellence and Interaction categories, such as Gentler Streak for its adaptive spatial fitness tracking and oko for inclusive audio descriptions, underscoring the platform's support for high-quality third-party development. Critics pointed to notable drawbacks, including a steep learning curve for gesture-based input and the privacy implications of the passthrough camera system. Reviews frequently cited the initial difficulty in mastering eye and hand controls, which required significant adaptation despite their precision, leading to frustration for new users. Privacy concerns arose from the device's constant environmental scanning, with experts questioning the security of eye-tracking data and the potential for unintended recording in shared spaces, even as Apple emphasized on-device processing.
Aggregated review scores for the Vision Pro and visionOS hovered around 85/100, reflecting strong approval for multitasking features like resizable windows and spatial audio, but mixed feedback on unfinished features during the early rollout. IGN awarded an 8/10, lauding the platform's productivity potential while docking points for comfort issues. WIRED praised the device's capabilities but criticized its high price and limited app ecosystem at launch. With the release of visionOS 2 in September 2024, subsequent 2025 reviews praised enhancements to battery optimization and productivity tools, addressing prior pain points. Updates improved power efficiency through better app suspension and thermal management, extending effective usage sessions. Features like spatial Personas and shared spatial volumes boosted collaboration scores, enabling more natural group interactions in virtual environments. In October 2025, Apple announced an upgraded Vision Pro with the M5 chip, which early reviews praised for improved efficiency and battery life, further solidifying visionOS's position in spatial computing.

Market performance and adoption

The Apple Vision Pro, powered by visionOS, achieved initial sales of approximately 200,000 units in the first quarter of 2024, with cumulative shipments reaching around 450,000 by the end of the year. By mid-2025, cumulative sales had grown to approximately 600,000 units globally, according to analyst estimates. Active visionOS devices numbered roughly 500,000 during this period, accounting for returns and device attrition. Adoption has been driven primarily by enterprise applications in sectors such as design and medicine, where the device's capabilities enable immersive visualization, collaborative prototyping, and surgical simulations. For consumers, in-store demos at Apple retail locations have played a key role in building awareness and trial, allowing potential users to experience visionOS features firsthand and contributing to purchase decisions. Key barriers to broader adoption include the device's $3,499 starting price, which positions it as a premium product inaccessible to many consumers. Additionally, early comfort issues, such as weight and fit problems leading to headaches or eye strain, resulted in high initial return rates, estimated at 20-30% within the 14-day window. Growth accelerated following international expansion in mid-2024, with shipments in non-U.S. markets accounting for nearly 90% of volume in the latter half of the year and contributing to a 30% year-over-year increase in select quarters. By mid-2025, the visionOS App Store featured over 2,500 native applications, with download figures estimated in the millions based on developer reports. Usage metrics indicate an average of 4-6 hours per day among active enterprise and enthusiast users, highlighting engagement in productivity and entertainment scenarios despite overall market constraints. Despite these figures, the relatively small user base has presented ongoing challenges for third-party developers.
Early reports from 2024 indicated that many apps struggled to exceed 1,000 downloads, raising concerns about long-term viability and leading to some projects being abandoned or scaled back due to insufficient economic incentives. Developers have also reported persistent technical difficulties, including application crashes, eye tracking inaccuracies (such as drift), and the need for extensive performance optimization in spatial computing environments. Many have utilized the visionOS simulator in Xcode to develop and test applications without physical hardware, mitigating some access barriers posed by the device's high cost. These challenges have contributed to slower ecosystem growth and persistent concerns about developer engagement.

Influence on XR industry

The release of visionOS has spurred significant innovations across the extended reality (XR) landscape, particularly in mixed reality passthrough capabilities. Meta, in response to the high-fidelity color passthrough introduced in visionOS, accelerated enhancements to its Quest series, including improved mixed reality features in Quest 3 and subsequent models to better blend virtual and physical environments. This competitive pressure has elevated industry standards for environmental awareness in standalone headsets, making passthrough a core expectation rather than an optional add-on. visionOS has also influenced emerging XR platforms, notably Google's Android XR, which was developed as a direct counter to Apple's paradigm. Unveiled in late 2024 with partners like Samsung, whose Project Moohan headset runs the platform, Android XR incorporates lessons from visionOS's gesture-based interactions and spatial UI, while emphasizing cross-device compatibility to challenge Apple's integrated approach. Apple's advancements in visionOS have contributed to web-based XR standards through enhanced WebXR support, enabling more immersive spatial web experiences. In visionOS 2, WebXR became enabled by default in Safari, allowing developers to create 3D spatial content without native apps, while visionOS 26 introduced the <model> element for inline 3D models and custom spatial environments. These extensions push the spatial web toward broader interoperability, influencing how browsers handle XR on other platforms. The terminology of "spatial computing" gained prominence following visionOS's debut, shifting industry discourse from "mixed reality" or "XR" toward a focus on intuitive, environment-integrated experiences that prioritize productivity over immersion alone. This rebranding, championed by Apple, has permeated marketing and development strategies across the sector. Concurrently, XR funding saw a resurgence, with $1.2 billion invested in startups in 2024 and approximately $1.1 billion year-to-date in 2025, according to PitchBook data.
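On the web side, pages detect WebXR availability through the standard navigator.xr entry point before offering spatial content. A minimal sketch of such feature detection, written against a narrow interface so it can run outside a browser (the XRSystemLike interface and helper name are illustrative; only isSessionSupported is the real WebXR Device API call):

```typescript
// Narrow stand-in for the browser's XRSystem, so the helper is testable
// outside a browser. In a real page, pass `navigator.xr`.
interface XRSystemLike {
  isSessionSupported(mode: string): Promise<boolean>;
}

async function supportsImmersiveAR(xr?: XRSystemLike): Promise<boolean> {
  if (!xr) return false; // WebXR absent or disabled in this browser
  try {
    return await xr.isSessionSupported("immersive-ar");
  } catch {
    return false; // e.g. blocked by permissions policy
  }
}

// In a browser: supportsImmersiveAR((navigator as any).xr).then(...)
```

A page can use the result to show a "View in 3D" affordance only on XR-capable browsers, falling back to flat rendering elsewhere.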
In comparisons with competitors, visionOS stands out for its seamless integration with Apple's ecosystem but contrasts sharply with Meta's Horizon OS, which powers Quest devices with a more entertainment-oriented, controller-based interface and a broader app library from the Meta Quest Store. Against Microsoft's HoloLens 2, which targets industrial applications with holographic overlays, visionOS offers superior display resolution and eye tracking for consumer-grade mixed reality, though it lacks HoloLens's emphasis on enterprise collaboration tools. visionOS's closed ecosystem, reliant on Apple's own hardware and proprietary frameworks, prioritizes security and polish but limits hardware partnerships, unlike the open alternatives in Android XR or Horizon OS that support third-party manufacturers. visionOS's 2025 impacts include deepened collaborations, such as Unity's general availability support for visionOS development, enabling cross-platform porting of 3D assets and apps to visionOS without full rewrites. This has facilitated over 2,500 visionOS apps by late 2025, bridging Apple's platform with Unity's multi-device tools and accelerating XR development.
