Azure Kinect
| Developer | Microsoft |
|---|---|
| Released | March 2020 |
| Introductory price | $399.00 |
| Discontinued | October 2023 |
| Operating system | Windows, Linux |
| Camera | 12-megapixel RGB camera; 1-megapixel depth camera |
| Platform | Microsoft Azure |
| Predecessor | Kinect |
| Website | azure |
The Azure Kinect DK is a discontinued developer kit and PC peripheral that uses artificial intelligence sensors for computer vision and speech models and connects to the Microsoft Azure cloud.[1][2] It is the successor to the Microsoft Kinect line of sensors.
The kit includes a 12-megapixel RGB camera supplemented by a 1-megapixel depth camera for body tracking, a 360-degree seven-microphone array, and an orientation sensor.[3][4][5] The sensor is based on the depth sensor presented at the 2018 ISSCC.[6]
While the previous iterations of Microsoft's Kinect primarily focused on gaming, this device is targeted towards other markets such as logistics, robotics, health care, and retail.[4] With the kit, developers can create applications connected to Microsoft's cloud and AI technologies.[7] The Azure Kinect is used in volumetric capture workflows via software tools that connect multiple Azure Kinects into a single volumetric capture rig, allowing users to create interactive virtual reality experiences with human performances.[8][9]
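Rigs of this kind can be built on the open-source Azure Kinect Sensor SDK, whose device configuration exposes wired-synchronization settings for daisy-chained cameras. The C sketch below is illustrative only: it assumes two devices connected with 3.5 mm sync cables and that device index 0 is the master, and the mode choices and the 160 µs subordinate delay are example values rather than settings prescribed by any particular capture tool.
```c
// Minimal sketch: configuring two Azure Kinects for wired synchronization
// with the open-source Sensor SDK (libk4a). Error handling is abbreviated.
#include <k4a/k4a.h>
#include <stdio.h>

int main(void)
{
    uint32_t count = k4a_device_get_installed_count();
    if (count < 2)
    {
        printf("Need at least two devices for a sync rig, found %u\n", count);
        return 1;
    }

    k4a_device_t master = NULL, sub = NULL;
    if (K4A_FAILED(k4a_device_open(0, &master)) ||
        K4A_FAILED(k4a_device_open(1, &sub)))
    {
        printf("Failed to open devices\n");
        return 1;
    }

    k4a_device_configuration_t cfg = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    cfg.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    cfg.color_resolution = K4A_COLOR_RESOLUTION_1080P;
    cfg.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    cfg.camera_fps = K4A_FRAMES_PER_SECOND_30;
    cfg.synchronized_images_only = true;

    // Subordinate: triggered over the sync cable, with a small delay so the
    // two IR emitters do not illuminate the scene at the same instant
    // (160 us is a commonly used example offset; tune for your rig).
    k4a_device_configuration_t sub_cfg = cfg;
    sub_cfg.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;
    sub_cfg.subordinate_delay_off_master_usec = 160;

    k4a_device_configuration_t master_cfg = cfg;
    master_cfg.wired_sync_mode = K4A_WIRED_SYNC_MODE_MASTER;

    // Start the subordinate first so it is already waiting for the
    // master's sync pulse when the master begins capturing.
    if (K4A_FAILED(k4a_device_start_cameras(sub, &sub_cfg)) ||
        K4A_FAILED(k4a_device_start_cameras(master, &master_cfg)))
    {
        printf("Failed to start cameras\n");
        return 1;
    }

    /* ... grab k4a_capture_t frames from both devices and match them by
       device timestamp before handing them to a reconstruction pipeline ... */

    k4a_device_stop_cameras(sub);
    k4a_device_stop_cameras(master);
    k4a_device_close(sub);
    k4a_device_close(master);
    return 0;
}
```
Starting the subordinate before the master reflects the triggering direction: the subordinate waits for the master's sync pulse, so frames from the two devices can later be paired by timestamp.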
The Azure Kinect was announced on February 24, 2019, in Barcelona at the MWC.[10] It was released in the US in March 2020, and in the UK, Germany, and Japan in April 2020.[11]
Microsoft announced that the Azure Kinect hardware kit would be discontinued in October 2023 and referred users to third-party suppliers for spare parts.[12]
References
- ^ "Buy the Azure Kinect developer kit – Microsoft". Microsoft Store. Archived from the original on 2019-02-26. Retrieved 2019-02-25.
- ^ "Azure Kinect DK – Develop AI Models | Microsoft Azure". azure.microsoft.com. Archived from the original on 2019-03-23. Retrieved 2019-03-23.
- ^ Thorp-Lancaster, Dan (2019-07-11). "Azure Kinect developer kit hits general availability". Windows Central. Archived from the original on 2019-07-18. Retrieved 2019-07-18.
- ^ "Microsoft Lures Investors With Azure Kinect & Partner Updates". finance.yahoo.com. Archived from the original on 2019-07-18. Retrieved 2019-07-18.
- ^ Lardinois, Frederick (July 2019). "Microsoft's $399 Azure Kinect AI camera is now shipping in the US and China". TechCrunch. Archived from the original on 2023-03-13. Retrieved 2019-07-18.
- ^ mattzmsft. "Depth camera whitepaper - ISSCC 2018 - Mixed Reality". docs.microsoft.com. Archived from the original on 2019-05-10. Retrieved 2019-05-10.
- ^ Ranger, Steve (February 25, 2019). "What is Microsoft's Azure Kinect DK?". ZDNet. Archived from the original on 2020-03-07. Retrieved 2019-07-18.
- ^ Balabanian, Az; Legkov, Petr. "Volumetric Video, Depth Cams, and Filmmaking using Depthkit, with Alexander Porter - 081". Research VR Podcast - The Science & Design of Virtual Reality. Archived from the original on 2019-08-06. Retrieved 2019-08-06.
- ^ "Announcing Azure Kinect support in Depthkit!". Depthkit. 11 July 2019. Archived from the original on 6 August 2019. Retrieved 6 August 2019.
- ^ "Microsoft unveils next-generation HoloLens headset and $399 'Azure Kinect' camera for developers". GeekWire. February 24, 2019. Archived from the original on February 25, 2019. Retrieved February 25, 2019.
- ^ "Microsoft Store - Azure Kinect". Archived from the original on 2020-10-27. Retrieved 2020-12-06.
- ^ "Microsoft kills Kinect again". August 21, 2023.
Azure Kinect
Overview
Description
The Azure Kinect Developer Kit (DK) is a spatial computing developer kit designed to combine depth sensing, artificial intelligence capabilities, and integration with Azure cloud services, enabling the development of applications in computer vision and human-computer interaction.[1] It provides developers with tools to create AI-driven solutions that leverage spatial data for enhanced perception and interaction in real-world environments.[1] Priced at $399 USD at launch, the kit targets developers and commercial businesses building sophisticated AI models, particularly in fields such as robotics and healthcare, where it supports applications like patient monitoring and automated navigation systems.[2][1][7] Evolving from the original Kinect sensor, the Azure Kinect shifts focus to professional, non-gaming use cases, offering a compact form factor for enterprise-level AI development while integrating seamlessly with Azure services for scalable cloud-based processing.[8][1]
Key Features
The Azure Kinect Developer Kit (DK) features multi-modal sensing capabilities, combining depth, color, infrared, and audio capture in a single device to enable precise 3D spatial mapping and environmental understanding. This integration allows developers to acquire synchronized data from time-of-flight depth sensing for spatial reconstruction, RGB video for visual details, infrared for low-light operations, and a microphone array for audio localization, supporting applications in robotics, human-computer interaction, and augmented reality.[1]
AI acceleration is provided through specialized software development kits (SDKs) that leverage the device's sensor data for advanced processing, including real-time body tracking of multiple individuals with 3D joint estimation, object detection via custom models, and speech recognition using integrated audio streams. These capabilities utilize machine learning models run on the host system, drawing from the high-quality, low-latency inputs captured by the device to facilitate edge AI applications without relying solely on cloud resources.
High-fidelity data streams are a core strength, offering synchronized capture across sensors at up to 30 frames per second (FPS) for depth imaging and 4K resolution for RGB video, ensuring temporal alignment essential for dynamic scene analysis and multi-view fusion. This performance enables robust handling of complex environments, such as crowded spaces or fast-moving subjects, while minimizing latency in data acquisition.[9]
The modular design enhances extensibility for custom AI pipelines, with configurable sensor modes, open-source SDKs, and support for external synchronization, allowing developers to tailor workflows for specialized tasks like volumetric capture or hybrid edge-cloud processing. It also integrates seamlessly with Azure services for scalable AI deployment.
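As a concrete illustration of the configurable sensor modes and synchronized streams described above, the following sketch uses the open-source Azure Kinect Sensor SDK's C API to start the depth and color cameras and pull temporally aligned captures. The particular mode choices (2160p color, unbinned narrow-FOV depth, 30 FPS) are illustrative, not requirements.
```c
// Minimal sketch of the configure-and-capture flow with the Sensor SDK (C API).
#include <k4a/k4a.h>
#include <stdio.h>

int main(void)
{
    k4a_device_t device = NULL;
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
    {
        printf("No Azure Kinect device found\n");
        return 1;
    }

    // Configure the sensor modes; synchronized_images_only asks the SDK to
    // deliver only captures in which depth and color are temporally aligned.
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    config.color_resolution = K4A_COLOR_RESOLUTION_2160P;   // 3840x2160
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    config.synchronized_images_only = true;
    // For multi-device rigs, config.wired_sync_mode selects master/subordinate.

    if (K4A_FAILED(k4a_device_start_cameras(device, &config)))
    {
        printf("Failed to start cameras\n");
        k4a_device_close(device);
        return 1;
    }

    // Pull a handful of synchronized captures and report their sizes.
    for (int i = 0; i < 30; i++)
    {
        k4a_capture_t capture = NULL;
        if (k4a_device_get_capture(device, &capture, 1000) != K4A_WAIT_RESULT_SUCCEEDED)
            continue;

        k4a_image_t depth = k4a_capture_get_depth_image(capture);
        k4a_image_t color = k4a_capture_get_color_image(capture);
        if (depth && color)
        {
            printf("depth %dx%d, color %dx%d\n",
                   k4a_image_get_width_pixels(depth), k4a_image_get_height_pixels(depth),
                   k4a_image_get_width_pixels(color), k4a_image_get_height_pixels(color));
        }
        if (depth) k4a_image_release(depth);
        if (color) k4a_image_release(color);
        k4a_capture_release(capture);
    }

    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```
Requesting only synchronized captures simplifies downstream fusion: every capture handed to the application already pairs a depth frame with its matching color frame.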
Development and Release
Announcement and Development
The development of the Azure Kinect originated as an evolution of the Kinect v2 sensor technology, transitioning Microsoft's focus from consumer gaming peripherals to enterprise-grade AI and computer vision tools.[10] This shift emphasized integration with cloud-based AI services, achieved through close collaboration between Microsoft's hardware engineering teams and the Azure cloud platform group to enable seamless data processing and analytics at the edge.[11]
A significant early milestone came in February 2018, when Microsoft researchers presented a prototype time-of-flight (ToF) depth sensor at the IEEE International Solid-State Circuits Conference (ISSCC), highlighting its high-resolution, low-noise capabilities for real-time 3D mapping.[12] This technology, developed in partnership with semiconductor experts, laid the groundwork for the device's advanced spatial sensing. Building on this, Microsoft announced Project Kinect for Azure in May 2018 during its Build developer conference, unveiling a developer-oriented sensor package that combined depth imaging with onboard compute and connectivity for AI prototyping.[11]
The Azure Kinect was formally announced on February 24, 2019, at the Mobile World Congress (MWC) in Barcelona, Spain, where Microsoft showcased it alongside the HoloLens 2 as part of a broader push into mixed reality and intelligent devices.[13] Microsoft's strategic vision framed the device as a foundational tool for spatial computing, designed to empower developers in creating AI-driven applications for industries like healthcare, retail, and robotics, extending beyond the original Kinect's entertainment roots.[1]
Launch and Availability
The Azure Kinect Developer Kit (DK) reached general availability in the United States and China on July 15, 2019, following preorders that began in February 2019 to enable early developer access.[2][14] Availability expanded to the United Kingdom, Germany, and Japan in April 2020, driven by strong initial market interest from developers building AI-driven computer vision and speech models.[15][1] The kit was positioned exclusively for developers, with no consumer version produced, and included the Azure Kinect SDK for seamless integration and rapid prototyping of applications.[1] It was offered for direct purchase at $399 through the Microsoft Store and Azure portal, with global distribution later supported by authorized partners to reach additional regions.[1][16] Early reception highlighted its appeal for enterprise use cases in areas like healthcare and manufacturing, contributing to quick adoption among AI researchers and integrators.[15]
Discontinuation
In August 2023, Microsoft announced the end of production for the Azure Kinect Developer Kit through an official statement on its Mixed Reality blog, marking the conclusion of direct hardware manufacturing by the company.[17] The hardware discontinuation took effect in October 2023, after which the device was available for purchase only until existing stocks were depleted.[17] This decision reflected Microsoft's strategic shift toward a partner ecosystem for hardware production, enabling third-party manufacturers to license and build upon the Kinect's time-of-flight depth-sensing technology for broader customization and availability.[17] By focusing on software intellectual property licensing rather than hardware sales, Microsoft aimed to sustain the technology's ecosystem without maintaining low-volume direct production.[18]
For ongoing support, the Azure Kinect SDK adhered to Microsoft's Modern Lifecycle Policy, providing security and reliability updates until its retirement on August 16, 2024, with no new features developed thereafter.[19] Microsoft recommended sourcing spare parts from third-party suppliers and ensured continued access to the SDK and related tools via GitHub for existing users to maintain their deployments.[17]
Hardware
Sensors and Cameras
The Azure Kinect DK incorporates several advanced sensors designed to capture multimodal data for spatial computing and AI applications. These include a depth camera, an RGB camera, an infrared camera, a microphone array, and an inertial measurement unit (IMU), each contributing to comprehensive environmental perception without relying on external lighting for core functions.[9]
The depth camera employs time-of-flight (ToF) technology to generate 3D depth maps, illuminating the scene with modulated near-infrared light from an integrated emitter and measuring the phase shift of the reflected light to determine distances up to 5.46 meters. This active sensing approach enables robust 3D mapping in low-light conditions by calculating depth through indirect ToF principles, such as amplitude-modulated continuous-wave detection, which supports applications like object reconstruction and gesture recognition.[20][9]
Complementing the depth data, the RGB camera provides high-resolution color imaging that can be aligned and overlaid onto the depth maps, adding visual texture and detail for enhanced scene understanding in computer vision tasks. This integration allows developers to fuse color information with 3D geometry, facilitating applications such as augmented reality overlays and facial analysis.[9]
The infrared (IR) camera, integrated within the depth sensing system, operates in narrow-angle and wide-angle modes to capture IR imagery, enabling enhanced depth perception in environments with varying lighting by providing raw IR data that can be used passively without the ToF emitter or actively for depth computation. These modes allow flexibility in field-of-view selection to balance detail and coverage for tasks like motion tracking in cluttered spaces.[9]
The microphone array consists of a seven-membrane circular configuration that supports 360-degree audio capture, incorporating beamforming techniques to isolate and enhance voice signals from specific directions while suppressing noise. This setup enables spatial audio processing for applications like voice command recognition and acoustic source localization.[9]
The IMU includes an accelerometer and a gyroscope to track device orientation and motion, providing real-time data on linear acceleration and angular velocity for stabilizing sensor outputs and compensating for device movement in dynamic scenarios.[9]
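The depth-to-color alignment mentioned above is exposed in the Sensor SDK through a transformation handle built from the device's factory calibration. The helper below is a minimal sketch, assuming a device that has already been opened and started; the function name depth_registered_to_color is hypothetical.
```c
// Illustrative sketch: reprojecting the depth map into the color camera's
// geometry with the Sensor SDK's transformation API.
#include <k4a/k4a.h>
#include <stdint.h>

// Returns a DEPTH16 image at the color camera's resolution, where each pixel
// holds the depth in millimetres behind the corresponding RGB pixel, or NULL
// on failure. The caller releases the returned image.
static k4a_image_t depth_registered_to_color(k4a_transformation_t xform,
                                             k4a_capture_t capture,
                                             int color_w, int color_h)
{
    k4a_image_t depth = k4a_capture_get_depth_image(capture);
    if (depth == NULL)
        return NULL;

    k4a_image_t registered = NULL;
    if (K4A_FAILED(k4a_image_create(K4A_IMAGE_FORMAT_DEPTH16,
                                    color_w, color_h,
                                    color_w * (int)sizeof(uint16_t),
                                    &registered)))
    {
        k4a_image_release(depth);
        return NULL;
    }

    if (K4A_FAILED(k4a_transformation_depth_image_to_color_camera(xform, depth, registered)))
    {
        k4a_image_release(registered);
        registered = NULL;
    }
    k4a_image_release(depth);
    return registered;
}

/* The transformation handle comes from the device's factory calibration:
 *
 *   k4a_calibration_t calib;
 *   k4a_device_get_calibration(device, config.depth_mode,
 *                              config.color_resolution, &calib);
 *   k4a_transformation_t xform = k4a_transformation_create(&calib);
 *   ...
 *   k4a_transformation_destroy(xform);
 */
```
Projecting in this direction (depth into the color camera) keeps the full RGB resolution; the SDK also provides the inverse mapping, k4a_transformation_color_image_to_depth_camera, when a depth-resolution overlay is preferred.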
Technical Specifications
The Azure Kinect DK features a time-of-flight depth camera with a 1-megapixel resolution, supporting frame rates up to 30 FPS across various modes.[21] It operates in narrow field-of-view (NFOV) and wide field-of-view (WFOV) configurations, with depth ranges from 0.25 m to 5.46 m depending on the mode; for instance, the NFOV 2x2 binned mode covers 0.5–5.46 m, while the WFOV 2x2 binned mode spans 0.25–2.88 m.[21] Depth accuracy includes a systematic error of less than 11 mm + 0.1% of distance and a random error of ≤17 mm.[21]
The RGB camera provides up to 12 MP resolution at 4096×3072 pixels, with support for 3840×2160 at 30 FPS, and a field of view of 90° horizontal by 59° vertical in 16:9 aspect ratio.[21] It supports HDR and multiple formats including MJPEG, YUY2, and NV12, enabling high-quality color imaging aligned with depth data.[21]
The infrared (IR) camera, integrated with the depth sensor, delivers 1 MP resolution at 1024×1024 pixels and up to 30 FPS in passive IR mode, with FOV options matching the depth camera: narrow at 75°×65° and wide at 120°×120°.[21]
| Component | Specification |
|---|---|
| Microphones | 7-channel circular array, USB Audio Class 2.0 compliant |
| Sensitivity | -22 dBFS at 94 dB SPL, 1 kHz |
| Signal-to-Noise Ratio (SNR) | >65 dB |
| Acoustic Overload Point | 116 dB SPL |
| Sampling Rate | 48 kHz (16-bit) |
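To make the mode/range trade-off above concrete, the following sketch maps a required working range onto one of the binned depth modes listed in the specification text. It is a simplified illustration: real mode selection also weighs resolution, frame rate, and field-of-view needs, and the helper name pick_depth_mode is hypothetical.
```c
// Small illustrative helper that chooses a depth mode from a required working
// range. The range and FOV figures in the comments are those listed above.
#include <k4a/k4a.h>
#include <stdbool.h>

static k4a_depth_mode_t pick_depth_mode(float nearest_m, float farthest_m, bool need_wide_fov)
{
    if (need_wide_fov)
    {
        // WFOV 2x2 binned: roughly 0.25 m to 2.88 m, 120° x 120° field of view.
        if (nearest_m >= 0.25f && farthest_m <= 2.88f)
            return K4A_DEPTH_MODE_WFOV_2X2BINNED;
    }
    // NFOV 2x2 binned: roughly 0.5 m to 5.46 m, 75° x 65° field of view.
    if (nearest_m >= 0.5f && farthest_m <= 5.46f)
        return K4A_DEPTH_MODE_NFOV_2X2BINNED;

    // No single listed mode covers the requested range.
    return K4A_DEPTH_MODE_OFF;
}
```
For example, pick_depth_mode(0.5f, 4.0f, false) would select the NFOV 2x2 binned mode, the only mode listed above whose range extends past 4 m; the returned value can be assigned directly to the depth_mode field of a k4a_device_configuration_t.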
