Event camera
from Wikipedia
A Prophesee event camera.

An event camera, also known as a neuromorphic camera,[1] silicon retina,[2] or dynamic vision sensor,[3] is an imaging sensor that responds to local changes in brightness. Event cameras do not capture images using a shutter as conventional (frame) cameras do. Instead, each pixel inside an event camera operates independently and asynchronously, reporting changes in brightness as they occur, and staying silent otherwise.

Functional description


Event camera pixels independently respond to changes in brightness as they occur.[4] Each pixel stores a reference brightness level, and continuously compares it to the current brightness level. If the difference in brightness exceeds a threshold, that pixel resets its reference level and generates an event: a discrete packet that contains the pixel address and timestamp. Events may also contain the polarity (increase or decrease) of a brightness change, or an instantaneous measurement of the illumination level,[5] depending on the specific sensor model. Thus, event cameras output an asynchronous stream of events triggered by changes in scene illumination.
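The per-pixel behaviour described above can be summarized with a short simulation. The sketch below emulates event generation from a sequence of conventional frames under a simple log-intensity threshold model; the function name, parameters, and frame-based approximation are illustrative only, not a description of any particular sensor's circuitry.

```python
import numpy as np

def simulate_events(frames, timestamps, threshold=0.2):
    """Emit (x, y, t, polarity) events from a sequence of intensity frames.

    Simplified model: each pixel keeps a reference log-intensity and fires an
    event when the current log-intensity differs from it by more than
    `threshold`, then resets its reference. Real sensors do this asynchronously
    in continuous time; stepping through frames only approximates that.
    """
    eps = 1e-6                                    # avoid log(0)
    ref = np.log(frames[0].astype(float) + eps)   # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        cur = np.log(frame.astype(float) + eps)
        diff = cur - ref
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((int(x), int(y), t, polarity))
            ref[y, x] = cur[y, x]                 # reset reference after firing
    return events
```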

Comparison of the data produced by an event camera and a conventional camera.

Event cameras typically report timestamps with microsecond temporal resolution, a 120 dB dynamic range, and less under/overexposure and motion blur[4][6] than frame cameras. This allows them to track object and camera movement (optical flow) more accurately. They yield grey-scale information. Initially (2014), resolution was limited to 100 pixels.[citation needed] A later entry reached 640×480 resolution in 2019.[citation needed] Because individual pixels fire independently, event cameras appear well suited to integration with asynchronous computing architectures such as neuromorphic computing. Pixel independence also allows these cameras to cope with scenes containing both brightly and dimly lit regions without having to average across them.[7] Note that, while the camera reports events with microsecond resolution, the actual temporal resolution (or, equivalently, the sensing bandwidth) is on the order of tens of microseconds to a few milliseconds, depending on signal contrast, lighting conditions, and sensor design.[8]

Typical image sensor characteristics

| Sensor | Dynamic range (dB) | Equivalent framerate (fps) | Spatial resolution (MP) |
|---|---|---|---|
| Human eye | 30–40 | 200–300* | – |
| High-end DSLR camera (Nikon D850) | 44.6[9] | 120 | 2–8 |
| Ultrahigh-speed camera (Phantom v2640)[10] | 64 | 12,500 | 0.3–4 |
| Event camera[11] | 120 | 50,000–300,000** | 0.1–1 |

* Indicates human perception temporal resolution, including cognitive processing time. ** Refers to change recognition rates, which vary according to signal and sensor model.

Types


Temporal contrast sensors (such as DVS[4] (Dynamic Vision Sensor), or sDVS[12] (sensitive-DVS)) produce events that indicate polarity (increase or decrease in brightness), while temporal image sensors[5] indicate the instantaneous intensity with each event. The DAVIS[13] (Dynamic and Active-pixel Vision Sensor) contains a global shutter active pixel sensor (APS) in addition to the dynamic vision sensor (DVS) that shares the same photosensor array. Thus, it has the ability to produce image frames alongside events. The CSDVS (Center Surround Dynamic Vision Sensor) adds a resistive center surround network to connect adjacent DVS pixels.[14][15] This center surround implements a spatial high-pass filter to further reduce output redundancy. Many event cameras additionally carry an inertial measurement unit (IMU).

Retinomorphic sensors

Left: schematic cross-sectional diagram of photosensitive capacitor. Center: circuit diagram of retinomorphic sensor, with photosensitive capacitor at top. Right: Expected transient response of retinomorphic sensor to application of constant illumination.

Another class of event sensors are so-called retinomorphic sensors. While the term retinomorphic has been used to describe event sensors generally,[16][17] in 2020 it was adopted as the name for a specific sensor design based on a resistor and photosensitive capacitor in series.[18] These capacitors are distinct from photocapacitors, which are used to store solar energy,[19] and are instead designed to change capacitance under illumination. They (dis)charge slightly when the capacitance is changed, but otherwise remain in equilibrium. When a photosensitive capacitor is placed in series with a resistor, and an input voltage is applied across the circuit, the result is a sensor that outputs a voltage when the light intensity changes, but otherwise does not.
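A rough small-signal sketch shows why such a circuit responds only to changes in illumination. This is an illustrative derivation under a quasi-static assumption, not taken from the cited papers: a constant bias V_in is applied across a resistor R in series with a photosensitive capacitor C(t), and the output is taken across the resistor.

```latex
% Quasi-static assumption: the capacitor charge tracks its equilibrium value.
\[
  Q(t) \approx C(t)\,V_{\mathrm{in}}, \qquad
  i(t) = \frac{dQ}{dt} \approx V_{\mathrm{in}}\,\frac{dC}{dt}, \qquad
  V_{\mathrm{out}}(t) = i(t)\,R \approx R\,V_{\mathrm{in}}\,\frac{dC}{dt}.
\]
% The output voltage is nonzero only while illumination (and hence C) changes,
% matching the transient response sketched in the figure above.
```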

Unlike other event sensors, which typically combine a photodiode with additional circuit elements, these sensors produce the signal inherently; a single device thus plays the role of the small circuit found in other event cameras. Retinomorphic sensors have to date[as of?] only been studied in a research environment.[20][21][22][23]

Algorithms

Night run reconstruction
A pedestrian runs in front of car headlights at night. Left: an image taken with a conventional camera, exhibiting severe motion blur and underexposure. Right: an image reconstructed by combining the left image with events from an event camera.[24]

Image reconstruction


Image reconstruction from events has the potential to create images and video with high dynamic range, high temporal resolution, and reduced motion blur. Image reconstruction can be achieved using temporal smoothing, e.g. a high-pass or complementary filter.[24] Alternative methods include optimization[25] and gradient estimation[26] followed by Poisson integration. It has also been shown that the image of a static scene can be recovered from noise events alone, by analyzing their correlation with scene brightness.[27]
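As a concrete illustration of the temporal-smoothing approach, the sketch below integrates events per pixel with an exponential decay, i.e. a continuous-time high-pass filter over the event stream. The function and parameter names are illustrative rather than taken from any cited implementation.

```python
import numpy as np

def reconstruct_highpass(events, height, width, contrast=0.2, cutoff=5.0):
    """Per-pixel leaky integration of events into a log-intensity image.

    Each event adds +/- `contrast` to the pixel's log-intensity estimate, while
    exponential decay toward zero (rate `cutoff`, in 1/s) acts as a temporal
    high-pass filter that forgets stale information. Illustrative sketch only.
    """
    log_img = np.zeros((height, width))
    last_t = np.zeros((height, width))
    for x, y, t, polarity in events:           # events sorted by timestamp t (seconds)
        dt = t - last_t[y, x]
        log_img[y, x] *= np.exp(-cutoff * dt)  # decay since the pixel's last event
        log_img[y, x] += contrast * polarity   # integrate the new event
        last_t[y, x] = t
    return np.exp(log_img)                     # back to a relative linear intensity
```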

Spatial convolutions


The concept of spatial event-driven convolution was postulated in 1999[28] (before the DVS), and was later generalized during the EU project CAVIAR[29] (during which the DVS was invented) by projecting, event by event, an arbitrary convolution kernel around each event coordinate in an array of integrate-and-fire pixels.[30] The extension to multi-kernel event-driven convolutions[31] allows for event-driven deep convolutional neural networks.[32]
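A minimal sketch of the event-driven convolution idea: each incoming event stamps a kernel around its coordinate into an array of integrate-and-fire accumulators, which emit output events when they cross a threshold. The names and the threshold scheme below are illustrative, not the exact behaviour of the CAVIAR hardware.

```python
import numpy as np

def event_driven_convolution(events, kernel, height, width, fire_threshold=1.0):
    """Event-by-event convolution into integrate-and-fire accumulators."""
    kh, kw = kernel.shape
    ry, rx = kh // 2, kw // 2
    state = np.zeros((height, width))          # integrate-and-fire membrane values
    out_events = []
    for x, y, t, polarity in events:
        # Stamp the kernel around the event coordinate, clipped to the borders.
        y0, y1 = max(y - ry, 0), min(y + ry + 1, height)
        x0, x1 = max(x - rx, 0), min(x + rx + 1, width)
        ky0, kx0 = y0 - (y - ry), x0 - (x - rx)
        state[y0:y1, x0:x1] += polarity * kernel[ky0:ky0 + (y1 - y0),
                                                 kx0:kx0 + (x1 - x0)]
        # Any accumulator in the stamped window that crosses the threshold
        # fires an output event and resets.
        window = state[y0:y1, x0:x1]
        for fy, fx in zip(*np.nonzero(np.abs(window) >= fire_threshold)):
            gy, gx = y0 + fy, x0 + fx
            out_events.append((int(gx), int(gy), t, int(np.sign(state[gy, gx]))))
            state[gy, gx] = 0.0
    return out_events
```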

Motion detection and tracking


Segmentation and detection of moving objects viewed by an event camera can seem trivial, since change detection is effectively performed by the sensor on-chip. However, these tasks are difficult, because events carry little information[33] and do not contain useful visual features like texture and color.[34] They become even more challenging with a moving camera,[33] because events are then triggered everywhere on the image plane, produced both by moving objects and by the static scene (whose apparent motion is induced by the camera's ego-motion). Some recent[when?] approaches to this problem incorporate motion-compensation models[35][36] and traditional clustering algorithms.[37][38][34][39]
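One way to make the motion-compensation idea concrete is contrast maximization: warp events to a reference time under a candidate image-plane velocity and score how sharply the warped events line up. The sketch below uses a simple global-velocity model and a grid search; the function names and the variance score are illustrative, not any specific paper's implementation.

```python
import numpy as np

def warped_image_contrast(events, velocity, t_ref, height, width):
    """Variance of the image of events warped to time t_ref under `velocity`.

    Each event (x, y, t, p) is shifted to where it would have been at t_ref if
    the scene moved with velocity = (vx, vy) pixels/second; when the candidate
    motion is correct, edges line up and the event-count image is sharper.
    """
    vx, vy = velocity
    img = np.zeros((height, width))
    for x, y, t, _ in events:
        wx = int(round(x - vx * (t - t_ref)))
        wy = int(round(y - vy * (t - t_ref)))
        if 0 <= wx < width and 0 <= wy < height:
            img[wy, wx] += 1.0
    return img.var()

def estimate_velocity(events, t_ref, height, width, candidates):
    """Coarse grid search: pick the candidate velocity with the sharpest warp."""
    return max(candidates,
               key=lambda v: warped_image_contrast(events, v, t_ref, height, width))
```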

Potential applications


Potential applications include most tasks classically handled by conventional cameras, with an emphasis on machine vision tasks such as object recognition, autonomous vehicles, and robotics.[22] The US military is[as of?] considering infrared and other event cameras because of their lower power consumption and reduced heat generation.[7]

Given these advantages over conventional image sensors, event cameras are considered well suited to applications that require low power consumption and low latency, or where it is difficult to stabilize the camera's line of sight. These include the aforementioned autonomous systems, as well as space imaging, security, defense, and industrial monitoring. Research into color sensing with event cameras is[when?] underway,[40] but event cameras are not yet[when?] practical for applications that require color sensing.

from Grokipedia
An event camera, also known as a dynamic vision sensor (DVS), is a neuromorphic imaging device that asynchronously captures per-pixel changes in logarithmic brightness as discrete events, rather than producing full intensity frames at fixed time intervals like conventional cameras. Each event encodes the pixel coordinates (x, y), a precise timestamp t (with microsecond resolution), and a polarity p indicating whether the change represents an increase or decrease in brightness, triggered only when the relative intensity change exceeds a configurable threshold (typically 10–50%). This bio-inspired design mimics the asynchronous signaling of ganglion cells in biological vision systems, resulting in sparse, data-efficient output that focuses on scene dynamics such as moving edges.

Event cameras provide key advantages over frame-based sensors, including a dynamic range greater than 120 dB (compared to ~60 dB for standard cameras), sub-millisecond latency (as low as 15 μs in early designs), temporal resolution on the order of microseconds, and elimination of motion blur due to their event-driven nature. They also have low power consumption, often around 10 mW at the die level and under 100 mW for full systems, making them well suited to resource-constrained embedded applications. These characteristics enable robust performance in challenging conditions, such as high-speed motion, extreme lighting variations, and low-light environments, where traditional cameras suffer from saturation, blurring, or high data rates.

The foundational principles of event cameras emerged from neuromorphic engineering in the late 1980s, with early silicon retinas developed by Mead and Mahowald at Caltech using address-event representation (AER) protocols to transmit asynchronous data. The first practical implementation, the 128×128 DVS, was introduced in 2008 by Patrick Lichtsteiner, Christoph Posch, and Tobi Delbrück at ETH Zurich and the University of Zurich's Institute of Neuroinformatics, achieving 120 dB dynamic range and 15 μs latency through independent per-pixel change detection circuits. Subsequent advancements have included higher resolutions (up to HD and beyond), integrated processing, and commercial products from firms such as iniVation, Prophesee, and others, fostering growth in event-based algorithms. As of 2025, the event camera market is projected to grow significantly, reaching USD 6.19 billion by 2032, driven by automotive and other applications.

Applications of event cameras span robotics, where they enable low-latency tasks such as simultaneous localization and mapping (SLAM) and agile drone navigation; computer vision challenges such as high-dynamic-range reconstruction and optical flow estimation; and emerging fields like neuromorphic computing for efficient edge AI. Research has produced extensive datasets (e.g., DDD17, MVSEC) and software frameworks (e.g., jAER, libcaer) to support development, with ongoing efforts addressing challenges like event noise and the need for specialized models.

Introduction

Definition and Basic Principles

An event camera is a neuromorphic imaging sensor that asynchronously records per-pixel brightness changes rather than capturing full frames at fixed intervals. Unlike traditional cameras, it outputs a stream of discrete events triggered only by significant local intensity variations, enabling high temporal resolution and low latency in dynamic scenes. The basic principles of event cameras rely on pixel-independent operation, where each pixel continuously monitors its own logarithmic intensity and generates an event independently when a brightness change exceeds a predefined threshold. This threshold is typically parameterized by a contrast sensitivity value $C$, such that an event is triggered if $\left| \log\left( \frac{I(t)}{I(t_0)} \right) \right| > C$, where $I(t)$ is the current pixel intensity and $I(t_0)$ is the intensity at the time of the pixel's previous event.