GStreamer
| GStreamer | |
|---|---|
| A simple pipeline with gst-launch | |
| Developer | GStreamer Team |
| Initial release | 11 January 2001[1] |
| Written in | C[2] |
| Operating system | BSDs, OpenSolaris, Linux, Android, macOS, iOS, Windows, OS/400 |
| Type | Multimedia framework |
| License | LGPL-2.1-or-later[3] |
| Website | gstreamer.freedesktop.org |
GStreamer is a pipeline-based multimedia framework that links together a wide variety of media processing systems to complete complex workflows. For instance, GStreamer can be used to build a system that reads files in one format, processes them, and exports them in another. The formats and processes can be changed in a plug and play fashion.
GStreamer supports a wide variety of media-handling components, including simple audio playback, audio and video playback, recording, streaming and editing. The pipeline design serves as a base to create many types of multimedia applications such as video editors, transcoders, streaming media broadcasters and media players.
It is designed to work on a variety of operating systems, e.g. the BSDs, OpenSolaris, Linux, Android, macOS, iOS, Windows, OS/400.
GStreamer is free and open-source software subject to the terms of the LGPL-2.1-or-later[3] and is hosted at freedesktop.org.
Distribution and adoption
The GNOME desktop environment, a heavy user of GStreamer, has included it since GNOME version 2.2 and encourages GNOME and GTK applications to use it. Other projects also use or support it, such as the Phonon media framework and the Songbird media player. It is also used in the WebKit browser engine.[4]
GStreamer also operates in embedded devices like the Jolla Phone, the Palm Pre,[5] Tizen and the Nokia 770, N800, N810, N900 and N9 Internet Tablets running the Maemo operating system.
In addition to source code releases, the GStreamer project provides binary builds for Android, iOS, OSX and Windows.[6]
The LIGO Laboratory makes use of GStreamer to simulate and analyze gravitational wave data; the GStreamer interface is called GstLAL.[7]
Software architecture
GStreamer is written in the C programming language with the type system based on GObject and the GLib 2.0 object model.
Language bindings
A library written in one programming language may be used in another language if bindings are written; GStreamer has a range of bindings for various languages such as Go, Python, Rust, Vala, C++, Perl, GNU Guile, C# and Ruby.
Overview
GStreamer processes media by connecting a number of processing elements into a pipeline. Each element is provided by a plug-in. Elements can be grouped into bins, which can be further aggregated, thus forming a hierarchical graph. Such a graph is an example of a filter graph.
Elements communicate by means of pads. A source pad on one element can be connected to a sink pad on another. When the pipeline is in the playing state, data buffers flow from the source pad to the sink pad. Pads negotiate the kind of data that will be sent using capabilities.
As an example, consider playing an MP3 file using GStreamer. The file source reads an MP3 file from a computer's hard drive and sends it to the MP3 decoder. The decoder decodes the file data and converts it into PCM samples, which then pass to the sound driver. The sound driver sends the PCM sound samples to the computer's speakers.
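The dataflow just described can be sketched in plain Python. This is a conceptual model only: the real API is C (GstElement, GstPad, and so on), and the class, method, and element names below are invented for illustration.

```python
# Conceptual sketch of GStreamer's pipeline model in plain Python.
# Names here are illustrative, not the real GStreamer API.

class Element:
    """A processing stage: receives a buffer, returns a transformed one."""
    def __init__(self, name, transform=lambda buf: buf):
        self.name = name
        self.transform = transform
        self.downstream = None  # element linked to our source pad

    def link(self, other):
        """Connect this element's source pad to the other's sink pad."""
        self.downstream = other
        return other

    def push(self, buf):
        """Push a buffer out of the source pad into the next sink pad."""
        out = self.transform(buf)
        return self.downstream.push(out) if self.downstream else out

# Mirror the MP3 example: file source -> decoder -> audio sink.
filesrc = Element("filesrc")                                     # reads encoded bytes
decoder = Element("mp3dec", lambda b: b.replace("mp3", "pcm"))   # decode to PCM
sink    = Element("audiosink", lambda b: f"played:{b}")          # hand PCM to driver

filesrc.link(decoder).link(sink)
print(filesrc.push("mp3-frame"))  # -> played:pcm-frame
```

In the real framework each link also involves caps negotiation between the two pads; this sketch only models the buffer flow.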
Plug-ins
GStreamer uses a plug-in architecture, with most of GStreamer's functionality implemented as shared libraries.[8] GStreamer's base functionality contains functions for registering and loading plug-ins and for providing the fundamentals of all classes in the form of base classes. Plug-in libraries are dynamically loaded to support a wide spectrum of codecs, container formats, input/output drivers and effects.
Plug-ins can be installed semi-automatically when they are first needed. For that purpose distributions can register a backend that resolves feature-descriptions to package-names.
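Such a backend is, at its core, a lookup from feature descriptions to installable packages. The Python sketch below illustrates the idea; the mapping and function name are invented for this example, not an actual distribution registry.

```python
# Hypothetical sketch of the feature-description -> package-name lookup
# a distribution might register for semi-automatic plug-in installation.
# The mapping below is invented for illustration.

PACKAGE_MAP = {
    "decoder-audio/mpeg": "gstreamer1.0-plugins-ugly",
    "decoder-video/x-vp9": "gstreamer1.0-plugins-good",
}

def resolve_missing_plugin(feature_description):
    """Return the package providing a feature, or None if unknown."""
    return PACKAGE_MAP.get(feature_description)

print(resolve_missing_plugin("decoder-audio/mpeg"))  # gstreamer1.0-plugins-ugly
```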
Since version 0.9, the plug-ins come grouped into three sets (named after the film The Good, the Bad and the Ugly).[9]
| Plug-in set name | Description |
|---|---|
| Good | This package contains the GStreamer plug-ins from the "good" set, a set of high quality plug-ins under the LGPL license.[10] |
| Bad | GStreamer Bad Plug-ins comprises a set of plug-ins not up-to-par compared to the rest. They might closely approach good-quality plug-ins, but they lack something: perhaps a good code review, some documentation, a set of tests, a real live maintainer, or some actual wide use.[11] |
| Ugly | This package contains plug-ins from the "ugly" set, a set of good-quality plug-ins that might pose distribution problems.[12] |
Individual distributions may further sub-classify these plug-ins: for example Ubuntu groups the "bad" and "ugly" sets into the "Universe" or the "Multiverse" components.
In addition, there is a GStreamer FFmpeg plug-in (called gst-libav for historic reasons[13]) that extends the number of supported media formats.
Video acceleration
Various SIP blocks can perform the computations needed to decode certain video codecs, such as PureVideo, UVD, QuickSync Video, TI Ducati and more. Such hardware must be supported by the device driver, which in turn provides one or more interfaces, like VDPAU, VAAPI, Distributed Codec Engine or DXVA, through which end-user software like MPlayer can access the hardware and offload computation to it.
- It is possible to use Video Coding Engine with GStreamer through the OpenMAX IL wrapper plugin gst-omx.[14] This is for example possible on the Raspberry Pi.[15]
- The SIP core present on some Texas Instruments SoCs is also accessible through GStreamer: gst-dmai, gst-openmax, gst-dsp.[16]
- VDPAU and VAAPI are supported with GNOME Videos >= 2.28.0 and GStreamer >= 0.10.26 since 2010.[17]
- Broadcom Crystal HD is supported.[18]
Media formats
The Good, Bad and Ugly GStreamer plugins mentioned earlier provide, alongside processing elements/filters of all kinds, support for a wide variety of file formats, protocols and multimedia codecs. In addition to those, support for more than a hundred compression formats (including MPEG-1, MPEG-2, MPEG-4, H.261, H.263, H.264, RealVideo, MP3, WMV, etc.[19]) is transparently provided through the gst-libav plug-in.
History and development
Early days
Erik Walthinsen founded the GStreamer project in 1999. Many of its core design ideas came from a research project at the Oregon Graduate Institute.[20] Wim Taymans joined the project soon thereafter and greatly expanded on many aspects of the system. Many other software developers have contributed since then.
The first major release was 0.1.0, announced on 11 January 2001.[1] Not long after, GStreamer picked up its first commercial backer, RidgeRun, which hired Erik Walthinsen towards the end of January 2001 to develop methods for embedding GStreamer in smaller (cell-phone-class) devices. Another RidgeRun employee, Brock A. Frazier, designed the GStreamer logo. RidgeRun later struggled financially and had to lay off its staff, including Walthinsen, but GStreamer progress was mostly unaffected.
The project released a series of major releases, with 0.2.0 coming out in July 2001, 0.4.0 in September 2002, and 0.8.0 in March 2004. During that period the project also changed its versioning strategy: while the first releases were simply new versions, the middle number later came to signify a release series, so the project released strings of 0.6.x and 0.8.x releases that were meant to stay binary compatible within each series. Erik Walthinsen more or less left GStreamer development behind during this time, focusing on other ventures.
Throughout these release series, the project faced difficulties. The releases were not very popular in the Linux community, mostly because of stability issues and a serious lack of features compared to competing projects like Xine, MPlayer, and VLC. The project also suffered from a lack of leadership, as Wim Taymans, the project lead since Erik Walthinsen's departure, had largely stopped participating.
The 0.10 series
In 2004 a new company, Fluendo, was founded, which wanted to use GStreamer to write a streaming server, Flumotion, and to provide multimedia solutions based on GStreamer. During this time Fluendo hired most of the core developers, including Wim Taymans, and attracted the support of companies such as Nokia and Intel to bring GStreamer to a professional level and drive community adoption.
With Wim Taymans back at the helm, the core of GStreamer was redesigned into the 0.10.x series, which had its first release (0.10.0) in December 2005.[21] The series maintained API and ABI compatibility throughout.
With a new stable core in place, GStreamer gained in popularity in 2006, being used by media players including Totem, Rhythmbox and Banshee with many more to follow. It was also adopted by corporations such as Nokia, Motorola, Texas Instruments, Freescale, Tandberg, and Intel.
In 2007, most of the core GStreamer developers left Fluendo, including GStreamer maintainer Wim Taymans who went on to co-found Collabora Multimedia together with other GStreamer veterans, while others joined Sun Microsystems, Oblong Industries, and Songbird.
Between June 2012 and August 2014, GStreamer 0.10 was also distributed by Collabora and Fluendo as a multiplatform SDK,[22] on the third-party gstreamer.com website (rather than gstreamer.freedesktop.org, home of the upstream community project). The goal was to provide application developers with an SDK that would be functionally identical on Windows, Mac OS X, iOS, and Android. The SDK initiative aimed to facilitate commercial adoption of GStreamer by providing a standardized entry point to developing multimedia applications without needing to build the entire platform oneself. Users of the SDK also benefited from documentation, tutorials and instructions specific to that SDK.
The 1.x series
GStreamer 1.0 was released on September 24, 2012.[23] The 1.x series is parallel-installable with GStreamer 0.10 to ease the transition, and provides many architectural advantages over the 0.10 series.[24] Generally speaking, GStreamer 1.0 brought significant improvements for:
- Embedded processors support, lower power consumption, offloading work to specialized hardware units (such as DSPs)
- Hardware accelerated video decoding/encoding using GPUs
- Zero-copy memory management (avoiding unnecessary roundtrips between the CPU and GPU) for better performance and lower power consumption
- Dynamic pipelines
- API and code cleanups
Beyond the technical improvements, the 1.x series is also defined by a new release versioning scheme. As the GStreamer roadmap explains,[25] all 1.x.y versions carry a -1.0 API version suffix and have a stable API/ABI. The API/ABI can only be broken by a new major release series (i.e.: 2.x); however, there are currently no plans for a 2.0 release series. Until then, the new version numbering scheme can be used to predict the intended use of each release. The roadmap cites some examples:
- 1.0.0, 1.0.1, 1.0.2, 1.0.3... stable release and follow-up bug-fix releases
- 1.1.0, 1.1.1, 1.1.2, 1.1.3... pre-releases, development versions leading up to 1.2.0
- 1.2.0, 1.2.1, 1.2.2, 1.2.3... stable release and follow-up bug-fix releases
- 1.3.0...
- 1.4.0...
- etc.
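Reading the roadmap's examples, the parity of the minor number distinguishes stable series (even) from development series (odd). A small Python helper sketching that inferred rule; the function name is illustrative:

```python
# Sketch of the 1.x version-numbering rule described above: an even minor
# number (1.0, 1.2, 1.4, ...) marks a stable series, an odd one (1.1, 1.3,
# ...) a development series leading up to the next stable release.
# The even/odd reading is inferred from the roadmap's examples.

def release_kind(version):
    """Classify a GStreamer version string by its release series."""
    major, minor, *_ = (int(part) for part in version.split("."))
    if major < 1:
        return "pre-1.0 series"
    return "stable" if minor % 2 == 0 else "development"

print(release_kind("1.2.3"))   # stable
print(release_kind("1.1.0"))   # development
print(release_kind("0.10.36")) # pre-1.0 series
```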
In March 2013, the GStreamer project maintainers issued a statement[26] to clarify that the 0.10 series is no longer maintained. The statement reasserted the GStreamer project's willingness to help application and plugin developers migrate to the new technology, and hinted that those for whom switching to the 1.x series was still considered impossible could seek assistance from various consulting companies.
1.2 added support for DASH adaptive streaming, JPEG 2000 images, VP9 and Daala video, and decoding-only support for WebP.
Version 1.14 was released on March 19, 2018,[27] adding support for WebRTC, AV1, Nvidia NVDEC, and Secure Reliable Transport, among other changes.
Version 1.22 was released on January 23, 2023,[28] adding improved support for AV1, in addition to support for HLS, DASH and Microsoft Smooth Streaming for adaptive bitrate streaming.
See also
- List of software that uses GStreamer
- OggConvert – a simple GUI front-end
- GNOME SoundConverter – a GUI front-end based on GStreamer and GTK for transcoding digital audio files
- Pitivi – a video editor based on GStreamer
References
- ^ a b "GStreamer "Slipstream" 0.1.0 released". 11 January 2001. Archived from the original on 11 November 2012. Retrieved 3 November 2010.
- ^ "GStreamer", Ohloh Analysis Summary, Ohloh, archived from the original on 2014-06-26, retrieved 2016-11-06
- ^ a b "What are the exact licensing terms for GStreamer and its plugins?". freedesktop.org. Archived from the original on 2021-06-07. Retrieved 2021-06-07.
- ^ "Igalia Multimedia". Archived from the original on 2021-09-01. Retrieved 2021-09-01.
- ^ "webOS and GStreamer". webOShelp. Archived from the original on 22 March 2009. Retrieved 25 July 2009.
- ^ "GStreamer: Download". gstreamer.freedesktop.org. Archived from the original on 2015-05-10. Retrieved 15 May 2015.
- ^ "GstLAL Project Page". Wiki.ligo.org. Archived from the original on 23 April 2020. Retrieved 6 May 2019.
- ^ Overview of the modules, Gstreamer.freedesktop.org, archived from the original on 25 January 2012, retrieved 8 February 2012
- ^ GStreamer 0.9 development series - Hung by a Thread, Gstreamer.freedesktop.org, archived from the original on 27 January 2013, retrieved 24 February 2013
- ^ "GStreamer Good Plug-ins". Archived from the original on 2019-02-07. Retrieved 2019-02-05.
- ^ "GStreamer Bad Plug-ins". Archived from the original on 2019-02-07. Retrieved 2019-02-05.
- ^ "GStreamer Ugly Plug-ins". Archived from the original on 2019-02-07. Retrieved 2019-02-05.
- ^ "subprojects/gst-libav/README.md · 47ac79d7b8cc078f4890d0ce21f47e1c1af2c736 · GStreamer / gstreamer · GitLab". GitLab. 24 September 2021. Retrieved 2022-07-12.
- ^ "GStreamer OpenMAX IL wrapper plugin". gstreamer.freedesktop.org. Archived from the original on 2017-07-10. Retrieved 2017-07-21.
- ^ "Gstreamer 1.0 for raspbian". GRaspberrypi.org. Archived from the original on 2017-07-10. Retrieved 2017-07-21.
- ^ "GStreamer Plug-ins for TI hardware". Processors.wiki.ti.com. Archived from the original on 2017-06-28. Retrieved 2017-07-21.
- ^ "Fluendo Codec Pack Release 11 bring VDPAU and VAAPI support". 2010-03-25. Archived from the original on 2014-06-24.
- ^ Debian Webmaster. "Debian - Details of package gstreamer0.10-crystalhd in wheezy". Packages.debian.org. Archived from the original on 2017-06-29. Retrieved 2017-07-21.
- ^ "subprojects/gst-libav/ext/libav/gstavcodecmap.c · 47ac79d7b8cc078f4890d0ce21f47e1c1af2c736 · GStreamer / gstreamer · GitLab". GitLab. 16 January 2022. Retrieved 2022-07-12.
- ^ Edge, Jake (26 October 2010). "GStreamer: Past, present, and future". LWN.net. Retrieved 15 May 2022.
- ^ "GStreamer 0.10.0 stable release - Announcement of the first release in 0.10 stable series". gstreamer.freedesktop.org. Archived from the original on 2017-07-07. Retrieved 2017-07-21.
- ^ "GStreamer documentation". Docs.gstreamer.com. Archived from the original on 2016-10-28. Retrieved 2017-07-21.
- ^ "GStreamer 1.0 released". gstreamer.freedesktop.org. Archived from the original on 2017-07-07. Retrieved 2017-07-21.
- ^ "GStreamer 1.0 and 0.10". Lwn.net. Archived from the original on 2017-06-13. Retrieved 2017-07-21.
- ^ "ReleasePlanning2013 - gstreamer Wiki". Archived from the original on 2013-08-15. Retrieved 2013-09-16.
- ^ "GStreamer 0.10 no longer maintained". Lists.freedesktop.org. 11 March 2013. Archived from the original on 2017-07-10. Retrieved 2017-07-21.
- ^ "GStreamer 1.14 release notes". Archived from the original on 2018-03-20. Retrieved 2018-09-08.
- ^ "GStreamer 1.22 release notes". gstreamer.freedesktop.org. Retrieved 2023-05-18.
External links
- GStreamer

Overview
GStreamer is an open-source, pipeline-based multimedia framework designed for constructing directed acyclic graphs (DAGs) of media-handling components, enabling developers to build complex streaming media applications.[2] It facilitates the processing and manipulation of audio, video, and other data flows with low overhead, supporting a wide range of multimedia tasks through its modular architecture.[2] The framework's primary functions include audio and video playback, recording, streaming, transcoding, and non-linear editing, making it suitable for applications from simple media players to advanced broadcasting systems.[1]
GStreamer is licensed under the GNU Lesser General Public License (LGPL) version 2.1 or later, ensuring broad usability while promoting open-source contributions.[7] Written primarily in the C programming language for portability and performance, it leverages GObject from GLib to provide object-oriented features.[7] GStreamer supports multiple operating systems, including Linux, Windows, macOS, Android, iOS, and BSD variants, allowing cross-platform development.[7] Its initial release occurred on 11 January 2001.
At its core, GStreamer uses pipelines composed as DAGs of interconnected elements, where data flows through pads, specialized ports on elements that negotiate media types and handle buffering for efficient processing.[8] This design enables dynamic construction and reconfiguration of media graphs at runtime.[2]
Key Features
GStreamer distinguishes itself through its highly modular architecture, which enables developers to construct complex multimedia pipelines from reusable components. Central to this is an extensive ecosystem comprising over 1,000 plugins that provide comprehensive support for various media processing tasks, including codecs for formats like H.264 and VP9, demuxers for container formats such as MP4 and MKV, muxers for output generation, and effects for audio and video manipulation.[9] This modularity allows for flexible combinations of elements to handle diverse workflows, from simple playback to advanced editing, without requiring custom code for common operations.[8] The framework excels in real-time processing, making it suitable for low-latency applications such as live streaming and video conferencing. It supports live sources that produce data synchronized to a pipeline clock, with mechanisms to manage latency—typically around 20-33 milliseconds for audio and video—through buffer compensation and dynamic adjustments to handle network jitter or varying processing delays.[10] This ensures smooth, real-time performance even in multi-source pipelines combining live and non-live elements.[10] Hardware acceleration is seamlessly integrated, leveraging APIs like VA-API for Intel/AMD graphics, VDPAU for NVIDIA on Linux, and NVENC for NVIDIA encoding to offload computationally intensive tasks. 
This support extends to modern formats including H.264, VP8/VP9, and AV1, enabling efficient decoding, encoding, and processing on GPUs while maintaining compatibility with direct memory access techniques like DMA-BUF for zero-copy operations.[11] Such integrations reduce CPU load and improve performance in resource-constrained environments.[11]
Dynamic pipeline manipulation at runtime provides robust control over media flows, including precise seeking to specific timestamps, pausing via state transitions to the PAUSED mode for prerolling, and error recovery through bus message handling and element flushing. These capabilities, facilitated by functions like gst_element_seek() with flags for accuracy and flushing, allow applications to adapt pipelines on-the-fly without interrupting playback.[12] Building on its pipeline architecture, this feature ensures resilience in interactive or variable-bandwidth scenarios.[12]
GStreamer's cross-platform portability spans major operating systems including Linux, Android, iOS, macOS, and Windows, with a thread-safe design that leverages multi-core processors through fully multithreaded element processing and task management.[7] This enables efficient parallel data handling across threads while maintaining synchronization via clocks and timestamps.[8]
Advanced protocol support further enhances its streaming capabilities, including RTSP for controlled media delivery over TCP/UDP, WebRTC for peer-to-peer communication with ICE consent mechanisms, and adaptive streaming via DASH and HLS with low-latency extensions like LL-HLS.[13][11] These features facilitate high-quality, adaptive bitrate delivery in networked environments.[11]
History
Origins and Early Development
GStreamer was founded in 1999 by Erik Walthinsen, drawing from a research project at the Oregon Graduate Institute, to create a unified open-source multimedia framework as an alternative to the fragmented tools available for Linux, particularly aiming to enhance multimedia support in the GNOME desktop environment.[14][15] The project's first public release, version 0.0.9, appeared on October 31, 1999, primarily for developers to explore the code, with Walthinsen announcing it on the GNOME mailing list.[14] This was followed by the first major release, 0.1.0 ("Slipstream"), on January 10, 2001, which introduced foundational pipeline concepts for linking media processing components.[16] Shortly thereafter, RidgeRun Inc. provided the first commercial backing by hiring Walthinsen, focusing initial efforts on embedded systems.[17]
Early development occurred amid challenges from the absence of a cohesive multimedia infrastructure in GNOME, prompting integration under the Ximian umbrella (acquired by Novell in 2003), where key contributors like Wim Taymans advanced the core design.[18] Jan Schmidt joined as a developer around 2002, contributing to early stability efforts.[19] The plugin-based extensibility, a core design choice from the outset, allowed modular growth without deep architectural overhauls.
Pre-0.10 milestones included the 0.5 series in late 2002, which provided initial stability for basic applications, and the 0.8 series in 2004, which expanded the element library and refined capability negotiation protocols for better format handling across pipelines.[20][21] These releases marked progress toward a robust framework suitable for broader adoption in desktop and streaming use cases.
Major Release Series
The GStreamer 0.10 series, first released on December 5, 2005, represented the project's first stable API and ABI after several years of intensive development, enabling reliable integration into desktop multimedia applications and achieving widespread adoption due to its thread-safe design and enhanced functionality.[22] This series maintained backward compatibility within its versions until support ended in March 2013, serving as the foundation for many Linux distributions and media players during its active period.[23]
The transition to the 1.x series culminated in the release of GStreamer 1.0.0 on September 24, 2012, which involved a significant rewrite to improve overall performance through more efficient buffer and event allocation, better Unicode support via enhanced language bindings, and capabilities for gapless audio playback in pipelines.[24] This major version was backward-incompatible with the 0.10 series but designed to coexist in parallel installations, facilitating a smooth migration for developers and users while introducing flexible memory handling and refined caps negotiation mechanisms.[24]
The 1.2 series, released on September 24, 2013, built upon the 1.0 foundation by adding support for adaptive streaming protocols like DASH and Microsoft Smooth Streaming, VP9 video encoding and decoding, and WebP image decoding, alongside improved integration for Android platforms through enhanced media handling.[25] These additions expanded GStreamer's versatility for modern web and mobile multimedia workflows, with further refinements in video and audio processing elements.[25]
Subsequent releases from the 1.4 series (July 21, 2014) through 1.12 (May 4, 2017) focused on codec advancements and performance optimizations, including initial HEVC (H.265) encoding and decoding support in 1.6 via the x265 encoder and libde265 decoder, along with RTP payloading for H.265 streams.[26] Enhancements to WebM handling arrived in the same series through splitmuxsink and splitmuxsrc elements for chunked recording and seamless playback of split files, while low-latency modes were bolstered by deadline-based processing in GstAggregator for live audio/video mixing.[26] The 1.14 release on March 19, 2018, introduced native WebRTC support for real-time bidirectional audio/video streaming compatible with web browsers, as well as experimental AV1 codec decoding to prepare for emerging royalty-free video standards.[27]
Throughout these series, GStreamer's plugin ecosystem evolved with formalized classifications into "Good," "Bad," and "Ugly" sets to guide users and distributors: "Good" plugins offer high-quality, well-tested code under LGPL licensing suitable for broad inclusion; "Bad" plugins provide useful but under-reviewed or undocumented functionality; and "Ugly" plugins deliver reliable features yet carry potential licensing or patent concerns that may complicate distribution.[28] This structure, originating in the early stable releases, ensures flexibility in deployment while prioritizing code quality and legal compliance.
Recent Developments
The GStreamer 1.20 series, released on February 3, 2022, introduced significant enhancements for modern video codecs and machine learning integration. It added hardware-accelerated AV1 decoding support through elements like vaapiav1dec and msdkav1dec, along with a new av1parse parser, enabling more efficient AV1 handling across platforms.[29] The series also debuted the onnx plugin, allowing seamless application of ONNX neural network models to video streams for tasks like inference and processing.[29] Additionally, improved support for Apple Silicon was achieved via Cerbero's cross-compilation to ARM64 macOS with universal binaries and fixes for plugin loading on ARM64 systems.[29]
Building on this, the 1.22 series, released on January 23, 2023, advanced adaptive streaming and codec capabilities. It featured new adaptive demuxers for HLS, DASH, and Microsoft Smooth Streaming, offering better performance, stream selection, and buffering for dynamic bitrate adaptation.[30] Hardware decoding for VP9 and AV1 was expanded with support via VAAPI, AMF, D3D11, NVCODEC, QSV, and MediaSDK, including 12-bit VP9 formats for higher quality.[30] The Rust bindings reached greater maturity, with Rust plugins now included in macOS and Windows/MSVC binaries, and approximately 33% of project commits written in Rust, facilitating safer development.[30]
The 1.24 series, released on March 4, 2024, emphasized reliability and emerging technologies. It incorporated multiple security fixes for demuxers (e.g., MP4, Matroska, Ogg), subtitle parsers, and decoders to address vulnerabilities.[11] WebRTC functionality was bolstered with ICE consent freshness per RFC 7675, a new webrtcsrc element, and signallers for LiveKit and AWS Kinesis Video Streams.[11] For AI and machine learning, the GstNVIDIA plugin was added, enabling GPU-accelerated inference through elements like nvjpegenc and CUDA memory sharing via cudaipcsrc/sink.[11]
The 1.26 series began with the initial release of 1.26.0 on March 11, 2025. This version optimized 8K video workflows with H.266/VVC codec support (including VA-API hardware decoding), Vulkan Video enhancements, and CUDA-based AV1 encoding via NVCODEC.[5] It also improved Interoperable Master Format (IMF) handling through closed caption advancements like H.264/H.265 extractors, cea708overlay, and SMPTE 2038 metadata support.[5] Streaming stability was refined with bug fixes for HLS/DASH retries, RTSP synchronization modes, and WebRTC raw payload handling.[5]
The latest stable release in the 1.26 series is GStreamer 1.26.8, issued on November 10, 2025.[5]
GStreamer has increasingly shifted toward implementing new elements in Rust to leverage its memory safety guarantees and performance benefits, as evidenced by dedicated Rust plugins like those for closed captions in the 1.26 series and the growing proportion of Rust-based contributions.[5][30]
Architecture
Core Components
GStreamer elements serve as the fundamental building blocks of media pipelines, acting as modular components that handle specific tasks in the processing of multimedia data.[31] These elements are categorized based on their functionality: sources generate data streams, such as the filesrc element which reads media files from disk; sinks consume data to render or output it, exemplified by xvimagesink for displaying video frames; filters transform or manipulate data, like videoconvert which handles format conversions between different video color spaces; demuxers separate multiplexed streams into individual components, often creating dynamic pads for each extracted stream; and muxers combine multiple streams into a single container format.[31] Each element operates as a black box, encapsulating its internal logic while exposing standardized interfaces for interconnection within a pipeline.[31]
Elements communicate through pads, which function as input and output interfaces for data flow.[32] Sink pads receive data, while source pads emit it, with pads classified as always-present, sometimes (dynamically created or destroyed, such as in demuxers), or on-request (generated as needed, like in tee elements).[32] Capabilities, or caps, define the media formats supported by these pads, consisting of structured descriptions that specify properties like media type, dimensions, and encoding; for instance, "video/x-raw, format=AYUV, width=(int)384, height=(int)288".[32] Caps enable format negotiation, ensuring compatibility between connected elements by filtering and matching supported types during pipeline setup.[32]
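A caps string like the one quoted above can be read mechanically into named fields. The toy parser below is illustrative only: real GStreamer builds caps with gst_caps_from_string() in C, and this sketch handles just the simple one-structure form shown here.

```python
# Minimal sketch of reading a caps string into a Python dict.
# Handles only the simple "media-type, key=(type)value, ..." form.

def parse_caps(caps_str):
    """Split a caps string into its media type and typed fields."""
    media_type, *fields = [part.strip() for part in caps_str.split(",")]
    caps = {"media-type": media_type}
    for field in fields:
        key, value = field.split("=", 1)
        if value.startswith("(int)"):          # typed integer field
            caps[key.strip()] = int(value[len("(int)"):])
        else:                                  # untyped string field
            caps[key.strip()] = value
    return caps

caps = parse_caps("video/x-raw, format=AYUV, width=(int)384, height=(int)288")
print(caps["width"], caps["format"])  # 384 AYUV
```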
Bins and pipelines provide organizational structures for composing elements into functional units.[33] A bin is a container that groups multiple elements, managing their collective state changes and propagating bus messages for events like errors or end-of-stream signals.[33] Pipelines, as specialized top-level bins, orchestrate the entire media workflow, synchronizing operations across elements and handling states such as playing, paused, or stopped via functions like gst_element_set_state().[33] The pipeline's bus serves as a central messaging system, allowing applications to monitor and respond to asynchronous events from any contained element.[33]
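The bus pattern described above amounts to a message queue that elements post to and the application drains. A minimal Python sketch follows; the class and the message kinds are invented stand-ins for GstBus and GstMessage, not the real API.

```python
# Conceptual sketch of a pipeline bus: elements post messages, the
# application polls them from one central queue. Real code would use
# gst_bus_timed_pop_filtered() and GstMessage types; names here are toy ones.
from collections import deque

class Bus:
    def __init__(self):
        self._messages = deque()

    def post(self, source, kind, detail=None):
        """Called by any element in the pipeline to report an event."""
        self._messages.append((source, kind, detail))

    def pop(self):
        """Called by the application; returns the oldest message or None."""
        return self._messages.popleft() if self._messages else None

bus = Bus()
bus.post("mp3dec", "error", "could not decode frame")
bus.post("filesrc", "eos")

# Application-side loop: react to errors and end-of-stream.
while (msg := bus.pop()) is not None:
    source, kind, detail = msg
    if kind == "error":
        print(f"error from {source}: {detail}")
    elif kind == "eos":
        print("end of stream, shutting down pipeline")
```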
GStreamer's scheduling and threading model facilitates efficient data flow through push and pull modes.[34] In push mode, upstream elements actively send data downstream via source pads, suitable for live or constant-rate streams, where a chain function processes incoming buffers.[34] Pull mode, conversely, allows downstream elements to request data from upstream elements as needed, ideal for seekable or on-demand sources, using a pull_range() mechanism to fetch specific byte ranges.[34] The framework automatically manages threads by creating streaming tasks from a thread pool, assigning them to pads based on mode selection during activation, with elements like queues enforcing thread boundaries for parallelism and buffering.[35] This model ensures scalability without requiring explicit thread handling from applications, adapting to pipeline topology for optimal performance.[35]
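The difference between the two modes can be shown in a few lines of Python. This is a conceptual sketch, not the GStreamer API; only the pull_range name echoes the real mechanism.

```python
# Toy illustration of the two scheduling modes: in push mode the source
# drives the flow by calling the peer's chain function; in pull mode the
# sink requests exactly the byte ranges it needs from the source.

DATA = b"abcdefghij"

# Push mode: the source iterates and pushes buffers into a chain function.
def run_push(chain, chunk=4):
    for offset in range(0, len(DATA), chunk):
        chain(DATA[offset:offset + chunk])

received = []
run_push(received.append)   # source drives: [b'abcd', b'efgh', b'ij']

# Pull mode: the sink asks the source for a specific byte range.
def pull_range(offset, size):
    return DATA[offset:offset + size]

header = pull_range(0, 4)   # sink drives: fetch just the first 4 bytes
print(received, header)
```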
The negotiation process dynamically resolves format agreements between elements at pipeline construction or reconfiguration.[36] It begins with downstream elements querying upstream pads for supported caps via CAPS queries, prompting the upstream to select and propose a compatible format, which is then propagated as a CAPS event.[36] Elements respond with ACCEPT_CAPS queries to validate proposals, potentially triggering renegotiation through RECONFIGURE events if conditions change, such as format transformations in converters.[36] For fixed negotiation, sources like demuxers output predetermined caps; transform elements map input to output formats directly; and dynamic cases, such as encoders, iterate over downstream preferences to find intersections.[36] This iterative, query-driven approach ensures seamless interoperability while respecting each element's capabilities.[36]
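Caps intersection, the heart of this negotiation, can be modeled by intersecting per-field value sets. The sketch below is a simplification under stated assumptions: real caps also support ranges and ordered preference lists, and fields present on only one side pass through unconstrained; here caps are plain dicts mapping field names to sets of allowed values.

```python
# Sketch of caps intersection during negotiation: each pad advertises the
# formats it supports, and linking succeeds on their common subset.
# Real GStreamer does this with gst_caps_intersect(); this is a toy model.

def intersect_caps(upstream, downstream):
    """Return the common caps of two pads, or None if they share nothing."""
    result = {}
    for field in upstream.keys() & downstream.keys():
        common = upstream[field] & downstream[field]
        if not common:
            return None  # no shared value for this field: negotiation fails
        result[field] = common
    return result

decoder_src = {"format": {"I420", "NV12"}, "width": {1280, 1920}}
sink_sink   = {"format": {"NV12"},         "width": {1280, 1920, 3840}}

print(intersect_caps(decoder_src, sink_sink))
# e.g. {'format': {'NV12'}, 'width': {1280, 1920}}
```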
Plugins and Elements
GStreamer's plugin system is built around dynamically loaded shared libraries, enabling modular extension of its multimedia capabilities without recompiling the core framework. These plugins encapsulate elements, the fundamental processing units, and are loaded at runtime based on the requirements of a pipeline. The core set of plugins, shipped in the gst-plugins-base package, is included in standard installations to provide essential functionality, while additional plugins are distributed in separate packages to allow selective inclusion depending on licensing, stability, or hardware needs.[8][37]
Plugins are classified into distinct sets reflecting their quality, licensing, and potential legal implications. The gst-plugins-good set contains well-maintained, stable plugins licensed under the LGPL, ensuring broad compatibility and reliability for common use cases. In contrast, gst-plugins-bad includes functional but potentially unstable or lower-quality plugins that may carry risks such as incomplete features or security vulnerabilities. The gst-plugins-ugly set comprises high-quality plugins that are otherwise suitable but pose distribution challenges due to patent encumbrances, such as those involving H.264 video encoding.[38][28]
The elements provided by these plugins number over 1,600 across more than 230 plugins, covering a wide array of multimedia processing tasks. Key types include decoders, which convert compressed media into raw formats (e.g., avdec_h264 for H.264 video); encoders, which compress raw data (e.g., x264enc for H.264 output); parsers, which analyze and segment streams for further processing (e.g., h264parse); and protocol handlers, which manage network transport (e.g., rtspsrc for RTSP/RTP streams). These elements integrate seamlessly into pipelines, with each plugin potentially providing multiple elements tailored to specific roles in the data flow.[39][31][9]
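The element roles above chain together in the order demuxer → parser → decoder in a typical playback pipeline. A hedged example, assuming an H.264 video track in an MP4 file and the avdec_h264 decoder from gst-libav:

```shell
# filesrc -> demuxer -> parser -> decoder -> converter -> sink
gst-launch-1.0 filesrc location=input.mp4 ! qtdemux ! h264parse ! \
    avdec_h264 ! videoconvert ! autovideosink

# List all installed plugins and the elements each one provides.
gst-inspect-1.0
```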
For video acceleration, GStreamer relies on specialized plugins to leverage hardware capabilities, reducing CPU load for encoding and decoding. The vaapi plugin supports Video Acceleration API for Intel and AMD GPUs, enabling efficient processing of formats like H.264 and VP9. The v4l2 plugin interfaces with Video4Linux2 on Linux systems to access hardware codecs directly from the kernel, such as for H.264 decoding on embedded devices. Additionally, the nvenc plugin provides NVIDIA GPU acceleration for high-performance encoding, supporting H.264 and H.265 via the NVENC hardware interface. These plugins allow pipelines to automatically negotiate hardware paths when available, optimizing performance for resource-constrained environments.[40][41]
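Hardware paths are selected by naming the accelerated elements explicitly or letting autoplugging pick them. A sketch, with the caveat that element names vary by GStreamer version and platform (older releases ship gstreamer-vaapi's vaapih264dec, while newer ones provide the va plugin's vah264dec; the NVENC encoder is exposed as nvh264enc by the nvcodec plugin):

```shell
# VA-API hardware decode on Intel/AMD GPUs (element name version-dependent):
gst-launch-1.0 filesrc location=input.mp4 ! qtdemux ! h264parse ! \
    vaapih264dec ! autovideosink

# NVIDIA hardware encode via NVENC:
gst-launch-1.0 videotestsrc num-buffers=300 ! nvh264enc ! h264parse ! \
    mp4mux ! filesink location=out.mp4
```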
GStreamer's media format support is extensive through its plugins, encompassing a broad range of containers, codecs, and metadata handling to accommodate diverse multimedia workflows. Containers such as MP4 (via qtmux and qtdemux), Matroska/MKV (via matroskamux), and Ogg (via oggmux) are handled natively for multiplexing and demultiplexing streams. Codec support includes audio formats such as AAC (via faad or avdec_aac) and Opus (via opusdec), as well as video codecs like H.265/HEVC (via x265enc or hardware variants). Metadata extraction and embedding are managed by elements like id3v2mux for tags in audio files. Where native support has gaps, such as for proprietary or less common formats, plugins integrate external libraries like FFmpeg through the gst-libav module, providing decoders (e.g., avdec_vp9) and encoders that fill these voids while maintaining pipeline compatibility.[42][9][43]
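A container and codec conversion of the kind described can be sketched with decodebin, which autoplugs the appropriate demuxer and decoder (vp8enc and webmmux assumed available from gst-plugins-good; video-only for brevity, a real transcode would also link an audio branch):

```shell
# Transcode MP4 to WebM: decodebin picks the demuxer and decoder
# automatically, vp8enc re-encodes, webmmux writes the new container.
gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! videoconvert ! \
    vp8enc ! webmmux ! filesink location=output.webm
```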
Language Bindings
GStreamer offers a range of language bindings that let developers build multimedia applications without directly using the core C API, easing integration across programming ecosystems. Official bindings leverage GObject Introspection (GI) for seamless access to the GStreamer API from higher-level languages, including Python through PyGObject, JavaScript via GJS, and Vala as part of the Vala project.[44][45][46] These GI-based bindings automatically handle object lifecycle management and type conversions, reducing the boilerplate and errors associated with manual memory handling in C.[44]
Community-maintained bindings extend support to additional languages. Rust bindings are provided by the gst-rs crate, which offers full API coverage starting with GStreamer 1.18 and emphasizes safety through Rust's ownership model.[47][48] Go bindings are available through the go-gst project, offering idiomatic wrappers for pipeline construction and element manipulation.[49] Node.js wrappers, often built on GI via libraries like node-gtk, allow JavaScript developers to interface with GStreamer for server-side streaming tasks.[50]
A key advantage of these bindings is simplified development in managed languages, where automatic memory management prevents common issues like leaks or dangling pointers; for instance, in Python, a basic video test pipeline can be launched concisely with Gst.parse_launch("videotestsrc ! videoconvert ! autovideosink"), enabling rapid prototyping without explicit resource cleanup.[44][51] This approach makes plugin elements, such as sources and sinks, directly accessible through high-level constructs.
However, bindings may not expose every low-level C API detail, limiting fine-grained control over internal operations like custom buffer allocation.[52] Interpreted languages like Python and JavaScript introduce performance overhead due to runtime interpretation and bridging costs, making them less suitable for latency-critical real-time processing compared to compiled bindings in Rust or C++.[53]
The bindings have evolved significantly within the GStreamer 1.x series, with GI support maturing alongside the 1.0 release to provide stable, autogenerated interfaces that align with the framework's plugin architecture.[44] This progression has fostered cross-language development, with updates in subsequent releases like 1.18 enhancing coverage and compatibility for community efforts.[54]
Adoption and Applications
Operating Systems and Distributions
GStreamer is primarily developed and maintained on Linux, where it enjoys native support across major distributions through standard package managers. It is included by default in all major Linux distributions, with installation typically handled via tools like dnf on Fedora, apt on Ubuntu and Debian, and zypper on openSUSE.[55] For the most up-to-date releases, the project recommends fast-moving distributions such as Fedora, non-LTS Ubuntu versions, Debian sid, or openSUSE, as stable releases in long-term-support distributions often lag behind upstream development.[55] For instance, Ubuntu 22.04 LTS ships GStreamer 1.20.3, while the latest upstream version as of November 2025 is 1.26.8, released on November 10, 2025.[56][5] In desktop environments, GStreamer serves as the default multimedia backend for GNOME applications and is the recommended backend for KDE's Phonon framework via the Phonon-GStreamer module, enabling seamless audio and video handling in environments like Kubuntu.[57]
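On the distributions named above, installation reduces to the native package manager. The package names below are the common Debian/Ubuntu and Fedora ones and may differ slightly between releases:

```shell
# Debian/Ubuntu: tools plus the main plugin sets
sudo apt install gstreamer1.0-tools gstreamer1.0-plugins-base \
    gstreamer1.0-plugins-good gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly

# Fedora
sudo dnf install gstreamer1-plugins-base gstreamer1-plugins-good

# Verify the installed version
gst-launch-1.0 --version
```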
Beyond Linux desktops, GStreamer provides native support for Windows and macOS through official binary installers. On Windows 7 or later, it supports Microsoft Visual Studio 2019 or newer for MSVC builds, with separate runtime and development installers available in 32-bit and 64-bit variants using the Release CRT configuration.[58][59] For macOS 10.13 (High Sierra) or later, official framework installers are provided as .pkg files for both runtime and development use, targeting specific SDK versions like 1.24 for macOS 10.13; alternatively, it can be installed via Homebrew with brew install gstreamer, though mixing Homebrew and official installers is discouraged due to potential plugin conflicts.[59][60]
For mobile and embedded platforms, GStreamer extends its portability with targeted builds. On Android, it integrates via the Android NDK, allowing developers to build applications using Gradle or the NDK directly, with official tutorials and universal binary releases for arm64-v8a and x86_64 architectures.[61][62] For iOS, the GStreamer SDK provides static libraries compatible with Xcode and the iOS SDK (version 6.0 or later), enabling integration into apps for iPhone and iPad, with dedicated tutorials for initialization and media handling.[63][64] In embedded and IoT contexts, GStreamer supports real-time operating systems and custom builds through frameworks like Yocto Project and OpenEmbedded, where the meta-gstreamer1.0 layer facilitates integration into resource-constrained devices for multimedia pipelines in Linux-based RTOS environments.[65]
Despite its cross-platform design, GStreamer faces portability challenges arising from platform-specific plugins and hardware dependencies. Elements like those in the DirectShow plugin suite are exclusive to Windows, providing bridges to Microsoft’s media APIs for capture and playback but unavailable on Linux or macOS, requiring developers to use conditional pipelines or abstractions for multi-platform compatibility.[66][67] Similar issues occur with hardware-accelerated plugins, such as VA-API on Linux or VideoToolbox on macOS, which demand platform-tailored configurations to avoid fallback to software rendering.[67]
To address these challenges and simplify deployment, the GStreamer project offers official binary SDKs that support cross-compilation workflows. Tools like Cerbero enable building full distributions for target platforms from a host machine (Linux, macOS, or Windows), producing static or shared libraries suitable for embedded cross-compilation, while Meson-based gst-build facilitates native and cross builds with minimal dependencies.[68][69] These SDKs, combined with pre-built binaries for Android, iOS, Windows, and macOS, allow developers to target diverse architectures without starting from source, streamlining integration into projects like those in the Notable Projects and Devices section.[59]
Notable Projects and Devices
GStreamer serves as the core multimedia framework in GNOME's Totem video player, enabling playback of various audio and video formats through its plugin-based architecture.[70] Similarly, the GNOME Sound Recorder application relies on GStreamer for capturing and processing audio from microphones, supporting formats such as OGG, MP3, and FLAC.[71] In the KDE ecosystem, Dragon Player utilizes GStreamer as a backend via the Phonon multimedia framework, providing simple video playback capabilities while leveraging hardware acceleration where available.[57] For web and browser integrations, WebKitGTK employs GStreamer as its primary backend for HTML5 media playback, handling video and audio rendering in GTK-based applications like Epiphany.
In mobile and embedded systems, GStreamer powers multimedia features on the Jolla Phone running Sailfish OS, facilitating audio and video handling in a Linux-based mobile platform.[72] The Palm Pre smartphone, operating on webOS, integrated GStreamer for media playback and streaming, including support for codecs like WMA.[73] Samsung devices using Tizen OS leverage GStreamer as the foundational multimedia framework for camera capture, playback, and streaming across profiles like TV and wearable.[74] Nokia's N-Series devices, such as the N900 and N810 internet tablets running Maemo, utilized GStreamer for advanced multimedia applications, including video decoding and network streaming.[75]
In scientific computing, the LIGO Scientific Collaboration employs GstLAL, a GStreamer-based pipeline, for real-time analysis of gravitational wave data from detectors, enabling low-latency signal processing workflows.[76] Additional integrations include Clutter-GStreamer, which embeds GStreamer pipelines into Clutter's graphical scene graph for synchronized multimedia in user interfaces.[77] On Raspberry Pi hardware, various media player applications, such as those for streaming and local playback, incorporate GStreamer to utilize the platform's GPU for efficient video decoding and encoding. Additionally, Hailo TAPPAS, an open-source suite for edge AI applications from Hailo AI, utilizes GStreamer pipelines for low-latency video capture and AI inference on Raspberry Pi devices, supporting zero-copy operations for efficient processing.[78][79] Commercially, Texas Instruments' DaVinci processors integrate GStreamer through specialized plugins that enable hardware-accelerated video encoding and decoding on embedded systems like OMAP and DM64x devices.[80]
Use Cases
GStreamer is widely employed in media playback scenarios, enabling the development of custom video and audio players for both video-on-demand (VoD) services and live broadcasts. Developers construct pipelines that handle decoding, rendering, and synchronization across various formats, often utilizing protocols like RTSP for real-time streaming or UDP for efficient multicast distribution. For instance, pipelines can ingest streams from network sources and output to displays or files, supporting seamless playback of high-definition content without local storage. This flexibility makes it suitable for applications requiring adaptive bitrate streaming to manage bandwidth variability in live events.[81][82]
In transcoding and editing workflows, GStreamer serves as a robust alternative to tools like FFmpeg for batch conversion of media files, allowing users to define pipelines that demux, transcode, and mux content across formats such as MP4 to WebM or AVI to HLS segments. Its editing services facilitate non-linear video manipulation, including timeline-based clipping, transitions, and effects layering, which streamline post-production tasks by processing media in a modular, pipeline-driven manner. Quantitative benchmarks demonstrate its efficiency, with transcoding pipelines achieving real-time performance on standard hardware for 1080p video, reducing processing times compared to sequential file operations.[83][84]
For embedded systems, GStreamer excels in resource-constrained environments like surveillance cameras and automotive infotainment, where it manages video capture, encoding, and overlay tasks with minimal overhead. In surveillance applications, pipelines capture feeds from sensors using elements like v4l2src, encode them in formats such as MJPEG or H.264 for storage or transmission, and support real-time monitoring at resolutions up to 1080p on low-power devices.
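A capture-and-record pipeline of the surveillance kind described above might look as follows, assuming a V4L2 camera at /dev/video0 and software x264 encoding:

```shell
# Capture from a V4L2 camera, encode to H.264, and record to an MP4 file.
# -e sends EOS on Ctrl-C so the muxer can finalize the file cleanly.
gst-launch-1.0 -e v4l2src device=/dev/video0 ! videoconvert ! \
    x264enc tune=zerolatency ! h264parse ! mp4mux ! \
    filesink location=cam.mp4
```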
Automotive use cases involve integrating route guidance overlays onto video streams, decoding multimedia for in-vehicle displays, and ensuring low-latency playback to enhance the user experience in navigation systems. These implementations leverage hardware acceleration to maintain efficiency, with encoding rates exceeding 30 FPS on embedded processors.[85][86]
Integration with AI and machine learning frameworks extends GStreamer's capabilities to real-time video analysis, incorporating plugins that interface with TensorFlow, ONNX, or TensorRT for tasks like object detection. Pipelines can process live feeds by inserting inference elements after decoding, applying models to detect and track objects such as vehicles or faces, and outputting annotated streams with metadata. For example, in computer vision applications, GStreamer pipelines achieve inference latencies under 10 ms on GPU-accelerated hardware, enabling edge-based processing without cloud dependency. This is particularly valuable for dynamic environments requiring immediate analysis, such as traffic monitoring. Plugin suites such as DeepStream and GstInference expose such models as ready-made pipeline elements.[87][88][89]
In professional broadcasting, GStreamer supports end-to-end workflows for live production, including IMF packaging for archival and multi-channel audio mixing for immersive outputs like Dolby Atmos. Broadcasters utilize its pipelines to ingest multiple feeds, apply real-time effects, transcode for distribution, and package content compliant with standards like SMPTE ST 2067-2, ensuring high-quality delivery across OTT and traditional channels. Optimized plugins handle 4K/UHD streams with frame-accurate synchronization, while AI extensions enable automated quality control, such as detecting anomalies in sports footage at over 400 FPS.
This modular approach reduces latency in live events, with workflows processing multi-camera setups efficiently on server-grade hardware.[90][91][92]
Development and Community
Contributing and Tools
Developers contribute to GStreamer primarily through its GitLab instance at gitlab.freedesktop.org/gstreamer, where bug reports, feature requests, and merge requests (MRs) are submitted.[93] To report bugs, users create issues with details including the GStreamer version, operating system, reproduction steps, and debug logs generated via environment settings like GST_DEBUG=*:6, prefixing summaries with component names such as element-name: or plugin-name:.[93] For patches, contributors fork the repository, create a feature branch, and submit MRs targeting the main branch, using concise commit messages that reference issues with Fixes #123 and adhering to the C99 coding style enforced by tools like gst-indent-1.0.[93] Plugin development occurs primarily in C, with support for Rust; new plugins are added to subprojects/gst-plugins-bad and require updates to meson.build files.[93] Public API additions must include Since: 1.XX tags in documentation, and changes are restricted in stable branches to maintain compatibility, with backports labeled accordingly.[93] The project migrated from Bugzilla to GitLab after 2018 for streamlined issue tracking and collaboration.
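The debug logs requested in bug reports are produced with the GST_DEBUG environment variable, whose value is a comma-separated list of category:level pairs (levels range from 0, none, to 9, memdump). A sketch:

```shell
# Full log at level 6 (LOG) for every category, written to a file so the
# terminal output stays readable:
GST_DEBUG=*:6 GST_DEBUG_FILE=gst.log \
    gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink

# Restrict logging to a single category, e.g. caps negotiation:
GST_DEBUG=GST_CAPS:5 gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink
```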
GStreamer's build system relies on Meson for fast, portable compilation across platforms, configured via meson setup <build_directory> after cloning the mono repository at gitlab.freedesktop.org/gstreamer/gstreamer.git, followed by ninja -C <build_directory> to build.[69] Cerbero serves as a cross-platform build aggregator to create native SDKs and packages for targets like Windows (MinGW/MSVC/UWP), macOS (iOS frameworks), and Linux (Android), supporting both native and cross-compilation for plugin development with dependencies.[68] The unified mono repository, formerly known as gst-build, enables cloning a single repository for all core modules and subprojects, simplifying development workflows over separate module clones.[94][69]
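The monorepo workflow described above reduces, in a hedged sketch, to the following commands (a full build pulls in many dependencies and can take considerable time):

```shell
# Clone the mono repository and build everything with Meson and Ninja.
git clone https://gitlab.freedesktop.org/gstreamer/gstreamer.git
cd gstreamer
meson setup builddir
ninja -C builddir

# Enter a development shell so gst-launch-1.0 and friends resolve to the
# freshly built binaries without installing (Meson 0.58+).
meson devenv -C builddir
```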
Testing integrates unit tests via the Check framework, invocable with make check or make check-valgrind to detect leaks and errors in elements, using utilities like GstHarness for black-box element simulation.[95][96] GstValidate provides validation suites for elements and pipelines, monitoring compliance with GStreamer rules such as segment propagation, with tools like gst-validate for individual tests and gst-validate-launcher for running comprehensive suites like check.gst*.[97] These frameworks ensure robust behavior across components, supporting scenario-based testing for real-world pipelines.[98]
Documentation resources include developer guides in the Plugin Writer's Guide and Application Development Manual, API references for core libraries and plugins, and tutorial pipelines demonstrating basic to advanced usage, all hosted at gstreamer.freedesktop.org.[99][100][101][102]
The GStreamer community engages through mailing lists such as gstreamer-devel@lists.freedesktop.org for development discussions and gstreamer-announce@lists.freedesktop.org for updates, real-time chat on IRC channel #gstreamer at irc.oftc.net, and the annual GStreamer Conference, which features talks and hackfests for contributors.[103][104][105] Development is supported by funding from companies including Collabora, Centricular, and Igalia, which sponsor conferences and contribute core engineering efforts.[106][105]