Direct Rendering Manager
| Original authors | kernel.org & freedesktop.org |
|---|---|
| Developers | kernel.org & freedesktop.org |
| Written in | C |
| Website | dri |
The Direct Rendering Manager (DRM) is a subsystem of the Linux kernel responsible for interfacing with GPUs of modern video cards. DRM exposes an API that user-space programs can use to send commands and data to the GPU and perform operations such as configuring the mode setting of the display. DRM was first developed as the kernel-space component of the X Server Direct Rendering Infrastructure,[1] but since then it has been used by other graphic stack alternatives such as Wayland and standalone applications and libraries such as SDL2 and Kodi.
User-space programs can use the DRM API to command the GPU to do hardware-accelerated 3D rendering and video decoding, as well as GPGPU computing.
Overview
The Linux kernel already had an API called fbdev, used to manage the framebuffer of a graphics adapter,[2] but it couldn't handle the needs of modern 3D-accelerated GPU-based video hardware. These devices usually require setting and managing a command queue in their own memory to dispatch commands to the GPU, and also require management of buffers and free space within that memory.[3] Initially, user-space programs (such as the X Server) managed these resources directly, but they usually acted as if they were the only ones with access to them. When two or more programs tried to control the same hardware at the same time, each setting its resources in its own way, the result was usually catastrophic.[3]
The Direct Rendering Manager was created to allow multiple programs to use video hardware resources cooperatively.[4] The DRM gets exclusive access to the GPU and is responsible for initializing and maintaining the command queue, memory, and any other hardware resource. Programs wishing to use the GPU send requests to DRM, which acts as an arbitrator and takes care to avoid possible conflicts.
The scope of DRM has been expanded over the years to cover more functionality previously handled by user-space programs, such as framebuffer management and mode setting, memory-sharing objects and memory synchronization.[5][6] Some of these expansions were given specific names, such as Graphics Execution Manager (GEM) or kernel mode-setting (KMS), and the terminology prevails when the functionality they provide is specifically alluded to, but they are really parts of the whole kernel DRM subsystem.
The trend to include two GPUs in a computer—a discrete GPU and an integrated one—led to new problems, such as GPU switching, that also needed to be solved at the DRM layer. To match the Nvidia Optimus technology, DRM was given GPU offloading abilities, called PRIME.[7]
Software architecture
The Direct Rendering Manager resides in kernel space, so user-space programs must use kernel system calls to request its services. However, DRM doesn't define its own customized system calls. Instead, it follows the Unix principle of "everything is a file" to expose the GPUs through the filesystem namespace, using device files under the /dev hierarchy. Each GPU detected by DRM is referred to as a DRM device, and a device file /dev/dri/cardX (where X is a sequential number) is created to interface with it.[8][9] User-space programs that want to talk to the GPU must open this file and use ioctl calls to communicate with DRM. Different ioctls correspond to different functions of the DRM API.
A library called libdrm was created to facilitate the interface of user-space programs with the DRM subsystem. This library is merely a wrapper that provides a function written in C for every ioctl of the DRM API, as well as constants, structures and other helper elements.[10] The use of libdrm not only avoids exposing the kernel interface directly to applications, but also presents the usual advantages of reusing and sharing code between programs.
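As an illustration, the following minimal sketch (assuming libdrm is installed and the first device node is /dev/dri/card0) opens a DRM device and queries the driver version through drmGetVersion(), the libdrm wrapper around the DRM_IOCTL_VERSION ioctl:

```c
/* Minimal sketch: query the kernel driver bound to a DRM device.
 * Assumes the first device node is /dev/dri/card0.
 * Build (typical): cc version.c $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* drmGetVersion() wraps the DRM_IOCTL_VERSION ioctl. */
    drmVersionPtr ver = drmGetVersion(fd);
    if (ver) {
        printf("driver: %s %d.%d.%d (%s)\n", ver->name, ver->version_major,
               ver->version_minor, ver->version_patchlevel, ver->desc);
        drmFreeVersion(ver);
    }

    close(fd);
    return 0;
}
```

Run on a machine with a DRM-capable GPU, this should print the name of the DRM driver managing the card (for example i915 or amdgpu).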

DRM consists of two parts: a generic "DRM core" and a specific one ("DRM driver") for each type of supported hardware.[11] DRM core provides the basic framework where different DRM drivers can register, and also provides to user space a minimal set of ioctls with common, hardware-independent functionality.[8] A DRM driver, on the other hand, implements the hardware-dependent part of the API, specific to the type of GPU it supports; it should provide the implementation of the remaining ioctls not covered by DRM core, but it may also extend the API, offering additional ioctls with extra functionality only available on such hardware.[8] When a specific DRM driver provides an enhanced API, user-space libdrm is also extended by an extra library, libdrm-driver, that user space can use to interface with the additional ioctls.
API
The DRM core exports several interfaces to user-space applications, generally intended to be used through corresponding libdrm wrapper functions. In addition, drivers export device-specific interfaces for use by user-space drivers and device-aware applications through ioctls and sysfs files. External interfaces include: memory mapping, context management, DMA operations, AGP management, vblank control, fence management, memory management, and output management.
DRM-Master and DRM-Auth
There are several operations (ioctls) in the DRM API that, either for security purposes or for concurrency issues, must be restricted to a single user-space process per device.[8] To implement this restriction, DRM limits such ioctls to be invoked only by the process considered the "master" of a DRM device, usually called DRM-Master. Only one of the processes that have the device node /dev/dri/cardX opened will have its file handle marked as master, specifically the first one to call the SET_MASTER ioctl. Any attempt to use one of these restricted ioctls without being the DRM-Master will return an error. A process can also give up its master role—and let another process acquire it—by calling the DROP_MASTER ioctl.
The X Server—or any other display server—is commonly the process that acquires the DRM-Master status in every DRM device it manages, usually when it opens the corresponding device node during its startup, and keeps these privileges for the entire graphical session until it finishes or dies.
For the remaining user-space processes there is another way to gain the privilege to invoke some restricted operations on the DRM device, called DRM-Auth. It is basically a method of authentication against the DRM device, in order to prove to it that the process has the DRM-Master's approval to get such privileges. The procedure, sketched in code after this list, consists of:[12]: 13
- The client gets a unique token—a 32-bit integer—from the DRM device using the GET_MAGIC ioctl and passes it to the DRM-Master process by whatever means (normally some sort of IPC; for example, in DRI2 there is a DRI2Authenticate request that any X client can send to the X Server.[13])
- The DRM-Master process, in turn, sends back the token to the DRM device by invoking the AUTH_MAGIC ioctl.
- The device grants special rights to the process file handle whose auth token matches the received token from the DRM-Master.
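A minimal sketch of this handshake follows, collapsed into a single process for brevity; in reality the magic token travels over IPC between the client and the DRM-Master, and AUTH_MAGIC only succeeds on a file handle that actually holds master status:

```c
/* Sketch of DRM-Auth, collapsed into one process for brevity. In practice
 * the client and the DRM-Master are separate processes and the 32-bit magic
 * token is passed between them over some IPC channel. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    /* Client side: obtain a unique token (GET_MAGIC ioctl). */
    int client_fd = open("/dev/dri/card0", O_RDWR);
    drm_magic_t magic;
    if (client_fd < 0 || drmGetMagic(client_fd, &magic) != 0) {
        perror("drmGetMagic");
        return 1;
    }

    /* DRM-Master side: authenticate the token received over IPC
     * (AUTH_MAGIC ioctl). drmSetMaster()/drmDropMaster() are the libdrm
     * wrappers for the SET_MASTER/DROP_MASTER ioctls described above. */
    int master_fd = open("/dev/dri/card0", O_RDWR);
    if (master_fd < 0 || drmAuthMagic(master_fd, magic) != 0)
        perror("drmAuthMagic (not DRM-Master?)");

    close(master_fd);
    close(client_fd);
    return 0;
}
```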
Graphics Execution Manager
Due to the increasing size of video memory and the growing complexity of graphics APIs such as OpenGL, the strategy of reinitializing the graphics card state at each context switch became too expensive, performance-wise. Also, modern Linux desktops needed an optimal way to share off-screen buffers with the compositing manager. These requirements led to the development of new methods to manage graphics buffers inside the kernel. The Graphics Execution Manager (GEM) emerged as one of these methods.[6]
GEM provides an API with explicit memory management primitives.[6] Through GEM, a user-space program can create, handle and destroy memory objects living in the GPU video memory. These objects, called "GEM objects",[14] are persistent from the user-space program's perspective and don't need to be reloaded every time the program regains control of the GPU. When a user-space program needs a chunk of video memory (to store a framebuffer, texture or any other data required by the GPU[15]), it requests the allocation from the DRM driver using the GEM API. The DRM driver keeps track of the used video memory and is able to comply with the request if there is free memory available, returning a "handle" to user space with which to refer to the allocated memory in subsequent operations.[6][14] The GEM API also provides operations to populate the buffer and to release it when it is no longer needed. Memory from unreleased GEM handles is recovered when the user-space process closes the DRM device file descriptor, whether intentionally or because it terminates.[16]
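The ioctls that create GEM objects for accelerated rendering are driver-specific, but the KMS "dumb buffer" ioctl (covered later in this article) offers a hardware-independent allocation path that also returns a GEM handle; the following sketch uses it to illustrate the allocate/release life cycle:

```c
/* Sketch: allocate a buffer and receive a GEM handle for it, using the
 * hardware-independent "dumb buffer" ioctl. Accelerated drivers expose
 * their own creation ioctls instead (e.g. the i915-specific ones). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>   /* drmIoctl(), DRM_IOCTL_MODE_*_DUMB, struct drm_mode_*_dumb */

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0)
        return 1;

    struct drm_mode_create_dumb create = {
        .width = 1024, .height = 768, .bpp = 32,   /* example geometry */
    };
    if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) != 0) {
        perror("DRM_IOCTL_MODE_CREATE_DUMB");
        return 1;
    }
    printf("GEM handle %u, pitch %u, size %llu\n", create.handle,
           create.pitch, (unsigned long long)create.size);

    /* Release the object explicitly; the kernel would reclaim it anyway
     * when the file descriptor is closed. */
    struct drm_mode_destroy_dumb destroy = { .handle = create.handle };
    drmIoctl(fd, DRM_IOCTL_MODE_DESTROY_DUMB, &destroy);

    close(fd);
    return 0;
}
```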
GEM also allows two or more user-space processes using the same DRM device (and hence the same DRM driver) to share a GEM object.[16] GEM handles are local 32-bit integers unique to a process but repeatable in other processes, and therefore not suitable for sharing. What is needed is a global namespace, and GEM provides one through the use of global handles called GEM names. A GEM name refers to one, and only one, GEM object created within the same DRM device by the same DRM driver, using a unique 32-bit integer. GEM provides the flink operation to obtain a GEM name from a GEM handle.[16][12]: 16 The process can then pass this GEM name (a 32-bit integer) to another process using any available IPC mechanism.[12]: 15 The recipient process can use the GEM name to obtain a local GEM handle pointing to the original GEM object.
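A sketch of this name-sharing mechanism, using the generic DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls (the helper names here are illustrative):

```c
/* Sketch of sharing a GEM object by GEM name: process A publishes a global
 * name with FLINK; process B resolves that name back into a local handle
 * with GEM_OPEN. */
#include <xf86drm.h>   /* drmIoctl(), struct drm_gem_flink, struct drm_gem_open */

/* Process A: obtain a global GEM name for an existing local handle. */
int gem_name_from_handle(int fd, unsigned int handle, unsigned int *name)
{
    struct drm_gem_flink flink = { .handle = handle };
    if (drmIoctl(fd, DRM_IOCTL_GEM_FLINK, &flink) != 0)
        return -1;
    *name = flink.name;   /* a 32-bit integer, passed over any IPC channel */
    return 0;
}

/* Process B: resolve the received GEM name into its own local handle. */
int gem_handle_from_name(int fd, unsigned int name, unsigned int *handle)
{
    struct drm_gem_open op = { .name = name };
    if (drmIoctl(fd, DRM_IOCTL_GEM_OPEN, &op) != 0)
        return -1;
    *handle = op.handle;
    return 0;
}
```

The weakness described in the next paragraph is visible here: the name is just a 32-bit integer in a namespace global to the device, so nothing stops another authenticated process from probing for it.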
Unfortunately, the use of GEM names to share buffers is not secure.[12]: 16 [17][18] A malicious third-party process accessing the same DRM device can try to guess the GEM name of a buffer shared by two other processes simply by probing 32-bit integers.[19][18] Once a GEM name is found, its contents can be accessed and modified, violating the confidentiality and integrity of the information in the buffer. This drawback was later overcome by the introduction of DMA-BUF support into DRM, as DMA-BUF represents buffers in user space as file descriptors, which may be shared securely.
Another important task of any video-memory management system, besides managing the video-memory space, is handling memory synchronization between the GPU and the CPU. Current memory architectures are very complex and usually involve various levels of caches for the system memory, and sometimes for the video memory too. Therefore, video-memory managers should also handle cache coherence to ensure the data shared between CPU and GPU is consistent.[20] This means that video-memory management internals are often highly dependent on hardware details of the GPU and memory architecture, and thus driver-specific.[21]
GEM was initially developed by Intel engineers to provide a video-memory manager for its i915 driver.[20] The Intel GMA 9xx family are integrated GPUs with a Uniform Memory Architecture (UMA), where the GPU and CPU share the physical memory and there is no dedicated VRAM.[22] GEM defines "memory domains" for memory synchronization, and while these memory domains are GPU-independent,[6] they are specifically designed with a UMA memory architecture in mind, making them less suitable for other memory architectures such as those with a separate VRAM. For this reason, other DRM drivers have decided to expose the GEM API to user-space programs while internally implementing a different memory manager better suited to their particular hardware and memory architecture.[23]
The GEM API also provides ioctls for control of the execution flow (command buffers), but they are Intel-specific, to be used with Intel i915 and later GPUs.[6] No other DRM driver has attempted to implement any part of the GEM API beyond the memory-management-specific ioctls.
Translation Table Maps
Translation Table Maps (TTM) is the name of the generic memory manager for GPUs that was developed before GEM.[5][14] It was specifically designed to manage the different types of memory that a GPU might access, including dedicated video RAM (commonly installed on the video card) and system memory accessible through an I/O memory management unit called the Graphics Address Remapping Table (GART).[5] TTM should also handle the portions of the video RAM that are not directly addressable by the CPU, and do so with the best possible performance, considering that user-space graphics applications typically work with large amounts of video data. Another important matter was maintaining consistency between the different memories and caches involved.
The central concept of TTM is the "buffer object", a region of video memory that at some point must be addressable by the GPU.[5] When a user-space graphics application wants access to a certain buffer object (usually to fill it with content), TTM may require relocating it to a type of memory addressable by the CPU. Further relocations—or GART mapping operations—can happen when the GPU needs access to a buffer object that isn't yet in the GPU's address space. Each of these relocation operations must handle any related data and cache-coherency issues.[5]
Another important TTM concept is fences. Fences are essentially a mechanism to manage concurrency between the CPU and the GPU.[24] A fence tracks when a buffer object is no longer used by the GPU, generally to notify any user-space process with access to it.[5]
The fact that TTM tried to manage all kinds of memory architectures, including those with and without a dedicated VRAM, in a suitable way, and to provide every conceivable feature in a memory manager for use with any type of hardware, led to an overly complex solution with an API far larger than needed.[24][14] Some DRM developers thought that it wouldn't fit well with any specific driver, especially the API. When GEM emerged as a simpler memory manager, its API was preferred over the TTM one. But some driver developers considered that the approach taken by TTM was more suitable for discrete video cards with dedicated video memory and IOMMUs, so they decided to use TTM internally, while exposing their buffer objects as GEM objects and thus supporting the GEM API.[23] Examples of current drivers using TTM as an internal memory manager while providing a GEM API are the radeon driver for AMD video cards and the nouveau driver for NVIDIA video cards.
DMA Buffer Sharing and PRIME
The DMA Buffer Sharing API (often abbreviated as DMA-BUF) is a Linux kernel internal API designed to provide a generic mechanism to share DMA buffers across multiple devices, possibly managed by different types of device drivers.[25][26] For example, a Video4Linux device and a graphics adapter device could share buffers through DMA-BUF to achieve zero-copy of the data of a video stream produced by the former and consumed by the latter. Any Linux device driver can implement this API as an exporter, as a user (consumer), or both.
This feature was first exploited in DRM to implement PRIME, a solution for GPU offloading that uses DMA-BUF to share the resulting framebuffers between the DRM drivers of the discrete and the integrated GPU.[27]: 13 An important feature of DMA-BUF is that a shared buffer is presented to user space as a file descriptor.[14][12]: 17 For the development of PRIME two new ioctls were added to the DRM API, one to convert a local GEM handle to a DMA-BUF file descriptor and another for the exact opposite operation.
These two new ioctls were later reused as a way to fix the inherent insecurity of GEM buffer sharing.[12]: 17 Unlike GEM names, file descriptors cannot be guessed (they are not a global namespace), and Unix operating systems provide a safe way to pass them through a Unix domain socket using the SCM_RIGHTS semantics.[14][28]: 11 A process that wants to share a GEM object with another process can convert its local GEM handle to a DMA-BUF file descriptor and pass it to the recipient, which in turn can get its own GEM handle from the received file descriptor.[12]: 16 This method is used by DRI3 to share buffers between the client and the X Server[29] and also by Wayland.
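A sketch of the two conversions using the libdrm wrappers (the helper names are illustrative; the actual transfer of the file descriptor over a Unix domain socket with SCM_RIGHTS is omitted for brevity):

```c
/* Sketch of PRIME/DMA-BUF buffer sharing: export a GEM handle as a file
 * descriptor on one side, import it back into a GEM handle on the other.
 * The fd itself travels between processes via SCM_RIGHTS (omitted here). */
#include <stdint.h>
#include <xf86drm.h>

/* Exporting side: GEM handle -> DMA-BUF file descriptor. */
int export_gem(int drm_fd, uint32_t gem_handle, int *dmabuf_fd)
{
    /* DRM_CLOEXEC prevents the fd from leaking across exec(). */
    return drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, dmabuf_fd);
}

/* Importing side: DMA-BUF file descriptor -> local GEM handle. */
int import_gem(int drm_fd, int dmabuf_fd, uint32_t *gem_handle)
{
    return drmPrimeFDToHandle(drm_fd, dmabuf_fd, gem_handle);
}
```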
Kernel Mode Setting
In order to work properly, a video card or graphics adapter must set a mode—a combination of screen resolution, color depth and refresh rate—that is within the range of values supported by both the card and the attached display screen. This operation is called mode-setting,[30] and it usually requires raw access to the graphics hardware, i.e. the ability to write to certain registers of the video card's display controller.[31][32] A mode-setting operation must be performed before starting to use the framebuffer, and also whenever an application or the user requires a mode change.
In the early days, user-space programs that wanted to use the graphical framebuffer were also responsible for providing the mode-setting operations.[3] Thus, they needed to run with privileged access to the video hardware. In Unix-type operating systems, the X Server was the most prominent example, and its mode-setting implementation lived in the DDX driver for each specific type of video card.[33] This approach, later referred to as User space Mode-Setting (UMS),[34][35] poses several issues.[36][30] It not only breaks the isolation that operating systems should provide between programs and hardware, raising both stability and security concerns, but also can leave the graphics hardware in an inconsistent state if two or more user-space programs try to do mode-setting at the same time. To avoid these conflicts, the X Server became in practice the only user-space program that performed mode-setting operations; the remaining user-space programs relied on the X Server to set the appropriate mode and to handle any other operation involving mode-setting. Initially mode-setting was performed exclusively during the X Server startup process, but later the X Server gained the ability to do it while running.[37] The XFree86-VidModeExtension was introduced in XFree86 3.1.2 to let any X client request modeline (resolution) changes to the X Server.[38][39] The VidMode extension was later superseded by the more generic XRandR extension.
However, this was not the only code doing mode-setting in a Linux system. During the system boot process, the Linux kernel must set a minimal text mode for the virtual console (based on the standard modes defined by VESA BIOS extensions).[40] The Linux kernel framebuffer driver also contained mode-setting code to configure framebuffer devices.[2] To avoid mode-setting conflicts, the XFree86 Server—and later the X.Org Server—handled the case when the user switched from the graphical environment to a text virtual console by saving its mode-setting state and restoring it when the user switched back to X.[41] This process caused an annoying flicker in the transition, and could also fail, leading to a corrupted or unusable output display.[42]
The user-space mode-setting approach also caused other issues:[43][42]
- The suspend/resume process has to rely on user-space tools to restore the previous mode. A single failure or crash of one of these programs could leave the system without a working display due to a modeset misconfiguration, and therefore unusable.
- It was also impossible for the kernel to show error or debug messages when the screen was in a graphics mode—for example when X was running—since the only modes the kernel knew about were the VESA BIOS standard text modes.
- A more pressing issue was the proliferation of graphical applications bypassing the X Server and the emergence of other graphics stack alternatives to X, extending the duplication of mode-setting code across the system even further.
To address these problems, the mode-setting code was moved to a single place inside the kernel, specifically to the existing DRM module.[36][37][44][42][43] Then every process—including the X Server—would be able to command the kernel to perform mode-setting operations, and the kernel would ensure that concurrent operations don't result in an inconsistent state. The new kernel API and code added to the DRM module to perform these mode-setting operations was called Kernel Mode-Setting (KMS).[30]
Kernel Mode-Setting provides several benefits. The most immediate is of course the removal of duplicate mode-setting code from both the kernel (Linux console, fbdev) and user space (X Server DDX drivers). KMS also makes it easier to write alternative graphics systems, which now don't need to implement their own mode-setting code.[42][43] By providing centralized mode management, KMS solves the flickering issues when changing between console and X, and also between different instances of X (fast user switching).[41][44] Since it is available in the kernel, it can also be used at the beginning of the boot process, avoiding flicker due to mode changes in those early stages.
The fact that KMS is part of the kernel allows it to use resources only available at kernel space, such as interrupts.[45] For example, mode recovery after a suspend/resume process is much simpler when managed by the kernel itself, and incidentally improves security (no more user-space tools requiring root permissions). The kernel also makes hotplugging of new display devices easy, solving a longstanding problem.[45] Mode-setting is also closely related to memory management—since framebuffers are basically memory buffers—so a tight integration with the graphics memory manager is highly recommended. That's the main reason why the kernel mode-setting code was incorporated into DRM rather than as a separate subsystem.[44]
To avoid breaking backwards compatibility of the DRM API, Kernel Mode-Setting is provided as an additional driver feature of certain DRM drivers.[46] Any DRM driver can choose to provide the DRIVER_MODESET flag when it registers with the DRM core to indicate that it supports the KMS API.[8] Drivers that implement Kernel Mode-Setting are often called KMS drivers, to differentiate them from the legacy DRM drivers without KMS.
KMS has been adopted to such an extent that certain drivers which lack 3D acceleration (or for which the hardware vendor doesn't want to expose or implement it) nevertheless implement the KMS API without the rest of the DRM API, allowing display servers (such as Wayland compositors) to run with ease.[47][48]
KMS device model
KMS models and manages the output devices as a series of abstract hardware blocks commonly found in the display output pipeline of a display controller. These blocks, which the sketch after this list enumerates programmatically, are:[49]
- CRTCs: each CRTC (from CRT Controller[50][33]) represents a scanout engine of the display controller, pointing to a scanout buffer (framebuffer).[49] The purpose of a CRTC is to read the pixel data currently in the scanout buffer and generate from it the video mode timing signal with the help of a PLL circuit.[51] The number of CRTCs available determines how many independent output devices the hardware can handle at the same time, so in order to use multi-head configurations at least one CRTC per display device is required.[49] Two—or more—CRTCs can also work in clone mode if they scan out from the same framebuffer to send the same image to several output devices.[51][50]
- Connectors: a connector represents where the display controller sends the video signal from a scanout operation to be displayed. Usually, the KMS concept of a connector corresponds to a physical connector (VGA, DVI, FPD-Link, HDMI, DisplayPort, S-Video, ...) in the hardware where an output device (monitor, laptop panel, ...) is permanently attached or can be temporarily attached. Information related to the currently attached output device—such as connection status, EDID data, DPMS status or supported video modes—is also stored within the connector.[49]
- Encoders: the display controller must encode the video mode timing signal from the CRTC in a format suitable for the intended connector.[49] An encoder represents the hardware block able to do one of these encodings. Examples of encodings—for digital outputs—are TMDS and LVDS; for analog outputs such as VGA and TV out, specific DAC blocks are generally used. A connector can only receive the signal from one encoder at a time,[49] and each type of connector only supports some encodings. There might also be additional physical restrictions by which not every CRTC is connected to every available encoder, limiting the possible CRTC-encoder-connector combinations.
- Planes: a plane is not a hardware block but a memory object containing a buffer from which a scanout engine (a CRTC) is fed. The plane that holds the framebuffer is called the primary plane, and each CRTC must have one associated with it,[49] since it is what the CRTC uses to determine the video mode: display resolution (width and height), pixel size, pixel format, refresh rate, etc. A CRTC might also have cursor planes associated with it if the display controller supports hardware cursor overlays, or secondary planes if it is able to scan out from additional hardware overlays and compose or blend "on the fly" the final image sent to the output device.[33]
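A minimal sketch that enumerates these KMS objects through libdrm, assuming a KMS-capable primary node at /dev/dri/card0:

```c
/* Sketch: enumerating the KMS objects described above via libdrm. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0)
        return 1;

    drmModeResPtr res = drmModeGetResources(fd);
    if (!res) {
        fprintf(stderr, "not a KMS-capable node?\n");
        return 1;
    }
    printf("%d CRTCs, %d encoders, %d connectors\n",
           res->count_crtcs, res->count_encoders, res->count_connectors);

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnectorPtr conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        printf("connector %u: %s, %d modes\n", conn->connector_id,
               conn->connection == DRM_MODE_CONNECTED ? "connected"
                                                      : "disconnected",
               conn->count_modes);
        drmModeFreeConnector(conn);
    }

    drmModeFreeResources(res);
    close(fd);
    return 0;
}
```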
Atomic Display
In recent years there has been an ongoing effort to bring atomicity to some regular operations pertaining to the KMS API, specifically the mode-setting and page-flipping operations.[33][52] This enhanced KMS API is what is called Atomic Display (formerly known as atomic mode-setting and atomic or nuclear pageflip).
The purpose of atomic mode-setting is to ensure a correct change of mode in complex configurations with multiple restrictions, by avoiding intermediate steps which could lead to an inconsistent or invalid video state;[52] it also avoids risky video states when a failed mode-setting process has to be undone ("rollback").[53]: 9 Atomic mode-setting makes it possible to know beforehand whether a certain specific mode configuration is appropriate, by providing mode-testing capabilities.[52] When an atomic mode is tested and its validity confirmed, it can be applied with a single indivisible (atomic) commit operation. Both test and commit operations are provided by the same new ioctl with different flags.
Atomic page flip, on the other hand, allows updating multiple planes on the same output (for instance the primary plane, the cursor plane and maybe some overlays or secondary planes), all synchronized within the same VBLANK interval, ensuring a proper display without tearing.[53]: 9,14 [52] This requirement is especially relevant to mobile and embedded display controllers, which tend to use multiple planes/overlays to save power.
The new atomic API is built upon the old KMS API. It uses the same model and objects (CRTCs, encoders, connectors, planes, ...), but with an increasing number of object properties that can be modified.[52] The atomic procedure is based on changing the relevant properties to build the state to be tested or committed. The properties to modify depend on whether the operation is a mode-setting (mostly properties of CRTCs, encoders and connectors) or a page flip (usually plane properties). The ioctl is the same in both cases, the difference being the list of properties passed with each one.[54]
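A sketch of the test-then-commit pattern using the libdrm atomic wrappers; the object and property IDs are placeholders that a real client would first discover, for example via drmModeObjectGetProperties():

```c
/* Sketch of the atomic API's test-then-commit pattern. The plane and
 * property IDs passed in are placeholders looked up beforehand. */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int atomic_flip(int fd, uint32_t plane_id, uint32_t prop_fb_id, uint32_t fb_id)
{
    /* Atomic ioctls must be enabled explicitly per file descriptor. */
    if (drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1) != 0)
        return -1;

    drmModeAtomicReqPtr req = drmModeAtomicAlloc();
    if (!req)
        return -1;

    /* Build the desired state as a list of property changes... */
    drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);

    /* ...test it without touching the hardware... */
    int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
    if (ret == 0)
        /* ...and only then apply it as one indivisible commit. */
        ret = drmModeAtomicCommit(fd, req, 0, NULL);

    drmModeAtomicFree(req);
    return ret;
}
```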
Render nodes
In the original DRM API, the DRM device /dev/dri/cardX is used for both privileged (modesetting, other display control) and non-privileged (rendering, GPGPU compute) operations.[9] For security reasons, opening the associated DRM device file requires special privileges "equivalent to root-privileges".[55] This leads to an architecture where only a few trusted user-space programs (the X server, a graphical compositor, ...) have full access to the DRM API, including the privileged parts like the modeset API. Other user-space applications that want to render or make GPGPU computations must be granted access by the owner of the DRM device ("DRM-Master") through the use of a special authentication interface.[56] The authenticated applications can then render or make computations using a restricted version of the DRM API without privileged operations. This design imposes a severe constraint: there must always be a running graphics server (the X Server, a Wayland compositor, ...) acting as DRM-Master of a DRM device so that other user-space programs can be granted use of the device, even in cases not involving any graphics display, such as GPGPU computations.[55][56]
The "render nodes" concept tries to solve these scenarios by splitting the DRM user space API into two interfaces – one privileged and one non-privileged – and using separate device files (or "nodes") for each one.[9] For every GPU found, its corresponding DRM driver—if it supports the render nodes feature—creates a device file /dev/dri/renderDX, called the render node, in addition to the primary node /dev/dri/cardX.[56][9] Clients that use a direct rendering model and applications that want to take advantage of the computing facilities of a GPU, can do it without requiring additional privileges by simply opening any existing render node and dispatching GPU operations using the limited subset of the DRM API supported by those nodes—provided they have file system permissions to open the device file. Display servers, compositors and any other program that requires the modeset API or any other privileged operation must open the standard primary node that grants access to the full DRM API and use it as usual. Render nodes explicitly disallow the GEM flink operation to prevent buffer sharing using insecure GEM global names; only PRIME (DMA-BUF) file descriptors can be used to share buffers with another client, including the graphics server.[9][56]
Hardware support
The Linux DRM subsystem includes free and open-source drivers to support hardware from the three main manufacturers of GPUs for desktop computers (AMD, NVIDIA and Intel), as well as from a growing number of mobile GPU and system-on-a-chip (SoC) integrators. The quality of each driver varies greatly, depending on the degree of cooperation from the manufacturer and other matters.
| Driver | Since kernel | Supported hardware | Vendor support | Status/Notes |
|---|---|---|---|---|
| radeon | 2.4.1 | AMD (formerly ATi) Radeon GPU series with the architectures TeraScale and GCN 1st & 2nd gen. Including models from R100/200/300/400, Radeon X1000, HD 2000/3000/4000/5000/6000/7000/8000, R5/R7/R9 200/300 series and Kaveri APUs. | Yes | Active |
| i915 | 2.6.9 | Intel GMA 830M, 845G, 852GM, 855GM, 865G, 915G, 945G, 965G, G35, G41, G43, G45 chipsets. Intel HD and Iris Graphics (HD Graphics 2000/3000/2500/4000/4200/4400/4600/P4600/P4700/5000, Iris Graphics 5100, Iris Pro Graphics 5200) integrated GPUs. | Yes | Active |
| nouveau | 2.6.33[58][59] | NVIDIA Tesla, Fermi, Kepler, Maxwell based GeForce GPUs, Tegra K1, X1 SoC | Partial | Active |
| exynos | 3.2[60] | Samsung ARM-based Exynos SoCs | ||
| vmwgfx | 3.2 (from staging)[61] | Virtual GPU for the VMware SVGA2 | virtual driver | |
| gma500 | 3.3 (from staging)[62][63] | Intel GMA 500 and other Imagination Technologies (PowerVR) based graphics GPUs | experimental 2D KMS-only driver | |
| ast | 3.5[64] | ASpeed Technologies 2000 series | experimental | |
| mgag200 | 3.5[65] | Matrox MGA-G200 server display engines | KMS-only | |
| shmobile | 3.7[66] | Renesas SH Mobile | ||
| tegra | 3.8[67] | Nvidia Tegra20, Tegra30 SoCs | Yes | Active |
| omapdrm | 3.9[68] | Texas Instruments OMAP5 SoCs | ||
| rcar-du | 3.11[69] | Renesas R-Car SoC Display Units | ||
| msm | 3.12[70][71] | Qualcomm's Adreno A2xx/A3xx/A4xx GPU families (Snapdragon SOCs)[72] | ||
| armada | 3.13[73][74] | Marvell Armada 510 SoCs | ||
| bochs | 3.14[75] | Virtual VGA cards using the Bochs dispi vga interface (such as QEMU stdvga) | virtual driver | |
| sti | 3.17[76][77] | STMicroelectronics SoC stiH41x series | ||
| imx | 3.19 (from staging)[78][79] | Freescale i.MX SoCs | ||
| rockchip | 3.19[78][80] | Rockchip SoC-based GPUs | KMS-only | |
| amdgpu[57] | 4.2[81][82] | AMD Radeon GPU series with the architectures GCN 3rd & 4th gen. Including models from Radeon Rx 200/300/400/500[83] series and Carrizo and Bristol & Stoney Ridge APUs. | Yes | Active |
| virtio | 4.2[84] | Virtual GPU driver for QEMU based virtual machine managers (like KVM or Xen) | virtual driver | |
| vc4 | 4.4[85][86][87] | Raspberry Pi's Broadcom BCM2835 and BCM2836 SoCs (VideoCore IV GPU) | ||
| etnaviv | 4.5[88][89][90] | Vivante GPU cores found in several SoCs such as Marvell ARMADA and Freescale i.MX6 Series | ||
| sun4i | 4.7[91][92] | Allwinner SoCs (ARM Mali-400 GPU) | ||
| kirin | 4.7[93][92] | HiSilicon Kirin hi6220 SoC (ARM Mali 450-MP4 GPU) | ||
| mediatek | 4.7[94][92] | MediaTek MT8173 SoC (Imagination PowerVR GX6250 GPU) | ||
| hibmc | 4.10[95] | HiSilicon hi1710 Huawei iBMC SoC (Silicon Image SM750 GPU core[96]) | KMS-only | |
| vkms | 4.19[97][98] | Software-only model of a KMS driver that is useful for testing and for running X (or similar) on headless machines. | virtual driver, experimental | |
| lima | 5.2[99][100] | ARM Mali 4xx GPUs | ||
| panfrost | 5.2[101][100] | ARM Mali Txxx (Midgard) and Gxx (Bifrost) GPUs | ||
| vboxvideo | 5.2 (from staging)[102][100] | Virtual GPU driver for VirtualBox (VBoxVGA GPU) | virtual driver | |
| hyperv_drm | 5.14[103][104] | Virtual GPU driver for Hyper-V synthetic video device | virtual driver | |
| simpledrm | 5.14[105][106] | GPU Driver for firmware-provided framebuffers (UEFI GOP, VESA BIOS Extensions, embedded systems) | KMS-only | |
| ofdrm | 6.2[107][108] | GPU Driver for Open Firmware framebuffers | KMS-only | |
| loongson | 6.6[109][110] | GPU Driver for Loongson GPUs and SoCs | ||
| powervr | 6.8[111][112] | Imagination Technologies PowerVR (Series 6 and later) & IMG Graphics GPUs | ||
| xe | 6.8[113][114] | Intel Xe series GPUs (Gen12 integrated GPUs, Intel Arc discrete GPUs) | Yes | experimental |
| panthor | 6.10[115][116] | ARM Mali Gxxx (Valhall) GPUs | ||
| efidrm | 6.16[117][118] | GPU Driver for EFI framebuffers (UEFI GOP) | KMS-only | |
| vesadrm | 6.16[119][118] | GPU Driver for VESA framebuffers (VESA BIOS Extensions) | KMS-only |
There are also a number of drivers for old, obsolete hardware, detailed in the next table for historical purposes.
| Driver | Since kernel | Supported hardware | Status/Notes |
|---|---|---|---|
| gamma | 2.3.18 | 3Dlabs GLINT GMX 2000 | Removed since 2.6.14[120] |
| ffb | 2.4 | Creator/Creator3D (used by Sun Microsystems Ultra workstations) | Removed since 2.6.21[121] |
| tdfx | 2.4 | 3dfx Banshee/Voodoo3+ | Removed since 6.3[122] |
| mga | 2.4 | Matrox G200/G400/G450 | Removed since 6.3[123] |
| r128 | 2.4 | ATI Rage 128 | Removed since 6.3[124] |
| i810 | 2.4 | Intel i810 | Removed since 6.3[125] |
| sis | 2.4.17 | SiS 300/630/540 | Removed since 6.3[126] |
| i830 | 2.4.20 | Intel 830M/845G/852GM/855GM/865G | Removed since 2.6.39[127] (replaced by i915 driver) |
| via | 2.6.13[128] | VIA Unichrome / Unichrome Pro | Removed since 6.3[129] |
| savage | 2.6.14[130] | S3 Graphics Savage 3D/MX/IX/4/SuperSavage/Pro/Twister | Removed since 6.3[131] |
Development
The Direct Rendering Manager is developed within the Linux kernel, and its source code resides in the /drivers/gpu/drm directory of the Linux source code. The subsystem maintainer is Dave Airlie, with other maintainers taking care of specific drivers.[132] As is usual in Linux kernel development, DRM submaintainers and contributors send their patches with new features and bug fixes to the main DRM maintainer, who integrates them into his own Linux repository. The DRM maintainer in turn submits all of the patches that are ready to be mainlined to Linus Torvalds whenever a new Linux version is going to be released. Torvalds, as top maintainer of the whole kernel, has the last word on whether a patch is suitable for inclusion in the kernel.
For historical reasons, the source code of the libdrm library is maintained under the umbrella of the Mesa project.[133]
History
In 1999, while developing DRI for XFree86, Precision Insight created the first version of DRM for the 3dfx video cards, as a Linux kernel patch included within the Mesa source code.[134] Later that year, the DRM code was mainlined in Linux kernel 2.3.18 under the /drivers/char/drm/ directory for character devices.[135] During the following years the number of supported video cards grew. When Linux 2.4.0 was released in January 2001 there was already support for Creative Labs GMX 2000, Intel i810, Matrox G200/G400 and ATI Rage 128, in addition to 3dfx Voodoo3 cards,[136] and that list expanded during the 2.4.x series, with drivers for ATI Radeon cards, some SiS video cards, and Intel 830M and subsequent integrated GPUs.
The split of DRM into two components, DRM core and DRM driver, called the DRM core/personality split, was done during the second half of 2004,[11][137] and merged into kernel version 2.6.11.[138] This split allowed multiple DRM drivers for multiple devices to work simultaneously, opening the way to multi-GPU support.
The idea of putting all the video mode setting code in one place inside the kernel had been acknowledged for years,[139][140] but the graphics card manufacturers had argued that the only way to do the mode-setting was to use the routines provided by themselves and contained in the Video BIOS of each graphics card. Such code had to be executed using x86 real mode, which prevented it from being invoked by a kernel running in protected mode.[44] The situation changed when Luc Verhaegen and other developers found a way to do the mode-setting natively instead of BIOS-based,[141][44] showing that it was possible to do it using normal kernel code and laying the groundwork for what would become Kernel Mode Setting. In May 2007 Jesse Barnes (Intel) published the first proposal for a drm-modesetting API and a working native implementation of mode-setting for Intel GPUs within the i915 DRM driver.[42] In December 2007 Jerome Glisse started to add the native mode-setting code for ATI cards to the radeon DRM driver.[142][143] Work on both the API and drivers continued during 2008, but got delayed by the necessity of a memory manager also in kernel space to handle the framebuffers.[144]
In October 2008 the Linux kernel 2.6.27 brought a major source code reorganization, prior to some significant upcoming changes. The DRM source code tree was moved to its own source directory /drivers/gpu/drm/ and the different drivers were moved into their own subdirectories. Headers were also moved into a new /include/drm directory.[145]
The increasing complexity of video memory management led to several approaches to solving this issue. The first attempt was the Translation Table Maps (TTM) memory manager, developed by Thomas Hellstrom (Tungsten Graphics) in collaboration with Emma Anholt (Intel) and Dave Airlie (Red Hat).[5] TTM was proposed for inclusion into mainline kernel 2.6.25 in November 2007,[5] and again in May 2008, but was ditched in favor of a new approach called Graphics Execution Manager (GEM).[24] GEM was first developed by Keith Packard and Emma Anholt from Intel as a simpler solution for memory management for their i915 driver.[6] GEM was well received and merged into the Linux kernel version 2.6.28 released in December 2008.[146] Meanwhile, TTM had to wait until September 2009 to be finally merged into Linux 2.6.31 as a requirement of the new Radeon KMS DRM driver.[147]
With memory management in place to handle buffer objects, DRM developers could finally add to the kernel the already finished API and code to do mode setting. This expanded API is what is called Kernel Mode-setting (KMS) and the drivers which implement it are often referred to as KMS drivers. In March 2009, KMS was merged into the Linux kernel version 2.6.29,[30][148] along with KMS support for the i915 driver.[149] The KMS API has been exposed to user space programs since libdrm 2.4.3.[150] The userspace X.Org DDX driver for Intel graphics cards was also the first to use the new GEM and KMS APIs.[151] KMS support for the radeon DRM driver was added to Linux 2.6.31 release of September 2009.[152][153][154] The new radeon KMS driver used the TTM memory manager but exposed GEM-compatible interfaces and ioctls instead of TTM ones.[23]
Since 2006 the nouveau project had been developing a free software DRM driver for NVIDIA GPUs outside of the official Linux kernel. In 2010 the nouveau source code was merged into Linux 2.6.33 as an experimental driver.[58][59] At the time of merging, the driver had been already converted to KMS, and behind the GEM API it used TTM as its memory manager.[155]
The new KMS API—including the GEM API—was a big milestone in the development of DRM, but it didn't stop the API from being enhanced in the following years. KMS gained support for page flips in conjunction with asynchronous VBLANK notifications in Linux 2.6.33[156][157]—only for the i915 driver; radeon and nouveau added it later, during the Linux 2.6.38 release.[158] The new page flip interface was added to libdrm 2.4.17.[159] In early 2011, during the Linux 2.6.39 release cycle, the so-called dumb buffers—a hardware-independent non-accelerated way to handle simple buffers suitable for use as framebuffers—were added to the KMS API.[160][161] The goal was to reduce the complexity of applications such as Plymouth that don't need to use special accelerated operations provided by driver-specific ioctls.[162] The feature was exposed by libdrm from version 2.4.25 onwards.[163] Later that year the API also gained a new main type of object, called planes. Planes were developed to represent hardware overlays supported by the scanout engine.[164][165] Plane support was merged into Linux 3.3[166] and libdrm 2.4.30. Another concept added to the API—during the Linux 3.5[167] and libdrm 2.4.36[168] releases—was generic object properties, a method to add generic values to any KMS object. Properties are especially useful to set special behaviour or features on objects such as CRTCs and planes.
An early proof of concept to provide GPU offloading between DRM drivers was developed by Dave Airlie in 2010.[7][169] Since Airlie was trying to mimic the NVIDIA Optimus technology, he decided to name it "PRIME".[7] Airlie resumed his work on PRIME in late 2011, but based on the new DMA-BUF buffer sharing mechanism introduced by Linux kernel 3.3.[170] The basic DMA-BUF PRIME infrastructure was finished in March 2012[171] and merged into the Linux 3.4 release,[172][173][174] as well as into libdrm 2.4.34.[175] Later during the Linux 3.5 release, several DRM drivers implemented PRIME support, including i915 for Intel cards, radeon for AMD cards and nouveau for NVIDIA cards.[176][177]
In recent years, the DRM API has incrementally expanded with new and improved features. In 2013, as part of GSoC, David Herrmann developed the multiple render nodes feature.[55] His code was added to the Linux kernel version 3.12 as an experimental feature[178][179] supported by the i915,[180] radeon[181] and nouveau[182] drivers, and enabled by default since Linux 3.17.[77] In 2014 Matt Roper (Intel) developed the universal planes (or unified planes) concept, by which framebuffers (primary planes), overlays (secondary planes) and cursors (cursor planes) are all treated as a single type of object with a unified API.[183] Universal planes support provides a more consistent DRM API with fewer, more generic ioctls.[33] In order to keep the API backwards compatible, the feature is exposed by the DRM core as an additional capability that a DRM driver can provide. Universal plane support debuted in Linux 3.15[184] and libdrm 2.4.55.[185] Several drivers, such as the Intel i915,[186] have already implemented it.
The most recent DRM API enhancement is the atomic mode-setting API, which brings atomicity to the mode-setting and page-flipping operations on a DRM device. The idea of an atomic API for mode-setting was first proposed in early 2012.[187] Ville Syrjälä (Intel) took over the task of designing and implementing such an atomic API.[188] Based on his work, Rob Clark (Texas Instruments) took a similar approach aiming to implement atomic page flips.[189] Later in 2013 both proposed features were reunited in a single one using a single ioctl for both tasks.[190] Since universal plane support was a requirement, the feature had to wait for it to be merged in mid-2014.[186] During the second half of 2014 the atomic code was greatly enhanced by Daniel Vetter (Intel) and other DRM developers[191]: 18 in order to facilitate the transition of the existing KMS drivers to the new atomic framework.[192] All of this work was finally merged into the Linux 3.19[193] and Linux 4.0[194][195][196] releases, and enabled by default since Linux 4.2.[197] libdrm has exposed the new atomic API since version 2.4.62.[198] Multiple drivers have already been converted to the new atomic API.[199] By 2018 ten new DRM drivers based on this new atomic model had been added to the Linux kernel.[200]
Adoption
The Direct Rendering Manager kernel subsystem was initially developed to be used with the new Direct Rendering Infrastructure of the XFree86 4.0 display server, later inherited by its successor, the X.Org Server. Therefore, the main users of DRM were DRI clients that link to the hardware-accelerated OpenGL implementation that lives in the Mesa 3D library, as well as the X Server itself. Nowadays DRM is also used by several Wayland compositors, including the Weston reference compositor. kmscon is a virtual console implementation that runs in user space using DRM KMS facilities.[201]
In 2015, version 358.09 (beta) of the proprietary Nvidia GeForce driver received support for the DRM mode-setting interface implemented as a new kernel blob called nvidia-modeset.ko. This new driver component works in conjunction with the nvidia.ko kernel module to program the display engine (i.e. display controller) of the GPU.[202]
References
[edit]- ^ "Linux kernel/drivers/gpu/drm/README.drm". kernel.org. Archived from the original on 2014-02-26. Retrieved 2014-02-26.
- ^ a b Uytterhoeven, Geert. "The Frame Buffer Device". Kernel.org. Retrieved 28 January 2015.
- ^ a b c White, Thomas. "How DRI and DRM Work". Retrieved 22 July 2014.
- ^ Faith, Rickard E. (11 May 1999). "The Direct Rendering Manager: Kernel Support for the Direct Rendering Infrastructure". Archived from the original on 24 May 2016. Retrieved 12 May 2016.
- ^ a b c d e f g h Corbet, Jonathan (6 November 2007). "Memory management for graphics processors". LWN.net. Retrieved 23 July 2014.
- ^ a b c d e f g Packard, Keith; Anholt, Eric (13 May 2008). "GEM - the Graphics Execution Manager". dri-devel mailing list. Retrieved 23 July 2014.
- ^ a b c Airlie, Dave (12 March 2010). "GPU offloading - PRIME - proof of concept". Archived from the original on 10 February 2015. Retrieved 10 February 2015.
- ^ a b c d e Kitching, Simon. "DRM and KMS kernel modules". Retrieved 13 May 2016.
- ^ a b c d e Herrmann, David (1 September 2013). "Splitting DRM and KMS device nodes". Retrieved 23 July 2014.
- ^ "README.rst - mesa/drm - Direct Rendering Manager headers and kernel modules". 2020-03-21. Archived from the original on 2020-03-21.
- ^ a b Airlie, Dave (4 September 2004). "New proposed DRM interface design". dri-devel (Mailing list).
- ^ a b c d e f g Peres, Martin; Ravier, Timothée (2 February 2013). "DRI-next/DRM2: A walkthrough the Linux Graphics stack and its security" (PDF). Retrieved 13 May 2016.
- ^ Høgsberg, Kristian (4 September 2008). "The DRI2 Extension - Version 2.0". X.Org. Retrieved 23 May 2016.
- ^ a b c d e f Barnes, Jesse; Pinchart, Laurent; Vetter, Daniel; Wunner, Lukas. "Linux GPU Driver Developer's Guide - Memory management". Retrieved 31 August 2016.
- ^ Vetter, Daniel. "i915/GEM Crashcourse by Daniel Vetter". Intel Open Source Technology Center. Retrieved 31 January 2015.
GEM essentially deals with graphics buffer objects (which can contain textures, renderbuffers, shaders, or all kinds of other state objects and data used by the gpu)
- ^ a b c Vetter, Daniel (4 May 2011). "GEM Overview". Retrieved 13 February 2015.
- ^ Packard, Keith (28 September 2012). "Thoughts about DRI.Next". Retrieved 26 May 2016.
GEM flink has lots of issues. The flink names are global, allowing anyone with access to the device to access the flink data contents.
- ^ a b Herrmann, David (2 July 2013). "DRM Security". The 2013 X.Org Developer's Conference (XDC2013) Proceedings. Retrieved 13 February 2015.
gem-flink doesn't provide any private namespaces to applications and servers. Instead, only one global namespace is provided per DRM node. Malicious authenticated applications can attack other clients via brute-force "name-guessing" of gem buffers
- ^ Kerrisk, Michael (25 September 2012). "XDC2012: Graphics stack security". LWN.net. Retrieved 25 November 2015.
- ^ a b Packard, Keith (4 July 2008). "gem update". Retrieved 25 April 2016.
- ^ "drm-memory man page". Ubuntu manuals. Retrieved 29 January 2015.
Many modern high-end GPUs come with their own memory managers. They even include several different caches that need to be synchronized during access. [...] . Therefore, memory management on GPUs is highly driver- and hardware-dependent.
- ^ "Intel Graphics Media Accelerator Developer's Guide". Intel Corporation. Retrieved 24 November 2015.
- ^ a b c Larabel, Michael (26 August 2008). "A GEM-ified TTM Manager For Radeon". Phoronix. Retrieved 24 April 2016.
- ^ a b c Corbet, Jonathan (28 May 2008). "GEM v. TTM". LWN.net. Retrieved 10 February 2015.
- ^ Corbet, Jonathan (11 January 2012). "DMA buffer sharing in 3.3". LWN.net. Retrieved 14 May 2016.
- ^ Clark, Rob; Semwal, Sumit. "DMA Buffer Sharing Framework: An Introduction" (PDF). Retrieved 14 May 2016.
- ^ Peres, Martin (26 September 2014). "The Linux graphics stack, Optimus and the Nouveau driver" (PDF). Retrieved 14 May 2016.
- ^ Pinchart, Laurent (20 February 2013). "Anatomy of an Embedded KMS Driver" (PDF). Retrieved 27 June 2016.
- ^ Edge, Jake (9 October 2013). "DRI3 and Present". LWN.net. Retrieved 28 May 2016.
- ^ a b c d "Linux 2.6.29 - Kernel Modesetting". Linux Kernel Newbies. Retrieved 19 November 2015.
- ^ "VGA Hardware". OSDev.org. Retrieved 23 November 2015.
- ^ Rathmann, B. (15 February 2008). "The state of Nouveau, part I". LWN.net. Retrieved 23 November 2015.
Graphics cards are programmed in numerous ways, but most initialization and mode setting is done via memory-mapped IO. This is just a set of registers accessible to the CPU via its standard memory address space. The registers in this address space are split up into ranges dealing with various features of the graphics card such as mode setup, output control, or clock configuration.
- ^ a b c d e Paalanen, Pekka (5 June 2014). "From pre-history to beyond the global thermonuclear war". Retrieved 29 July 2014.
- ^ "drm-kms man page". Ubuntu manuals. Retrieved 19 November 2015.
- ^ Corbet, Jonathan (13 January 2010). "The end of user-space mode setting?". LWN.net. Retrieved 20 November 2015.
- ^ a b "Mode Setting Design Discussion". X.Org Wiki. Retrieved 19 November 2015.
- ^ a b Corbet, Jonathan (22 January 2007). "LCA: Updates on the X Window System". LWN.net. Retrieved 23 November 2015.
- ^ "XF86VIDMODE manual page". X.Org. Retrieved 23 April 2016.
- ^ "X11R6.1 Release Notes". X.Org. 14 March 1996. Retrieved 23 April 2016.
- ^ Corbet, Jonathan (20 July 2004). "Kernel Summit: Video Drivers". LWN.net. Retrieved 23 November 2015.
- ^ a b "Fedora - Features/KernelModeSetting". Fedora Project. Retrieved 20 November 2015.
Historically, the X server was responsible for saving output state when it started up, and then restoring it when it switched back to text mode. Fast user switching was accomplished with a VT switch, so switching away from the first user's X server would blink once to go to text mode, then immediately blink again to go to the second user's session.
- ^ a b c d e Barnes, Jesse (17 May 2007). "[RFC] enhancing the kernel's graphics subsystem". linux-kernel (Mailing list).
- ^ a b c "DrmModesetting - Enhancing kernel graphics". DRI Wiki. Retrieved 23 November 2015.
- ^ a b c d e Packard, Keith (16 September 2007). "kernel-mode-drivers". Retrieved 30 April 2016.
- ^ a b Packard, Keith (24 April 2000). "Sharpening the Intel Driver Focus". Retrieved 23 May 2016.
A more subtle limitation is that the driver couldn't handle interrupts, so there wasn't any hot-plug monitor support.
- ^ Barnes, Jesse; Pinchart, Laurent; Vetter, Daniel; Wunner, Lukas. "Linux GPU Driver Developer's Guide - Driver Initialization". Retrieved 31 August 2016.
- ^ "q3k (@q3k@hackerspace.pl)". Warsaw Hackerspace Social Club. 2023-01-31. Retrieved 2023-02-13.
DRM/KMS driver fully working now, although still without DMA. Oh, and it's written in Rust, although it's mostly just full of raw unsafe blocks.
- ^ "q3k (@q3k@hackerspace.pl)". Warsaw Hackerspace Social Club. 2023-01-31. Retrieved 2023-02-13.
Cool thing is, since we have a 'normal' DRM/KMS driver (and help from @emersion@hackerspace.pl) we can just do things like... run Wayland! Weston on an iPod Nano 5G.
- ^ a b c d e f g Barnes, Jesse; Pinchart, Laurent; Vetter, Daniel; Wunner, Lukas. "Linux GPU Driver Developer's Guide - KMS Initialization and Cleanup". Retrieved 31 August 2016.
- ^ a b "Video Cards". X.Org Wiki. Retrieved 11 April 2016.
- ^ a b Deucher, Alex (15 April 2010). "Notes about radeon display hardware". Archived from the original on 5 April 2016. Retrieved 8 April 2016.
- ^ a b c d e Vetter, Daniel (5 August 2015). "Atomic mode setting design overview, part 1". LWN.net. Retrieved 7 May 2016.
- ^ a b Reding, Thierry (1 February 2015). "Atomic Mode-Setting" (PDF). FOSDEM Archives. Retrieved 7 May 2016.
- ^ Vetter, Daniel (12 August 2015). "Atomic mode setting design overview, part 2". LWN.net. Retrieved 7 May 2016.
- ^ a b c Herrmann, David (29 May 2013). "DRM Render- and Modeset-Nodes". Retrieved 21 July 2014.
- ^ a b c d Barnes, Jesse; Pinchart, Laurent; Vetter, Daniel; Wunner, Lukas. "Linux GPU Driver Developer's Guide - Render nodes". Retrieved 31 August 2016.
- ^ a b Deucher, Alex (20 April 2015). "Initial amdgpu driver release". dri-devel (Mailing list).
- ^ a b "Linux 2.6.33 - Nouveau, a driver for Nvidia graphic cards". Linux Kernel Newbies. Retrieved 26 April 2016.
- ^ a b "drm/nouveau: Add DRM driver for NVIDIA GPUs". Kernel.org. Retrieved 27 January 2015.
- ^ "DRM: add DRM Driver for Samsung SoC EXYNOS4210". Kernel.org. Retrieved 3 March 2016.
- ^ "vmwgfx: Take the driver out of staging". Kernel.org. Retrieved 3 March 2016.
- ^ "Linux 3.3 - DriverArch - Graphics". Linux Kernel Newbies. Retrieved 3 March 2016.
- ^ Larabel, Michael (10 January 2012). "The Linux 3.3 DRM Pull Is Heavy On Enhancements". Phoronix. Retrieved 3 March 2016.
- ^ "drm: Initial KMS driver for AST (ASpeed Technologies) 2000 series (v2)". Kernel.org. Retrieved 3 March 2016.
- ^ Airlie, Dave (17 May 2012). "mgag200: initial g200se driver (v2)". Retrieved 24 January 2018.
- ^ "drm: Renesas SH Mobile DRM driver". Kernel.org. Retrieved 3 March 2016.
- ^ "drm: Add NVIDIA Tegra20 support". Kernel.org. Retrieved 3 March 2016.
- ^ "drm/omap: move out of staging". Kernel.org. Retrieved 3 March 2016.
- ^ "drm: Renesas R-Car Display Unit DRM driver". Kernel.org. Retrieved 3 March 2016.
- ^ "drm/msm: basic KMS driver for snapdragon". Kernel.org. Retrieved 3 March 2016.
- ^ Larabel, Michael (28 August 2013). "Snapdragon DRM/KMS Driver Merged For Linux 3.12". Phoronix. Retrieved 26 January 2015.
- ^ Edge, Jake (8 April 2015). "An update on the freedreno graphics driver". LWN.net. Retrieved 23 April 2015.
- ^ King, Russell (18 October 2013). "[GIT PULL] Armada DRM support". dri-devel (Mailing list).
- ^ "DRM: Armada: Add Armada DRM driver". Kernel.org. Retrieved 3 March 2016.
- ^ "drm/bochs: new driver". Kernel.org. Retrieved 3 March 2016.
- ^ Larabel, Michael (8 August 2014). "Linux 3.17 DRM Pull Brings New Graphics Driver". Phoronix. Retrieved 3 March 2016.
- ^ a b Corbet, Jonathan (13 August 2014). "3.17 merge window, part 2". LWN.net. Retrieved 7 October 2014.
- ^ a b Corbet, Jonathan (17 December 2014). "3.19 Merge window part 2". LWN.net. Retrieved 9 February 2015.
- ^ "drm: imx: Move imx-drm driver out of staging". Kernel.org. Retrieved 9 February 2015.
- ^ "drm: rockchip: Add basic drm driver". Kernel.org. Retrieved 3 March 2016.
- ^ Larabel, Michael (25 June 2015). "Linux 4.2 DRM Updates: Lots Of AMD Attention, No Nouveau Driver Changes". Phoronix. Retrieved 31 August 2015.
- ^ Corbet, Jonathan (1 July 2015). "4.2 Merge window part 2". LWN.net. Retrieved 31 August 2015.
- ^ Deucher, Alex (3 August 2015). "[PATCH 00/11] Add Fiji Support". dri-devel (Mailing list).
- ^ "Add virtio gpu driver". Kernel.org. Retrieved 3 March 2016.
- ^ Corbet, Jonathan (11 November 2015). "4.4 Merge window, part 1". LWN.net. Retrieved 11 January 2016.
- ^ Larabel, Michael (15 November 2015). "A Look At The New Features Of The Linux 4.4 Kernel". Phoronix. Retrieved 11 January 2016.
- ^ "drm/vc4: Add KMS support for Raspberry Pi". Kernel.org.
- ^ Larabel, Michael (24 January 2016). "The Many New Features & Improvements Of The Linux 4.5 Kernel". Phoronix. Retrieved 14 March 2016.
- ^ Corbet, Jonathan (20 January 2016). "4.5 merge window part 2". LWN.net. Retrieved 14 March 2016.
- ^ "Merge tag 'sun4i-drm-for-4.7'". Kernel.org.
- ^ a b c Airlie, Dave (23 May 2016). "[git pull] drm for v4.7". dri-devel (Mailing list).
- ^ "Merge tag 'drm-hisilicon-next-2016-04-29'". Kernel.org.
- ^ "Merge tag 'mediatek-drm-2016-05-09'". Kernel.org.
- ^ Larabel, Michael (22 November 2016). "Hisilicon Hibmc DRM Driver Being Added For Linux 4.10". Phoronix. Retrieved 24 January 2018.
- ^ "Huawei FusionServer RH5885 V3 Technical White Paper". 18 November 2016. Archived from the original on 2018-01-25.
uses an onboard display chip that is integrated into the management chip Hi1710 and uses the IP core of the SM750
- ^ "drm/vkms: Introduce basic VKMS driver". git.kernel.org. Retrieved 2022-07-20.
- ^ Larabel, Michael (15 August 2018). "Virtual Kernel Mode-Setting Driver Being Added To Linux 4.19". Phoronix. Retrieved 20 July 2022.
- ^ "drm/lima: driver for ARM Mali4xx GPUs". git.kernel.org. Retrieved 2019-11-28.
- ^ a b c Larabel, Michael (9 May 2019). "Linux 5.2 DRM Makes Icelake Production-Ready, Adds Lima & Panfrost Drivers". Phoronix. Retrieved 20 July 2022.
- ^ "drm/panfrost: Add initial panfrost driver". git.kernel.org. Retrieved 2019-11-28.
- ^ "drm/vboxvideo: Move the vboxvideo driver out of staging". git.kernel.org. Retrieved 2022-07-20.
- ^ "drm/hyperv: Add DRM driver for hyperv synthetic video device". git.kernel.org. Retrieved 2021-08-30.
- ^ Larabel, Michael (9 June 2021). "Microsoft's Hyper-V DRM Display Driver Will Land For Linux 5.14". Phoronix. Retrieved 30 August 2021.
- ^ "drm: Add simpledrm driver". git.kernel.org. Retrieved 2021-08-30.
- ^ Larabel, Michael (13 May 2021). "Linux 5.14 To Bring SimpleDRM Driver, VC4 HDR, Marks More AGP Code As Legacy". Phoronix. Retrieved 30 August 2021.
- ^ "drm/ofdrm: Add ofdrm for Open Firmware framebuffers". git.kernel.org. Retrieved 21 February 2023.
- ^ Larabel, Michael (20 October 2022). "Open Firmware DRM Driver "OFDRM" Queuing For Linux 6.2". Phoronix. Retrieved 21 February 2023.
- ^ "drm: Add kms driver for loongson display controller". git.kernel.org. Retrieved 23 February 2024.
- ^ Larabel, Michael (13 July 2023). "Open-Source Graphics Driver Updates Begin Queuing For Linux 6.6". Phoronix. Retrieved 23 February 2024.
- ^ "drm/imagination: Add skeleton PowerVR driver". git.kernel.org. Retrieved 27 May 2024.
- ^ Larabel, Michael (23 November 2023). "Imagination PowerVR Open-Source GPU Driver To Be Introduced In Linux 6.8". Phoronix. Retrieved 27 May 2024.
- ^ "drm/xe: Introduce a new DRM driver for Intel GPUs". git.kernel.org. Retrieved 27 May 2024.
- ^ Larabel, Michael (15 December 2023). "Intel's New "Xe" Kernel Graphics Driver Submitted Ahead Of Linux 6.8". Phoronix. Retrieved 27 May 2024.
- ^ "Merge tag 'drm-misc-next-2024-03-28' into drm-next". git.kernel.org. Retrieved 3 August 2025.
Add drm/panthor driver and assorted fixes.
- ^ Larabel, Michael (26 March 2024). "Panthor DRM Driver Queued For Linux 6.10 To Support Newer Arm Mali GPUs". Phoronix. Retrieved 3 August 2025.
- ^ "drm/sysfb: Add efidrm for EFI displays". git.kernel.org. Retrieved 3 August 2025.
- ^ a b Larabel, Michael (10 April 2025). "Graphics/Display Driver Changes Begin Queuing For Linux 6.16 This Summer". Phoronix. Retrieved 3 August 2025.
- ^ "drm/sysfb: Add vesadrm for VESA displays". git.kernel.org. Retrieved 3 August 2025.
- ^ "drm: remove the gamma driver". Kernel.org. Retrieved 27 January 2015.
- ^ "[DRM]: Delete sparc64 FFB driver code that never gets built". Kernel.org. Retrieved 27 January 2015.
- ^ "drm: Remove the obsolete driver-tdfx". Kernel.org. Retrieved 23 February 2024.
- ^ "drm: Remove the obsolete driver-mga". Kernel.org. Retrieved 23 February 2024.
- ^ "drm: Remove the obsolete driver-r128". Kernel.org. Retrieved 23 February 2024.
- ^ "drm: Remove the obsolete driver-i810". Kernel.org. Retrieved 23 February 2024.
- ^ "drm: Remove the obsolete driver-sis". Kernel.org. Retrieved 23 February 2024.
- ^ "drm: remove i830 driver". Kernel.org. Retrieved 27 January 2015.
- ^ "drm: Add via unichrome support". Kernel.org. Retrieved 27 January 2015.
- ^ "drm: Remove the obsolete driver-via". Kernel.org. Retrieved 23 February 2024.
- ^ "drm: add savage driver". Kernel.org. Retrieved 27 January 2015.
- ^ "drm: Remove the obsolete driver-savage". Kernel.org. Retrieved 23 February 2024.
- ^ "List of maintainers of the linux kernel". Kernel.org. Retrieved 14 July 2014.
- ^ "libdrm git repository". Retrieved 23 July 2014.
- ^ "First DRI release of 3dfx driver". Mesa 3D. Retrieved 15 July 2014.
- ^ "Import 2.3.18pre1". The History of Linux in GIT Repository Format 1992-2010 (2010). Retrieved 15 July 2014.
- ^ Torvalds, Linus. "Linux 2.4.0 source code". Kernel.org. Retrieved 29 July 2014.
- ^ Airlie, Dave (30 December 2004). "[bk pull] drm core/personality split". linux-kernel (Mailing list).
- ^ Torvalds, Linus (11 January 2005). "Linux 2.6.11-rc1". linux-kernel (Mailing list).
- ^ Gettys, James; Packard, Keith (15 June 2004). "The (Re)Architecture of the X Window System". Retrieved 30 April 2016.
- ^ Smirl, Jon (30 August 2005). "The State of Linux Graphics". Retrieved 30 April 2016.
I believe the best solution to this problem is for the kernel to provide a single, comprehensive device driver for each piece of video hardware. This means that conflicting drivers like fbdev and DRM must be merged into a cooperating system. It also means that poking hardware from user space while a kernel based device driver is loaded should be prevented.
- ^ Verhaegen, Luc (2 March 2006). "X and Modesetting: Atrophy illustrated" (PDF). Retrieved 30 April 2016.
- ^ Glisse, Jerome (4 December 2007). "Radeon kernel modesetting". Retrieved 30 April 2016.
- ^ Larabel, Michael (1 October 2008). "The State of Kernel Mode-Setting". Phoronix. Retrieved 30 April 2016.
- ^ Packard, Keith (21 July 2008). "X output status july 2008". Retrieved 1 May 2016.
- ^ "drm: reorganise drm tree to be more future proof". Kernel.org.
- ^ "Linux 2.6.28 - The GEM Memory Manager for GPU memory". Linux Kernel Newbies. Retrieved 23 July 2014.
- ^ "drm: Add the TTM GPU memory manager subsystem". Kernel.org.
- ^ "DRM: add mode setting support". Kernel.org.
- ^ "DRM: i915: add mode setting support". Kernel.org.
- ^ Anholt, Eric (22 December 2008). "[ANNOUNCE] libdrm-2.4.3". dri-devel (Mailing list).
- ^ Barnes, Jesse (20 October 2008). "[ANNOUNCE] xf86-video-intel 2.5.0". xorg-announce (Mailing list).
- ^ "Linux 2.6.31 - ATI Radeon Kernel Mode Setting support". Linux Kernel Newbies. Archived from the original on 5 November 2015. Retrieved 28 April 2016.
- ^ Torvalds, Linus (9 September 2009). "Linux 2.6.31". linux-kernel (Mailing list).
- ^ "drm/radeon: introduce kernel modesetting for radeon hardware". Kernel.org.
- ^ "The irregular Nouveau Development Companion #40". Nouveau project. Retrieved 3 May 2016.
- ^ "Linux 2.6.33 - Graphic improvements". Linux Kernel Newbies. Retrieved 28 April 2016.
- ^ "drm/kms: add page flipping ioctl". Kernel.org.
- ^ "Linux 2.6.38 - Graphics". Linux Kernel Newbies. Retrieved 28 April 2016.
- ^ Airlie, Dave (21 December 2009). "[ANNOUNCE] libdrm 2.4.17". dri-devel (Mailing list).
- ^ "drm: dumb scanout create/mmap for intel/radeon (v3)". Kernel.org.
- ^ "Linux 2 6 39-DriversArch". Linux Kernel Newbies. Retrieved 19 April 2016.
- ^ Barnes, Jesse; Pinchart, Laurent; Vetter, Daniel; Wunner, Lukas. "Linux GPU Driver Developer's Guide - Dumb Buffer Objects". Retrieved 31 August 2016.
- ^ Wilson, Chris (11 April 2011). "[ANNOUNCE] libdrm 2.4.25". dri-devel (Mailing list).
- ^ Barnes, Jesse (25 April 2011). "[RFC] drm: add overlays as first class KMS objects". dri-devel (Mailing list).
- ^ Barnes, Jesse (13 May 2011). "[RFC] drm: add overlays as first class KMS objects". dri-devel (Mailing list).
- ^ "drm: add plane support v3". Kernel.org.
- ^ "drm: add generic ioctls to get/set properties on any object". Kernel.org.
- ^ Widawsky, Ben (27 June 2012). "[ANNOUNCE] libdrm 2.4.36". xorg-announce (Mailing list).
- ^ Larabel, Michael. "Proof Of Concept: Open-Source Multi-GPU Rendering!". Phoronix. Retrieved 14 April 2016.
- ^ Larabel, Michael (23 February 2012). "DRM Base PRIME Support Part Of VGEM Work". Phoronix. Retrieved 14 April 2016.
- ^ Airlie, Dave (27 March 2012). "[PATCH] drm: base prime/dma-buf support (v5)". dri-devel (Mailing list).
- ^ Larabel, Michael (30 March 2012). "Last Minute For Linux 3.4: DMA-BUF PRIME Support". Phoronix. Retrieved 15 April 2016.
- ^ "drm: base prime/dma-buf support (v5)". Kernel.org.
- ^ "Linux 3.4 DriverArch". Linux Kernel Newbies. Retrieved 15 April 2016.
- ^ Anholt, Eric (10 May 2012). "[ANNOUNCE] libdrm 2.4.34". dri-devel (Mailing list).
- ^ Larabel, Michael (12 May 2012). "DMA-BUF PRIME Coming Together For Linux 3.5". Phoronix. Retrieved 15 April 2016.
- ^ "Linux 3.5 DriverArch". Linux Kernel Newbies. Retrieved 15 April 2016.
- ^ Corbet, Jonathan (11 September 2013). "3.12 merge window, part 2". LWN.net. Retrieved 21 July 2014.
- ^ "drm: implement experimental render nodes". Kernel.org.
- ^ "drm/i915: Support render nodes". Kernel.org.
- ^ "drm/radeon: Support render nodes". Kernel.org.
- ^ "drm/nouveau: Support render nodes". Kernel.org.
- ^ Roper, Matt (7 March 2014). "[RFCv2 00/10] Universal plane support". dri-devel (Mailing list).
- ^ Larabel, Michael (2 April 2014). "Universal Plane Support Set For Linux 3.15". Phoronix. Retrieved 14 April 2016.
- ^ Lankhorst, Maarten (25 July 2014). "[ANNOUNCE] libdrm 2.4.55". dri-devel (Mailing list).
- ^ a b Vetter, Daniel (7 August 2014). "Neat stuff for 3.17". Retrieved 14 April 2016.
- ^ Barnes, Jesse (15 February 2012). "[RFC] drm: atomic mode set API". dri-devel (Mailing list).
- ^ Syrjälä, Ville (24 May 2012). "[RFC][PATCH 0/6] WIP: drm: Atomic mode setting idea". dri-devel (Mailing list).
- ^ Clark, Rob (9 September 2012). "[RFC 0/9] nuclear pageflip". dri-devel (Mailing list).
- ^ Clark, Rob (6 October 2013). "[RFCv1 00/12] Atomic/nuclear modeset/pageflip". dri-devel (Mailing list).
- ^ Vetter, Daniel (3 February 2016). "Embrace the Atomic Display Age" (PDF). Retrieved 4 May 2016.
- ^ Vetter, Daniel (2 November 2014). "Atomic Modeset Support for KMS Drivers". Retrieved 4 May 2016.
- ^ Airlie, Dave (14 December 2014). "[git pull] drm for 3.19-rc1". dri-devel (Mailing list).
- ^ Vetter, Daniel (28 January 2015). "Update for Atomic Display Updates". Retrieved 4 May 2016.
- ^ Airlie, Dave (15 February 2015). "[git pull] drm pull for 3.20-rc1". dri-devel (Mailing list).
- ^ "Linux 4.0 - DriverArch - Graphics". Linux Kernel Newbies. Retrieved 3 May 2016.
- ^ "Linux 4.2 - Atomic modesetting API enabled by default". Linux Kernel Newbies. Retrieved 3 May 2016.
- ^ Velikov, Emil (29 June 2015). "[ANNOUNCE] libdrm 2.4.62". dri-devel (Mailing list).
- ^ Vetter, Daniel (6 June 2016). "Awesome Atomic Advances". Retrieved 7 June 2016.
right now there's 17 drivers supporting atomic modesetting merged into the DRM subsystem
- ^ Stone, Daniel (20 March 2018). "A new era for Linux's low-level graphics - Part 1". Retrieved 5 May 2018.
- ^ Herrmann, David (10 December 2012). "KMSCON Introduction". Retrieved 22 November 2015.
- ^ "Linux, Solaris, and FreeBSD driver 358.09 (beta)". 2015-12-10.
External links
- DRM home page
- Linux GPU Driver Developer's Guide (formerly Linux DRM Developer's Guide)
- Embedded Linux Conference 2013 - Anatomy of an Embedded KMS driver on YouTube
Direct Rendering Manager
Each DRM device is exposed through a device file (such as /dev/dri/card0), where hardware-specific drivers load upon GPU detection to authenticate clients and manage state changes under a single DRM master process.[1]
Over time, DRM has expanded to support a wide range of GPUs from vendors including AMD, Intel, and NVIDIA, integrating with libraries such as libdrm for application interfacing and facilitating advances in open-source graphics stacks such as Mesa.[2] Linux kernel 6.18 (December 2025) incorporates new drivers, such as the "Rocket" accelerator driver for neural processing units (NPUs) on Rockchip SoCs, underscoring DRM's role in evolving Linux graphics and compute capabilities.[5]
Introduction
Overview
The Direct Rendering Manager (DRM) is a subsystem within the Linux kernel designed to manage access to graphics processing units (GPUs) and enable direct rendering without CPU-mediated data transfers.[2] It serves as the foundational kernel component for graphics hardware, providing a standardized framework for device initialization, resource allocation, and secure user-space access to GPU functionality.[4] Originally developed to support accelerated 3D graphics, DRM has evolved into a comprehensive interface for 2D and 3D rendering, video decoding, and display management across a wide range of hardware.[2]

At its core, DRM facilitates direct rendering by allowing user-space applications to submit rendering commands and data directly to the GPU through kernel drivers, minimizing overhead and improving performance compared to traditional indirect rendering paths that involve server-side copying.[4] Key functionalities include GPU memory allocation via managers such as the Translation Table Maps (TTM) or Graphics Execution Manager (GEM), command submission through DMA engines, modesetting for display configuration, and synchronization mechanisms such as hardware locks and vblank event handling to coordinate access among multiple processes and prevent resource conflicts.[2] These features support efficient handling of 2D/3D graphics workloads and video processing while ensuring secure, concurrent hardware utilization.[4]

In the broader Linux graphics ecosystem, DRM interacts with user-space libraries such as Mesa, which implements APIs like OpenGL and Vulkan, via the libdrm wrapper to expose kernel interfaces through device files like /dev/dri/card0.[2] A high-level view of this interaction:

User-space Applications (e.g., Games, Browsers)
        |
        v
Graphics Libraries (Mesa for OpenGL/Vulkan)
        |
        v
libdrm (User-space API)
        |
        v
DRM Kernel Subsystem (Drivers for GPU)
        |
        v
GPU Hardware
Role in Linux Graphics Stack
The Direct Rendering Manager (DRM) serves as the primary kernel-level interface in the Linux graphics stack, bridging user-space components such as Mesa and Vulkan drivers with the underlying graphics hardware to provide direct access and control. It offers a unified framework for managing graphics resources, including memory allocation and hardware synchronization, while enforcing security through mechanisms like device-file permissions under /dev/dri. This architecture allows user-space applications to perform accelerated rendering without excessive kernel mediation, supporting modern graphics APIs and compositors like Wayland or X11.[2][8]
DRM relies on several key dependencies within the Linux ecosystem to fulfill its role. It integrates with the input/output subsystem to handle events such as hotplug detection for displays and input devices, ensuring seamless coordination between graphics rendering and user interactions. For legacy fallbacks, DRM can leverage the framebuffer console interface when advanced mode setting is unavailable, providing a basic display pathway. Additionally, DRM works in tandem with the Direct Rendering Infrastructure (DRI), which extends DRM's capabilities to user-space by enabling unprivileged programs to issue rendering commands directly to the GPU, thus supporting hardware-accelerated 3D graphics without root privileges. This integration is essential for the overall graphics pipeline, where DRM manages the kernel-user space boundary to prevent unauthorized hardware access.[2][4]
In terms of operational flow, user-space applications interact with DRM primarily through ioctl calls on device files, initiating tasks like buffer allocation via GEM objects for efficient memory handling, command queuing to submit GPU workloads, and page flip events to update display contents without tearing. Synchronization is achieved using fences, which signal completion of rendering operations and coordinate multi-process access to shared resources, enabling GPU virtualization concepts such as concurrent execution across multiple applications or virtual machines. This setup supports zero-copy rendering by allowing buffers to be mapped directly between user-space and hardware, minimizing data transfers and optimizing performance for compute-intensive tasks.[8][9]
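As a concrete illustration of this flow, the following minimal sketch, which assumes libdrm is installed (link with -ldrm) and that a /dev/dri/card0 device exists, opens a DRM device file and issues two of the simplest ioctls through their libdrm wrappers:

```c
/* Minimal sketch: open a DRM device and query its driver and
 * capabilities through libdrm. The device path is an assumption;
 * the first card may differ per system. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* DRM_IOCTL_VERSION, wrapped by libdrm. */
    drmVersionPtr ver = drmGetVersion(fd);
    if (ver) {
        printf("driver: %s (%d.%d.%d)\n", ver->name,
               ver->version_major, ver->version_minor,
               ver->version_patchlevel);
        drmFreeVersion(ver);
    }

    /* Query a capability: does the driver support dumb buffers? */
    uint64_t has_dumb = 0;
    if (drmGetCap(fd, DRM_CAP_DUMB_BUFFER, &has_dumb) == 0)
        printf("dumb buffers: %s\n", has_dumb ? "yes" : "no");

    close(fd);
    return 0;
}
```

Real clients follow the same pattern for all DRM operations: open a node, issue ioctls (usually via libdrm wrappers), and close the descriptor, with per-descriptor state isolating them from one another.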
A distinctive aspect of DRM is its unification of render and display paths under a single framework, handling off-screen computations (rendering) through buffer objects and scanout operations (display) via mode setting, which eliminates the need for disparate legacy systems and streamlines resource sharing across the graphics stack.[2][9]
History and Development
Origins and Evolution
The Direct Rendering Manager (DRM) originated in 1999 as a kernel subsystem developed under the XFree86 project to enable direct hardware-accelerated 3D rendering on Linux, bypassing the performance limitations of the existing framebuffer device (fbdev) interface, which relied on CPU-intensive software emulation. Led by Precision Insight, Inc., with primary contributions from developer Rickard E. Faith, the initial DRM design provided secure, shared access to graphics hardware through a modular kernel framework, first implemented as patches for 3dfx video cards.[10][11]

The first mainline integration of DRM came with Linux kernel 2.4.0, released in January 2001, introducing support for Accelerated Graphics Port (AGP) memory bridging and basic command submission for rendering tasks. This addressed key bottlenecks of software rendering by allowing user-space applications to issue GPU commands directly via version 1 of the Direct Rendering Infrastructure (DRI), which was fully integrated that year. Early drivers targeted hardware such as the 3dfx Voodoo series for texture-mapping acceleration and Matrox G200/G400 chips for vertex processing, marking the shift from monolithic fbdev handling to vendor-specific kernel modules.[10]

During the Linux 2.6 kernel series, starting with its release in December 2003, DRM gained more advanced memory management for buffer allocation and sharing, such as the basic memory allocator added in version 2.6.19. Power management was improved through integration with ACPI suspend/resume cycles, enabling GPU state preservation during low-power states, and the framework moved toward fully modular drivers that could be loaded dynamically without recompiling the kernel. In April 2008, Linux 2.6.25 gave the DRM core a unified API for consistent device interaction across drivers. Throughout this pre-Kernel Mode Setting (KMS) era, DRM remained focused on secure, non-privileged GPU access for rendering acceleration rather than display configuration.[12][13]

Key Milestones and Recent Advances
The Graphics Execution Manager (GEM) was introduced in 2007-2008 as a kernel-level solution for managing graphics buffers, enabling efficient allocation of and access to GPU memory without relying on user-space mechanisms.[14] It was merged into Linux kernel 2.6.28, released in December 2008, marking a pivotal shift toward unified memory management across diverse GPU architectures.[15]

Kernel Mode Setting (KMS) began its rollout in late 2008, allowing the kernel to handle display configuration independently of firmware or user-space tools, which improved boot-time graphics initialization and reduced reliance on proprietary blobs.[16] Initial support landed in kernel 2.6.29 in March 2009, with broader adoption and stabilization through 2010 across major drivers, enabling seamless mode switches and multi-monitor setups without X server intervention.[17]

Atomic modesetting development began with RFC work in 2012, and the API was merged in kernel 3.19 in early 2015, introducing a transaction-based approach to display updates that ensures page-flip atomicity for tear-free rendering by coordinating changes to CRTCs, planes, and connectors in a single commit.[18] Building on legacy modesetting, it allows applications to prepare complex state changes, such as overlay adjustments and gamma corrections, atomically, minimizing visual artifacts in dynamic environments such as compositors.[19]

Render nodes were added in kernel 3.12 in late 2013, decoupling render-only access from display control to enhance security by isolating unprivileged rendering tasks and supporting multi-GPU scenarios without exposing master device privileges.[20] This separation prevented potential exploits in rendering paths from affecting display hardware and facilitated resource sharing in virtualized or containerized setups.[21]

In recent years, DRM has incorporated Rust-based drivers, starting with Linux kernel 6.15 in May 2025 and exemplified by the NOVA core for NVIDIA GPUs, which leverages Rust's memory safety to mitigate common kernel bugs like use-after-free in graphics handling.[22] The fair DRM scheduler, merged in 2025, addresses equitable GPU time-sharing in multi-tenant environments by adopting a CFS-inspired algorithm that prevents priority inversion and ensures low-latency clients receive fair cycles, improving throughput in shared cloud workloads.[23] Additionally, dma-fence enhancements in kernel 6.17, released in October 2025, introduced safe access rules and new APIs for synchronization, reducing race conditions in buffer sharing across drivers like Intel Xe.[24] A notable security milestone came in May 2025 with the patching of CVE-2025-37762 in the Virtio DRM driver, which fixed a dmabuf unpinning error in the framebuffer preparation path, bolstering isolation between virtualized guests and host resources to prevent memory leaks and potential escapes.[25]

DRM development is coordinated through the drm-next integration tree hosted on freedesktop.org, where features undergo review before upstreaming to the mainline kernel, with major contributions from Intel (e.g., i915 driver maintenance), AMD (e.g., amdgpu enhancements), and partial support from NVIDIA via open-source components like Nouveau.[26] This collaborative process, managed by the DRI project, ensures compatibility and stability across hardware vendors.[27]

Software Architecture
Core API and Access Control
The Direct Rendering Manager (DRM) provides a foundational user-space API that enables applications to interact with graphics hardware through the Linux kernel. User-space programs access this API via ioctl() system calls on device files such as /dev/dri/card0, which serve as the primary entry points for resource management, buffer allocation, and command submission to the GPU. This interface abstracts hardware-specific details, allowing drivers to expose consistent functionality while supporting extensions for vendor-specific needs. The API's design emphasizes security and isolation, ensuring that graphics operations are mediated by the kernel to prevent direct hardware access.[8] As of 2025, the DRM subsystem has begun incorporating Rust for driver development, enabling safer kernel modules with memory-safety guarantees, as demonstrated in ongoing contributions to the graphics stack.[28]

Central to the DRM API are key ioctls that handle authentication, buffer management, and basic device operations. For instance, DRM_IOCTL_GET_MAGIC authenticates clients by returning a unique magic token, which is essential for subsequent permission grants. Legacy buffer-handling ioctls like DRM_IOCTL_ADD_BUFS and DRM_IOCTL_MARK_BUFS allow user space to allocate and mark DMA buffers for rendering, though modern implementations often integrate with higher-level managers for these tasks. Vendor-specific ioctls, defined in driver headers (e.g., include/uapi/drm/i915_drm.h for Intel), extend the core API without breaking compatibility. These ioctls are dispatched through a structured table in the drm_driver structure, ensuring orderly processing.[8]

Access control in DRM revolves around the DRM-Master concept, where a primary client, typically a display server such as Xorg or a Wayland compositor, obtains master status and holds exclusive rights for modesetting and display configuration. Secondary clients, such as render applications, must authenticate to the master using the magic token via DRM_IOCTL_AUTH_MAGIC to gain render access, preventing unauthorized GPU usage and enabling secure multi-client scenarios. This model enforces per-client isolation through file descriptors, where each open /dev/dri/card* instance maintains independent state, supporting multi-process environments without interference. The master can revoke permissions dynamically, ensuring robust control over shared resources.[8]

The API has maintained stability since its introduction in kernel 2.6, with error handling standardized through negative return values (e.g., -ENODEV for device unavailability) and errno codes for specific failures like permission denials. Versioning is managed via DRM_IOCTL_VERSION, which reports the core API level to user space, allowing graceful handling of incompatibilities. Render nodes (/dev/dri/renderD*) further enhance isolation by providing non-master access solely for compute and rendering, bypassing modeset privileges and relying on file-system permissions for security. Buffer-sharing extensions allow authenticated clients to map resources across processes, while memory operations often leverage GEM for efficient handling.[8]
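The master/auth handshake described above can be exercised from user space through libdrm's wrappers for DRM_IOCTL_GET_MAGIC and DRM_IOCTL_AUTH_MAGIC. The following is a minimal sketch, assuming libdrm and a /dev/dri/card0 device; both descriptors are opened in one process purely for illustration, whereas in practice the token crosses an IPC boundary between a client and the compositor:

```c
/* Hedged sketch of the legacy DRM-Master authentication handshake.
 * The first open of the device typically becomes DRM-Master. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    int master_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC); /* e.g. a compositor */
    int client_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC); /* e.g. a renderer */
    if (master_fd < 0 || client_fd < 0) {
        perror("open");
        return 1;
    }

    /* The client asks the kernel for its per-fd magic token ... */
    drm_magic_t magic;
    if (drmGetMagic(client_fd, &magic) != 0) {
        perror("drmGetMagic");
        return 1;
    }

    /* ... and the DRM-Master validates it, granting render access. */
    if (drmAuthMagic(master_fd, magic) != 0)
        perror("drmAuthMagic");
    else
        printf("client authenticated with magic %u\n", magic);

    close(client_fd);
    close(master_fd);
    return 0;
}
```

On render nodes this ceremony is unnecessary, since file-system permissions alone gate access.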
Graphics Execution Manager
The Graphics Execution Manager (GEM) serves as the primary memory management layer within the Direct Rendering Manager (DRM) subsystem, providing an object-based model for handling GPU buffers in Linux graphics drivers. Introduced as an Intel-sponsored initiative, GEM replaced the fragmented scatter-gather direct memory access (DMA) approaches used in legacy DRM drivers, which often required frequent GPU reinitializations and led to inefficiencies in buffer handling. By abstracting buffers as kernel-managed objects, GEM enables more efficient allocation, sharing, and execution of graphics workloads, particularly on unified memory architecture (UMA) devices where system RAM is shared between CPU and GPU. This model supports variable-size allocations, allowing drivers to request buffers of arbitrary page-aligned sizes without fixed granularity constraints.[14][9]

At its core, GEM represents GPU buffers as instances of the struct drm_gem_object, which drivers extend with private data for hardware-specific needs. These objects act as kernel-allocated handles referencing memory in video RAM (VRAM) or CPU-accessible system memory, depending on the driver implementation. Key operations such as creation, mapping, and eviction are exposed through ioctls: creation is driver-specific (for example, DRM_IOCTL_I915_GEM_CREATE on Intel hardware) or goes through the generic dumb-buffer interface, while mapping is typically done by mmap()ing a driver-provided offset. On the kernel side, creation involves initializing the object via drm_gem_object_init() (typically backed by shmem for pageable CPU access) or drm_gem_private_object_init() for driver-managed storage, with reference counting via drm_gem_object_get() and drm_gem_object_put() ensuring proper lifetime management. Under memory pressure, GEM employs a least-recently-used (LRU) eviction strategy through struct drm_gem_lru and shrinker mechanisms, using functions like drm_gem_evict_locked() to unpin and swap out non-resident objects, thereby freeing GPU aperture space.[9][14]
In the execution flow, user-space applications submit batches of commands to the GPU's ring buffers via driver ioctls (e.g., DRM_IOCTL_I915_GEM_EXECBUFFER in Intel drivers), referencing GEM objects as inputs or outputs. GEM ensures object residency by binding them to the graphics translation table (GTT) or equivalent aperture, handling migrations between CPU and GPU domains if needed, and enforcing synchronization through reservation objects (dma_resv) to prevent concurrent access. This process resolves relocations in command buffers and transitions memory domains (e.g., from CPU-writable to GPU-render), guaranteeing coherent execution without explicit user-space intervention. For drivers requiring advanced migration, Translation Table Maps (TTM) can serve as an optional backend, providing generalized support for page table management, caching, and swapping between domains—capabilities beyond GEM's native UMA-focused design.[9][14]
Compared to TTM, GEM adopts a simpler, more driver-centric approach tailored to rendering tasks, eschewing TTM's comprehensive VRAM management and multi-domain complexity in favor of streamlined UMA operations and minimal core overhead. While TTM excels in heterogeneous memory environments with features like automatic eviction and placement policies, GEM's lightweight framework has made it the default for many drivers, such as Intel's i915, where TTM integration has been added as a backend for enhanced migration without altering the GEM API surface. This balance allows GEM to prioritize performance in common rendering scenarios, such as improved frame rates in applications like OpenArena (from 15.4 fps to 23.6 fps on Intel 915 hardware) by reducing overhead in buffer setup and execution.[9][14]
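For a driver-agnostic taste of GEM object lifetime, the sketch below uses the generic dumb-buffer ioctls, which return ordinary GEM handles. It assumes libdrm headers, a KMS driver that advertises DRM_CAP_DUMB_BUFFER, and an arbitrary 640x480 geometry chosen only for illustration:

```c
/* Sketch: allocate a kernel-managed buffer through the generic
 * "dumb buffer" path, map it into user space, then release it. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xf86drm.h>   /* pulls in drm.h / drm_mode.h */

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open"); return 1; }

    /* Create a 640x480, 32 bpp buffer; the kernel picks pitch/size. */
    struct drm_mode_create_dumb create = {
        .width = 640, .height = 480, .bpp = 32,
    };
    if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) != 0) {
        perror("DRM_IOCTL_MODE_CREATE_DUMB");
        return 1;
    }
    printf("GEM handle %u, pitch %u, size %llu\n",
           create.handle, create.pitch,
           (unsigned long long)create.size);

    /* Ask for an mmap offset for the handle, then map and clear it. */
    struct drm_mode_map_dumb map = { .handle = create.handle };
    if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) == 0) {
        void *mem = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, map.offset);
        if (mem != MAP_FAILED) {
            memset(mem, 0, create.size);  /* paint it black */
            munmap(mem, create.size);
        }
    }

    /* Drop the GEM handle. */
    struct drm_mode_destroy_dumb destroy = { .handle = create.handle };
    drmIoctl(fd, DRM_IOCTL_MODE_DESTROY_DUMB, &destroy);
    close(fd);
    return 0;
}
```

Accelerated drivers replace this generic path with their own creation and execution ioctls, but the handle-based object model is the same.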
Kernel Mode Setting
Kernel Mode Setting (KMS) is a core component of the Direct Rendering Manager (DRM) subsystem in the Linux kernel, responsible for kernel-driven control of display hardware to configure screen resolutions, refresh rates, and output ports. By handling modesetting directly in kernel space, KMS eliminates the need for user-space applications to load proprietary firmware for basic display initialization, enabling faster boot times, seamless handoff to user-space compositors, and improved reliability across diverse hardware. Drivers initialize the KMS core by calling drmm_mode_config_init() on the DRM device, which sets up the foundational mode-configuration state in struct drm_device.[29]
The KMS device model abstracts display hardware into interconnected entities: CRTCs (controllers that manage the timing and scanning of frames in the display pipeline), encoders (which convert digital signals to the format required by specific outputs), connectors (physical interfaces like HDMI or DisplayPort linking to monitors), and planes (independent layers for sourcing and blending pixel data, including primary framebuffers and overlays). These entities expose properties—such as modes, status, and capabilities—that userspace queries and modifies via ioctls; for example, DRM_IOCTL_MODE_GETRESOURCES retrieves the list of available CRTCs, encoders, and connectors to build a topology map. This model allows precise control over display pipelines while abstracting vendor-specific details.[30][31]
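The topology query described above can be reproduced with libdrm's modesetting wrappers. A minimal sketch follows, assuming a /dev/dri/card0 device and linking against -ldrm:

```c
/* Sketch: walk the KMS topology exposed by
 * DRM_IOCTL_MODE_GETRESOURCES and print each connector's status. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open"); return 1; }

    drmModeResPtr res = drmModeGetResources(fd);
    if (!res) { perror("drmModeGetResources"); return 1; }

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnectorPtr conn =
            drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        printf("connector %u: %s, %d modes\n", conn->connector_id,
               conn->connection == DRM_MODE_CONNECTED
                   ? "connected" : "disconnected",
               conn->count_modes);
        if (conn->count_modes > 0)  /* the first mode is usually preferred */
            printf("  e.g. %s @ %u Hz\n", conn->modes[0].name,
                   conn->modes[0].vrefresh);
        drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
    close(fd);
    return 0;
}
```

The same pattern extends to CRTCs, encoders, and planes, each of which has its own getter in libdrm.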
KMS provides two modesetting paradigms: the legacy approach, which applies changes piecemeal through individual ioctls such as DRM_IOCTL_MODE_SETCRTC, and the atomic API, which offers a transactional interface for coordinated updates. The atomic interface was merged in Linux kernel 3.19 and enabled by default for userspace in kernel 4.2, letting userspace propose a complete state change (via drm_atomic_state) that the kernel validates through an atomic check before applying it atomically to avoid partial failures. This facilitates advanced features such as shadow planes, which render updates off-screen before display to reduce tearing, and gamma lookup tables (LUTs) for per-CRTC color and brightness adjustments, improving efficiency in modern compositors.[32][33]
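A hedged sketch of the atomic flow follows. The object and property IDs are placeholders, since real code discovers them at run time via drmModeGetResources() and drmModeObjectGetProperties():

```c
/* Sketch of an atomic test-commit: enable the client cap, build a
 * transaction, validate it with DRM_MODE_ATOMIC_TEST_ONLY, then
 * apply it for real. IDs below are illustrative placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open"); return 1; }

    /* Opt in to the atomic API. */
    if (drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1) != 0) {
        perror("DRM_CLIENT_CAP_ATOMIC");
        return 1;
    }

    uint32_t plane_id = 31, fb_prop_id = 16, fb_id = 42; /* placeholders */

    drmModeAtomicReqPtr req = drmModeAtomicAlloc();
    if (!req) return 1;
    drmModeAtomicAddProperty(req, plane_id, fb_prop_id, fb_id);

    /* Validate the whole transaction without touching the hardware. */
    int ret = drmModeAtomicCommit(fd, req,
                                  DRM_MODE_ATOMIC_TEST_ONLY, NULL);
    printf("test-only commit: %s\n",
           ret == 0 ? "would succeed" : "rejected");

    if (ret == 0)  /* apply atomically: all-or-nothing */
        drmModeAtomicCommit(fd, req, 0, NULL);

    drmModeAtomicFree(req);
    close(fd);
    return 0;
}
```

The test-only pass is what lets compositors probe complex configurations without risking a half-applied display state.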
KMS incorporates mechanisms for dynamic display handling, including hotplug detection where changes in connector status trigger uevents to notify userspace of events like monitor connections or disconnections, allowing real-time reconfiguration. Power management is supported via Display Power Management Signaling (DPMS) states—ON, STANDBY, SUSPEND, and OFF—applied to connectors to optimize energy use without full subsystem shutdown. Additionally, KMS handles multi-monitor topologies through properties like tile grouping for seamless large displays and suggested positioning (x/y coordinates) for logical arrangement across multiple CRTCs. KMS uses memory managers such as GEM or TTM to allocate and manage scanout buffers, ensuring framebuffers are kernel-accessible for direct hardware rendering.[34][35][29]
Buffer Management and Render Nodes
The Direct Rendering Manager (DRM) employs advanced buffer management techniques to facilitate efficient memory handling in graphics pipelines, building on underlying storage abstractions like GEM objects for allocation and manipulation. Central to this is the dma-buf subsystem, a generic kernel framework that enables the sharing of buffers across multiple device drivers and subsystems for direct memory access (DMA) operations. Buffers are exported and imported as dma-buf file descriptors (fds), allowing seamless transfer without unnecessary copying, which is essential for performance-critical applications such as rendering and media processing.[36][9]

The PRIME protocol extends dma-buf within DRM to support cross-device buffer sharing and render offloading, originally developed for multi-GPU platforms like NVIDIA Optimus. Introduced in Linux kernel 3.4 in 2012, PRIME allows applications to render on one GPU (e.g., a discrete NVIDIA dGPU) and display on another (e.g., an integrated Intel iGPU) in heterogeneous setups, using ioctls such as DRM_IOCTL_PRIME_HANDLE_TO_FD to convert local GEM handles to dma-buf fds and DRM_IOCTL_PRIME_FD_TO_HANDLE for the reverse. This enables zero-copy operations, including video decoding pipelines where decoded frames from a V4L2 media driver can be imported directly into DRM for scanout without intermediate copies.[37][9][38]

To enhance security and isolation, DRM introduced render nodes in Linux kernel 3.12 in 2013, providing dedicated device files like /dev/dri/renderD* for rendering and compute access without modesetting privileges. Unlike primary nodes (/dev/dri/card*), which require master authentication for display control, render nodes restrict ioctls to non-privileged rendering commands, preventing unauthorized access to kernel mode setting (KMS) functions and mitigating risks from untrusted clients in multi-user or containerized environments. This separation supports secure off-screen rendering and GPGPU workloads while allowing broader access to GPU resources.[39][40]

Buffer operations in DRM rely on robust synchronization mechanisms to coordinate asynchronous GPU tasks, primarily dma-fence objects that signal completion of hardware operations. Fences can be attached to buffers to ensure proper ordering, with dma-buf exporters managing attachments via the dma_buf_attach and dma_buf_map_attachment APIs. Enhancements in 2025, including fixes for dma-fence lifetime management in schedulers, improved support for chainable fences (via dma_fence_chain), enabling more efficient sequencing of dependent operations in complex pipelines without risking use-after-free issues.[36][41]
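The PRIME export/import round trip can be sketched with libdrm's wrappers; the device paths and the GEM handle below are illustrative assumptions rather than values any particular system will have:

```c
/* Sketch of PRIME buffer sharing: export a GEM handle from one DRM
 * fd as a dma-buf file descriptor and import it into another. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    int render_fd  = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
    int display_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (render_fd < 0 || display_fd < 0) { perror("open"); return 1; }

    uint32_t gem_handle = 1; /* placeholder: from an earlier allocation */

    /* DRM_IOCTL_PRIME_HANDLE_TO_FD: handle -> shareable dma-buf fd. */
    int dmabuf_fd;
    if (drmPrimeHandleToFD(render_fd, gem_handle, DRM_CLOEXEC,
                           &dmabuf_fd) != 0) {
        perror("drmPrimeHandleToFD");
        return 1;
    }

    /* DRM_IOCTL_PRIME_FD_TO_HANDLE: dma-buf fd -> local GEM handle,
     * giving the second device zero-copy access to the same memory. */
    uint32_t imported;
    if (drmPrimeFDToHandle(display_fd, dmabuf_fd, &imported) == 0)
        printf("imported as GEM handle %u\n", imported);

    close(dmabuf_fd);
    close(display_fd);
    close(render_fd);
    return 0;
}
```

Because the intermediate object is an ordinary file descriptor, the same mechanism works across process boundaries and into non-DRM subsystems such as V4L2.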
Hardware Support
Supported Graphics Vendors
The Direct Rendering Manager (DRM) subsystem in the Linux kernel supports a wide array of graphics hardware from major vendors through dedicated open-source drivers, enabling features like Kernel Mode Setting (KMS), Graphics Execution Manager (GEM) buffer objects, and atomic modesetting across compatible GPUs.

Intel's integrated graphics are handled by the i915 driver, which has provided DRM support since kernel version 2.6.25 in 2008, though foundational work dates back to 2007. The i915 driver offers full KMS, GEM, and atomic modeset support, covering generations from Sandy Bridge (2011) through modern integrated GPUs like those in Meteor Lake and Lunar Lake processors, as well as discrete Arc Alchemist and Battlemage cards up to 2025 releases. The Xe driver supports newer Intel architectures, including Battlemage discrete GPUs mainlined in kernel 6.12 (2024).[42]

AMD GPUs are supported via the amdgpu driver for modern hardware starting with kernel 4.2 in 2015 (with Polaris-era GCN support added around kernel 4.6 in 2016), alongside the legacy radeon driver for pre-GCN cards. The amdgpu driver enables Vulkan and OpenGL acceleration through Mesa integration, with recent additions including support for RDNA4 architectures in kernel 6.11 and later.[43][44]

NVIDIA hardware receives open-source support through the Nouveau driver, merged into kernel 2.6.33 in 2010, which offers basic 2D and 3D acceleration via DRM. Nouveau provides limited reclocking and power management for GeForce, Quadro, and Tesla GPUs up to the Turing and Ampere architectures, but full feature parity remains challenging because the driver depends on reverse engineering. NVIDIA's proprietary driver supports DRM/KMS integration via the nvidia-drm kernel module (enabled with nvidia-drm.modeset=1) for modesetting and display management.[45][46]

For ARM-based systems, the Panfrost driver has provided open-source DRM support for Mali GPUs based on the Midgard, Bifrost, and early Valhall architectures since kernel 5.2 in 2019. For newer CSF-based Mali hardware (Valhall CSF and later), the Panthor driver delivers support, merged in kernel 6.10 (2024). Vivante GPUs, common in embedded systems, are supported by the Etnaviv driver, which handles GC-series cores for 2D/3D rendering and video decode.[47][48][49]

Additional vendors include VMware via the vmwgfx driver for virtualized graphics in hosted environments and Virtio-GPU for paravirtualized acceleration in QEMU/KVM setups. Qualcomm Adreno GPUs are supported by the msm kernel driver, paired with the Freedreno userspace driver in Mesa for open-source 3D rendering on Snapdragon SoCs, though coverage lags behind the proprietary blobs. As of Linux kernel 6.17 (September 2025), the DRM subsystem includes over 20 active drivers spanning discrete, integrated, and virtualized GPUs, with ongoing additions in 6.18; however, gaps persist for proprietary implementations, particularly full NVIDIA reclocking and advanced features on non-open hardware.[50]

Driver Implementation Details
DRM drivers are structured around the core struct drm_driver, which defines a set of mandatory and optional callbacks to interface with the DRM subsystem. For Graphics Execution Manager (GEM) support, drivers supply hooks such as .gem_create_object to control how buffer objects are allocated, ensuring buffer object management is set up correctly during device probe. Similarly, for Kernel Mode Setting (KMS), the struct drm_mode_config_funcs requires implementations like .mode_valid to validate display modes against hardware constraints, preventing invalid configurations from being applied. Optional callbacks, such as power-management hooks for system suspend/resume cycles, allow drivers to preserve state across sleep transitions but are not required for basic functionality.[51]
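A condensed, kernel-side sketch of such a declaration is shown below. It is not buildable stand-alone, the exact field set varies between kernel versions, and everything prefixed mydrv_ is hypothetical:

```c
/* Condensed kernel-side sketch of how a KMS+GEM driver describes
 * itself to the DRM core; mydrv_* names are hypothetical. */
#include <drm/drm_drv.h>
#include <drm/drm_gem.h>

/* Boilerplate file_operations (open, mmap, ioctl dispatch) for a
 * GEM-based driver, generated by the DRM core's helper macro. */
DEFINE_DRM_GEM_FOPS(mydrv_fops);

static const struct drm_driver mydrv_driver = {
    /* Advertise modesetting, GEM buffers, and atomic support. */
    .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
    .fops  = &mydrv_fops,
    .name  = "mydrv",
    .desc  = "Example KMS driver",
    .major = 1,
    .minor = 0,
};
```

At probe time the driver allocates its device with this descriptor (via helpers such as devm_drm_dev_alloc()), wires up its KMS entities, and registers with drm_dev_register().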
Memory management in DRM drivers commonly relies on two backends: Translation Table Maps (TTM) and GEM. TTM serves as the primary backend for drivers handling dedicated video RAM, such as the AMDGPU driver for AMD hardware and VMware's virtual GPU drivers, providing eviction, migration, and placement policies for complex memory hierarchies. In contrast, Intel's i915 driver employs a driver-local GEM implementation, leveraging shared memory (shmem) for unified memory architecture (UMA) devices, which simplifies allocation without TTM's overhead for integrated graphics. The Xe driver follows a similar approach for newer Intel hardware.[9]
Vendor-specific extensions enhance DRM drivers with hardware-unique features. In the Intel i915 driver, GuC (Graphics Micro-Controller) and HuC (HEVC Micro-Controller) firmware loading is managed by the kernel, where the driver authenticates and initializes the HuC firmware for media acceleration while relying on GuC for workload scheduling and submission. AMD's amdgpu driver integrates Reliability, Availability, and Serviceability (RAS) features for error reporting, exposing uncorrectable (UE) and correctable (CE) error counts via sysfs interfaces and debugfs for injection and control, enabling proactive fault detection in data center environments. For Arm Mali GPUs, the Panthor driver (for CSF-based hardware) utilizes job submission queues in the Command Stream Frontend (CSF), allowing batched job dispatching to the firmware for efficient compute and graphics workloads.[52][53][54]
Recent advancements include the first Rust-based DRM driver, NOVA for NVIDIA GPUs, providing core infrastructure merged in Linux kernel 6.15. The virtio-gpu driver gained enhanced support in 6.15, including panic screen compatibility for better debugging in virtualized environments. Additionally, the Fair DRM Scheduler, integrated in kernel 6.16 (July 2025), introduces timeslicing inspired by the Completely Fair Scheduler (CFS), improving fairness and reducing latency in multi-client scenarios for drivers like amdgpu and nouveau by eliminating multiple run queues and prioritizing interactive workloads.[55][56]
Debugging DRM drivers involves kernel-exposed interfaces and userspace tools. The drm_info utility queries device properties and capabilities via ioctls, while debugfs mounts (e.g., under /sys/kernel/debug/dri/) expose driver-specific files for runtime inspection, such as queue states or firmware logs. GPU hangs, often detected via watchdog timeouts, trigger driver-initiated resets to recover the device, with the DRM core coordinating fence signaling and context cleanup to minimize impact on userspace.[9]
A key operational principle of DRM is User API (UAPI) stability, which guarantees that kernel-user interfaces remain backward-compatible across driver versions. New UAPI additions require accompanying open-source userspace implementations (e.g., in Mesa) to be reviewed and merged upstream first, ensuring no regressions; this policy, enforced through tests like IGT, allows userspace applications to interact reliably with evolving kernel drivers without breakage.[8]
