DragonFly BSD
| DragonFly BSD | |
|---|---|
| Screenshot | DragonFly BSD 6.2.1 UEFI boot loader |
| Developer | Matthew Dillon |
| OS family | Unix-like (BSD) |
| Working state | Current |
| Source model | Open source |
| Initial release | 1.0 / 12 July 2004 |
| Latest release | 6.4.2 / 9 May 2025[1] |
| Available in | English |
| Package manager | DPorts, pkg |
| Supported platforms | x86-64 |
| Kernel type | Hybrid[2] |
| Userland | BSD |
| Default user interface | Unix shell |
| License | BSD[3] |
| Official website | www.dragonflybsd.org |
DragonFly BSD is a free and open-source Unix-like operating system forked from FreeBSD 4.8. Matthew Dillon, an Amiga developer in the late 1980s and early 1990s and FreeBSD developer between 1994 and 2003, began working on DragonFly BSD in June 2003 and announced it on the FreeBSD mailing lists on 16 July 2003.[4]
Dillon started DragonFly in the belief that the techniques adopted for threading and symmetric multiprocessing in FreeBSD 5[5] would lead to poor performance and maintenance problems. He sought to correct these anticipated problems within the FreeBSD project.[6] Due to conflicts with other FreeBSD developers over the implementation of his ideas,[7] his ability to directly change the codebase was eventually revoked. Despite this, the DragonFly BSD and FreeBSD projects still work together, sharing bug fixes, driver updates, and other improvements. Dillon named the project after photographing a dragonfly in his yard, while he was still working on FreeBSD.[citation needed]
Intended as the logical continuation of the FreeBSD 4.x series, DragonFly has diverged significantly from FreeBSD, implementing lightweight kernel threads (LWKT), an in-kernel message passing system, and the HAMMER file system.[8] Many design concepts were influenced by AmigaOS.[9]
System design
Kernel
The kernel messaging subsystem being developed is similar to those found in microkernels such as Mach, though it is less complex by design. DragonFly's messaging subsystem has the ability to act in either a synchronous or asynchronous fashion, and attempts to use this capability to achieve the best performance possible in any given situation.[10]
According to developer Matthew Dillon, progress is being made to provide both device input/output (I/O) and virtual file system (VFS) messaging capabilities that will enable the remainder of the project goals to be met. The new infrastructure will allow many parts of the kernel to be migrated out into userspace; here they will be more easily debugged as they will be smaller, isolated programs, instead of being small parts entwined in a larger chunk of code. Additionally, the migration of select kernel code into userspace has the benefit of making the system more robust; if a userspace driver crashes, it will not crash the kernel.[11]
System calls are being split into userland and kernel versions and being encapsulated into messages. This will help reduce the size and complexity of the kernel by moving variants of standard system calls into a userland compatibility layer, and help maintain forwards and backwards compatibility between DragonFly versions. Linux and other Unix-like OS compatibility code is being migrated out similarly.[9]
Threading
As support for multiple instruction set architectures complicates symmetric multiprocessing (SMP) support,[7] DragonFly BSD now limits its support to the x86-64 platform.[12] DragonFly originally ran on the x86 architecture as well, but that platform is no longer supported as of version 4.0. Since version 1.10, DragonFly supports 1:1 userland threading (one kernel thread per userland thread),[13] which is regarded as a relatively simple solution that is also easy to maintain.[9] DragonFly also inherited multi-threading support from FreeBSD.[14]
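Under this 1:1 model, a thread created through the ordinary POSIX threads API is backed by its own kernel-schedulable entity, so a portable program needs nothing DragonFly-specific to take advantage of it. A minimal, generic illustration (plain pthreads, no DragonFly-only calls assumed):

```c
#include <pthread.h>
#include <stdio.h>

/* Each worker runs on its own kernel thread under DragonFly's 1:1
 * threading model, so the kernel may schedule them on different CPUs
 * independently. */
static void *worker(void *arg)
{
    long id = (long)arg;
    printf("worker %ld running on its own kernel thread\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[4];

    for (long i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```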
In DragonFly, each CPU has its own thread scheduler. Upon creation, threads are assigned to processors and are never preemptively switched from one processor to another; they are only migrated by the passing of an inter-processor interrupt (IPI) message between the CPUs involved. Inter-processor thread scheduling is also accomplished by sending asynchronous IPI messages. One advantage to this clean compartmentalization of the threading subsystem is that the processors' on-board caches in symmetric multiprocessor systems do not contain duplicated data, allowing for higher performance by giving each processor in the system the ability to use its own cache to store different things to work on.[9]
The LWKT subsystem is being employed to partition work among multiple kernel threads (for example in the networking code there is one thread per protocol per processor), reducing competition by removing the need to share certain resources among various kernel tasks.[7]
Shared resources protection
To run safely on multiprocessor machines, access to shared resources (such as files and data structures) must be serialized so that threads or processes do not attempt to modify the same resource at the same time. DragonFly employs critical sections and serializing tokens to prevent such concurrent access. While both Linux and FreeBSD 5 employ fine-grained mutex models to achieve higher performance on multiprocessor systems, DragonFly does not.[7] Until recently, DragonFly also employed spls, but these have been replaced with critical sections.
Much of the system's core, including the LWKT subsystem, the IPI messaging subsystem and the new kernel memory allocator, is lockless, meaning that it works without mutexes, with each process operating on a single CPU. Critical sections are used to protect against local interrupts, individually for each CPU, guaranteeing that a thread currently being executed will not be preempted.[13]
Serializing tokens are used to prevent concurrent accesses from other CPUs and may be held simultaneously by multiple threads, ensuring that only one of those threads is running at any given time. Blocked or sleeping threads therefore do not prevent other threads from accessing the shared resource, unlike a thread that is holding a mutex. Among other things, the use of serializing tokens prevents many of the situations that can result in deadlocks and priority inversions when using mutexes, and it greatly simplifies the design and implementation of many-step procedures that require a resource to be shared among multiple threads. The serializing token code is evolving into something quite similar to the "read-copy-update" (RCU) feature now available in Linux. Unlike Linux's current RCU implementation, DragonFly's is being implemented such that only processors competing for the same token are affected, rather than all processors in the computer.[15]
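In DragonFly kernel code the usual pattern is to acquire the relevant token around the block that touches the shared structure and release it afterwards; if the thread blocks in between, the kernel drops and later reacquires the token automatically. The following is a minimal, illustrative sketch of that pattern; the lwkt_gettoken()/lwkt_reltoken() names and the LWKT_TOKEN_INITIALIZER macro are taken from the DragonFly kernel sources and should be checked against the release in use rather than treated as a stable external API.

```c
/*
 * Illustrative sketch of the LWKT serializing-token pattern described
 * above.  Assumed interfaces (verify against the DragonFly release in
 * use): struct lwkt_token, LWKT_TOKEN_INITIALIZER, lwkt_gettoken(),
 * lwkt_reltoken().
 */
#include <sys/param.h>
#include <sys/thread.h>

static struct lwkt_token foo_token = LWKT_TOKEN_INITIALIZER(foo_token);
static int foo_count;                  /* shared state guarded by foo_token */

static void
foo_increment(void)
{
    lwkt_gettoken(&foo_token);         /* serialize with other holders */
    /*
     * While the token is held and this thread keeps running, no other
     * thread holding the same token runs; if we block here, the token
     * is released and transparently reacquired on wakeup.
     */
    ++foo_count;
    lwkt_reltoken(&foo_token);
}
```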
DragonFly switched to a multiprocessor-safe slab allocator, which requires neither mutexes nor blocking operations for memory assignment tasks.[16] It was eventually ported into the standard C library in userland, where it replaced FreeBSD's malloc implementation.[17]
Virtual kernel
Since release 1.8, DragonFly has had a virtualization mechanism similar to User-mode Linux,[18] allowing a user to run another kernel in userland. The virtual kernel (vkernel) runs in a completely isolated environment with emulated network and storage interfaces, which simplifies the testing of kernel subsystems and clustering features.[9][11]
The vkernel has two important differences from the real kernel: it lacks many routines for low-level hardware management, and it uses C standard library (libc) functions in place of in-kernel implementations wherever possible. As both the real and the virtual kernel are compiled from the same code base, this effectively means that platform-dependent routines and re-implementations of libc functions are clearly separated in the source tree.[19]
The vkernel runs on top of hardware abstractions provided by the real kernel. These include the kqueue-based timer, the console (mapped to the virtual terminal where the vkernel is executed), the disk image, and the virtual kernel Ethernet device (VKE), which tunnels all packets to the host's tap interface.[20]
Package management
Third-party software is available on DragonFly as binary packages via pkgng or from a native ports collection – DPorts.[21]
DragonFly originally used the FreeBSD Ports collection as its official package management system, but starting with the 1.4 release switched to NetBSD's pkgsrc system, which was perceived as a way of lessening the amount of work needed for third-party software availability.[6][22] Eventually, maintaining compatibility with pkgsrc proved to require more effort than was initially anticipated, so the project created DPorts, an overlay on top of the FreeBSD Ports collection.[23][24]
CARP support
The initial implementation of Common Address Redundancy Protocol (commonly referred to as CARP) was finished in March 2007.[25] As of 2011, CARP support is integrated into DragonFly BSD.[26]
HAMMER file systems
Alongside the Unix File System, which is typically the default file system on BSDs, DragonFly BSD supports the HAMMER and HAMMER2 file systems. HAMMER2 is the default file system as of version 5.2.0.
HAMMER was developed specifically for DragonFly BSD to provide a feature-rich yet better designed analogue of the increasingly popular ZFS.[9][11][27] HAMMER supports configurable file system history, snapshots, checksumming, data deduplication and other features typical for file systems of its kind.[18][28]
HAMMER2, the successor of the HAMMER file system, is now considered stable, used by default, and the focus of further development. Plans for its development were initially shared in 2012.[29] In 2017, Dillon announced that the next DragonFly BSD version (5.0.0) would include a usable, though still experimental, version of HAMMER2, and described features of the design.[30] With the release after 5.0.0, version 5.2.0, HAMMER2 became the new default file system.
devfs
DragonFly BSD received a new device file system (devfs), which dynamically adds and removes device nodes, allows accessing devices by connection paths, recognises drives by serial numbers, and removes the need for a pre-populated /dev file system hierarchy. It was implemented as a Google Summer of Code 2009 project.[31]
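Because each recognised drive also appears under a stable serial-number path, a program can open a disk without knowing which controller name it received at boot. A minimal sketch using only standard calls; the serial number shown is a placeholder, following the /dev/serno/ naming that devfs uses for drives:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder serial number: devfs publishes one entry per
     * recognised drive under /dev/serno/, independent of which
     * controller or port the drive is attached to. */
    const char *path = "/dev/serno/9VMBWDM1";
    char buf[512];

    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Read the first sector just to demonstrate ordinary block-device I/O. */
    if (read(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
        perror("read");
    close(fd);
    return 0;
}
```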
Application snapshots
DragonFly BSD supports an Amiga-style resident applications feature: it takes a snapshot of a large, dynamically linked program's virtual memory space after loading, allowing future instances of the program to start much more quickly than they otherwise would. This replaces the prelinking capability that was being worked on earlier in the project's history, as resident support is much more efficient. Large programs like those found in KDE Software Compilation, with many shared libraries, benefit the most from this support.[32]
Development and distribution
As with FreeBSD and OpenBSD, the developers of DragonFly BSD are slowly replacing older, pre-function-prototype (K&R) style C code with modern ANSI equivalents. Similar to other operating systems, DragonFly's version of the GNU Compiler Collection has an enhancement called the Stack-Smashing Protector (ProPolice) enabled by default, providing some additional protection against buffer overflow based attacks. As of 23 July 2005, the kernel is no longer built with this protection by default.[32]
Being a derivative of FreeBSD, DragonFly has inherited an easy-to-use integrated build system that can rebuild the entire base system from source with only a few commands. The DragonFly developers use the Git version control system to manage changes to the DragonFly source code. Unlike its parent FreeBSD, DragonFly has both stable and unstable releases in a single source tree, due to a smaller developer base.[7]
Like the other BSD kernels (and those of most modern operating systems), DragonFly employs a built-in kernel debugger to help the developers find kernel bugs. Furthermore, as of October 2004, a debug kernel, which makes bug reports more useful for tracking down kernel-related problems, is installed by default, at the expense of a relatively small quantity of disk space. When a new kernel is installed, the backup copy of the previous kernel and its modules are stripped of their debugging symbols to further minimize disk space usage.
Distribution media
The operating system is distributed as a Live CD and Live USB that boots into a complete DragonFly system.[18][31] It includes the base system and a complete set of manual pages, and may include source code and useful packages in future versions. The advantage of this is that with a single CD users can install the software onto a computer, use a full set of tools to repair a damaged installation, or demonstrate the capabilities of the system without installing it. Daily snapshots are available from the master site for those who want to install the most recent versions of DragonFly without building from source.
Like the other free and open-source BSDs, DragonFly is distributed under the terms of the modern version of the BSD license.
Release history
Reverse chronological:

| Version | Date[33] |
|---|---|
| 6.4.2 | 9 May 2025 |
| 6.4.1 | 30 April 2025 |
| 6.4 | 30 December 2022 |
| 6.2.1 | 9 January 2022 |
| 6.0 | 10 May 2021 |
| 5.8 | 3 March 2020 |
| 5.6 | 17 June 2019 |
| 5.4 | 3 December 2018 |
| 5.2 | 10 April 2018 |
| 5.0 | 16 October 2017 |
| 4.8 | 27 March 2017 |
| 4.6 | 2 August 2016 |
| 4.4 | 7 December 2015 |
| 4.2 | 29 June 2015 |
| 4.0 | 25 November 2014 |
| 3.8 | 4 June 2014 |
| 3.6 | 25 November 2013 |
| 3.4 | 29 April 2013 |
| 3.2 | 2 November 2012 |
| 3.0 | 22 February 2012 |
| 2.10 | 26 April 2011 |
| 2.8 | 30 October 2010 |
| 2.6 | 6 April 2010 |
| 2.4 | 16 September 2009 |
| 2.2 | 17 February 2009 |
| 2.0 | 20 July 2008 |
| 1.12 | 26 February 2008 |
| 1.10 | 6 August 2007 |
| 1.8 | 30 January 2007 |
| 1.6 | 24 July 2006 |
| 1.4 | 7 January 2006 |
| 1.2 | 8 April 2005 |
| 1.0 | 12 July 2004 |
References
- ^ "DragonFly BSD 6.4". Dragonfly BSD. Retrieved 14 May 2025.
- ^ Dillon, Matthew (22 August 2006), "Re: How much of microkernel?", kernel mailing list, retrieved 14 September 2011
- ^ "DragonFly BSD License", DragonFly BSD, retrieved 17 January 2015
- ^ Dillon, Matthew (16 July 2003), "Announcing DragonFly BSD!", freebsd-current mailing list, retrieved 26 July 2007
- ^ Lehey, Greg (2001), Improving the FreeBSD SMP implementation (PDF), USENIX, retrieved 22 February 2012
- ^ a b Kerner, Sean Michael (10 January 2006), "New DragonFly Released For BSD Users", InternetNews, archived from the original on 28 June 2011, retrieved 20 November 2011
- ^ a b c d e f Biancuzzi, Federico (8 July 2004), "Behind DragonFly BSD", O'Reilly Media, archived from the original on 9 April 2014, retrieved 20 November 2011
- ^ Loli-Queru, Eugenia (13 March 2004), "Interview with Matthew Dillon of DragonFly BSD", OSNews, retrieved 22 February 2012
- ^ a b c d e f Chisnall, David (15 June 2007), "DragonFly BSD: UNIX for Clusters?", InformIT, retrieved 22 November 2011
- ^ Hsu, Jeffery M. (13 March 2004). The DragonFly BSD Operating System (PDF). AsiaBSDCon 2004. Taipei, Taiwan. Retrieved 20 November 2011.
- ^ a b c Andrews, Jeremy (6 August 2007), "Interview: Matthew Dillon", KernelTrap, archived from the original on 15 May 2011
- ^ "DragonFly BSD MP Performance Significantly Improved", OSNews, 16 November 2011, retrieved 19 November 2011
- ^ a b Luciani, Robert (24 May 2009), M:N threading in DragonflyBSD (PDF), BSDCon, archived from the original (PDF) on 23 December 2010
- ^ Sherrill, Justin (11 January 2004), Paying off already, archived from the original on 30 April 2014, retrieved 20 November 2011
- ^ Pistritto, Joe; Dillon, Matthew; Sherrill, Justin C.; et al. (24 April 2004), "Serializing token", kernel mailing list, archived from the original on 15 April 2013, retrieved 20 March 2012
- ^ Bonwick, Jeff; Adams, Jonathan (3 January 2002), Magazines and Vmem: Extending the Slab Allocator to Many CPUs and Arbitrary Resources, USENIX, retrieved 20 November 2011
- ^ Dillon, Matthew (23 April 2009), "New libc malloc committed", kernel mailing list, retrieved 8 August 2011
- ^ a b c d Vervloesem, Koen (21 April 2010), "DragonFly BSD 2.6: towards a free clustering operating system", LWN.net, retrieved 19 November 2011
- ^ Economopoulos, Aggelos (16 April 2007), "A peek at the DragonFly Virtual Kernel", LWN.net, no. part 1, retrieved 8 December 2011
- ^ Economopoulos, Aggelos (16 April 2007), "A peek at the DragonFly Virtual Kernel", LWN.net, no. part 2, retrieved 8 December 2011
- ^ "HowTo DPorts", DragonFly BSD, retrieved 2 December 2013
- ^ Weinem, Mark (2007). "10 years of pkgsrc". NetBSD. Joerg Sonnenberger about pkgsrc on DragonFly BSD and his pkgsrc development projects. Retrieved 22 November 2011.
- ^ Sherrill, Justin (30 September 2013), "Why dports?", DragonFly BSD Digest, archived from the original on 30 April 2014, retrieved 2 December 2011
- ^ Sherrill, Justin (29 September 2013), "Any new packages?", users mailing list, retrieved 2 December 2013
- ^ Buschmann, Jonathan (14 March 2007), "First Patch to get CARP on Dfly", kernel mailing list, retrieved 20 November 2011
- ^ "CARP(4) manual page", DragonFly On-Line Manual Pages, retrieved 20 November 2011
- ^ Dillon, Matthew (10 October 2007), "Re: HAMMER filesystem update - design document", kernel mailing list, retrieved 20 November 2011
- ^ Larabel, Michael (7 January 2011), "Can DragonFlyBSD's HAMMER Compete With Btrfs, ZFS?", Phoronix, retrieved 20 November 2011,
HAMMER does appear to be a very interesting BSD file-system. It is though not quite as fast as the ZFS file-system on BSD, but this is also an original file-system to the DragonFlyBSD project rather than being a port from OpenSolaris. Not only is HAMMER generally faster than the common UFS file-system, but it also has a much greater feature-set.
- ^ Dillon, Matthew (8 February 2012), "DESIGN document for HAMMER2 (08-Feb-2012 update)", users, retrieved 22 February 2012
- ^ Dillon, Matthew (18 August 2017), "Next DFly release will have an initial HAMMER2 implementation", users, retrieved 3 July 2018
- ^ a b Mr (7 January 2010), "DragonFlyBSD with Matthew Dillon", bsdtalk, archived from the original (ogg) on 25 April 2012, retrieved 20 November 2011
- ^ a b "DragonFly BSD diary", DragonFly BSD, 7 January 2006, retrieved 19 November 2011
- ^ "DragonFly: Releases", DragonFly BSD, retrieved 19 June 2014
- ^ Tigeot, Francois (31 July 2007), "KMS + i915 support now in -master", users mailing list, retrieved 2 December 2013
- ^ Matthew Dillon (4 June 2009). "Re: DragonFly-2.3.1.165.g25822 master sys/dev/disk/ahci Makefile TODO ahci.c ahci.h ahci_attach.c ahci_cam.c ahci_dragonfly.c ahci_dragonfly.h atascsi.h".
- ^ a b Kerner, Sean Michael (25 July 2006), "DragonFly BSD 1.6 Cuts the Cord", InternetNews, retrieved 20 November 2011
- ^ Townsend, Trent (18 January 2006), "A Quick Review of DragonFly BSD 1.4", OSNews, retrieved 16 November 2011
Origins and History
Fork from FreeBSD
DragonFly BSD originated as a fork of FreeBSD 4.8, initiated by Matthew Dillon in June 2003.[6] Dillon, a longtime FreeBSD contributor since 1994, sought to preserve the stability and performance of the FreeBSD 4.x series amid growing dissatisfaction with the project's direction for FreeBSD 5, which involved significant architectural changes leading to release delays and performance regressions.[1] By forking from the RELENG_4 branch, DragonFly aimed to evolve the 4.x codebase independently, unconstrained by FreeBSD's shift toward a more monolithic threading model using mutexes.[2]
The initial development goals emphasized innovative kernel redesigns to enhance scalability and reliability, including the introduction of lightweight kernel threads (LWKT) for improved symmetric multiprocessing (SMP) performance through lock-free synchronization and partitioning techniques.[6] A key focus was exploring native clustering capabilities with cache coherency, enabling a single system image (SSI) via message-passing mechanisms that decoupled execution contexts from virtual memory address spaces, targeting both high-performance servers and desktop environments.[7] These objectives positioned DragonFly as an experimental platform for advanced BSD kernel concepts while backporting select FreeBSD 5 features, such as device drivers, to maintain compatibility.[8]
Initially, DragonFly continued using FreeBSD's Ports collection for package management, as it was forked from FreeBSD 4.8. Starting with version 1.4 in 2006, the project adopted NetBSD's pkgsrc to leverage a portable, cross-BSD framework for third-party software, which facilitated resource sharing and eased maintenance for a smaller development team.[2] This choice supported the project's goal of rewriting the packaging infrastructure to better suit its evolving kernel and userland.[8]
Dillon publicly announced DragonFly BSD on July 16, 2003, via the FreeBSD-current mailing list, describing it as the "logical continuation of the FreeBSD 4.x series" and inviting contributions to advance its kernel-focused innovations.[8] The announcement highlighted immediate priorities like SMP infrastructure and I/O pathway revisions, setting the stage for DragonFly's distinct trajectory within the BSD family.[2]
Development Philosophy and Milestones
DragonFly BSD's development philosophy emphasizes minimal lock contention to achieve high concurrency, drawing from UNIX principles while prioritizing SMP scalability, filesystem coherency, and system reliability. Initially focused on native clustering support with cache coherency mechanisms, the project shifted post-fork to enhance single-system performance through innovative locking strategies, such as token-based synchronization, which allows efficient shared-resource access without traditional mutex overhead. This approach promotes maintainability by reducing complexity in kernel subsystems and enables features like virtual kernels, which run full kernel instances as user processes to facilitate experimentation and resource isolation without compromising the host system.[2][9][10]
Under the leadership of founder Matthew Dillon, a veteran FreeBSD contributor, DragonFly BSD operates as a community-driven project where code submissions are reviewed collaboratively, with Dillon holding final approval to ensure alignment with core goals. The philosophy underscores pragmatic innovation, favoring algorithmic simplicity and performance over legacy compatibility, which has guided ongoing efforts toward supporting exabyte-scale storage via filesystems like HAMMER and adapting to modern hardware, including multi-core processors and advanced graphics.[11][2][12]
Key milestones reflect this evolution: from 2003 to 2007, extensive kernel subsystem rewrites laid the foundation for clustering while improving overall architecture. In 2008, DragonFly 2.0 introduced the HAMMER filesystem, enhancing data integrity with features like snapshots and mirroring. By late 2011, fine-grained locking in the VM system significantly boosted multi-core efficiency. Subsequent achievements included porting the Linux DRM for accelerated graphics between 2012 and 2015, scaling the PID, PGRP, and session subsystems for SMP in 2013, and optimizing fork/exec/exit/wait mechanisms in 2014 to support higher concurrency. In 2017, version 5.0 introduced the HAMMER2 filesystem for advanced storage capabilities, including unlimited snapshots and built-in compression. Version 6.0 followed in May 2021, and the 6.2 series added the NVMM hypervisor module in early 2022. As of May 2025, the current stable release is 6.4.2.[2]
Kernel Architecture
Threading and Scheduling
DragonFly BSD employs a hybrid threading model that separates kernel and user space scheduling for enhanced isolation and performance. The kernel utilizes Lightweight Kernel Threads (LWKT), which are managed by the LWKT scheduler, a per-CPU fixed-priority round-robin mechanism designed for efficient execution of kernel-level tasks.[13] In contrast, user threads are handled by a dedicated User Thread Scheduler, which selects and assigns one user thread per CPU before delegating to the LWKT scheduler, thereby preventing user-space activities from directly interfering with kernel operations.[13]
The LWKT system facilitates message passing between kernel threads via a lightweight port-based interface, allowing asynchronous or synchronous communication without necessitating full context switches in many scenarios, which supports high concurrency levels.[14] This design enables the kernel to manage up to a million processes or threads, provided sufficient physical memory, by minimizing overhead in thread creation and inter-thread coordination.[13]
The scheduler is inherently SMP-aware, with each CPU maintaining an independent LWKT scheduler that assigns threads non-preemptively to specific processors, promoting scalability across multi-core systems.[13] It supports up to 256 CPU threads while reducing lock contention through token-based synchronization, where the LWKT scheduler integrates atomic operations to acquire tokens with minimal spinning before blocking, ensuring low-latency access in contended environments.[15]
Additionally, DragonFly BSD provides process checkpointing capabilities, allowing processes to be suspended to disk for later resumption on the same or different machines, facilitating debugging, migration, and resource management; this feature integrates with virtual kernels to enable kernel-level process mobility.[13][16]
Locking and Shared Resources
DragonFly BSD employs a token-based locking system, known as LWKT serializing tokens, to protect shared kernel resources while enabling high scalability and low contention. These tokens support both shared and exclusive modes, allowing multiple readers to access resources concurrently while writers acquire exclusive control, with atomic operations like atomic_cmpset*() ensuring serialization. Unlike traditional mutexes, tokens permit recursion and prioritize spinning over blocking to minimize overhead, releasing all held tokens if a thread blocks, which reduces contention in multiprocessor environments. This design facilitates efficient handling of shared data structures by optimizing for common read-heavy workloads, contributing to the kernel's ability to scale across many cores without excessive lock wars.[9]
Complementing this, DragonFly BSD features lockless kernel memory allocators to avoid synchronization overhead in allocation paths. The kmalloc allocator is a per-CPU slab-based system that operates essentially without locks, distributing slabs across CPUs to enable parallel allocations and deallocations with minimal contention. For more specialized needs, objcache provides an object-oriented allocator tailored to frequent creation and destruction of specific kernel object types, such as network buffers or filesystem inodes, also designed to be lockless and built atop kmalloc for efficiency. These allocators enhance overall kernel performance by eliminating global locks in memory management, supporting high-throughput operations in multi-threaded contexts.[13]
Fine-grained locking permeates key subsystems, further bolstering scalability for shared resources. In the virtual memory (VM) system, locks are applied at the per-object level down to the physical map (pmap), a refinement completed in late 2011 that yielded substantial performance improvements on multi-core systems by reducing global contention. The network stack employs fine-grained locks alongside packet hashing to allow concurrent processing across CPUs, with major protocols like IPFW and PF operating with few locking collisions for multi-gigabyte-per-second throughput. Similarly, disk I/O subsystems, including AHCI for SATA and NVMe drivers, use per-queue locking and hardware parallelism to achieve contention-free operations, enabling high-bandwidth storage access without traditional monolithic locks.[2][13][17]
The process identifier (PID), process group (PGRP), and session (SESSION) subsystems are engineered for extreme scaling without relying on traditional locks, accommodating up to one million user processes through SMP-friendly algorithms. This design, implemented in 2013, leverages per-CPU structures and atomic operations to handle massive process loads—tested successfully up to 900,000 processes—while maintaining low latency and avoiding bottlenecks in process management. Such optimizations tie into the broader threading model but focus on resource protection to support dense workloads in virtualized or high-concurrency environments.[13][2]
Virtual Kernels
The virtual kernel (vkernel) feature in DragonFly BSD enables the execution of a complete DragonFly BSD kernel as a userland process within the host system, facilitating isolated kernel-level operations without impacting the host kernel.[13][10] Introduced in DragonFly BSD 1.8, released on January 30, 2007, vkernel was initially proposed by Matthew Dillon in September 2006 to address challenges in cache coherency and resource isolation for clustering applications.[18] This design allows developers to load and run kernel code in a contained environment, eliminating the need for system reboots during iterative testing and reducing boot sequence overhead.[19]
At its core, vkernel operates by treating the guest kernel as a single host process that can manage multiple virtual memory spaces (vmspaces) through system calls like vmspace_create() and vmspace_destroy().[19] This supports hosting multiple virtual kernels on a single host, each in isolated environments with options for shared or private memory allocation, such as specifying memory limits via command-line flags like -m 64m.[10] Page faults are handled cooperatively between the host and guest kernels, with the host passing faults to the vkernel for resolution using a userspace virtual pagetable (vpagetable) and mechanisms like vmspace_mmap().[19] Signal delivery employs efficient mailboxes via sigaction() with the SA_MAILBOX flag, allowing interruptions with EINTR to minimize overhead.[19] Nested vkernels are possible due to recursive vmspace support, and the guest kernel links against libc for thread awareness, integrating with the host's scheduler for performance.[19] This complements DragonFly's lightweight kernel threading model by enabling kernel code to execute in userland contexts.[13]
Key features include device passthrough and simulation, supporting up to 16 virtual disks (vkd), CD-ROMs (vcd), and network interfaces (vke) that map to host resources like tap(4) devices and bridges.[10] For networking, vke interfaces connect via the host's bridge(4), such as bridging to a physical interface like re0, enabling simulated network environments without dedicated hardware.[10] Device emulation relies on host primitives, including kqueue for timers and file I/O for disk access, ensuring efficient resource utilization.[19] To enable vkernel, the host requires the sysctl vm.vkernel_enable=1, and guest kernels must be compiled with the VKERNEL or VKERNEL64 configuration option.[10]
Vkernel is primarily used for kernel debugging and testing, allowing safe experimentation with panics or faults—such as NULL pointer dereferences—via tools like GDB and the kernel debugger (ddb), without risking the host system.[20] Developers can attach GDB to the vkernel process using its PID, perform backtraces with bt, and inspect memory via /proc/<PID>/mem, as demonstrated in tracing issues like bugs in sys_ktrace.[20] Beyond debugging, it supports resource partitioning for scalability testing and development of clustering features, such as single system image setups over networks, by providing isolated environments for validation without hardware dependencies.[18][20]
Device Management
DragonFly BSD employs a device file system known as DEVFS to manage device nodes dynamically within the /dev directory. DEVFS automatically populates and removes device nodes as hardware is detected or detached, providing a scalable and efficient interface for accessing kernel devices without manual intervention.[21] This implementation supports cloning on open, which facilitates the creation of multiple instances of device drivers, enhancing flexibility for applications requiring isolated device access.[21]
A key feature of DEVFS is its integration with block device serial numbers, enabling persistent identification of storage devices such as ATA, SATA, SCSI, and USB mass storage units. Serial numbers are probed and recorded in /dev/serno/, allowing systems to reference disks by unique identifiers rather than volatile names like /dev/ad0, which supports seamless disk migration across hardware without reconfiguration.[13] For example, a disk with serial number 9VMBWDM1 can be consistently accessed via /dev/serno/9VMBWDM1, ensuring stability in environments with frequent hardware changes.[22]
DragonFly BSD includes robust drivers for modern storage interfaces, including AHCI for Serial ATA controllers and NVMe for PCIe-based non-volatile memory controllers. The AHCI driver supports hotplug operations, particularly on AMD chipsets, allowing dynamic attachment and detachment of SATA devices with minimal disruption.[23] Similarly, the NVMe driver, implemented from scratch, handles multiple namespaces and queues per controller, enabling efficient multi-device configurations and high-performance I/O in enterprise storage setups.[24] These drivers contribute to the system's partial implementation of Device Mapper standards, facilitating layered device transformations.[3]
For secure storage, DragonFly BSD provides transparent disk encryption through the dm_target_crypt module within its Device Mapper framework, which is compatible with Linux's dm-crypt and supports LUKS volumes via cryptsetup.[25] Additionally, tcplay serves as a BSD-licensed, drop-in compatible tool for TrueCrypt and VeraCrypt volumes, leveraging dm_target_crypt to unlock and map encrypted containers without proprietary dependencies.[13] This encryption capability integrates with the broader storage stack, allowing encrypted devices to be treated as standard block devices for higher-level operations.[25]
The device I/O subsystem in DragonFly BSD is designed for low-contention access, minimizing kernel locks to support high-throughput operations across multiple cores. This architecture enables scalable handling of large-scale storage, including up to four swap devices with a total capacity of 55 terabytes, where I/O is interleaved for optimal performance and requires only 1 MB of RAM per 1 GB of swap space.[13] Such design choices ensure efficient resource utilization in demanding environments, with virtually no in-kernel bottlenecks impeding concurrent device access.[3]
Filesystems and Storage
HAMMER Filesystem
The HAMMER filesystem, developed by Matthew Dillon for DragonFly BSD, was introduced in version 2.0 on July 20, 2008.[26] Designed from the ground up to address limitations in traditional BSD filesystems like UFS, it targets exabyte-scale storage capacities (up to 1 exabyte per filesystem) while incorporating master/slave replication for high-availability mirroring across networked nodes.[26] This replication model allows a master pseudo-filesystem to propagate changes to one or more read-only slave instances, ensuring data consistency without multi-master complexity.[27]
HAMMER's core features emphasize reliability and temporal access. Instant crash recovery is achieved via intent logging, which records metadata operations as UNDO/REDO pairs in a dedicated log; upon remount after a crash, the filesystem replays these in seconds without requiring a full fsck scan.[26] It supports 60-day rolling snapshots by default, automatically generated daily via cron jobs with no runtime performance overhead, as snapshots are lightweight references to prior transaction states rather than full copies.[12] Historical file versions are tracked indefinitely until pruned, enabling users to access any past state of a file or directory using 64-bit transaction IDs (e.g., via the @@0x<transaction_id> syntax in paths), which facilitates fine-grained recovery from accidental deletions or modifications.[27]
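Because each change carries a 64-bit transaction ID, a historical version of a file can be read by any unmodified program simply by appending the @@0x<transaction_id> suffix to its path. A minimal sketch using only standard calls; the path and transaction ID are placeholders (real IDs can be listed with the hammer(8) utility):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder path and transaction ID: a real ID would be taken
     * from the filesystem's history for this file. */
    const char *old_version = "/home/user/notes.txt@@0x00000001061a8ba0";
    char buf[4096];
    ssize_t n;

    int fd = open(old_version, O_RDONLY);   /* opens the historical copy */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        (void)write(STDOUT_FILENO, buf, (size_t)n);
    close(fd);
    return 0;
}
```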
The on-disk layout employs a single, global B+ tree to index all filesystem elements, including inodes, directories, and indirect blocks, providing logarithmic-time operations scalable to massive datasets.[26] To enable isolation and replication, HAMMER introduces pseudo-filesystems (PFS), which function like independent mount points but share the underlying storage volume; the root PFS (ID 0) serves as the master, while additional PFSs can be created for slaves or segmented workloads.[26] A single HAMMER filesystem supports up to 65,536 PFSs across its structure, with volumes (up to 256 per filesystem, each up to 4 petabytes) providing the physical backing via disk slices or raw devices.[26] All data and metadata are protected by 32-bit CRC checksums to detect corruption.[27]
Performance is optimized through several mechanisms tailored for modern workloads. Background filesystem checks run asynchronously to verify integrity without blocking access, while the design decouples front-end user operations from back-end disk I/O, allowing bulk writes to bypass the B-tree for sequential data streams.[26] Early architectural choices laid precursors for compression (planned as per-volume filters) and deduplication, where duplicate 64 KB blocks are identified and stored only once during reblocking operations, reducing storage redundancy in repetitive datasets.[26] These elements ensure HAMMER remains efficient for both small-file metadata-heavy tasks and large-scale archival use.[27]
HAMMER's design has influenced subsequent developments, including its successor HAMMER2, which refines replication and adds native compression while retaining core B-tree principles.[12]
HAMMER2 Filesystem
HAMMER2 is a modern filesystem developed for DragonFly BSD as the successor to the original HAMMER filesystem. It debuted as an experimental option in DragonFly BSD 5.0, released in October 2017, and became the default filesystem with version 5.2 in 2018.[28] It employs a block-level copy-on-write (COW) design, which enhances data integrity by avoiding in-place modifications and supports efficient crash recovery through consistent on-disk snapshots of the filesystem topology.[29] This COW mechanism also improves overall efficiency by reducing fragmentation and enabling features like instantaneous writable snapshots, which are created by simply copying a 1 KB pseudo-file system (PFS) root inode.[29]
Key enhancements in HAMMER2 include built-in compression using algorithms such as zlib or LZ4, which can be configured per directory or file and applies to blocks up to 64 KB, achieving compression ratios from 25% to 400% depending on data patterns.[29] The filesystem supports automatic deduplication, with live deduplication occurring during operations like file copying to share identical data blocks and minimize physical writes.[29] Additionally, batch deduplication tools allow scanning for redundancies post-creation, and remote mounting is possible over NFS-like protocols via its clustering capabilities.[4] HAMMER2 also provides directory-level quotas for space and inode usage tracking, along with support for multi-volume setups through device ganging, enabling distributed storage across independent devices.[30]
In terms of performance, HAMMER2 incorporates tiered storage arrangements via clustering, allowing nodes with varying hardware configurations, and fast cloning through its snapshot mechanism, which is nearly instantaneous and avoids full data duplication.[29] The 6.4 series, starting from December 2022, introduced experimental remote mounting of HAMMER2 volumes, enhancing distributed access.[4] Subsequent updates up to 6.4.2 in May 2025 addressed large-scale operation issues, such as fixing runaway kernel memory during bulkfree scans on deep directory trees and improving lookup performance by reducing unnecessary retries on locked elements.[4] While HAMMER2 maintains backward compatibility with legacy HAMMER volumes through read-only mounting, its focus remains on advancing modern storage paradigms.[12]
Auxiliary Filesystems
DragonFly BSD provides several auxiliary filesystems and mechanisms that support specialized storage needs, complementing its primary filesystems by enabling efficient mounting, temporary storage, caching, and dynamic linking without relying on core persistent storage details. These features enhance flexibility for system administration, virtualization, and performance optimization in diverse environments.
NULLFS serves as a loopback filesystem layer, allowing a directory or filesystem to be mounted multiple times with varying options, such as read-only access or union stacking, which facilitates isolated environments like jails and simplifies administrative tasks without data duplication.[13] This mechanism, inherited and refined from BSD traditions, ensures low-overhead access to underlying data structures, making it ideal for scenarios requiring multiple views of the same content.[31]
TMPFS implements a memory-based temporary filesystem that stores both metadata and file data in RAM, backed by swap space only under memory pressure to minimize I/O latency and contention.[32] Integrated closely with DragonFly's virtual memory subsystem, it supports scalable operations for runtime data like logs or session files, with recent enhancements clustering writes to reduce paging overhead by up to four times in low-memory conditions.[33] This design prioritizes speed for short-lived data, automatically mounting points like /var/run via configuration for immediate efficiency.[34]
SWAPCACHE extends swap space functionality by designating an SSD partition to cache clean filesystem data and metadata, accelerating I/O on hybrid storage setups where traditional disks handle bulk storage.[35] Configured via simple partitioning and activation commands, it transparently boosts read performance for frequently accessed blocks, yielding substantial gains in server and workstation workloads with minimal hardware additions.[36] This caching layer operates alongside primary filesystems, providing a non-intrusive performance uplift without altering underlying storage layouts.[13]
Variant symlinks introduce context-sensitive symbolic links that resolve dynamically based on process or user attributes, using embedded variables like {VARIANT} to point to environment-specific targets.[37] Managed through the varsym(2) interface and system-wide configurations in varsym.conf(5), they enable applications and administrators to create adaptive paths, such as user-specific binaries or architecture-dependent libraries, reducing manual configuration overhead.[38] Enabled via sysctl vfs.varsym_enable, this feature has been a core tool in DragonFly since its early development, offering precise control over symlink behavior without runtime scripting.[13]
Networking and Services
CARP Implementation
DragonFly BSD includes native support for the Common Address Redundancy Protocol (CARP), a protocol that enables multiple hosts on the same local network to share IPv4 and IPv6 addresses for high-availability networking.[39] CARP's primary function is to provide failover by allowing a backup host to assume the shared IP addresses if the master host becomes unavailable, ensuring continuous network service availability.[39] It also supports load balancing across hosts by distributing traffic based on configured parameters.[39]
The protocol operates in either preemptive or non-preemptive modes, determined by the net.inet.carp.preempt sysctl value (default 1, enabling preemption).[40] In preemptive mode, a backup host with higher priority (lower advertisement skew) automatically assumes the master role upon recovery, while non-preemptive mode requires manual intervention or configuration to change roles.[40] CARP uses virtual host IDs (VHIDs) ranging from 1 to 255 and a shared password for authentication via SHA1-HMAC to secure group membership and prevent unauthorized takeovers.[39]
Configuration of CARP interfaces occurs primarily through the ifconfig utility at runtime or persistently via /etc/rc.conf by adding the interface to cloned_interfaces.[40] Key parameters include advbase (base advertisement interval in seconds, 1-255), advskew (skew value 0-254 to influence master election, where lower values prioritize mastery), vhid, pass (authentication password), and the parent physical interface via carpdev.[41] For example, a basic setup might use ifconfig carp0 create vhid 1 pass secret advskew 0 192.0.2.1/24 on the master host.[41] CARP processing is controlled by the net.inet.carp.allow sysctl, which is enabled by default.[39]
Failure detection and graceful degradation are handled through demotion counters, which track interface or service readiness and adjust advertisement skew dynamically to lower a host's priority if issues arise, such as a downed physical link or unsynchronized state.[42] Counters can be viewed with ifconfig -g groupname and incremented manually (e.g., via ifconfig carp0 -demote) to simulate failures or prevent preemption during maintenance.[42] This mechanism ensures reliable failover without unnecessary role switches.
For secure redundancy, CARP integrates with DragonFly BSD's firewall frameworks: PF requires explicit rules to pass CARP protocol (IP protocol 112) packets, such as pass out on $ext_if proto carp keep state, while IPFW needs corresponding pipe or rule allowances to avoid blocking advertisements.[42] Both firewalls support IPv4 and IPv6 CARP traffic, allowing filtered failover in production environments like clustered firewalls or gateways.[42] In clustered services, CARP complements filesystem replication (e.g., HAMMER) by providing network-layer redundancy without overlapping storage concerns.[43]
Time Synchronization
DragonFly BSD employs DNTPD, a custom lightweight implementation of the Network Time Protocol (NTP) client daemon, designed specifically to synchronize the system clock with external time sources while minimizing resource usage. Unlike traditional NTP daemons, DNTPD leverages double staggered linear regression and correlation analysis to achieve stratum-1 level accuracy without requiring a local GPS receiver, enabling precise time and frequency corrections even in challenging network conditions. This approach accumulates regressions at the nominal polling rate, defaulting to 300 seconds, and requires a high correlation threshold (≥0.99 for 8 samples or ≥0.96 for 16 samples) before applying adjustments, allowing for offset errors as low as 20 milliseconds, or 1 millisecond with low-latency sources.[13][44]
DNTPD supports pool configurations by allowing multiple server targets in its setup, facilitating redundancy and resilience against individual source failures, such as network outages or 1-second offsets. It integrates seamlessly with the kernel via the adjtime(2) system call for gradual clock adjustments, avoiding abrupt jumps that could disrupt ongoing operations; coarse offsets exceeding 2 minutes are corrected initially if needed, while finer sliding offsets and frequency drifts within 2 parts per million (ppm) are handled on an ongoing basis. Configuration occurs through the /etc/dntpd.conf file, which lists time sources as simple "server" directives, one per line.
Asynchronous Operations
DragonFly BSD enhances system performance through asynchronous input/output (I/O) mechanisms in its networking and storage layers, allowing non-blocking operations to handle high loads efficiently without stalling critical paths.[13] The NFSv3 implementation features full RPC asynchronization, replacing traditional userland nfsiod(8) threads with just two dedicated kernel threads that manage all client-side I/O multiplexing and non-blocking file operations over the network. This approach prevents bottlenecks from misordered read-ahead requests, improving reliability and throughput for distributed file access.[13][45]
The network stack supports high-concurrency I/O via lightweight kernel threading (LWKT) message passing, where most hot-path operations are asynchronous and thread-serialized for scalability. To optimize for symmetric multiprocessing (SMP) environments, it employs serializing token locking, which serializes broad code sections with minimal contention, allowing recursive acquisition and automatic release on blocking to boost parallel I/O performance.[9]
DragonFly BSD integrates the DragonFly Mail Agent (DMA), a lightweight SMTP server designed for efficient local delivery and remote transfers, leveraging the kernel's asynchronous networking primitives for non-blocking mail handling in resource-constrained setups.[13][46] Starting in the 6.x release series, optimizations enable experimental remote mounting of HAMMER2 volumes directly over the network, reducing latency for distributed storage while building on asynchronous NFSv3 for seamless integration with remote filesystem features.[3]
Software Management and Distribution
Package Management
DragonFly BSD employs DPorts as its primary package management system for installing and maintaining third-party software. DPorts is a ports collection derived from FreeBSD's Ports Collection, adapted with minimal modifications to ensure compatibility while incorporating DragonFly-specific ports. This system allows users to build applications from source or install pre-compiled binary packages, emphasizing a stable and familiar environment for software distribution.[5]
Historically, DragonFly BSD relied on NetBSD's pkgsrc for package management up through version 3.4, which provided source-based compilation and binary support across multiple BSD variants. The project transitioned to DPorts starting with the 3.6 release in 2013 to enhance compatibility with DragonFly's evolving kernel and userland, as well as to streamline maintenance by leveraging FreeBSD's extensive porting efforts. This shift more than doubled the available software options and aligned DragonFly closer to FreeBSD's ecosystem without adopting its full base system.[47]
DPorts undergoes quarterly merges from FreeBSD's stable ports branches to prioritize reliability over the latest upstream changes, with the 2024Q3 branch fully integrated as of late 2024 and the 2025Q2 branch under development in November 2025. These merges ensure a curated set of ports that compile and run effectively on DragonFly, supporting thousands of applications ranging from desktop environments to servers. Binary packages are available primarily for the x86_64 architecture, DragonFly's sole supported platform since version 4.0.[3][48][49]
Software installation and updates are handled via the pkg(8) tool, a lightweight binary package manager that supports commands like pkg install <package> for adding applications, pkg upgrade for system-wide updates, and pkg delete for removal. For source builds, users employ the standard make utility within the DPorts tree, fetched via git clone from the official repository. The system focuses on conflict-free upgrades and stability, with features like package auditing (pkg audit) to detect vulnerabilities, making it suitable for production environments rather than rapid development cycles.[5]
Application-Level Features
DragonFly BSD provides several application-level features that enhance runtime management and flexibility, leveraging its unique kernel and filesystem designs for seamless operation without interrupting ongoing processes. One key feature is process checkpointing, which allows applications to be suspended and saved to disk at any time, enabling later resumption on the same system or migration to another compatible system. This is achieved through the sys_checkpoint(2) system call, which serializes the process state, including multi-threaded processes, into a file, and the checkpt(1) utility for restoration. The checkpoint image is stored on a HAMMER or HAMMER2 filesystem, integrating it with the filesystem's snapshot capabilities to version the process alongside directory contents, thus providing per-process versioning without downtime. For example, applications can handle the SIGCKPT signal to perform cleanup before checkpointing, and upon resume, they receive a positive return value from the system call to detect the event. Limitations include incomplete support for devices, sockets, and pipes, making it suitable primarily for simple or compute-bound applications rather than those reliant on network connections.
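An application that wants to flush or release state before being frozen can catch the SIGCKPT signal mentioned above with an ordinary handler. The sketch below is plain POSIX code except for the SIGCKPT constant itself, which is assumed to be provided by DragonFly's <signal.h> as described in this section:

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Flag set when the kernel notifies us that a checkpoint is imminent. */
static volatile sig_atomic_t checkpoint_requested;

static void on_ckpt(int signo)
{
    (void)signo;
    checkpoint_requested = 1;   /* do the real cleanup in the main loop */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_ckpt;
    sigemptyset(&sa.sa_mask);

#ifdef SIGCKPT                  /* DragonFly-specific signal (assumed) */
    sigaction(SIGCKPT, &sa, NULL);
#endif

    for (;;) {
        if (checkpoint_requested) {
            checkpoint_requested = 0;
            /* e.g. flush buffers or close descriptors (sockets, pipes)
             * that cannot be restored from a checkpoint image */
            fprintf(stderr, "prepared for checkpoint\n");
        }
        sleep(1);               /* placeholder for real work */
    }
}
```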
Complementing this, DragonFly BSD supports directory-level snapshots via the HAMMER and HAMMER2 filesystems, which applications can leverage for versioning data without service interruption. HAMMER2 enables instant, writable snapshots by copying the volume header's root block table, allowing mounted snapshots for ongoing access and modification. Automatic snapshotting is configurable through /etc/periodic.conf, retaining up to 60 days of daily snapshots and finer-grained 30-second intervals for recent history, facilitating undo operations with the undo(1) tool. These filesystem snapshots provide applications with robust, non-disruptive backup and recovery at the directory level, directly supporting the versioning of checkpointed process files.
Variant symlinks offer dynamic linking for applications, resolving based on runtime context such as user, group, UID, jail, or architecture via embedded variables like ${USER} or ${ARCH}. Implemented through varsym(2), these symlinks allow application authors to create configuration paths that adapt automatically—for instance, directing to user-specific libraries or architecture-appropriate binaries—enhancing portability and management without hardcoded paths. System-wide variables are managed via varsym.conf(5), enabling administrators to control resolutions globally or per-context.
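Creating such a link needs nothing beyond the standard symlink(2) call, because the variable is stored literally in the link target and only expanded at lookup time when variant symlinks are enabled. A minimal sketch with hypothetical paths, using the ${ARCH} variable mentioned above:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical layout: per-architecture plugin directories such as
     * /usr/local/plugins/x86_64.  The literal "${ARCH}" is stored in the
     * link target; when variant symlinks are enabled, DragonFly expands
     * it per lookup, so each consumer resolves to the right directory. */
    if (symlink("/usr/local/plugins/${ARCH}", "/usr/local/plugins/current") < 0) {
        perror("symlink");
        return 1;
    }
    return 0;
}
```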
These features integrate with DragonFly BSD's DPorts system, which builds and deploys applications from ports, allowing developers to incorporate checkpointing and variant symlink support natively during compilation for optimized runtime behavior.
Release History and Media
DragonFly BSD's first stable release, version 1.0, was made available on July 12, 2004.[50] Subsequent major releases have followed a series-based structure, with version 5.0 released on October 16, 2017, introducing bootable support for the HAMMER2 filesystem as an experimental option alongside the established HAMMER1.[28] Version 6.0 arrived on May 10, 2021, featuring revamped virtual file system caching and enhancements to HAMMER2, including multi-volume support.[51] The 6.2 series began with version 6.2.1 on January 9, 2022, incorporating the NVMM type-2 hypervisor for hardware-accelerated virtualization and initial AMD GPU driver support matching Linux 4.19.[52]
The project maintains a release cadence centered on stable branches, where major versions introduce significant features and point releases address security vulnerabilities, bugs, and stability improvements without altering core functionality.[50] For instance, the 6.4 series started with version 6.4.0 on December 30, 2022, and progressed through 6.4.1 on April 30, 2025, to 6.4.2 on May 9, 2025, the latter including fixes for IPv6-related panics, installer issues with QEMU disk sizing, and crashes in userland programs generating many subprocesses.[4]
Distribution media for DragonFly BSD targets x86_64 architectures and includes live ISO images that boot directly for installation or testing, encompassing the base system and DPorts package management tools in a compact, DVD-sized format of approximately 700 MB uncompressed.[53] USB installers are provided as raw disk images suitable for writing to flash drives via tools like dd, enabling portable installations.[53] Netboot options are supported through PXE-compatible images and daily snapshots available on official mirrors, facilitating network-based deployments.[53]
Recent updates in the 6.4 series build on prior work by stabilizing the NVMM hypervisor for type-2 virtualization and advancing experimental remote mounting capabilities for HAMMER2 volumes, allowing networked access to filesystem resources.[4]
| Release Series | Initial Release Date | Key Point Releases | Notable Features |
|---|---|---|---|
| 5.0 | October 16, 2017 | 5.0.1 (Nov 6, 2017), 5.0.2 (Dec 4, 2017) | HAMMER2 bootable support (experimental)[28] |
| 6.0 | May 10, 2021 | 6.0.1 (Oct 12, 2021) | VFS caching revamp, HAMMER2 multi-volume[51] |
| 6.2 | January 9, 2022 (6.2.1) | 6.2.2 (Jun 9, 2022) | NVMM hypervisor, AMD GPU driver[52] |
| 6.4 | December 30, 2022 (6.4.0) | 6.4.1 (Apr 30, 2025), 6.4.2 (May 9, 2025) | IPv6 fixes, remote HAMMER2 experiments[4] |
