DragonFly BSD
from Wikipedia

DragonFly BSD 6.2.1 UEFI boot loader
Developer: Matthew Dillon
OS family: Unix-like (BSD)
Working state: Current
Source model: Open source
Initial release: 1.0 / 12 July 2004 (2004-07-12)
Latest release: 6.4.2 / 9 May 2025 (2025-05-09)[1]
Available in: English
Package manager: DPorts, pkg
Supported platforms: x86-64
Kernel type: Hybrid[2]
Userland: BSD
Default user interface: Unix shell
License: BSD[3]
Official website: www.dragonflybsd.org

DragonFly BSD is a free and open-source Unix-like operating system forked from FreeBSD 4.8. Matthew Dillon, an Amiga developer in the late 1980s and early 1990s and FreeBSD developer between 1994 and 2003, began working on DragonFly BSD in June 2003 and announced it on the FreeBSD mailing lists on 16 July 2003.[4]

Dillon started DragonFly in the belief that the techniques adopted for threading and symmetric multiprocessing in FreeBSD 5[5] would lead to poor performance and maintenance problems. He sought to correct these anticipated problems within the FreeBSD project.[6] Due to conflicts with other FreeBSD developers over the implementation of his ideas,[7] his ability to directly change the codebase was eventually revoked. Despite this, the DragonFly BSD and FreeBSD projects still work together, sharing bug fixes, driver updates, and other improvements. Dillon named the project after photographing a dragonfly in his yard, while he was still working on FreeBSD.[citation needed]

Intended as the logical continuation of the FreeBSD 4.x series, DragonFly has diverged significantly from FreeBSD, implementing lightweight kernel threads (LWKT), an in-kernel message passing system, and the HAMMER file system.[8] Many design concepts were influenced by AmigaOS.[9]

System design


Kernel


The kernel messaging subsystem being developed is similar to those found in microkernels such as Mach, though it is less complex by design. DragonFly's messaging subsystem has the ability to act in either a synchronous or asynchronous fashion, and attempts to use this capability to achieve the best performance possible in any given situation.[10]

According to developer Matthew Dillon, progress is being made to provide both device input/output (I/O) and virtual file system (VFS) messaging capabilities that will enable the remainder of the project goals to be met. The new infrastructure will allow many parts of the kernel to be migrated out into userspace; here they will be more easily debugged as they will be smaller, isolated programs, instead of being small parts entwined in a larger chunk of code. Additionally, the migration of select kernel code into userspace has the benefit of making the system more robust; if a userspace driver crashes, it will not crash the kernel.[11]

System calls are being split into userland and kernel versions and being encapsulated into messages. This will help reduce the size and complexity of the kernel by moving variants of standard system calls into a userland compatibility layer, and help maintain forwards and backwards compatibility between DragonFly versions. Linux and other Unix-like OS compatibility code is being migrated out similarly.[9]

Threading


As support for multiple instruction set architectures complicates symmetric multiprocessing (SMP) support,[7] DragonFly BSD now limits its support to the x86-64 platform.[12] DragonFly originally ran on the x86 architecture as well; however, as of version 4.0 it is no longer supported. Since version 1.10, DragonFly has supported 1:1 userland threading (one kernel thread per userland thread),[13] which is regarded as a relatively simple solution that is also easy to maintain.[9] Inherited from FreeBSD, DragonFly also supports multi-threading.[14]

In DragonFly, each CPU has its own thread scheduler. Upon creation, threads are assigned to processors and are never preemptively switched from one processor to another; they are only migrated by the passing of an inter-processor interrupt (IPI) message between the CPUs involved. Inter-processor thread scheduling is also accomplished by sending asynchronous IPI messages. One advantage to this clean compartmentalization of the threading subsystem is that the processors' on-board caches in symmetric multiprocessor systems do not contain duplicated data, allowing for higher performance by giving each processor in the system the ability to use its own cache to store different things to work on.[9]

The LWKT subsystem is being employed to partition work among multiple kernel threads (for example in the networking code there is one thread per protocol per processor), reducing competition by removing the need to share certain resources among various kernel tasks.[7]

Shared resources protection


In order to run safely on multiprocessor machines, access to shared resources (like files and data structures) must be serialized so that threads or processes do not attempt to modify the same resource at the same time. To prevent multiple threads from accessing or modifying a shared resource simultaneously, DragonFly employs critical sections and serializing tokens. While both Linux and FreeBSD 5 employ fine-grained mutex models to achieve higher performance on multiprocessor systems, DragonFly does not.[7] Until recently, DragonFly also employed spls, but these were replaced with critical sections.

Much of the system's core, including the LWKT subsystem, the IPI messaging subsystem and the new kernel memory allocator, are lockless, meaning that they work without using mutexes, with each process operating on a single CPU. Critical sections are used to protect against local interrupts, individually for each CPU, guaranteeing that a thread currently being executed will not be preempted.[13]

Serializing tokens are used to prevent concurrent accesses from other CPUs and may be held simultaneously by multiple threads, ensuring that only one of those threads is running at any given time. Blocked or sleeping threads therefore do not prevent other threads from accessing the shared resource unlike a thread that is holding a mutex. Among other things, the use of serializing tokens prevents many of the situations that could result in deadlocks and priority inversions when using mutexes, as well as greatly simplifying the design and implementation of a many-step procedure that would require a resource to be shared among multiple threads. The serializing token code is evolving into something quite similar to the "Read-copy-update" feature now available in Linux. Unlike Linux's current RCU implementation, DragonFly's is being implemented such that only processors competing for the same token are affected rather than all processors in the computer.[15]

DragonFly switched to a multiprocessor-safe slab allocator, which requires neither mutexes nor blocking operations for memory assignment tasks.[16] It was eventually ported into the standard C library in userland, where it replaced FreeBSD's malloc implementation.[17]

Virtual kernel


Since release 1.8, DragonFly has had a virtualization mechanism similar to User-mode Linux,[18] allowing a user to run another kernel in userland. The virtual kernel (vkernel) runs in a completely isolated environment with emulated network and storage interfaces, thus simplifying the testing of kernel subsystems and clustering features.[9][11]

The vkernel has two important differences from the real kernel: it lacks many routines for dealing with low-level hardware management, and it uses C standard library (libc) functions in place of in-kernel implementations wherever possible. As both the real and the virtual kernel are compiled from the same code base, this effectively means that platform-dependent routines and re-implementations of libc functions are clearly separated in the source tree.[19]

The vkernel runs on top of hardware abstractions provided by the real kernel. These include the kqueue-based timer, the console (mapped to the virtual terminal where the vkernel is executed), the disk image, and the virtual kernel Ethernet device (VKE), which tunnels all packets to the host's tap interface.[20]

Package management


Third-party software is available on DragonFly as binary packages via pkg(8) or from a native ports collection, DPorts.[21]

DragonFly originally used the FreeBSD Ports collection as its official package management system, but starting with the 1.4 release switched to NetBSD's pkgsrc system, which was perceived as a way of lessening the amount of work needed for third-party software availability.[6][22] Eventually, maintaining compatibility with pkgsrc proved to require more effort than was initially anticipated, so the project created DPorts, an overlay on top of the FreeBSD Ports collection.[23][24]
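For day-to-day administration the binary route uses the standard pkg(8) commands; a brief sketch (the package name is an example):

    pkg search nginx        # look up available packages
    pkg install nginx       # install a binary package and its dependencies
    pkg upgrade             # update all installed packages
    pkg info                # list what is currently installed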

CARP support


The initial implementation of Common Address Redundancy Protocol (commonly referred to as CARP) was finished in March 2007.[25] As of 2011, CARP support is integrated into DragonFly BSD.[26]

HAMMER file systems


Alongside the Unix File System, which is typically the default file system on BSDs, DragonFly BSD supports the HAMMER and HAMMER2 file systems. HAMMER2 is the default file system as of version 5.2.0.

HAMMER was developed specifically for DragonFly BSD to provide a feature-rich yet better designed analogue of the increasingly popular ZFS.[9][11][27] HAMMER supports configurable file system history, snapshots, checksumming, data deduplication and other features typical for file systems of its kind.[18][28]

HAMMER2, the successor of the HAMMER file system, is now considered stable, used by default, and the focus of further development. Plans for its development were initially shared in 2012.[29] In 2017, Dillon announced that the next DragonFly BSD version (5.0.0) would include a usable, though still experimental, version of HAMMER2, and described features of the design.[30] With the release after 5.0.0, version 5.2.0, HAMMER2 became the new default file system.

devfs


DragonFly BSD's device file system (devfs) dynamically adds and removes device nodes, allows accessing devices by connection paths, recognises drives by serial numbers, and removes the need for a pre-populated /dev file system hierarchy. It was implemented as a Google Summer of Code 2009 project.[31]

Application snapshots


DragonFly BSD supports an Amiga-style resident applications feature: it takes a snapshot of a large, dynamically linked program's virtual memory space after loading, allowing future instances of the program to start much more quickly than they otherwise would. This replaces the prelinking capability that was being worked on earlier in the project's history, as the resident support is much more efficient. Large programs like those found in the KDE Software Compilation, with many shared libraries, benefit the most from this support.[32]
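Resident images are managed with the resident(8) utility; a minimal sketch, with an example target binary and flags as documented in resident(8):

    resident /usr/local/bin/soffice    # register a post-link vmspace snapshot of the program
    resident -l                        # list programs currently registered as resident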

Development and distribution

DragonFly BSD 6.2.1 with Lumina desktop environment

As with FreeBSD and OpenBSD, the developers of DragonFly BSD are slowly replacing pre-function prototype-style C code with more modern, ANSI equivalents. Similar to other operating systems, DragonFly's version of the GNU Compiler Collection has an enhancement called the Stack-Smashing Protector (ProPolice) enabled by default, providing some additional protection against buffer overflow based attacks. As of 23 July 2005, the kernel is no longer built with this protection by default.[32]

Being a derivative of FreeBSD, DragonFly has inherited an easy-to-use integrated build system that can rebuild the entire base system from source with only a few commands. The DragonFly developers use the Git version control system to manage changes to the DragonFly source code. Unlike its parent FreeBSD, DragonFly has both stable and unstable releases in a single source tree, due to a smaller developer base.[7]
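A from-source rebuild follows the usual BSD make targets; a condensed sketch, assuming the source tree lives in /usr/src and that the repository URL shown is current:

    git clone git://git.dragonflybsd.org/dragonfly.git /usr/src
    cd /usr/src
    make buildworld && make buildkernel        # build userland, then the kernel
    make installkernel && make installworld    # install the new kernel, then userland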

Like the other BSD kernels (and those of most modern operating systems), DragonFly employs a built-in kernel debugger to help the developers find kernel bugs. Furthermore, as of October 2004, a debug kernel, which makes bug reports more useful for tracking down kernel-related problems, is installed by default, at the expense of a relatively small quantity of disk space. When a new kernel is installed, the backup copy of the previous kernel and its modules are stripped of their debugging symbols to further minimize disk space usage.

Distribution media


The operating system is distributed as a Live CD and Live USB that boots into a complete DragonFly system.[18][31] It includes the base system and a complete set of manual pages, and may include source code and useful packages in future versions. The advantage of this is that with a single CD users can install the software onto a computer, use a full set of tools to repair a damaged installation, or demonstrate the capabilities of the system without installing it. Daily snapshots are available from the master site for those who want to install the most recent versions of DragonFly without building from source.
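Writing the USB image to a flash drive works with plain dd; a sketch with example file and device names:

    # fetch a release or daily-snapshot .img from a DragonFly mirror, then:
    dd if=dfly-x86_64-6.4.2_REL.img of=/dev/da8 bs=1m
    # the drive then boots into the same live environment as the CD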

Like the other free and open-source BSDs, DragonFly is distributed under the terms of the modern version of the BSD license.

Release history


Reverse chronological:

Version Date[33] Changes
6.4.2 9 May 2025
  • Bug fixes
6.4.1 30 April 2025
  • Bug fixes
6.4 30 December 2022
6.2.1 9 January 2022
  • NVMM ported to DragonFly
  • Added growfs support for changing the size of an existing HAMMER2 volume
  • Added xdisk; remote HAMMER2 disks can be mounted (experimental feature)
  • Imported amdgpu driver, matches Linux 4.19 support.
  • Updated DRM
6.0 10 May 2021
  • Improved dsynth, a tool for maintaining a local DPorts repository
  • Removed MAP_VPAGETABLE mmap() support; as a result, vkernels do not work in this release
5.8 3 March 2020
5.6 17 June 2019
  • Improved virtual memory system
  • Updates to radeon and ttm
  • Performance improvements for HAMMER2
5.4 3 December 2018
  • Updated drivers for network, virtual machines & display
  • GCC 8.0 added alongside the previous GCC release
  • Further HAMMER issue fixes
5.2 10 April 2018
5.0 16 October 2017
  • New HAMMER2 filesystem
  • Can now support over 900,000 processes on a single machine
  • Improved i915 support
  • Improved IPFW performance
4.8 27 March 2017
4.6 2 August 2016
  • Improved i915 and Radeon support
  • NVM Express support
  • Improved SMP performance
  • Improved network performance
  • Preliminary support for UEFI booting
  • autofs imported from FreeBSD, amd removed
4.4 7 December 2015
  • GCC 5.2
  • gold now the default linker
  • Improved i915 and Radeon support
  • Complete overhaul of the locale system
  • Collation support for named locales
  • Regex library replaced with TRE
  • Symbol versioning support in libc
  • Numerous HAMMER cleanups and fixes
4.2 29 June 2015
  • GCC 5.1.1
  • Improved i915 and Radeon support
  • Improved sound support
  • Improved support for memory controller and temperature sensors
  • Path MTU Discovery enabled by default
  • SCTP support removed
  • Sendmail replaced by DMA
  • GNU Info pages removed
4.0 25 November 2014
  • Non-locking, multi-threaded PF
  • Related networking code better threaded for improved throughput
  • Procctl security feature in kernel
  • Support for up to 256 CPUs
  • Improved wireless networking support
  • Rust and Free Pascal now supported
  • i915 support greatly improved
  • GCC 4.7.4
3.8 4 June 2014
  • Dynamic root and PAM support
  • USB4BSD now default
  • Native C-State support for Intel CPUs
  • TCP port token split for better TCP connect(2) performance
  • GCC 4.7.3
  • HAMMER2 included in the system (not ready for production use)
  • Final 32-bit release
3.6 25 November 2013
  • SMP contention reduction
  • Kernel modesetting for Intel and AMD GPUs
  • Hardware acceleration for Intel GPUs up to Ivy Bridge[34]
3.4 29 April 2013
  • New package manager, DPorts, introduced
  • GCC 4.7
  • Improved CPU usage and tmpfs performance under extreme load
3.2 2 November 2012
  • Multiprocessor-capable kernel became mandatory.
  • Performance improvements in the scheduler.
  • USB4BSD imported from FreeBSD.
  • PUFFS imported from NetBSD.
3.0 22 February 2012
  • Multiprocessor-capable kernel became the default
  • HAMMER performance improvements
  • TrueCrypt-compatible encryption support
  • dm-crypt replaced with a compatible BSD-licensed library
  • Enhanced POSIX compatibility
  • Device driver for ECC memory
  • Major network protocol stack and SMP improvements
  • ACPI-related improvements
2.10 26 April 2011
  • Giant lock removed from every area except the virtual memory subsystem
  • HAMMER deduplication
  • GCC 4.4
  • Bridging system rewritten
  • Major performance improvements
2.8 30 October 2010
2.6 6 April 2010
  • Swapcache
  • tmpfs imported from NetBSD
  • HAMMER and general I/O improvements
2.4 16 September 2009
2.2 17 February 2009
2.0 20 July 2008
1.12 26 February 2008
1.10 6 August 2007
1.8 30 January 2007
1.6 24 July 2006
  • New random number generator
  • IEEE 802.11 framework refactored
  • Major giant lock, clustering, and userland VFS improvements
  • Major stability improvements[36]
1.4 7 January 2006
1.2 8 April 2005
1.0 12 July 2004

from Grokipedia
DragonFly BSD is a free and open-source operating system derived from the 4.4BSD-Lite codebase, forked from FreeBSD 4.8 in June 2003 by developer Matthew Dillon. It targets the AMD64 (x86_64) architecture, running on x86_64-compatible hardware from entry-level to high-end systems, and emphasizes high performance, stability, and scalability through innovations in multi-processor support and kernel design. The project diverges from other BSD variants like FreeBSD, NetBSD, and OpenBSD by prioritizing pragmatic advancements, such as minimal lock contention in the kernel and efficient symmetric multiprocessing (SMP), while maintaining a lean and customizable base system suitable for servers, desktops, education, and research.

Originating from Dillon's frustrations with the delayed FreeBSD 5 series and its performance regressions, DragonFly BSD was announced in July 2003 with goals to preserve the reliability of the FreeBSD 4 branch while introducing modern features like lightweight kernel threads and a revised virtual file system (VFS) layer. Key architectural innovations include virtual kernels (vkernels), which allow lightweight, isolated instances of the kernel for resource partitioning and debugging without full virtualization overhead, and the NVMM module for native type-2 hypervisor support. The system employs a message-passing inter-process communication model inspired by Mach for improved modularity, though it retains a monolithic kernel structure overall.

A defining feature is the HAMMER2 filesystem, introduced as the default in recent releases, which supports volumes up to 1 exbibyte, unlimited snapshots for backups and versioning, built-in compression, deduplication, and automatic integrity checks without requiring tools like fsck after crashes. Hardware compatibility has expanded to include the amdgpu driver for modern AMD GPUs, AHCI and NVMe storage controllers, and swapcache mechanisms to optimize SSD usage. For package management, DragonFly uses DPorts, a ports collection adapted from FreeBSD's, with the latest synchronization based on the 2024Q3 branch and ongoing work toward 2025Q2 integration; binary packages are available via pkg(8).

As of May 2025, the current stable release is version 6.4.2, which includes bug fixes for the installer, IPv6 handling, and userland utilities, alongside enhancements for hypervisor compatibility. The project remains actively developed by a community focused on long-term stability and innovative storage solutions, with ongoing efforts in areas like scalable temporary filesystems (TMPFS) and precise time synchronization via DNTPD.

Origins and History

Fork from FreeBSD

DragonFly BSD originated as a fork of FreeBSD 4.8, initiated by Matthew Dillon on June 16, 2003. Dillon, a longtime FreeBSD contributor since 1994, sought to preserve the stability and performance of the FreeBSD 4.x series amid growing dissatisfaction with the project's direction for FreeBSD 5, which involved significant architectural changes leading to release delays and performance regressions. By forking from the RELENG_4 branch, DragonFly aimed to evolve the 4.x codebase independently, unconstrained by FreeBSD's shift toward a threading model built on fine-grained mutexes.

The initial development goals emphasized innovative kernel redesigns to enhance scalability and reliability, including the introduction of lightweight kernel threads (LWKT) for improved symmetric multiprocessing (SMP) performance through lock-free synchronization and partitioning techniques. A key focus was exploring native clustering capabilities with cache coherency, enabling a single system image (SSI) via message-passing mechanisms that decoupled execution contexts from address spaces, targeting both high-performance servers and desktop environments. These objectives positioned DragonFly as an experimental platform for advanced BSD kernel concepts while backporting select FreeBSD 5 features, such as device drivers, to maintain compatibility.

Initially, DragonFly continued using FreeBSD's Ports collection for package management, as it was forked from FreeBSD 4.8. Starting with version 1.4 in 2006, the project adopted NetBSD's pkgsrc to leverage a portable, cross-BSD framework for third-party software, which facilitated resource sharing and eased maintenance for a smaller development team. This choice supported the project's goal of rewriting the packaging infrastructure to better suit its evolving kernel and userland.

Dillon publicly announced DragonFly BSD on July 16, 2003, via the FreeBSD-current mailing list, describing it as the "logical continuation of the FreeBSD 4.x series" and inviting contributions to advance its kernel-focused innovations. The announcement highlighted immediate priorities like SMP infrastructure and I/O pathway revisions, setting the stage for DragonFly's distinct trajectory within the BSD family.

Development Philosophy and Milestones

DragonFly BSD's development philosophy emphasizes minimal lock contention to achieve high concurrency, drawing from UNIX principles while prioritizing SMP scalability, filesystem coherency, and system reliability. Initially focused on native clustering support with cache coherency mechanisms, the project shifted post-fork to enhance single-system performance through innovative locking strategies, such as token-based synchronization, which allows efficient access to shared resources without traditional mutex overhead. This approach promotes maintainability by reducing complexity in kernel subsystems and enables features like virtual kernels, which run full kernel instances as user processes to facilitate experimentation and resource isolation without compromising the host system.

Under the leadership of founder Matthew Dillon, a veteran FreeBSD contributor, DragonFly BSD operates as a community-driven project where code submissions are reviewed collaboratively, with Dillon holding final approval to ensure alignment with core goals. The philosophy underscores pragmatic innovation, favoring algorithmic simplicity and performance over legacy compatibility, which has guided ongoing efforts toward supporting exabyte-scale storage via filesystems like HAMMER and HAMMER2 and adapting to modern hardware, including multi-core processors and advanced graphics.

Key milestones reflect this evolution: from 2003 to 2007, extensive kernel subsystem rewrites laid the foundation for clustering while improving the overall architecture. In 2008, DragonFly 2.0 introduced the HAMMER filesystem, enhancing data integrity with features like snapshots and mirroring. By late 2011, fine-grained locking in the VM system significantly boosted multi-core efficiency. Subsequent achievements included porting the Linux DRM subsystem for accelerated graphics between 2012 and 2015, scaling the PID, PGRP, and session subsystems for SMP in 2013, and optimizing fork/exec/exit/wait mechanisms in 2014 to support higher concurrency. In 2017, version 5.0 introduced the HAMMER2 filesystem for advanced storage capabilities including unlimited snapshots and built-in compression. Version 6.0 followed in 2021, and the 6.2 series added the NVMM hypervisor module. As of May 2025, the current stable release is 6.4.2.

Kernel Architecture

Threading and Scheduling

DragonFly BSD employs a hybrid threading model that separates kernel and user space scheduling for enhanced isolation and performance. The kernel utilizes lightweight kernel threads (LWKT), which are managed by the LWKT scheduler, a per-CPU fixed-priority round-robin mechanism designed for efficient execution of kernel-level tasks. In contrast, user threads are handled by a dedicated user thread scheduler, which selects and assigns one user thread per CPU before delegating to the LWKT scheduler, thereby preventing user-space activities from directly interfering with kernel operations. The LWKT system facilitates message passing between kernel threads via a lightweight port-based interface, allowing asynchronous or synchronous communication without necessitating full context switches in many scenarios, which supports high concurrency levels. This design enables the kernel to manage up to a million processes or threads, provided sufficient physical memory, by minimizing overhead in thread creation and inter-thread coordination. The scheduler is inherently SMP-aware, with each CPU maintaining an independent LWKT scheduler that assigns threads non-preemptively to specific processors, promoting scalability across multi-core systems. It supports up to 256 CPU threads while reducing lock contention through token-based synchronization, where the LWKT scheduler uses atomic operations to acquire tokens with minimal spinning before blocking, ensuring low-latency access in contended environments. Additionally, DragonFly BSD provides checkpointing capabilities, allowing processes to be suspended to disk for later resumption on the same or a different machine, facilitating workload migration and recovery; this feature integrates with virtual kernels to enable kernel-level mobility.

Locking and Shared Resources

DragonFly BSD employs a token-based locking system, known as LWKT serializing tokens, to protect shared kernel resources while enabling high concurrency and low contention. These tokens support both shared and exclusive modes, allowing multiple readers to access resources concurrently while writers acquire exclusive control, with atomic operations like atomic_cmpset*() ensuring correct acquisition. Unlike traditional mutexes, tokens permit recursive acquisition and prioritize spinning over blocking to minimize overhead, releasing all held tokens if a thread blocks, which reduces contention in multiprocessor environments. This design facilitates efficient handling of shared data structures by optimizing for common read-heavy workloads, contributing to the kernel's ability to scale across many cores without excessive lock contention.

Complementing this, DragonFly BSD features lockless kernel memory allocators to avoid locking overhead in allocation paths. The kmalloc allocator is a per-CPU slab-based system that operates essentially without locks, distributing slabs across CPUs to enable parallel allocations and deallocations with minimal contention. For more specialized needs, objcache provides an object-oriented allocator tailored to frequent creation and destruction of specific kernel object types, such as network buffers or filesystem inodes, also designed to be lockless and built atop kmalloc for efficiency. These allocators enhance overall kernel performance by eliminating global locks in memory management, supporting high-throughput operations in multi-threaded contexts.

Fine-grained locking permeates key subsystems, further bolstering scalability for shared resources. In the virtual memory (VM) system, locks are applied at the per-object level down to the physical map (pmap), a refinement completed in late 2011 that yielded substantial performance improvements on multi-core systems by reducing global contention. The network stack employs fine-grained locks alongside packet hashing to allow concurrent processing across CPUs, with major protocols like IPFW and PF operating with few locking collisions for multi-gigabyte-per-second throughput. Similarly, disk I/O subsystems, including the AHCI (Serial ATA) and NVMe drivers, use per-queue locking and hardware parallelism to achieve largely contention-free operation, enabling high-bandwidth storage access without traditional monolithic locks.

The process identifier (PID), process group (PGRP), and session subsystems are engineered for extreme scaling without relying on traditional locks, accommodating up to one million processes through SMP-friendly algorithms. This design, implemented in 2013, leverages per-CPU structures and atomic operations to handle massive loads (tested successfully up to 900,000 processes) while maintaining low latency and avoiding bottlenecks in process management. Such optimizations tie into the broader threading model but focus on resource protection to support dense workloads in virtualized or high-concurrency environments.

Virtual Kernels

The virtual kernel (vkernel) feature in DragonFly BSD enables the execution of a complete DragonFly BSD kernel as a userland process within the host system, facilitating isolated kernel-level operations without impacting the host kernel. Introduced in DragonFly BSD 1.8, released on January 30, 2007, vkernel was initially proposed by Matthew Dillon in September 2006 to address challenges in cache coherency and isolation for clustering applications. This design allows developers to load and run kernel code in a contained environment, eliminating the need for system reboots during iterative testing and reducing boot sequence overhead.

At its core, vkernel operates by treating the guest kernel as a single host process that can manage multiple virtual memory spaces (vmspaces) through system calls like vmspace_create() and vmspace_destroy(). This supports hosting multiple virtual kernels on a single host, each in an isolated environment with options for shared or private memory allocation, such as specifying memory limits via command-line flags like -m 64m. Page faults are handled cooperatively between the host and guest kernels, with the host passing faults to the vkernel for resolution using a userspace virtual pagetable (vpagetable) and mechanisms like vmspace_mmap(). Signal delivery employs efficient mailboxes via sigaction() with the SA_MAILBOX flag, allowing interruptions with EINTR to minimize overhead. Nested vkernels are possible due to recursive vmspace support, and the guest kernel links against libc for thread awareness, integrating with the host's scheduler for performance. This complements DragonFly's lightweight kernel threading model by enabling kernel code to execute in userland contexts.

Key features include device passthrough and simulation, supporting up to 16 virtual disks (vkd), CD-ROMs (vcd), and network interfaces (vke) that map to host resources like tap(4) devices and bridges. For networking, vke interfaces connect via the host's bridge(4), such as bridging to a physical interface like re0, enabling simulated network environments without dedicated hardware. Device emulation relies on host primitives, including kqueue for timers and file I/O for disk access, ensuring efficient resource utilization. To enable vkernel, the host requires the sysctl vm.vkernel_enable=1, and guest kernels must be compiled with the VKERNEL or VKERNEL64 configuration option.

Vkernel is primarily used for kernel debugging and testing, allowing safe experimentation with panics or faults, such as null-pointer dereferences, via tools like GDB and the kernel debugger (ddb), without risking the host system. Developers can attach GDB to the vkernel process using its PID, perform backtraces with bt, and inspect memory via /proc/<PID>/mem, as demonstrated in tracing issues like bugs in sys_ktrace. Beyond debugging, it supports resource partitioning for testing and development of clustering features, such as single system image setups over networks, by providing isolated environments for validation without hardware dependencies.
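Putting the pieces above together, a host-side session might look like the following sketch (the image path, sizes, and bridge interface are examples; the invocation flags follow vkernel(7)):

    sysctl vm.vkernel_enable=1                  # allow vkernel processes on the host
    truncate -s 2g /var/vkernel/rootimg.01      # backing store for the guest's vkd0 disk
    # run the guest kernel binary built with the VKERNEL64 configuration:
    /var/vkernel/boot/kernel/kernel -m 64m -r /var/vkernel/rootimg.01 -I auto:bridge0
    # -m sets guest memory, -r attaches the disk image, -I creates a vke
    # interface bridged to the host network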

Device Management

DragonFly BSD employs a dynamic device file system known as DEVFS to manage device nodes within the /dev directory. DEVFS automatically populates and removes device nodes as hardware is detected or detached, providing a scalable and efficient interface for accessing kernel devices without manual intervention. The implementation supports cloning on open, which facilitates the creation of multiple instances of device drivers, enhancing flexibility for applications requiring isolated device access.

A key feature of DEVFS is its integration with block device serial numbers, enabling persistent identification of storage devices such as ATA, SATA, SCSI, and USB mass storage units. Serial numbers are probed and recorded under /dev/serno/, allowing systems to reference disks by unique identifiers rather than volatile names like /dev/ad0, which supports seamless disk migration across hardware without reconfiguration. For example, a disk with serial number 9VMBWDM1 can be consistently accessed via /dev/serno/9VMBWDM1, ensuring stability in environments with frequent hardware changes.

DragonFly BSD includes robust drivers for modern storage interfaces, including AHCI for Serial ATA controllers and NVMe for PCIe-based non-volatile memory controllers. The AHCI driver supports hotplug operations, allowing dynamic attachment and detachment of devices with minimal disruption. Similarly, the NVMe driver, implemented from scratch, handles multiple namespaces and queues per controller, enabling efficient multi-device configurations and high-performance I/O in enterprise storage setups. These drivers implement the relevant storage standards and tie into the system's device-mapper framework, which facilitates layered device transformations.

For secure storage, DragonFly BSD provides transparent disk encryption through the dm_target_crypt module within its device-mapper framework, which is compatible with Linux's dm-crypt and supports LUKS volumes via cryptsetup. Additionally, tcplay serves as a BSD-licensed, drop-in compatible tool for TrueCrypt-compatible volumes, leveraging dm_target_crypt to unlock and map encrypted containers without proprietary dependencies. This encryption capability integrates with the broader storage stack, allowing encrypted devices to be treated as standard block devices for higher-level operations.

The disk I/O subsystem in DragonFly BSD is designed for low-contention access, minimizing kernel locks to support high-throughput operations across multiple cores. This enables scalable handling of large-scale storage, including up to four swap devices with a total capacity of 55 terabytes, where I/O is interleaved for optimal performance and requires only 1 MB of RAM per 1 GB of swap space. Such design choices ensure efficient resource utilization in demanding environments, with virtually no in-kernel bottlenecks impeding concurrent device access.
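In practice, the serial-number nodes and the dm_target_crypt back end combine into workflows like the following sketch (the serial number, partition letters, and mapper name are examples):

    ls /dev/serno                                      # stable, serial-number-based disk names
    mount /dev/serno/9VMBWDM1s1d /data                 # mount a partition regardless of probe order
    cryptsetup luksOpen /dev/serno/9VMBWDM1s1e vault   # unlock a LUKS container via dm_target_crypt
    mount /dev/mapper/vault /secure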

Filesystems and Storage

HAMMER Filesystem

The HAMMER filesystem, developed by Matthew Dillon for DragonFly BSD, was introduced in version 2.0 on July 20, 2008. Designed from the ground up to address limitations in traditional BSD filesystems like UFS, it targets exabyte-scale storage capacities (up to 1 exabyte per filesystem) while incorporating master-to-slave replication for high-availability mirroring across networked nodes. This replication model allows a master pseudo-filesystem to propagate changes to one or more read-only slave instances, ensuring data consistency without multi-master complexity.

HAMMER's core features emphasize reliability and temporal access. Instant crash recovery is achieved via intent logging, which records metadata operations as UNDO/REDO pairs in a dedicated log; upon remount after a crash, the filesystem replays these in seconds without requiring a full fsck scan. It supports 60-day rolling snapshots by default, automatically generated daily via cron jobs with no runtime performance overhead, as snapshots are lightweight references to prior transaction states rather than full copies. Historical file versions are tracked indefinitely until pruned, enabling users to access any past state of a file or directory using 64-bit transaction IDs (e.g., via the @@0x<transaction_id> syntax in paths), which facilitates fine-grained recovery from accidental deletions or modifications.

The on-disk layout employs a single, global B-Tree to index all filesystem elements, including inodes, directories, and indirect blocks, providing logarithmic-time operations scalable to massive datasets. To enable isolation and replication, HAMMER introduces pseudo-filesystems (PFS), which function like independent mount points but share the underlying storage volume; the root PFS (ID 0) serves as the master, while additional PFSs can be created for slaves or segmented workloads. A single filesystem supports up to 65,536 PFSs across its structure, with volumes (up to 256 per filesystem, each up to 4 petabytes) providing the physical backing via disk slices or raw devices. All data and metadata are protected by 32-bit CRC checksums to detect corruption.

Performance is optimized through several mechanisms tailored for modern workloads. Background filesystem checks run asynchronously to verify integrity without blocking access, while the design decouples front-end user operations from back-end disk I/O, allowing bulk writes to bypass the buffer cache for sequential data streams. Early architectural choices laid the groundwork for compression (planned as per-volume filters) and deduplication, where duplicate 64 KB blocks are identified and stored only once during reblocking operations, reducing storage redundancy in repetitive datasets. These elements ensure HAMMER remains efficient for both small-file metadata-heavy tasks and large-scale archival use. HAMMER's design has influenced subsequent developments, including its successor HAMMER2, which refines replication and adds native compression while retaining the core principles.
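The transaction-ID path syntax and the snapshot machinery are exposed directly to users; a brief sketch (file names and the transaction ID are examples; command forms per hammer(8) and undo(1)):

    undo -a /home/alice/report.txt                      # iterate through a file's historical versions
    cat /home/alice/report.txt@@0x00000001061a8ba0      # read the file as of a past transaction
    hammer snapshot /home /home/.snapshots/today        # create a snapshot softlink of the PFS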

HAMMER2 Filesystem

HAMMER2 is a modern filesystem developed for DragonFly BSD as the successor to the original HAMMER filesystem; it was introduced with DragonFly BSD 5.0 in October 2017 and became the default root filesystem with version 5.2. It employs a block-level copy-on-write (COW) design, which enhances data integrity by avoiding in-place modifications and supports efficient crash recovery through consistent on-disk snapshots of the filesystem topology. This COW mechanism also improves overall efficiency by reducing fragmentation and enabling features like instantaneous writable snapshots, which are created by simply copying a 1 KB pseudo-filesystem (PFS) root inode.

Key enhancements in HAMMER2 include built-in compression using algorithms such as zlib or LZ4, which can be configured per directory or file and applies to blocks up to 64 KB, achieving compression ratios from 25% to 400% depending on data patterns. The filesystem supports automatic deduplication, with live deduplication occurring during operations like file copying to share identical data blocks and minimize physical writes. Additionally, batch deduplication tools allow scanning for redundancies post-creation, and remote mounting is possible over NFS-like protocols via its clustering capabilities. HAMMER2 also provides directory-level quotas for space and inode usage tracking, along with support for multi-volume setups through device ganging, enabling distributed storage across independent devices.

In terms of performance, HAMMER2 incorporates tiered storage arrangements via clustering, allowing nodes with varying hardware configurations, and fast cloning through its snapshot mechanism, which is nearly instantaneous and avoids full data duplication. The 6.4 series, starting in December 2022, introduced experimental remote mounting of HAMMER2 volumes, enhancing distributed access. Subsequent updates up to 6.4.2 in May 2025 addressed large-scale operation issues, such as fixing runaway kernel memory use during bulkfree scans on deep directory trees and improving lookup performance by reducing unnecessary retries on locked elements. Legacy HAMMER volumes remain supported alongside it, but HAMMER2's focus is on advancing modern storage paradigms.
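A minimal administrative sketch, assuming a blank partition and the label and mount syntax described in newfs_hammer2(8), mount_hammer2(8), and hammer2(8) (the device path and label are examples):

    newfs_hammer2 -L DATA /dev/serno/9VMBWDM1s1g      # create a HAMMER2 filesystem with PFS label DATA
    mount_hammer2 /dev/serno/9VMBWDM1s1g@DATA /data   # mount the named PFS
    hammer2 pfs-list /data                            # show pseudo-filesystems on the volume
    hammer2 snapshot /data                            # take a near-instant writable snapshot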

Auxiliary Filesystems

DragonFly BSD provides several auxiliary filesystems and mechanisms that support specialized storage needs, complementing its primary filesystems by enabling efficient loopback mounting, temporary storage, caching, and dynamic path resolution without depending on the persistent-storage layer. These features enhance flexibility for system administration and performance optimization in diverse environments.

NULLFS serves as a loopback filesystem layer, allowing a directory or filesystem to be mounted multiple times with varying options, such as read-only access or union stacking, which facilitates isolated environments like jails and simplifies administrative tasks without data duplication. This mechanism, inherited and refined from BSD traditions, ensures low-overhead access to underlying data structures, making it ideal for scenarios requiring multiple views of the same content.

TMPFS implements a memory-based temporary filesystem that stores both metadata and file data in RAM, backed by swap space only under memory pressure to minimize I/O latency and contention. Integrated closely with DragonFly's virtual memory subsystem, it supports scalable operations for runtime data like logs or session files, with recent enhancements clustering writes to reduce paging overhead by up to four times under low-memory conditions. This design prioritizes speed for short-lived data, automatically mounting points like /var/run via configuration for immediate efficiency.

SWAPCACHE extends swap space functionality by designating an SSD partition to cache clean filesystem data and metadata, accelerating I/O on hybrid storage setups where traditional disks handle bulk storage. Configured via simple partitioning and activation commands, it transparently boosts read performance for frequently accessed blocks, yielding substantial gains in server workloads with minimal hardware additions. This caching layer operates alongside the primary filesystems, providing a non-intrusive performance uplift without altering underlying storage layouts.

Variant symlinks introduce context-sensitive symbolic links that resolve dynamically based on process or user attributes, using embedded variables like ${USER} or ${VARIANT} to point to environment-specific targets. Managed through the varsym(2) interface and system-wide configurations in varsym.conf(5), they enable applications and administrators to create adaptive paths, such as user-specific binaries or architecture-dependent libraries, reducing manual configuration overhead. Enabled via the sysctl vfs.varsym_enable, this feature has been a core tool in DragonFly since its early development, offering precise control over symlink behavior without runtime scripting.
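For example, a memory-backed scratch area and a read-only loopback view can be set up with the stock mount tools; a sketch with example paths and size:

    mount_tmpfs -s 512m tmpfs /tmp           # RAM-backed /tmp, spilling to swap only under pressure
    mount_null -o ro /usr/src /build/src     # read-only NULLFS view of an existing tree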

Networking and Services

CARP Implementation

DragonFly BSD includes native support for the Common Address Redundancy Protocol (CARP), a protocol that enables multiple hosts on the same local network to share IPv4 and IPv6 addresses for high-availability networking. CARP's primary function is to provide failover by allowing a backup host to assume the shared IP addresses if the master host becomes unavailable, ensuring continuous network service availability. It also supports load balancing across hosts by distributing traffic based on configured parameters.

The protocol operates in either preemptive or non-preemptive mode, determined by the net.inet.carp.preempt sysctl (default 1, enabling preemption). In preemptive mode, a backup host with higher priority (lower advertisement skew) automatically assumes the master role upon recovery, while non-preemptive mode requires manual intervention or configuration to change roles. CARP uses virtual host IDs (VHIDs) ranging from 1 to 255 and a shared password, authenticated via SHA1-HMAC, to secure group membership and prevent unauthorized takeovers.

Configuration of CARP interfaces occurs primarily through the ifconfig utility at runtime or persistently via /etc/rc.conf by adding the interface to cloned_interfaces. Key parameters include advbase (base advertisement interval in seconds, 1-255), advskew (skew value 0-254 to influence master election, where lower values prioritize mastery), vhid, pass (the shared password), and the parent physical interface via carpdev. For example, a basic setup might use ifconfig carp0 create vhid 1 pass secret advskew 0 192.0.2.1/24 on the master host. CARP must be enabled via the sysctl net.inet.carp.allow=1 (enabled by default).

Failure detection and graceful degradation are handled through demotion counters, which track interface or service readiness and adjust advertisement skew dynamically to lower a host's priority if issues arise, such as a downed physical link or unsynchronized state. Counters can be viewed with ifconfig -g groupname and incremented manually (e.g., via ifconfig carp0 -demote) to simulate failures or prevent preemption during maintenance. This mechanism ensures reliable failover without unnecessary role switches.

For secure redundancy, CARP integrates with DragonFly BSD's firewall frameworks: PF requires explicit rules to pass CARP (IP protocol 112) packets, such as pass out on $ext_if proto carp keep state, while IPFW needs corresponding rule allowances to avoid blocking advertisements. Both firewalls handle IPv4 and IPv6 CARP traffic, allowing filtered failover in production environments like clustered firewalls or gateways. In clustered services, CARP complements filesystem replication (e.g., HAMMER mirroring) by providing network-layer redundancy without overlapping storage concerns.
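A persistent two-host setup following the parameters above might look like this sketch of rc.conf and pf.conf fragments (addresses and the password are examples; FreeBSD-style rc.conf interface variables are assumed; the backup host would use a higher advskew):

    # /etc/rc.conf on the master
    cloned_interfaces="carp0"
    ifconfig_carp0="vhid 1 pass examplepw advskew 0 192.0.2.1/24"

    # pf.conf: allow CARP advertisements (IP protocol 112)
    pass on $ext_if proto carp keep state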

Time Synchronization

DragonFly BSD employs DNTPD, a custom lightweight implementation of a Network Time Protocol (NTP) client daemon, designed specifically to synchronize the system clock with external time sources while minimizing resource usage. Unlike traditional NTP daemons, DNTPD leverages staggered linear regressions and correlation analysis to approach stratum-1-level accuracy without requiring a local GPS receiver, enabling precise time and frequency corrections even in challenging network conditions. This approach accumulates regressions at the nominal polling rate, defaulting to 300 seconds, and requires a high correlation threshold (≥0.99 for 8 samples or ≥0.96 for 16 samples) before applying adjustments, allowing for offset errors as low as 20 milliseconds, or 1 millisecond with low-latency sources.

DNTPD supports pool configurations by allowing multiple server targets in its setup, facilitating redundancy and resilience against individual source failures, such as network outages or one-second offsets. It integrates with the kernel via the adjtime(2) system call for gradual clock adjustments, avoiding abrupt jumps that could disrupt ongoing operations; coarse offsets exceeding 2 minutes are corrected initially if needed, while finer sliding offsets and frequency drifts within 2 parts per million (ppm) are handled on an ongoing basis. Configuration occurs through the /etc/dntpd.conf file, which lists servers in a simple "server <hostname>" format; the default pulls from the ntp.org pool for the DragonFly zone, ensuring at least three sources for robust majority-based decisions.

DNTPD excels in low-bandwidth environments compared to standard NTP implementations, as its streamlined design reduces overhead while maintaining high accuracy through line-intercept methods and standard-deviation summation. In server environments, DNTPD provides long-term drift correction by continuously monitoring and adjusting the clock frequency based on regression slopes, ensuring sustained accuracy over extended periods. It handles leap seconds through standard NTP protocol mechanisms, inserting or deleting them as announced by upstream sources to maintain alignment with Coordinated Universal Time (UTC). Overall, these features make DNTPD particularly suitable for resource-constrained systems requiring reliable time management without the complexity of full NTP server functionality.
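The configuration file is just a list of server lines; a sketch of /etc/dntpd.conf with illustrative hostnames (the stock file points at the project's ntp.org pool zone):

    # /etc/dntpd.conf - one time source per line
    server 0.pool.ntp.org
    server 1.pool.ntp.org
    server 2.pool.ntp.org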

Asynchronous Operations

DragonFly BSD enhances system performance through asynchronous input/output (I/O) mechanisms in its networking and storage layers, allowing non-blocking operations to handle high loads efficiently without stalling critical paths. The NFSv3 implementation features full RPC asynchronization, replacing traditional userland nfsiod(8) threads with just two dedicated kernel threads that manage all client-side I/O and non-blocking file operations over the network. This approach prevents bottlenecks from misordered read-ahead requests, improving reliability and throughput for distributed file access. The network stack supports high-concurrency I/O via lightweight kernel thread (LWKT) messaging, where most hot-path operations are asynchronous and thread-serialized. To optimize for symmetric multiprocessing (SMP) environments, it employs serializing token locking, which serializes broad code sections with minimal contention, allowing recursive acquisition and automatic release on blocking to boost parallel I/O performance. DragonFly BSD also ships the DragonFly Mail Agent (DMA), a lightweight mail transport agent designed for efficient local delivery and remote transfers, leveraging the kernel's asynchronous networking primitives for non-blocking mail handling in resource-constrained setups. Starting in the 6.x release series, further optimizations enable experimental remote mounting of HAMMER2 volumes directly over the network, reducing latency for distributed storage while building on the asynchronous NFSv3 work for seamless integration with remote filesystem features.

Software Management and Distribution

Package Management

DragonFly BSD employs DPorts as its primary packaging system for installing and maintaining third-party software. DPorts is a ports collection derived from FreeBSD's Ports Collection, adapted with minimal modifications to ensure compatibility while incorporating DragonFly-specific ports. This system allows users to build applications from source or install pre-compiled binary packages, emphasizing a stable and familiar environment for users.

Historically, DragonFly BSD relied on NetBSD's pkgsrc for package management up through version 3.4, which provided source-based compilation and binary support across multiple BSD variants. The project transitioned to DPorts starting with the 3.6 release in late 2013 to enhance compatibility with DragonFly's evolving kernel and userland, as well as to streamline maintenance by leveraging FreeBSD's extensive porting efforts. This shift more than doubled the available software options and aligned DragonFly more closely with FreeBSD's ports ecosystem without adopting its full base system.

DPorts undergoes quarterly merges from FreeBSD's stable ports branches to prioritize reliability over the latest upstream changes, with the 2024Q3 branch fully integrated as of late 2024 and the 2025Q2 branch under development as of November 2025. These merges ensure a curated set of ports that compile and run effectively on DragonFly, supporting thousands of applications ranging from desktop environments to servers. Binary packages are available primarily for the x86_64 architecture, DragonFly's sole supported platform since version 4.0.

Software installation and updates are handled via the pkg(8) tool, a lightweight binary package manager that supports commands like pkg install <package> for adding applications, pkg upgrade for system-wide updates, and pkg delete for removal. For source builds, users employ the standard make utility within the DPorts tree, fetched via git clone from the official repository. The system focuses on conflict-free upgrades and stability, with features like package auditing (pkg audit) to detect vulnerabilities, making it suitable for production environments rather than rapid development cycles.
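Building from source uses the DPorts tree directly; a sketch, assuming the mirror URL shown is current and using an example port:

    git clone git://mirror-master.dragonflybsd.org/dports.git /usr/dports
    cd /usr/dports/www/nginx
    make install clean        # build, install, and clean the work directory
    pkg audit -F              # afterwards, fetch the vulnerability database and audit installed packages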

Application-Level Features

DragonFly BSD provides several application-level features that enhance runtime management and flexibility, leveraging its unique kernel and filesystem designs for seamless operation without interrupting ongoing processes. One key feature is process checkpointing, which allows applications to be suspended and saved to disk at any time, enabling later resumption on the same system or migration to another compatible system. This is achieved through the sys_checkpoint(2) system call, which serializes the process state, including multi-threaded processes, into a file, and the checkpt(1) utility for restoration. The checkpoint image is stored on a HAMMER or HAMMER2 filesystem, integrating it with the filesystem's snapshot capabilities to version the process alongside directory contents, thus providing per-process versioning without downtime. For example, applications can handle the SIGCKPT signal to perform cleanup before checkpointing, and upon resume, they receive a positive return value from the system call to detect the event. Limitations include incomplete support for devices, sockets, and pipes, making it suitable primarily for simple or compute-bound applications rather than those reliant on network connections.

Complementing this, DragonFly BSD supports directory-level snapshots via the HAMMER and HAMMER2 filesystems, which applications can leverage for versioning data without service interruption. HAMMER2 enables instant, writable snapshots by copying the volume header's root block table, allowing mounted snapshots for ongoing access and modification. Automatic snapshotting is configurable through /etc/periodic.conf, retaining up to 60 days of daily snapshots and finer-grained 30-second intervals for recent history, facilitating recovery operations with the undo(1) tool. These filesystem snapshots provide applications with robust, non-disruptive backup and recovery at the directory level, directly supporting the versioning of checkpointed files.

Variant symlinks offer dynamic path resolution for applications, resolving based on runtime context such as user, group, UID, jail, or architecture via embedded variables like ${USER} or ${ARCH}. Implemented through varsym(2), these symlinks allow application authors to create configuration paths that adapt automatically, for instance directing to user-specific libraries or architecture-appropriate binaries, enhancing portability and management without hardcoded paths. System-wide variables are managed via varsym.conf(5), enabling administrators to control resolutions globally or per context. These features integrate with DragonFly BSD's DPorts system, which builds and deploys applications from ports, allowing developers to incorporate checkpointing and symlink support natively during compilation for optimized runtime behavior.
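The checkpoint flow described above can be driven from the shell; a rough sketch with example PID and file names, assuming SIGCKPT's default action writes the process image and that kill(1) accepts the CKPT signal name:

    kill -CKPT 1234        # ask process 1234 to write its state to a .ckpt file
    checkpt prog.ckpt      # later, resume the saved process image with checkpt(1)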

Release History and Media

DragonFly BSD's first stable release, version 1.0, was made available on July 12, 2004. Subsequent major releases have followed a series-based structure, with version 5.0 released on October 16, 2017, introducing bootable support for the HAMMER2 filesystem as an experimental option alongside the established HAMMER1. Version 6.0 arrived on May 10, 2021, featuring revamped VFS caching and enhancements to HAMMER2, including multi-volume support. The 6.2 series began with version 6.2.1 on January 9, 2022, incorporating the NVMM type-2 hypervisor for hardware-accelerated virtualization and initial amdgpu driver support matching Linux 4.19.

The project maintains a release cadence centered on stable branches, where major versions introduce significant features and point releases address security vulnerabilities, bugs, and stability improvements without altering core functionality. For instance, the 6.4 series started with version 6.4.0 on December 30, 2022, and progressed through 6.4.1 on April 30, 2025, to 6.4.2 on May 9, 2025, the latter including fixes for IPv6-related panics, installer issues with disk sizing, and crashes in userland programs generating many subprocesses.

Distribution media for DragonFly BSD targets the x86_64 architecture and includes live ISO images that boot directly for installation or testing, encompassing the base system and DPorts package management tools in a compact image of approximately 700 MB uncompressed. USB installers are provided as raw disk images suitable for writing to flash drives via tools like dd, enabling portable installations. Netboot options are supported through PXE-compatible images and daily snapshots available on official mirrors, facilitating network-based deployments. Recent updates in the 6.4 series build on prior work by stabilizing the NVMM module for type-2 virtualization and advancing experimental remote mounting capabilities for HAMMER2 volumes, allowing networked access to filesystem resources.
Release Series | Initial Release Date | Key Point Releases | Notable Features
5.0 | October 16, 2017 | 5.0.1 (Nov 6, 2017), 5.0.2 (Dec 4, 2017) | HAMMER2 bootable support (experimental)
6.0 | May 10, 2021 | 6.0.1 (Oct 12, 2021) | VFS caching revamp, HAMMER2 multi-volume support
6.2 | January 9, 2022 (6.2.1) | 6.2.2 (Jun 9, 2022) | NVMM hypervisor, amdgpu GPU driver
6.4 | December 30, 2022 (6.4.0) | 6.4.1 (Apr 30, 2025), 6.4.2 (May 9, 2025) | Bug fixes, experimental remote HAMMER2 mounting
