| Developer(s) | Sun Microsystems originally, Oracle Corporation since 2010, OpenZFS since 2013 |
|---|---|
| Variants | Oracle ZFS, OpenZFS |
| Introduced | November 2005 with OpenSolaris |
| Structures | |
| Directory contents | Extendible hashing |
| Limits | |
| Max volume size | 256 trillion yobibytes (2^128 bytes)[1] |
| Max file size | 16 exbibytes (2^64 bytes) |
| Max no. of files | 2^48 (entries in any individual directory) |
| Max filename length | 1023 ASCII characters (fewer for multibyte character standards such as Unicode) |
| Features | |
| Forks | Yes (called "extended attributes", but they are full-fledged streams) |
| Attributes | POSIX, extended attributes |
| File system permissions | Unix permissions, NFSv4 ACLs |
| Transparent compression | Yes |
| Transparent encryption | Yes |
| Data deduplication | Yes |
| Copy-on-write | Yes |
| Other | |
| Supported operating systems | Oracle Solaris, illumos, FreeBSD, Linux, macOS |
ZFS (previously Zettabyte File System) is a file system with volume management capabilities. It began as part of the Sun Microsystems Solaris operating system in 2001. Large parts of Solaris, including ZFS, were published under an open source license as OpenSolaris from 2005 until around 2010, when they were placed under a closed source license after Oracle Corporation acquired Sun in 2009–2010. Between 2005 and 2010, the open source version of ZFS was ported to Linux, Mac OS X (continued as MacZFS) and FreeBSD. In 2010, the illumos project forked a recent version of OpenSolaris, including ZFS, to continue its development as an open source project. In 2013, OpenZFS was founded to coordinate the development of open source ZFS.[3][4][5] OpenZFS maintains and manages the core ZFS code, while organizations using ZFS maintain the specific code and validation processes required for ZFS to integrate within their systems. OpenZFS is widely used in Unix-like systems.[6][7][8]
Overview
The management of stored data generally involves two aspects: the physical volume management of one or more block storage devices (such as hard drives and SD cards), including their organization into logical block devices as VDEVs (ZFS Virtual Devices)[9] as seen by the operating system (often involving a volume manager, RAID controller, array manager, or suitable device driver); and the management of data and files that are stored on these logical block devices (a file system or other data storage).
- Example: A RAID array of 2 hard drives and an SSD caching disk is controlled by Intel's RST system, part of the chipset and firmware built into a desktop computer. The Windows user sees this as a single volume, containing an NTFS-formatted drive of their data, and NTFS is not necessarily aware of the manipulations that may be required (such as reading from/writing to the cache drive or rebuilding the RAID array if a disk fails). The management of the individual devices and their presentation as a single device is distinct from the management of the files held on that apparent device.
ZFS is unusual because, unlike most other storage systems, it unifies both of these roles and acts as both the volume manager and the file system. Therefore, it has complete knowledge of both the physical disks and volumes (including their status, condition, and logical arrangement into volumes) as well as of all the files stored on them. ZFS is designed to ensure (subject to sufficient data redundancy) that data stored on disks cannot be lost due to physical errors, misprocessing by the hardware or operating system, or bit rot events and data corruption that may happen over time. Its complete control of the storage system is used to ensure that every step, whether related to file management or disk management, is verified, confirmed, corrected if needed, and optimized, in a way that the storage controller cards and separate volume and file systems cannot achieve.
ZFS also includes a mechanism for dataset and pool-level snapshots and replication, including snapshot cloning, which is described by the FreeBSD documentation as one of its "most powerful features" with functionality that "even other file systems with snapshot functionality lack".[10] Very large numbers of snapshots can be taken without degrading performance, allowing snapshots to be used prior to risky system operations and software changes, or an entire production ("live") file system to be fully snapshotted several times an hour in order to mitigate data loss due to user error or malicious activity. Snapshots can be rolled back "live" or previous file system states can be viewed, even on very large file systems, leading to savings in comparison to formal backup and restore processes.[10] Snapshots can also be cloned to form new independent file systems. ZFS also has the ability to take a pool level snapshot (known as a "checkpoint"), which allows rollback of operations that may affect the entire pool's structure or that add or remove entire datasets.
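The snapshot, clone, and checkpoint operations described above map directly to a handful of commands. The following is an illustrative sketch only; the pool name `tank` and dataset `tank/data` are assumptions for the example, and the commands require a running ZFS system.

```shell
# Take a snapshot before a risky change (dataset tank/data is assumed to exist)
zfs snapshot tank/data@before-upgrade

# List snapshots, then roll the live dataset back to the snapshot
zfs list -t snapshot
zfs rollback tank/data@before-upgrade

# Clone the snapshot into a new, independent, writable file system
zfs clone tank/data@before-upgrade tank/data-test

# Pool-level snapshot ("checkpoint") guarding pool-wide structural changes...
zpool checkpoint tank
# ...and discard it once the change is known to be good
zpool checkpoint --discard tank
```

Because snapshots are copy-on-write, taking one is nearly instantaneous and initially consumes almost no space; space is consumed only as the live dataset diverges from the snapshot.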
History
2001–2010: Development at Sun Microsystems
In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: Berkeley Software Distribution, UNIX System V, and Xenix. This became Unix System V Release 4 (SVR4).[11] The project was released under the name Solaris, which became the successor to SunOS 4 (although SunOS 4.1.x micro releases were retroactively named Solaris 1).[12]
ZFS was designed and implemented by a team at Sun led by Jeff Bonwick, Bill Moore,[13] and Matthew Ahrens. It was announced on September 14, 2004,[14] but development started in 2001.[15] Source code for ZFS was integrated into the main trunk of Solaris development on October 31, 2005[16] and released for developers as part of build 27 of OpenSolaris on November 16, 2005. In June 2006, Sun announced that ZFS was included in the mainstream 6/06 update to Solaris 10.[17]
Solaris was originally developed as proprietary software, but Sun Microsystems was an early commercial proponent of open source software and in June 2005 released most of the Solaris codebase under the CDDL license and founded the OpenSolaris open-source project.[18] In Solaris 10 6/06 ("U2"), Sun added the ZFS file system and frequently updated ZFS with new features during the next 5 years. ZFS was ported to Linux, Mac OS X (continued as MacZFS), and FreeBSD, under this open source license.
The name at one point was said to stand for "Zettabyte File System",[19] but by 2006, the name was no longer considered to be an abbreviation.[20] A ZFS file system can store up to 256 quadrillion zettabytes (ZB).
In September 2007, NetApp sued Sun, claiming that ZFS infringed some of NetApp's patents on Write Anywhere File Layout. Sun counter-sued in October the same year claiming the opposite. The lawsuits were ended in 2010 with an undisclosed settlement.[21]
2010–current: Development at Oracle, OpenZFS
Ported versions of ZFS began to appear in 2005. After the Sun acquisition by Oracle in 2010, Oracle's version of ZFS became closed source, and the development of open-source versions proceeded independently, coordinated by OpenZFS from 2013.
Features
Summary
Examples of features specific to ZFS include:
- Designed for long-term storage of data, with indefinitely scalable datastore sizes, zero data loss, and high configurability.
- Hierarchical checksumming of all data and metadata, ensuring that the entire storage system can be verified on use, and confirmed to be correctly stored, or remedied if corrupt. Checksums are stored with a block's parent block, rather than with the block itself. This contrasts with many file systems where checksums (if held) are stored with the data so that if the data is lost or corrupt, the checksum is also likely to be lost or incorrect.
- Can store a user-specified number of copies of data or metadata, or selected types of data, to improve the ability to recover from data corruption of important files and structures.
- Automatic rollback of recent changes to the file system and data, in some circumstances, in the event of an error or inconsistency.
- Automated and (usually) silent self-healing of data inconsistencies and write failure when detected, for all errors where the data is capable of reconstruction. Data can be reconstructed using all of the following: error detection and correction checksums stored in each block's parent block; multiple copies of data (including checksums) held on the disk; write intentions logged on the SLOG (ZIL) for writes that should have occurred but did not occur (after a power failure); parity data from RAID/RAID-Z disks and volumes; copies of data from mirrored disks and volumes.
- Native handling of standard RAID levels and additional ZFS RAID layouts ("RAID-Z"). The RAID-Z levels stripe data across only the disks required, for efficiency (many RAID systems stripe indiscriminately across all devices), and checksumming allows rebuilding of inconsistent or corrupted data to be minimized to those blocks with defects.
- Native handling of tiered storage and caching devices, which is usually a volume-related task. Because ZFS also understands the file system, it can use file-related knowledge to inform, integrate, and optimize its tiered storage handling in a way a separate device cannot.
- Native handling of snapshots and backup/replication which can be made efficient by integrating the volume and file handling. Relevant tools are provided at a low level and require external scripts and software for utilization.
- Native data compression and deduplication, although the latter is largely handled in RAM and is memory hungry.
- Efficient rebuilding of RAID arrays—a RAID controller often has to rebuild an entire disk, but ZFS can combine disk and file knowledge to limit any rebuilding to data which is actually missing or corrupt, greatly speeding up rebuilding.
- Unaffected by RAID hardware changes which affect many other systems. On many systems, if self-contained RAID hardware such as a RAID card fails, or the data is moved to another RAID system, the file system will lack information that was on the original RAID hardware, which is needed to manage data on the RAID array. This can lead to a total loss of data unless near-identical hardware can be acquired and used as a "stepping stone". Since ZFS manages RAID itself, a ZFS pool can be migrated to other hardware, or the operating system can be reinstalled, and the RAID-Z structures and data will be recognized and immediately accessible by ZFS again.
- Ability to identify data that would have been found in a cache but has been discarded recently instead; this allows ZFS to reassess its caching decisions in light of later use and facilitates very high cache-hit levels (ZFS cache hit rates are typically over 80%).
- Alternative caching strategies can be used for data that would otherwise cause delays in data handling. For example, synchronous writes which are capable of slowing down the storage system can be converted to asynchronous writes by being written to a fast separate caching device, known as the SLOG (sometimes called the ZIL—ZFS Intent Log).
- Highly tunable—many internal parameters can be configured for optimal functionality.
- Can be used for high availability clusters and computing, although not fully designed for this use.
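Several of the features listed above, such as transparent compression and deduplication, are per-dataset properties toggled with `zfs set`. A brief sketch, assuming a hypothetical pool `tank` with a dataset `tank/vmimages`:

```shell
# Transparent compression (LZ4 is a common low-overhead choice)
zfs set compression=lz4 tank
zfs get compressratio tank        # reports the achieved compression ratio

# Deduplication: effective only for highly redundant data, and memory hungry,
# since the deduplication table must largely be held in RAM
zfs set dedup=on tank/vmimages
```

Child datasets inherit properties from their parent unless explicitly overridden, so setting `compression` on the pool's root dataset typically enables it everywhere.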
Data integrity
One major feature that distinguishes ZFS from other file systems is that it is designed with a focus on data integrity, protecting the user's data on disk against silent data corruption caused by data degradation, power surges (voltage spikes), bugs in disk firmware, phantom writes (the previous write did not make it to disk), misdirected reads/writes (the disk accesses the wrong block), DMA parity errors between the array and server memory or from the driver (since the checksum validates data inside the array), driver errors (data winds up in the wrong buffer inside the kernel), accidental overwrites (such as swapping to a live file system), and so on.
A 1999 study showed that none of the then-major and widespread file systems (such as UFS, Ext,[22] XFS, JFS, or NTFS), nor hardware RAID (which has some issues with data integrity), provided sufficient protection against data corruption problems.[23][24][25][26] Initial research indicates that ZFS protects data better than earlier efforts.[27][28] It is also faster than UFS[29][30] and can be seen as its replacement.
Within ZFS, data integrity is achieved by using a Fletcher-based checksum or a SHA-256 hash throughout the file system tree.[31] Each block of data is checksummed and the checksum value is then saved in the pointer to that block—rather than at the actual block itself. Next, the block pointer is checksummed, with the value being saved at its pointer. This checksumming continues all the way up the file system's data hierarchy to the root node, which is also checksummed, thus creating a Merkle tree.[31] In-flight data corruption or phantom reads/writes (the data written/read checksums correctly but is actually wrong) are undetectable by most filesystems as they store the checksum with the data. ZFS stores the checksum of each block in its parent block pointer so that the entire pool self-validates.[31]
When a block is accessed, regardless of whether it is data or meta-data, its checksum is calculated and compared with the stored checksum value of what it "should" be. If the checksums match, the data are passed up the programming stack to the process that asked for it; if the values do not match, then ZFS can heal the data if the storage pool provides data redundancy (such as with internal mirroring), assuming that the copy of data is undamaged and with matching checksums.[32] It is optionally possible to provide additional in-pool redundancy by specifying copies=2 (or copies=3), which means that data will be stored twice (or three times) on the disk, effectively halving (or, for copies=3, reducing to one-third) the storage capacity of the disk.[33] Additionally, some kinds of data used by ZFS to manage the pool are stored multiple times by default for safety even with the default copies=1 setting.
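The `copies` property described above is set per dataset. A sketch, assuming a hypothetical dataset `tank/important`:

```shell
# Store two copies of every block of this dataset; note this applies only to
# data written after the property is set -- existing data is not rewritten
zfs set copies=2 tank/important
zfs get copies tank/important
```

This in-pool redundancy complements, rather than replaces, vdev-level redundancy such as mirroring or RAID-Z, since all copies still reside on the same vdevs.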
If other copies of the damaged data exist or can be reconstructed from checksums and parity data, ZFS will use a copy of the data (or recreate it via a RAID recovery mechanism) and recalculate the checksum—ideally resulting in the reproduction of the originally expected value. If the data passes this integrity check, the system can then update all faulty copies with known-good data and redundancy will be restored.
If there are no copies of the damaged data, ZFS puts the pool in a faulted state,[34] preventing its future use and providing no documented ways to recover pool contents.
Consistency of data held in memory, such as cached data in the ARC, is not checked by default, as ZFS is expected to run on enterprise-quality hardware with error correcting RAM. However, the capability to check in-memory data exists and can be enabled using "debug flags".[35]
RAID
For ZFS to be able to guarantee data integrity, it needs multiple copies of the data or parity information, usually spread across multiple disks. This is typically achieved by using either a RAID controller or so-called "soft" RAID (built into a file system).
Avoidance of hardware RAID controllers
While ZFS can work with hardware RAID devices, it will usually work more efficiently and with greater data protection if it has raw access to all storage devices. ZFS relies on the disk for an honest view to determine the moment data is confirmed as safely written and has numerous algorithms designed to optimize its use of caching, cache flushing, and disk handling.
Disks connected to the system using a hardware, firmware, or other "soft" RAID, or any other controller that modifies the ZFS-to-disk I/O path, will affect ZFS performance and data integrity. If a third-party device performs caching or presents drives to ZFS as a single system without the low-level view ZFS relies upon, there is a much greater chance that the system will perform less optimally, be less able to prevent failures, recover from failures more slowly, or lose data due to a write failure. For example, if a hardware RAID card is used, ZFS may not be able to determine the condition of disks, determine if the RAID array is degraded or rebuilding, detect all data corruption, place data optimally across the disks, make selective repairs, control how repairs are balanced with ongoing use, or make repairs that ZFS could usually undertake. The hardware RAID card will interfere with ZFS's algorithms. RAID controllers also usually add controller-dependent data to the drives, which prevents software RAID from accessing the user data. In the case of a hardware RAID controller failure, it may be possible to read the data with another compatible controller, but this isn't always possible and a replacement may not be available. Alternate hardware RAID controllers may not understand the original manufacturer's custom data required to manage and restore an array.
Unlike most other systems where RAID cards or similar hardware can offload resources and processing to enhance performance and reliability, with ZFS it is strongly recommended that these methods not be used as they typically reduce the system's performance and reliability.
If disks must be attached through a RAID or other controller, it is recommended to minimize the amount of processing done in the controller, by using a plain HBA (host bus adapter) or a simple fanout card, or by configuring the card in JBOD mode (i.e. turning off RAID and caching functions), so that devices can be attached with minimal changes in the ZFS-to-disk I/O pathway. A RAID card in JBOD mode may still interfere if it has a cache or, depending upon its design, may detach drives that do not respond in time (as has been seen with many energy-efficient consumer-grade hard drives); as such, it may require Time-Limited Error Recovery (TLER)/CCTL/ERC-enabled drives to prevent drive dropouts, so not all cards are suitable even with RAID functions disabled.[36]
ZFS's approach: RAID-Z and mirroring
Instead of hardware RAID, ZFS employs "soft" RAID, offering RAID-Z (parity based like RAID 5 and similar) and disk mirroring (similar to RAID 1). The schemes are highly flexible.
RAID-Z is a data/parity distribution scheme like RAID-5, but uses dynamic stripe width: every block is its own RAID stripe, regardless of blocksize, resulting in every RAID-Z write being a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error. RAID-Z is also faster than traditional RAID 5 because it does not need to perform the usual read-modify-write sequence.[37]
As all stripes are of different sizes, RAID-Z reconstruction has to traverse the filesystem metadata to determine the actual RAID-Z geometry. This would be impossible if the filesystem and the RAID array were separate products, whereas it becomes feasible when there is an integrated view of the logical and physical structure of the data. Going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes, whereas traditional RAID products usually cannot do this.[37]
In addition to handling whole-disk failures, RAID-Z can also detect and correct silent data corruption, offering "self-healing data": when reading a RAID-Z block, ZFS compares it against its checksum, and if the data disks did not return the right answer, ZFS reads the parity and then figures out which disk returned bad data. Then, it repairs the damaged data and returns good data to the requestor.[37]
RAID-Z and mirroring do not require any special hardware: they do not need NVRAM for reliability, and they do not need write buffering for good performance or data protection. With RAID-Z, ZFS provides fast, reliable storage using cheap, commodity disks.[37]
There are five different RAID-Z modes: striping (similar to RAID 0, offers no redundancy), RAID-Z1 (similar to RAID 5, allows one disk to fail), RAID-Z2 (similar to RAID 6, allows two disks to fail), RAID-Z3 (a RAID 7[a] configuration, allows three disks to fail), and mirroring (similar to RAID 1, allows all but one disk to fail).[39]
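Each of these modes corresponds to a vdev layout chosen at pool creation time. The following sketch shows the alternatives; the pool name `tank` and the short device names are placeholders (stable `/dev/disk/by-id` names are generally preferred in practice), and each line represents an alternative layout for a new pool:

```shell
# Mirror (similar to RAID 1): all but one disk may fail
zpool create tank mirror sda sdb

# RAID-Z1 / RAID-Z2 / RAID-Z3: one, two, or three disks may fail without data loss
zpool create tank raidz1 sda sdb sdc
zpool create tank raidz2 sda sdb sdc sdd
zpool create tank raidz3 sda sdb sdc sdd sde
```

Plain striping with no redundancy is obtained by listing devices with no vdev keyword at all.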
The need for RAID-Z3 arose in the early 2000s as multi-terabyte capacity drives became more common. This increase in capacity—without a corresponding increase in throughput speeds—meant that rebuilding an array due to a failed drive could "easily take weeks or months" to complete.[38] During this time, the older disks in the array will be stressed by the additional workload, which could result in data corruption or drive failure. By increasing parity, RAID-Z3 reduces the chance of data loss by simply increasing redundancy.[40]
Resilvering and scrub (array syncing and integrity checking)
ZFS has no tool equivalent to fsck (the standard Unix and Linux data checking and repair tool for file systems).[41] Instead, ZFS has a built-in scrub function which regularly examines all data and repairs silent corruption and other problems. Some differences are:
- fsck must be run on an offline filesystem, which means the filesystem must be unmounted and is not usable while being repaired, while scrub is designed to be used on a mounted, live filesystem, and does not need the ZFS filesystem to be taken offline.
- fsck usually only checks metadata (such as the journal log) but never checks the data itself. This means, after an fsck, the data might still not match the original data as stored.
- fsck cannot always validate and repair data when checksums are stored with data (often the case in many file systems), because the checksums may also be corrupted or unreadable. ZFS always stores checksums separately from the data they verify, improving reliability and the ability of scrub to repair the volume. ZFS also stores multiple copies of data—metadata, in particular, may have upwards of 4 or 6 copies (multiple copies per disk and multiple disk mirrors per volume), greatly improving the ability of scrub to detect and repair extensive damage to the volume, compared to fsck.
- scrub checks everything, including metadata and the data. The effect can be observed by comparing fsck to scrub times—sometimes an fsck on a large RAID completes in a few minutes, which means only the metadata was checked. Traversing all metadata and data on a large RAID takes many hours, which is exactly what scrub does.
- while fsck detects and tries to fix errors using available filesystem data, scrub relies on redundancy to recover from issues. While fsck offers to fix the file system with partial data loss, scrub puts it into faulted state if there is no redundancy.[34]
The official recommendation from Sun/Oracle is to scrub enterprise-level disks once a month, and cheaper commodity disks once a week.[42][43]
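A scrub is started manually or on a schedule; the sketch below assumes a hypothetical pool named `tank` and a system where scrubs are scheduled via cron:

```shell
# Start a scrub on the live pool; the pool stays mounted and usable throughout
zpool scrub tank

# Show scrub progress, bytes repaired, and per-device error counters
zpool status tank

# Example crontab entry: scrub at 03:00 on the first day of each month
# 0 3 1 * * /sbin/zpool scrub tank
```

Many distributions that package OpenZFS ship a periodic scrub job (cron or systemd timer) by default, so scheduling it by hand may be unnecessary.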
Capacity
ZFS is a 128-bit file system,[44][16] so it can address 1.84 × 10^19 times more data than 64-bit systems such as Btrfs. The maximum limits of ZFS are designed to be so large that they should never be encountered in practice. For instance, fully populating a single zpool with 2^128 bits of data would require 3×10^24 TB hard disk drives.[45]
Some theoretical limits in ZFS are:
- 16 exbibytes (2^64 bytes): maximum size of a single file
- 2^48: number of entries in any individual directory[46]
- 16 exbibytes: maximum size of any attribute
- 2^56: number of attributes of a file (actually constrained to 2^48 for the number of files in a directory)
- 256 quadrillion zebibytes (2^128 bytes): maximum size of any zpool
- 2^64: number of devices in any zpool
- 2^64: number of file systems in a zpool
- 2^64: number of zpools in a system
Encryption
With Oracle Solaris, the encryption capability in ZFS[47] is embedded into the I/O pipeline. During writes, a block may be compressed, encrypted, checksummed and then deduplicated, in that order. The policy for encryption is set at the dataset level when datasets (file systems or ZVOLs) are created. The wrapping keys provided by the user/administrator can be changed at any time without taking the file system offline. The default behaviour is for the wrapping key to be inherited by any child data sets. The data encryption keys are randomly generated at dataset creation time. Only descendant datasets (snapshots and clones) share data encryption keys.[48] A command to switch to a new data encryption key for the clone or at any time is provided—this does not re-encrypt already existing data, instead utilising an encrypted master-key mechanism.
As of 2019[update] the encryption feature is also fully integrated into OpenZFS 0.8.0 available for Debian and Ubuntu Linux distributions.[49]
There have been anecdotal end-user reports of failures when using ZFS native encryption. An exact cause has not been established.[50][51]
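Under OpenZFS 0.8 and later, the encryption policy is likewise set when a dataset is created. A hedged sketch (`tank/secure` is a hypothetical dataset name):

```shell
# Create an encrypted dataset; the passphrase wraps a randomly generated
# data encryption key, matching the wrapping-key model described above
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure

# After an export or reboot, the wrapping key must be loaded before mounting
zfs load-key tank/secure
zfs mount tank/secure

# Change the wrapping key at any time; existing data is not re-encrypted
zfs change-key tank/secure
```

Setting `encryption=on` selects the platform's default cipher; naming `aes-256-gcm` explicitly pins the choice.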
Read/write efficiency
ZFS will automatically allocate data storage across all vdevs in a pool (and all devices in each vdev) in a way that generally maximises the performance of the pool. ZFS will also update its write strategy to take account of new disks added to a pool, when they are added.
As a general rule, ZFS allocates writes across vdevs based on the free space in each vdev, so that vdevs which already hold proportionately less data are given more writes when new data is stored. This ensures that as the pool becomes more used, the situation does not develop that some vdevs become full, forcing writes to occur on a limited number of devices. It also means that when data is read (and reads are much more frequent than writes in most uses), different parts of the data can be read from as many disks as possible at the same time, giving much higher read performance. Therefore, as a general rule, pools and vdevs should be managed and new storage added so that the situation does not arise that some vdevs in a pool are almost full and others almost empty, as this will make the pool less efficient.
Free space in ZFS tends to become fragmented with usage. ZFS does not have a mechanism for defragmenting free space. There are anecdotal end-user reports of diminished performance when high free-space fragmentation is coupled with disk space over-utilization.[52][53]
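Per-vdev allocation and free-space fragmentation can be inspected directly; a sketch assuming a pool named `tank`:

```shell
# Size, allocated and free space, and fragmentation (FRAG column) per vdev
zpool list -v tank

# Live I/O distribution across vdevs, sampled every 5 seconds
zpool iostat -v tank 5
```

A rising FRAG percentage on a nearly full vdev is the combination the anecdotal reports above associate with degraded performance.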
Other features
[edit]Storage devices, spares, and quotas
Pools can have hot spares to compensate for failing disks. When mirroring, block devices can be grouped according to physical chassis, so that the filesystem can continue in the case of the failure of an entire chassis.
Storage pool composition is not limited to similar devices, but can consist of ad-hoc, heterogeneous collections of devices, which ZFS seamlessly pools together, subsequently doling out space to datasets (file system instances or ZVOLs) as needed. Arbitrary storage device types can be added to existing pools to expand their size.[54]
The storage capacity of all vdevs is available to all of the file system instances in the zpool. A quota can be set to limit the amount of space a file system instance can occupy, and a reservation can be set to guarantee that space will be available to a file system instance.
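Spares, quotas, and reservations are all managed with one-line commands; an illustrative sketch (pool, dataset, and device names are placeholders):

```shell
# Attach a hot spare that ZFS can automatically use when a disk fails
zpool add tank spare sdf

# Cap one user's file system at 100 GiB...
zfs set quota=100G tank/home/alice

# ...and guarantee 20 GiB will always be available to the database dataset
zfs set reservation=20G tank/db
```

Because all datasets draw from the shared pool, quotas and reservations are the mechanism for partitioning that shared capacity among file system instances.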
Caching mechanisms: ARC, L2ARC, Transaction groups, ZIL, SLOG, Special VDEV
ZFS uses different layers of disk cache to speed up read and write operations. Ideally, all data should be stored in RAM, but that is usually too expensive. Therefore, data is automatically cached in a hierarchy to optimize performance versus cost;[55] these are often called "hybrid storage pools".[56] Frequently accessed data will be stored in RAM, and less frequently accessed data can be stored on slower media, such as solid-state drives (SSDs). Data that is not often accessed is not cached and left on the slow hard drives. If old data is suddenly read a lot, ZFS will automatically move it to SSDs or to RAM.
ZFS caching mechanisms include one each for reads and writes, and in each case, two levels of caching can exist, one in computer memory (RAM) and one on fast storage (usually solid-state drives (SSDs)), for a total of four caches.
| | Where stored | Read cache | Write cache |
|---|---|---|---|
| First level cache | In RAM | Known as ARC, due to its use of a variant of the adaptive replacement cache (ARC) algorithm. RAM will always be used for caching, thus this level is always present. The efficiency of the ARC algorithm means that disks will often not need to be accessed, provided the ARC size is sufficiently large. If RAM is too small there will hardly be any ARC at all; in this case, ZFS always needs to access the underlying disks, which impacts performance considerably. | Handled by means of "transaction groups"—writes are collated over a short period (typically 5–30 seconds) up to a given limit, with each group being written to disk ideally while the next group is being collated. This allows writes to be organized more efficiently for the underlying disks at the risk of minor data loss of the most recent transactions upon power interruption or hardware fault. In practice the power loss risk is avoided by ZFS write journaling and by the SLOG/ZIL second tier write cache pool (see below), so writes will only be lost if a write failure happens at the same time as a total loss of the second tier SLOG pool, and then only when settings related to synchronous writing and SLOG use are set in a way that would allow such a situation to arise. If data is received faster than it can be written, data receipt is paused until the disks can catch up. |
| Second level cache & Intent log | On fast storage devices (which can be added or removed from a "live" system without disruption in current versions of ZFS, although not always in older versions) | Known as L2ARC ("Level 2 ARC"), optional. ZFS will cache as much data in L2ARC as it can. L2ARC will also considerably speed up deduplication if the entire deduplication table can be cached in L2ARC. It can take several hours to fully populate the L2ARC from empty (before ZFS has decided which data are "hot" and should be cached). If the L2ARC device is lost, all reads will go out to the disks which slows down performance, but nothing else will happen (no data will be lost). | Known as SLOG or ZIL ("ZFS Intent Log")—the terms are often used incorrectly. A SLOG (secondary log device) is an optional dedicated cache on a separate device, for recording writes, in the event of a system issue. If an SLOG device exists, it will be used for the ZFS Intent Log as a second level log, and if no separate cache device is provided, the ZIL will be created on the main storage devices instead. The SLOG thus, technically, refers to the dedicated disk to which the ZIL is offloaded, in order to speed up the pool. Strictly speaking, ZFS does not use the SLOG device to cache its disk writes. Rather, it uses SLOG to ensure writes are captured to a permanent storage medium as quickly as possible, so that in the event of power loss or write failure, no data which was acknowledged as written, will be lost. The SLOG device allows ZFS to speedily store writes and quickly report them as written, even for storage devices such as HDDs that are much slower. In the normal course of activity, the SLOG is never referred to or read, and it does not act as a cache; its purpose is to safeguard data in flight during the few seconds taken for collation and "writing out", in case the eventual write were to fail. 
If all goes well, then the storage pool will be updated at some point within the next 5 to 60 seconds, when the current transaction group is written out to disk (see above), at which point the saved writes on the SLOG will simply be ignored and overwritten. If the write eventually fails, or the system suffers a crash or fault preventing its writing, then ZFS can identify all the writes that it has confirmed were written, by reading back the SLOG (the only time it is read from), and use this to completely repair the data loss.
This becomes crucial if a large number of synchronous writes take place (such as with ESXi, NFS and some databases),[57] where the client requires confirmation of successful writing before continuing its activity; the SLOG allows ZFS to confirm writing is successful much more quickly than if it had to write to the main store every time, without the risk involved in misleading the client as to the state of data storage. If there is no SLOG device then part of the main data pool will be used for the same purpose, although this is slower. If the log device itself is lost, it is possible to lose the latest writes, therefore the log device should be mirrored. In earlier versions of ZFS, loss of the log device could result in loss of the entire zpool, although this is no longer the case. Therefore, one should upgrade ZFS if planning to use a separate log device. |
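The interaction between the intent log and transaction groups can be illustrated with a short Python sketch (the class and method names are illustrative, not ZFS code): synchronous writes are acknowledged as soon as they reach the log, the log is simply discarded when a transaction group commits normally, and it is read back only to replay acknowledged writes after a crash.

```python
class IntentLog:
    """Minimal sketch of a ZIL-style intent log (illustrative only)."""
    def __init__(self):
        self.entries = []                  # records on stable log storage
    def append(self, key, value):
        self.entries.append((key, value))  # write hits the log device
        return "acknowledged"              # caller may proceed immediately
    def clear(self):
        self.entries.clear()               # log discarded after txg commit

class Pool:
    def __init__(self):
        self.store = {}                    # main storage devices
        self.log = IntentLog()
    def sync_write(self, key, value):
        # Synchronous write: acknowledged once the log has it.
        return self.log.append(key, value)
    def commit_txg(self):
        # Transaction group written out; log entries become redundant.
        for key, value in self.log.entries:
            self.store[key] = value
        self.log.clear()
    def replay_after_crash(self):
        # The only time the log is read: recover acknowledged writes.
        for key, value in self.log.entries:
            self.store[key] = value
        self.log.clear()

pool = Pool()
pool.sync_write("a", 1)
pool.commit_txg()            # normal path: log ignored and overwritten
pool.sync_write("b", 2)      # acknowledged, but crash before txg commit
pool.replay_after_crash()    # nothing acknowledged is lost
print(pool.store)            # {'a': 1, 'b': 2}
```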
A number of other caches, cache divisions, and queues also exist within ZFS. For example, each VDEV has its own data cache, and the ARC cache is divided between data stored by the user and metadata used by ZFS, with control over the balance between these.
Special VDEV Class
[edit]In OpenZFS 0.8 and later, it is possible to configure a Special VDEV class to preferentially store filesystem metadata, and optionally the Data Deduplication Table (DDT) and small filesystem blocks.[58] This makes it possible, for example, to create a Special VDEV on fast solid-state storage to hold the metadata, while the regular file data is stored on spinning disks. This speeds up metadata-intensive operations such as filesystem traversal, scrub, and resilver, without the expense of storing the entire filesystem on solid-state storage.
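The routing decision can be sketched as follows; the `special_small_blocks` name mirrors the real dataset property, but the function is an illustrative simplification, not the OpenZFS allocator.

```python
def choose_vdev(block_kind, size, special_small_blocks=32 * 1024):
    """Sketch of Special VDEV allocation routing (simplified).
    Metadata and sufficiently small data blocks land on the special
    class; bulk file data stays on the regular vdevs."""
    if block_kind == "metadata":
        return "special"
    if block_kind == "data" and size <= special_small_blocks:
        return "special"
    return "regular"

assert choose_vdev("metadata", 4096) == "special"
assert choose_vdev("data", 16 * 1024) == "special"    # small block
assert choose_vdev("data", 128 * 1024) == "regular"   # bulk file data
```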
Copy-on-write transactional model
[edit]ZFS uses a copy-on-write transactional object model. All block pointers within the filesystem contain a 256-bit checksum or 256-bit hash (currently a choice between Fletcher-2, Fletcher-4, or SHA-256)[59] of the target block, which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, then any metadata blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and the ZIL (intent log) write cache is used when synchronous write semantics are required. The blocks are arranged in a tree, as are their checksums (see Merkle tree).
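The two key properties, never overwriting live blocks and keeping the checksum in the pointer rather than the block, can be sketched in a few lines of Python (illustrative structures only; SHA-256 stands in for the configurable checksum):

```python
import hashlib

def checksum(data):
    # ZFS offers Fletcher or SHA-256; SHA-256 is used here.
    return hashlib.sha256(data).hexdigest()

class CoWStore:
    """Sketch of copy-on-write with checksummed block pointers."""
    def __init__(self):
        self.blocks = {}       # address -> payload; never overwritten
        self.next_addr = 0
    def alloc(self, data):
        addr = self.next_addr
        self.next_addr += 1
        self.blocks[addr] = data
        # The "block pointer" carries the checksum of its target.
        return {"addr": addr, "cksum": checksum(data)}
    def read(self, ptr):
        data = self.blocks[ptr["addr"]]
        if checksum(data) != ptr["cksum"]:
            raise IOError("checksum mismatch: corruption detected")
        return data

store = CoWStore()
old_ptr = store.alloc(b"v1")
new_ptr = store.alloc(b"v2")         # modified data goes to a NEW block
assert store.read(old_ptr) == b"v1"  # the old version remains intact
store.blocks[new_ptr["addr"]] = b"corrupt"
# reading new_ptr now raises IOError: end-to-end corruption detection
```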
Snapshots and clones
[edit]An advantage of copy-on-write is that, when ZFS writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained. ZFS snapshots are consistent (they reflect the entire data as it existed at a single point in time) and can be created extremely quickly, since all the data composing the snapshot is already stored; the entire storage pool is often snapshotted several times per hour. They are also space efficient, since any unchanged data is shared among the file system and its snapshots. Snapshots are inherently read-only, ensuring they will not be modified after creation, although they should not be relied on as a sole means of backup. Entire snapshots can be restored, as can individual files and directories within them.
Writeable snapshots ("clones") can also be created, resulting in two independent file systems that share a set of blocks. As changes are made to any of the clone file systems, new data blocks are created to reflect those changes, but any unchanged blocks continue to be shared, no matter how many clones exist. This is an implementation of the copy-on-write principle.
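As a loose analogy (Python dictionaries of references standing in for block pointers), a snapshot is just another set of references to existing blocks, which is why creating one is nearly instantaneous and unchanged data costs no extra space:

```python
# A filesystem modelled as filename -> immutable block content.
live = {"a.txt": "one", "b.txt": "two"}

snapshot = dict(live)            # snapshot: a new set of references, created instantly
live["a.txt"] = "one-modified"   # copy-on-write: the live filesystem gets a new block

assert snapshot["a.txt"] == "one"           # snapshot still sees the old data
assert live["b.txt"] is snapshot["b.txt"]   # unchanged block is shared, no extra space

clone = dict(snapshot)           # a writeable clone starts by sharing every block
clone["c.txt"] = "three"         # changes allocate new blocks for the clone only
assert "c.txt" not in snapshot
```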
Sending and receiving snapshots
[edit]ZFS file systems can be moved to other pools, including pools on remote hosts over the network, as the send command creates a stream representation of the file system's state. This stream can either describe the complete contents of the file system at a given snapshot, or it can be a delta between snapshots. Computing the delta stream is very efficient, and its size depends on the number of blocks changed between the snapshots. This provides an efficient strategy, e.g., for synchronizing offsite backups or high-availability mirrors of a pool.
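The idea of an incremental stream can be sketched as follows (dictionaries stand in for snapshots; the function names are hypothetical, not the zfs send/receive format): only blocks that differ between the two snapshots enter the stream, so its size tracks the change set.

```python
def delta_stream(old_snap, new_snap):
    """Sketch of an incremental send: include only blocks changed
    between the two snapshots, plus a list of deletions."""
    stream = {"_deleted": [n for n in old_snap if n not in new_snap]}
    for name, block in new_snap.items():
        if old_snap.get(name) != block:
            stream[name] = block
    return stream

def receive(target, stream):
    """Sketch of the receiving side: apply deletions, then changed blocks."""
    for name in stream["_deleted"]:
        target.pop(name, None)
    for name, block in stream.items():
        if name != "_deleted":
            target[name] = block
    return target

snap1 = {"a": "1", "b": "2", "c": "3"}
snap2 = {"a": "1", "b": "2x", "d": "4"}
mirror = dict(snap1)                        # remote copy, currently at snap1
receive(mirror, delta_stream(snap1, snap2)) # only "b", "d", and one deletion travel
assert mirror == snap2
```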
Dynamic striping
[edit]Dynamic striping across all devices to maximize throughput means that as additional devices are added to the zpool, the stripe width automatically expands to include them; thus, all disks in a pool are used, which balances the write load across them.[60]
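The balancing effect can be sketched with a least-used placement loop (a simplification; ZFS's real allocator also weighs free space, latency, and other factors):

```python
def place_writes(blocks, vdevs):
    """Sketch: each write goes to the least-used vdev, which spreads the
    load and automatically includes any newly added vdevs."""
    usage = {v: 0 for v in vdevs}
    placement = []
    for _ in blocks:
        target = min(usage, key=usage.get)
        usage[target] += 1
        placement.append(target)
    return placement, usage

_, usage = place_writes(range(6), ["vdev0", "vdev1"])
assert usage == {"vdev0": 3, "vdev1": 3}
# add a device: the stripe width expands to include it
_, usage = place_writes(range(6), ["vdev0", "vdev1", "vdev2"])
assert usage == {"vdev0": 2, "vdev1": 2, "vdev2": 2}
```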
Variable block sizes
[edit]ZFS uses variable-sized blocks, with 128 KB as the default size. The administrator can tune the maximum block size used, as certain workloads do not perform well with large blocks. If data compression is enabled, variable block sizes are used: if a block can be compressed to fit into a smaller block size, the smaller size is used on the disk, using less storage and improving IO throughput (at the cost of increased CPU use for the compression and decompression operations).[61]
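The trade-off can be sketched with zlib standing in for ZFS's compressors (the sector-rounding rule here is a simplification of ZFS's actual allocation logic):

```python
import os
import zlib

def stored_size(data, sector=512):
    """Sketch: compress the logical block and keep the compressed copy
    only when it occupies fewer allocation units on disk."""
    def round_up(n):
        return -(-n // sector) * sector       # round up to a sector multiple
    compressed = zlib.compress(data)
    return min(round_up(len(compressed)), round_up(len(data)))

# Highly compressible data is stored in far fewer sectors.
assert stored_size(b"A" * 131072) < 131072
# Random data does not compress, so the full logical size is stored.
assert stored_size(os.urandom(4096)) == 4096
```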
Lightweight filesystem creation
[edit]In ZFS, filesystem manipulation within a storage pool is easier than volume manipulation within a traditional filesystem; the time and effort required to create or expand a ZFS filesystem is closer to that of making a new directory than it is to volume manipulation in some other systems.[citation needed]
Adaptive endianness
[edit]Pools and their associated ZFS file systems can be moved between different platform architectures, including systems implementing different byte orders. The ZFS block pointer format stores filesystem metadata in an endian-adaptive way; individual metadata blocks are written with the native byte order of the system writing the block. When reading, if the stored endianness does not match the endianness of the system, the metadata is byte-swapped in memory.
This does not affect the stored data; as is usual in POSIX systems, files appear to applications as simple arrays of bytes, so applications creating and reading data remain responsible for doing so in a way independent of the underlying system's endianness.
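The mechanism can be illustrated with Python's struct module (a sketch: ZFS records the byte order per block pointer, while here an explicit tag travels with the bytes):

```python
import struct
import sys

def write_metadata(value):
    """Sketch: metadata is written in the writer's native byte order
    and tagged with that order."""
    fmt = "<Q" if sys.byteorder == "little" else ">Q"
    return sys.byteorder, struct.pack(fmt, value)

def read_metadata(stored_order, raw):
    # Interpret the bytes using the order recorded by the writer; on a
    # host with the opposite byte order this amounts to an in-memory swap.
    fmt = "<Q" if stored_order == "little" else ">Q"
    return struct.unpack(fmt, raw)[0]

order, raw = write_metadata(0x0123456789ABCDEF)
assert read_metadata(order, raw) == 0x0123456789ABCDEF
# The same value round-trips even if it was stored big-endian elsewhere.
assert read_metadata("big", struct.pack(">Q", 42)) == 42
```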
Deduplication
[edit]Data deduplication capabilities were added to the ZFS source repository at the end of October 2009,[62] and relevant OpenSolaris ZFS development packages have been available since December 3, 2009 (build 128).
Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage.[63][64][65] An accurate assessment of the memory required for deduplication is made by referring to the number of unique blocks in the pool, and the number of bytes on disk and in RAM ("core") required to store each record—these figures are reported by inbuilt commands such as zpool and zdb. Insufficient physical memory or lack of ZFS cache can result in virtual memory thrashing when using deduplication, which can cause performance to plummet, or result in complete memory starvation.[citation needed] Because deduplication occurs at write-time, it is also very CPU-intensive and this can also significantly slow down a system.
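Both the write-time mechanism and the memory arithmetic can be sketched (illustrative code, not ZFS internals; the ~320-byte in-core entry size is a commonly quoted ballpark used here only for the arithmetic):

```python
import hashlib

class DedupStore:
    """Write-time deduplication sketch: the dedup table maps block
    checksum -> [address, refcount], so a duplicate write only
    increments a count instead of consuming space."""
    def __init__(self):
        self.blocks = {}     # address -> data on disk
        self.ddt = {}        # checksum -> [address, refcount]
        self.next_addr = 0
    def write(self, data):
        key = hashlib.sha256(data).hexdigest()  # hashing every write: the CPU cost
        if key in self.ddt:
            self.ddt[key][1] += 1               # duplicate: no new block
            return self.ddt[key][0]
        addr, self.next_addr = self.next_addr, self.next_addr + 1
        self.blocks[addr] = data
        self.ddt[key] = [addr, 1]
        return addr

s = DedupStore()
assert s.write(b"payload") == s.write(b"payload")  # same address both times
assert len(s.blocks) == 1                          # only one copy stored

# Rough RAM sizing for the table: unique blocks x in-core entry size.
unique_blocks = 2**40 // (128 * 1024)   # 1 TB of unique 128 KB records
ram_gib = unique_blocks * 320 / 2**30   # ~320 B per entry (ballpark assumption)
assert 1 <= ram_gib <= 5                # consistent with the 1-5 GB/TB guidance
```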
Other storage vendors use modified versions of ZFS to achieve very high data compression ratios. Two examples in 2012 were GreenBytes[66] and Tegile.[67] In May 2014, Oracle bought GreenBytes for its ZFS deduplication and replication technology.[68]
As described above, deduplication is usually not recommended due to its heavy resource requirements (especially RAM) and impact on performance (especially when writing), other than in specific circumstances where the system and data are well-suited to this space-saving technique.
Additional capabilities
[edit]- Explicit I/O priority with deadline scheduling.[citation needed]
- Claimed globally optimal I/O sorting and aggregation.[citation needed]
- Multiple independent prefetch streams with automatic length and stride detection.[citation needed]
- Parallel, constant-time directory operations.[citation needed]
- End-to-end checksumming, using a kind of "Data Integrity Field", allowing data corruption detection (and recovery if there is redundancy in the pool). A choice of three hashes can be used: fletcher, optimized for speed; SHA-256, for standardization and security; and Skein, for salted hashing.[69]
- Transparent filesystem compression. Supports LZJB, gzip,[70] LZ4 and Zstd.
- Intelligent scrubbing and resilvering (resyncing).[71]
- Load and space usage sharing among disks in the pool.[72]
- Ditto blocks: Configurable data replication per filesystem, with zero, one or two extra copies requested per write for user data, and with that same base number of copies plus one or two for metadata (according to metadata importance).[73] If the pool has several devices, ZFS tries to replicate over different devices. Ditto blocks are primarily an additional protection against corrupted sectors, not against total disk failure.[74]
- ZFS design (copy-on-write + superblocks) is safe when using disks with write cache enabled, if they honor the write barriers.[citation needed] This feature provides safety and a performance boost compared with some other filesystems.[according to whom?]
- On Solaris, when entire disks are added to a ZFS pool, ZFS automatically enables their write cache. This is not done when ZFS only manages discrete slices of the disk, since it does not know if other slices are managed by non-write-cache safe filesystems, like UFS.[citation needed] The FreeBSD implementation can handle disk flushes for partitions thanks to its GEOM framework, and therefore does not suffer from this limitation.[citation needed]
- Per-user, per-group, per-project, and per-dataset quota limits.[75]
- Filesystem encryption since Solaris 11 Express,[76] and OpenZFS (ZoL) 0.8.[58] (on some other systems ZFS can utilize encrypted disks for a similar effect; GELI on FreeBSD can be used this way to create fully encrypted ZFS storage).
- Pools can be imported in read-only mode.
- It is possible to recover data by rolling back entire transactions at the time of importing the zpool.[citation needed]
- Snapshots can be taken manually or automatically. The older versions of the stored data that they contain can be exposed as full read-only file systems. They can also be exposed as historic versions of files and folders when used with CIFS (also known as SMB, Samba or file shares); this is known as "Previous versions", "VSS shadow copies", or "File history" on Windows, or AFP and "Apple Time Machine" on Apple devices.[77]
- Disks can be marked as 'spare'. A data pool can be set to automatically and transparently handle disk faults by activating a spare disk and beginning to resilver the data that was on the suspect disk onto it, when needed.
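The ditto-block behaviour in the list above can be sketched as a placement rule (illustrative only, not the actual allocator): a configurable number of copies is written, spread over different devices when the pool has enough of them.

```python
def write_with_copies(block_id, copies, devices):
    """Sketch of ditto blocks: place `copies` replicas of one block,
    on distinct devices when possible (round-robin simplification)."""
    return [devices[i % len(devices)] for i in range(copies)]

# Three devices, copies=2: the replicas land on different disks.
assert write_with_copies("blk", 2, ["d0", "d1", "d2"]) == ["d0", "d1"]
# Single-device pool: the copies still protect against corrupted
# sectors, but not against losing the whole disk.
assert write_with_copies("blk", 2, ["d0"]) == ["d0", "d0"]
```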
Limitations
[edit]- As of Solaris 10 Update 11 and Solaris 11.2, it was neither possible to reduce the number of top-level vdevs in a pool (except hot spares, cache, and log devices), nor to otherwise reduce pool capacity.[78] This functionality was said to be in development in 2007.[79] Enhancements allowing reduction of vdevs are under development in OpenZFS.[80] Online shrinking by removing non-redundant top-level vdevs has been supported since Solaris 11.4, released in August 2018,[81] and OpenZFS (ZoL) 0.8, released May 2019.[58]
- As of 2008[update] it was not possible to add a disk as a column to a RAID Z, RAID Z2 or RAID Z3 vdev. However, a new RAID Z vdev can be created instead and added to the zpool.[82]
- Some traditional nested RAID configurations, such as RAID 51 (a mirror of RAID 5 groups), are not configurable in ZFS without third-party tools.[83] Vdevs can only be composed of raw disks or files, not other vdevs, using the default ZFS management commands.[84] However, a ZFS pool effectively creates a stripe (RAID 0) across its vdevs, so the equivalent of a RAID 50 or RAID 60 is common.
- Reconfiguring the number of devices in a top-level vdev requires copying data offline, destroying the pool, and recreating the pool with the new top-level vdev configuration. There are two exceptions: extra redundancy can be added to an existing mirror at any time, and if all top-level vdevs are mirrors with sufficient redundancy, the zpool split[85] command can be used to remove a vdev from each top-level vdev in the pool, creating a second pool with identical data.
Data recovery
[edit]ZFS does not ship with a tool such as fsck, because the file system itself was designed to self-repair. So long as a storage pool was built with sufficient attention to the design of storage and redundancy of data, basic tools like fsck were never required. However, if the pool was compromised because of poor hardware, inadequate design or redundancy, or unfortunate mishap, to the point that ZFS was unable to mount it, there were traditionally no more advanced tools allowing an end-user to attempt partial salvage of the stored data from a badly corrupted pool.
Modern ZFS has improved considerably on this situation over time, and continues to do so:
- Removal or abrupt failure of caching devices no longer causes pool loss. (At worst, loss of the ZIL may lose very recent transactions, but the ZIL does not usually store more than a few seconds' worth of recent transactions. Loss of the L2ARC cache does not affect data.)
- If the pool is unmountable, modern versions of ZFS will attempt to identify the most recent consistent point at which the pool can be recovered, at the cost of losing some of the most recent changes to the contents. Copy-on-write means that older versions of data, including top-level records and metadata, may still exist even though they have been superseded, and if so, the pool can be wound back to a consistent state based on them. The older the data, the more likely it is that at least some blocks have been overwritten and that some data will be irrecoverable, so there is ultimately a limit on how far back the pool can be wound.
- Informally, tools exist to probe the reason why ZFS is unable to mount a pool, and guide the user or a developer as to manual changes required to force the pool to mount. These include using zdb (ZFS debug) to find a valid importable point in the pool, using dtrace or similar to identify the issue causing mount failure, or manually bypassing health checks that cause the mount process to abort, and allow mounting of the damaged pool.
- As of March 2018[update], a range of significantly enhanced methods are gradually being rolled out within OpenZFS. These include:[86]
- Code refactoring, and more detailed diagnostic and debug information on mount failures, to simplify diagnosis and fixing of corrupt pool issues;
- The ability to trust or distrust the stored pool configuration. This is particularly powerful, as it allows a pool to be mounted even when top-level vdevs are missing or faulty, or when top-level data is suspect, and also to rewind beyond a pool configuration change if that change was connected to the problem. Once the corrupt pool is mounted, readable files can be copied for safety, and it may turn out that data can be rebuilt even for missing vdevs, by using copies stored elsewhere in the pool.
- The ability to fix the situation where a disk needed in one pool was accidentally removed and added to a different pool, causing it to lose metadata related to the first pool and become unreadable.
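The rewind-on-import behaviour described above can be sketched as a selection over retained top-level records ("uberblock" and transaction-group numbering follow ZFS terminology; the selection logic here is a deliberate simplification):

```python
def best_import_point(uberblocks):
    """Sketch of rewind-on-import: among the retained top-level
    records, given as (txg, still_valid) pairs, pick the newest
    one that still verifies."""
    valid = [txg for txg, ok in uberblocks if ok]
    return max(valid) if valid else None

# The newest top-level record is damaged: import winds back one txg,
# losing only the most recent changes.
assert best_import_point([(100, True), (101, True), (102, False)]) == 101
# Nothing consistent remains: the pool cannot be recovered this way.
assert best_import_point([(1, False)]) is None
```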
OpenZFS and ZFS
[edit]Oracle Corporation ceased the public development of both ZFS and OpenSolaris after the acquisition of Sun in 2010. Some developers forked the last public release of OpenSolaris as the Illumos project. Because of the significant advantages of ZFS, it has been ported to several different platforms with different features and commands. To coordinate these development efforts and avoid fragmentation, OpenZFS was founded in 2013.
According to Matt Ahrens, one of the main architects of ZFS, over 50% of the original OpenSolaris ZFS code has been replaced in OpenZFS with community contributions as of 2019, making “Oracle ZFS” and “OpenZFS” politically and technologically incompatible.[87]
Commercial and open source products
[edit]- 2008: Sun shipped a line of ZFS-based 7000-series storage appliances.[88]
- 2013: Oracle shipped the ZS3 series of ZFS-based filers and took first place in the SPC-2 benchmark with one of them.[89]
- 2013: iXsystems ships ZFS-based NAS devices called FreeNAS (now named TrueNAS CORE) for SOHO, and TrueNAS for the enterprise.[90][91]
- 2014: Netgear ships a line of ZFS-based NAS devices called ReadyDATA, designed to be used in the enterprise.[92]
- 2015: rsync.net announces a cloud storage platform that allows customers to provision their own zpool and import and export data using zfs send and zfs receive.[93][94]
- 2020: iXsystems begins development of ZFS-based hyperconverged software called TrueNAS SCALE for SOHO, and TrueNAS for the enterprise.[91]
Oracle Corporation, closed source, and forking (from 2010)
[edit]In January 2010, Oracle Corporation acquired Sun Microsystems, and quickly discontinued the OpenSolaris distribution and the open source development model.[95][96] In August 2010, Oracle discontinued providing public updates to the source code of the Solaris OS/Networking repository, effectively turning Solaris 11 back into a closed source proprietary operating system.[97]
In response to the changing landscape of Solaris and OpenSolaris, the illumos project was launched via webinar[98] on Thursday, August 3, 2010, as a community effort of some core Solaris engineers to continue developing the open source version of Solaris, and complete the open sourcing of those parts not already open sourced by Sun.[99] illumos was founded as a Foundation, the Illumos Foundation, incorporated in the State of California as a 501(c)6 trade association. The original plan explicitly stated that illumos would not be a distribution or a fork. However, after Oracle announced discontinuing OpenSolaris, plans were made to fork the final version of the Solaris ON, allowing illumos to evolve into an operating system of its own.[100] As part of OpenSolaris, an open source version of ZFS was therefore integral within illumos.
ZFS was widely used within numerous platforms, as well as Solaris. Therefore, in 2013, the co-ordination of development work on the open source version of ZFS was passed to an umbrella project, OpenZFS. The OpenZFS framework allows any interested parties to collaboratively develop the core ZFS codebase in common, while individually maintaining any specific extra code which ZFS requires to function and integrate within their own systems.
Version history
[edit]| Old release |
| Latest FOSS stable release |
| ZFS Filesystem Version Number | Release | Significant changes |
|---|---|---|
| 1 | OpenSolaris Nevada[101] build 36 | First release |
| 2 | OpenSolaris Nevada b69 | Enhanced directory entries. In particular, directory entries now store the object type. For example, file, directory, named pipe, and so on, in addition to the object number. |
| 3 | OpenSolaris Nevada b77 | Support for sharing ZFS file systems over SMB. Case insensitivity support. System attribute support. Integrated anti-virus support. |
| 4 | OpenSolaris Nevada b114 | Properties: userquota, groupquota, userused and groupused |
| 5 | OpenSolaris Nevada b137 | System attributes; symlinks now their own object type |
| ZFS Pool Version Number | Release | Significant changes |
|---|---|---|
| 1 | OpenSolaris Nevada[101] b36 | First release |
| 2 | OpenSolaris Nevada b38 | Ditto Blocks |
| 3 | OpenSolaris Nevada b42 | Hot spares, double-parity RAID-Z (raidz2), improved RAID-Z accounting |
| 4 | OpenSolaris Nevada b62 | zpool history |
| 5 | OpenSolaris Nevada b62 | gzip compression for ZFS datasets |
| 6 | OpenSolaris Nevada b62 | "bootfs" pool property |
| 7 | OpenSolaris Nevada b68 | ZIL: adds the capability to specify a separate Intent Log device or devices |
| 8 | OpenSolaris Nevada b69 | ability to delegate zfs(1M) administrative tasks to ordinary users |
| 9 | OpenSolaris Nevada b77 | CIFS server support, dataset quotas |
| 10 | OpenSolaris Nevada b77 | Devices can be added to a storage pool as "cache devices" |
| 11 | OpenSolaris Nevada b94 | Improved zpool scrub / resilver performance |
| 12 | OpenSolaris Nevada b96 | Snapshot properties |
| 13 | OpenSolaris Nevada b98 | Properties: usedbysnapshots, usedbychildren, usedbyrefreservation, and usedbydataset |
| 14 | OpenSolaris Nevada b103 | passthrough-x aclinherit property support |
| 15 | OpenSolaris Nevada b114 | Properties: userquota, groupquota, userused and groupused; also required FS v4 |
| 16 | OpenSolaris Nevada b116 | STMF property support |
| 17 | OpenSolaris Nevada b120 | triple-parity RAID-Z |
| 18 | OpenSolaris Nevada b121 | ZFS snapshot holds |
| 19 | OpenSolaris Nevada b125 | ZFS log device removal |
| 20 | OpenSolaris Nevada b128 | zle compression algorithm that is needed to support the ZFS deduplication properties in ZFS pool version 21, which were released concurrently |
| 21 | OpenSolaris Nevada b128 | Deduplication |
| 22 | OpenSolaris Nevada b128 | zfs receive properties |
| 23 | OpenSolaris Nevada b135 | slim ZIL |
| 24 | OpenSolaris Nevada b137 | System attributes. Symlinks now their own object type. Also requires FS v5. |
| 25 | OpenSolaris Nevada b140 | Improved pool scrubbing and resilvering statistics |
| 26 | OpenSolaris Nevada b141 | Improved snapshot deletion performance |
| 27 | OpenSolaris Nevada b145 | Improved snapshot creation performance (particularly recursive snapshots) |
| 28 | OpenSolaris Nevada b147 | Multiple virtual device replacements |
Note: The Solaris version under development by Sun since the release of Solaris 10 in 2005 was codenamed 'Nevada', and was derived from the OpenSolaris codebase. 'Solaris Nevada' is the codename for the next-generation Solaris OS intended to eventually succeed Solaris 10, and this new code was pulled successively into new OpenSolaris 'Nevada' snapshot builds.[101] OpenSolaris is now discontinued and OpenIndiana has forked from it.[102][103] A final build (b134) of OpenSolaris was published by Oracle (2010-Nov-12) as an upgrade path to Solaris 11 Express.
Operating system support
[edit]List of operating systems, distributions and add-ons that support ZFS, the zpool version supported, and the Solaris build they are based on (if any):
| OS | Zpool version | Sun/Oracle Build # | Comments |
|---|---|---|---|
| Oracle Solaris 11.4 | 49 | 11.4.51 (11.4 SRU 51)[104] | |
| Oracle Solaris 11.3 | 37 | 0.5.11-0.175.3.1.0.5.0 | |
| Oracle Solaris 10 1/13 (U11) | 32 | ||
| Oracle Solaris 11.2 | 35 | 0.5.11-0.175.2.0.0.42.0 | |
| Oracle Solaris 11 2011.11 | 34 | b175 | |
| Oracle Solaris Express 11 2010.11 | 31 | b151a | licensed for testing only |
| OpenSolaris 2009.06 | 14 | b111b | |
| OpenSolaris (last dev) | 22 | b134 | |
| OpenIndiana | 5000 | b147 | distribution based on illumos; creates a name clash naming their build code 'b151a' |
| Nexenta Core 3.0.1 | 26 | b134+ | GNU userland |
| NexentaStor Community 3.0.1 | 26 | b134+ | up to 18 TB, web admin |
| NexentaStor Community 3.1.0 | 28 | b134+ | GNU userland |
| NexentaStor Community 4.0 | 5000 | b134+ | up to 18 TB, web admin |
| NexentaStor Enterprise | 28 | b134+ | not free, web admin |
| GNU/kFreeBSD "Squeeze" (Unsupported) | 14 | Requires package "zfsutils" | |
| GNU/kFreeBSD "Wheezy-9" (Unsupported) | 28 | Requires package "zfsutils" | |
| FreeBSD | 5000 | ||
| zfs-fuse 0.7.2 | 23 | suffered from performance issues; defunct | |
| ZFS on Linux 0.6.5.8 | 5000 | 0.6.0 release candidate has POSIX layer | |
| KQ Infotech's ZFS on Linux | 28 | defunct; code integrated into LLNL-supported ZFS on Linux | |
| BeleniX 0.8b1 | 14 | b111 | small-size live-CD distribution; once based on OpenSolaris |
| Schillix 0.7.2 | 28 | b147 | small-size live-CD distribution; as SchilliX-ON 0.8.0 based on OpenSolaris |
| StormOS "hail" | | | distribution once based on Nexenta Core 2.0+, Debian Linux; superseded by Dyson OS |
| Jaris | | | Japanese Solaris distribution; once based on OpenSolaris |
| MilaX 0.5 | 20 | b128a | small-size live-CD distribution; once based on OpenSolaris |
| FreeNAS 8.0.2 / 8.2 | 15 | ||
| FreeNAS 8.3.0 | 28 | based on FreeBSD 8.3 | |
| FreeNAS 9.1.0+ | 5000 | based on FreeBSD 9.1+ | |
| XigmaNAS 11.4.0.4/12.2.0.4 | 5000 | based on FreeBSD 11.4/12.2 | |
| Korona 4.5.0 | 22 | b134 | KDE |
| EON NAS (v0.6) | 22 | b130 | embedded NAS |
| EON NAS (v1.0beta) | 28 | b151a | embedded NAS |
| napp-it | 28/5000 | Illumos/Solaris | Storage appliance; OpenIndiana (Hipster), OmniOS, Solaris 11, Linux (ZFS management) |
| OmniOS CE | 28/5000 | illumos-OmniOS branch | minimal stable/LTS storage server distribution based on Illumos, community driven |
| SmartOS | 28/5000 | Illumos b151+ | minimal live distribution based on Illumos (USB/CD boot); cloud and hypervisor use (KVM) |
| macOS 10.5, 10.6, 10.7, 10.8, 10.9 | 5000 | via MacZFS; superseded by OpenZFS on OS X | |
| macOS 10.6, 10.7, 10.8 | 28 | via ZEVO; superseded by OpenZFS on OS X | |
| NetBSD | 22 | ||
| MidnightBSD | 6 | ||
| Proxmox VE | 5000 | native support since 2014, pve.proxmox.com/wiki/ZFS_on_Linux | |
| Ubuntu Linux 16.04 LTS+ | 5000 | native support via installable binary module, wiki.ubuntu.com/ZFS | |
| ZFSGuru 10.1.100 | 5000 | | |
See also
[edit]- Comparison of file systems
- List of file systems
- Versioning file system—List of versioning file systems
Notes
[edit]References
[edit]- ^ a b "What Is ZFS?". Oracle Solaris ZFS Administration Guide. Oracle. Archived from the original on March 4, 2016. Retrieved December 29, 2015.
- ^ "ZFS on Linux Licensing". GitHub. Retrieved May 17, 2020.
- ^ Corbet, Jonathan (September 17, 2013). "The OpenZFS project launches". LWN.net. Archived from the original on October 4, 2013. Retrieved October 1, 2013.
- ^ "OpenZFS Announcement". OpenZFS. September 17, 2013. Archived from the original on April 2, 2018. Retrieved September 19, 2013.
- ^ open-zfs.org /History Archived December 24, 2013, at the Wayback Machine "OpenZFS is the truly open source successor to the ZFS project [...] Effects of the fork (2010 to date)"
- ^ Sean Michael Kerner (September 18, 2013). "LinuxCon: OpenZFS moves Open Source Storage Forward". infostor.com. Archived from the original on March 14, 2014. Retrieved October 9, 2013.
- ^ Corbet, Jonathan (September 17, 2013). "The OpenZFS project launches". LWN.net. Archived from the original on October 11, 2016. Retrieved October 1, 2013.
- ^ "OpenZFS—Communities co-operating on ZFS code and features". freebsdnews.net. September 23, 2013. Archived from the original on October 14, 2013. Retrieved March 14, 2014.
- ^ "The Starline ZFS FAQ". Starline. Archived from the original on July 7, 2024. Retrieved July 20, 2024.
- ^ a b "19.4. zfs Administration". www.freebsd.org. Archived from the original on February 23, 2017. Retrieved February 22, 2017.
- ^ Salus, Peter (1994). A Quarter Century of Unix. Addison-Wesley. pp. 199–200. ISBN 0-201-54777-5.
- ^ "What are SunOS and Solaris?". Knowledge Base. Indiana University Technology Services. May 20, 2013. Archived from the original on November 4, 2013. Retrieved November 10, 2014.
- ^ Brown, David. "A Conversation with Jeff Bonwick and Bill Moore". ACM Queue. Association for Computing Machinery. Archived from the original on July 16, 2011. Retrieved November 17, 2015.
- ^ "ZFS: the last word in file systems". Sun Microsystems. September 14, 2004. Archived from the original on April 28, 2006. Retrieved April 30, 2006.
- ^ Matthew Ahrens (November 1, 2011). "ZFS 10 year anniversary". Archived from the original on June 28, 2016. Retrieved July 24, 2012.
- ^ a b Bonwick, Jeff (October 31, 2005). "ZFS: The Last Word in Filesystems". blogs.oracle.com. Archived from the original on June 19, 2013. Retrieved June 22, 2013.
- ^ "Sun Celebrates Successful One-Year Anniversary of OpenSolaris". Sun Microsystems. June 20, 2006. Archived from the original on September 28, 2008. Retrieved April 30, 2018.
- ^ Michael Singer (January 25, 2005). "Sun Cracks Open Solaris". InternetNews.com. Archived from the original on June 22, 2011. Retrieved April 12, 2010.
- ^ "ZFS FAQ at OpenSolaris.org". Sun Microsystems. Archived from the original on May 15, 2011. Retrieved May 18, 2011.
The largest SI prefix we liked was 'zetta' ('yotta' was out of the question)
- ^ Jeff Bonwick (May 3, 2006). "You say zeta, I say zetta". Jeff Bonwick's Blog. Archived from the original on February 23, 2017. Retrieved April 21, 2017.
So we finally decided to unpimp the name back to ZFS, which doesn't stand for anything.
- ^ "Oracle and NetApp dismiss ZFS lawsuits". theregister.co.uk. September 9, 2010. Archived from the original on September 9, 2017. Retrieved December 24, 2013.
- ^ The Extended file system (Ext) has metadata structure copied from UFS. "Rémy Card (Interview, April 1998)". April Association. April 19, 1999. Archived from the original on February 4, 2012. Retrieved February 8, 2012. (In French)
- ^ Vijayan Prabhakaran (2006). "IRON FILE SYSTEMS" (PDF). Doctor of Philosophy in Computer Sciences. University of Wisconsin-Madison. Archived (PDF) from the original on April 29, 2011. Retrieved June 9, 2012.
- ^ "Parity Lost and Parity Regained". Archived from the original on June 15, 2010. Retrieved November 29, 2010.
- ^ "An Analysis of Data Corruption in the Storage Stack" (PDF). Archived (PDF) from the original on June 15, 2010. Retrieved November 29, 2010.
- ^ "Impact of Disk Corruption on Open-Source DBMS" (PDF). Archived (PDF) from the original on June 15, 2010. Retrieved November 29, 2010.
- ^ Kadav, Asim; Rajimwale, Abhishek. "Reliability Analysis of ZFS" (PDF). Archived (PDF) from the original on September 21, 2013. Retrieved September 19, 2013.
- ^ Yupu Zhang; Abhishek Rajimwale; Andrea Arpaci-Dusseau; Remzi H. Arpaci-Dusseau (2010). "End-to-end data integrity for file systems: a ZFS case study" (PDF). USENIX Conference on File and Storage Technologies. CiteSeerX 10.1.1.154.3979. S2CID 5722163. Wikidata Q111972797. Retrieved December 6, 2010.
- ^ Larabel, Michael. "Benchmarking ZFS and UFS On FreeBSD vs. EXT4 & Btrfs On Linux". Phoronix Media 2012. Archived from the original on November 29, 2016. Retrieved November 21, 2012.
- ^ Larabel, Michael. "Can DragonFlyBSD's HAMMER Compete With Btrfs, ZFS?". Phoronix Media 2012. Archived from the original on November 29, 2016. Retrieved November 21, 2012.
- ^ a b c Bonwick, Jeff (December 8, 2005). "ZFS End-to-End Data Integrity". blogs.oracle.com. Archived from the original on April 3, 2012. Retrieved September 19, 2013.
- ^ Cook, Tim (November 16, 2009). "Demonstrating ZFS Self-Healing". blogs.oracle.com. Archived from the original on August 12, 2011. Retrieved February 1, 2015.
- ^ Ranch, Richard (May 4, 2007). "ZFS, copies, and data protection". blogs.oracle.com. Archived from the original on August 18, 2016. Retrieved February 2, 2015.
- ^ a b "zpoolconcepts.7 — OpenZFS documentation". openzfs.github.io. Retrieved April 5, 2023.
- ^ "ZFS Without Tears: Using ZFS without ECC memory". www.csparks.com. December 2015. Archived from the original on January 13, 2021. Retrieved June 16, 2020.
- ^ wdc.custhelp.com. "Difference between Desktop edition and RAID (Enterprise) edition drives". Archived from the original on January 5, 2015. Retrieved September 8, 2011.
- ^ a b c d Bonwick, Jeff (November 17, 2005). "RAID-Z". Jeff Bonwick's Blog. Oracle Blogs. Archived from the original on December 16, 2014. Retrieved February 1, 2015.
- ^ a b Leventhal, Adam (December 17, 2009). "Triple-Parity RAID and Beyond". ACM Queue. 7 (11): 30. doi:10.1145/1661785.1670144.
- ^ "ZFS Raidz Performance, Capacity and integrity". calomel.org. Archived from the original on November 27, 2017. Retrieved June 23, 2017.
- ^ "Why RAID 6 stops working in 2019". ZDNet. February 22, 2010. Archived from the original on October 31, 2014. Retrieved October 26, 2014.
- ^ "No fsck utility equivalent exists for ZFS. This utility has traditionally served two purposes, those of file system repair and file system validation." "Checking ZFS File System Integrity". Oracle. Archived from the original on January 31, 2013. Retrieved November 25, 2012.
- ^ "ZFS Scrubs". freenas.org. Archived from the original on November 27, 2012. Retrieved November 25, 2012.
- ^ "You should also run a scrub prior to replacing devices or temporarily reducing a pool's redundancy to ensure that all devices are currently operational." "ZFS Best Practices Guide". solarisinternals.com. Archived from the original on September 5, 2015. Retrieved November 25, 2012.
- ^ Jeff Bonwick. "128-bit storage: are you high?". oracle.com. Archived from the original on May 29, 2015. Retrieved May 29, 2015.
- ^ "ZFS: Boils the Ocean, Consumes the Moon (Dave Brillhart's Blog)". Archived from the original on December 8, 2015. Retrieved December 19, 2015.
- ^ "Solaris ZFS Administration Guide". Oracle Corporation. Archived from the original on January 13, 2021. Retrieved February 11, 2011.
- ^ "Encrypting ZFS File Systems". Archived from the original on June 23, 2011. Retrieved May 2, 2011.
- ^ "Having my secured cake and Cloning it too (aka Encryption + Dedup with ZFS)". Archived from the original on May 29, 2013. Retrieved October 9, 2012.
- ^ "ZFS—Debian Wiki". wiki.debian.org. Archived from the original on September 8, 2019. Retrieved December 10, 2019.
- ^ "Proposal: Consider adding warnings against using zfs native encryption along with send/recv in production". Github. Archived from the original on August 15, 2024. Retrieved August 15, 2024.
- ^ "PSA: ZFS has a data corruption bug when using native encryption and send/recv". Reddit. February 12, 2024. Archived from the original on August 15, 2024. Retrieved August 15, 2024.
- ^ "ZFS Fragmentation: Long-term Solutions". Github. Retrieved August 15, 2024.
- ^ "What are the best practices to keep ZFS from being too fragmented". Lawrence Systems. March 9, 2023. Archived from the original on August 15, 2024. Retrieved August 15, 2024.
- ^ "Solaris ZFS Enables Hybrid Storage Pools—Shatters Economic and Performance Barriers" (PDF). Sun.com. September 7, 2010. Archived (PDF) from the original on October 17, 2011. Retrieved November 4, 2011.
- ^ Gregg, Brendan. "ZFS L2ARC". Brendan's blog. Dtrace.org. Archived from the original on November 6, 2011. Retrieved October 5, 2012.
- ^ Gregg, Brendan (October 8, 2009). "Hybrid Storage Pool: Top Speeds". Brendan's blog. Dtrace.org. Archived from the original on April 5, 2016. Retrieved August 15, 2017.
- ^ "Solaris ZFS Performance Tuning: Synchronous Writes and the ZIL". Constantin.glez.de. July 20, 2010. Archived from the original on June 23, 2012. Retrieved October 5, 2012.
- ^ a b c "Release zfs-0.8.0". GitHub. OpenZFS. May 23, 2019. Archived from the original on July 28, 2021. Retrieved July 3, 2021.
- ^ "ZFS On-Disk Specification" (PDF). Sun Microsystems, Inc. 2006. Archived from the original (PDF) on December 30, 2008. See section 2.4.
- ^ "RAIDZ — OpenZFS documentation". openzfs.github.io. Archived from the original on February 9, 2023. Retrieved February 9, 2023.
- ^ Eric Sproul (May 21, 2009). "ZFS Nuts and Bolts". slideshare.net. pp. 30–31. Archived from the original on June 22, 2014. Retrieved June 8, 2014.
- ^ "ZFS Deduplication". blogs.oracle.com. Archived from the original on December 24, 2019. Retrieved November 25, 2019.
- ^ Gary Sims (January 4, 2012). "Building ZFS Based Network Attached Storage Using FreeNAS 8". TrainSignal Training. TrainSignal, Inc. Archived from the original (Blog) on May 7, 2012. Retrieved June 9, 2012.
- ^ Ray Van Dolson (May 2011). "[zfs-discuss] Summary: Deduplication Memory Requirements". zfs-discuss mailing list. Archived from the original on April 25, 2012.
- ^ "ZFSTuningGuide". Archived from the original on January 16, 2012. Retrieved January 3, 2012.
- ^ Chris Mellor (October 12, 2012). "GreenBytes brandishes full-fat clone VDI pumper". The Register. Archived from the original on March 24, 2013. Retrieved August 29, 2013.
- ^ Chris Mellor (June 1, 2012). "Newcomer gets out its box, plans to sell it cheaply to all comers". The Register. Archived from the original on August 12, 2013. Retrieved August 29, 2013.
- ^ Chris Mellor (December 11, 2014). "Dedupe, dedupe... dedupe, dedupe, dedupe: Oracle polishes ZFS diamond". The Register. Archived from the original on July 7, 2017. Retrieved December 17, 2014.
- ^ "Checksums and Their Use in ZFS". github.com. September 2, 2018. Archived from the original on July 19, 2019. Retrieved July 11, 2019.
- ^ "Solaris ZFS Administration Guide". Chapter 6 Managing ZFS File Systems. Archived from the original on February 5, 2011. Retrieved March 17, 2009.
- ^ "Smokin' Mirrors". blogs.oracle.com. May 2, 2006. Archived from the original on December 16, 2011. Retrieved February 13, 2012.
- ^ "ZFS Block Allocation". Jeff Bonwick's Weblog. November 4, 2006. Archived from the original on November 2, 2012. Retrieved February 23, 2007.
- ^ "Ditto Blocks — The Amazing Tape Repellent". Flippin' off bits Weblog. May 12, 2006. Archived from the original on May 26, 2013. Retrieved March 1, 2007.
- ^ "Adding new disks and ditto block behaviour". Archived from the original on August 23, 2011. Retrieved October 19, 2009.
- ^ "OpenSolaris.org". Sun Microsystems. Archived from the original on May 8, 2009. Retrieved May 22, 2009.
- ^ "What's new in Solaris 11 Express 2010.11" (PDF). Oracle. Archived (PDF) from the original on November 16, 2010. Retrieved November 17, 2010.
- ^ "10. Sharing — FreeNAS User Guide 9.3 Table of Contents". doc.freenas.org. Archived from the original on January 7, 2017. Retrieved February 23, 2017.
- ^ "Bug ID 4852783: reduce pool capacity". OpenSolaris Project. Archived from the original on June 29, 2009. Retrieved March 28, 2009.
- ^ Goebbels, Mario (April 19, 2007). "Permanently removing vdevs from a pool". zfs-discuss (Mailing list). Archived January 13, 2021, at the Wayback Machine.
- ^ Siebenmann, Chris. "Information on future vdev removal" Archived August 11, 2016, at the Wayback Machine. University of Toronto, blog. Quotes an informal Twitter announcement by Alex Reece Archived August 11, 2016, at the Wayback Machine.
- ^ "Data Management Features—What's New in Oracle® Solaris 11.4". Archived from the original on September 24, 2019. Retrieved October 9, 2019.
- ^ "Expand-O-Matic RAID Z". Adam Leventhal. April 7, 2008. Archived from the original on December 28, 2011. Retrieved April 16, 2012.
- ^ "ZFS Toy". SourceForge.net. Retrieved April 12, 2022.
- ^ "zpoolconcepts(7)". OpenZFS documentation. OpenZFS. June 2, 2021. Retrieved April 12, 2021.
"Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed."
- ^ "zpool(1M)". Download.oracle.com. June 11, 2010. Archived from the original on January 13, 2021. Retrieved November 4, 2011.
- ^ "Turbocharging ZFS Data Recovery". Archived from the original on November 29, 2018. Retrieved November 29, 2018.
- ^ "ZFS and OpenZFS". iXSystems. June 18, 2019. Archived from the original on April 18, 2020. Retrieved May 18, 2020.
- ^ "Sun rolls out its own storage appliances". techworld.com.au. November 11, 2008. Archived from the original on November 13, 2013. Retrieved November 13, 2013.
- ^ Chris Mellor (October 2, 2013). "Oracle muscles way into seat atop the benchmark with hefty ZFS filer". theregister.co.uk. Archived from the original on July 7, 2017. Retrieved July 7, 2014.
- ^ "Unified ZFS Storage Appliance built in Silicon Valley by iXsystem". ixsystems.com. Archived from the original on July 3, 2014. Retrieved July 7, 2014.
- ^ a b "TrueNAS 12 & TrueNAS SCALE are officially here!". ixsystems.com. October 29, 2020. Archived from the original on January 4, 2021. Retrieved January 2, 2021.
- ^ "ReadyDATA 516—Unified Network Storage" (PDF). netgear.com. Archived (PDF) from the original on July 15, 2014. Retrieved July 7, 2014.
- ^ Jim Salter (December 17, 2015). "rsync.net: ZFS Replication to the cloud is finally here—and it's fast". arstechnica.com. Archived from the original on August 22, 2017. Retrieved August 21, 2017.
- ^ rsync.net, Inc. "Cloud Storage with ZFS send and receive over SSH". rsync.net. Archived from the original on July 21, 2017. Retrieved August 21, 2017.
- ^ Steven Stallion / Oracle (August 13, 2010). "Update on SXCE". Iconoclastic Tendencies. Archived from the original on November 9, 2020. Retrieved April 30, 2018.
- ^ Alasdair Lumsden. "OpenSolaris cancelled, to be replaced with Solaris 11 Express". osol-discuss (Mailing list). Archived from the original on August 16, 2010. Retrieved November 24, 2014.
- ^ Paul, Ryan (August 16, 2010). "Solaris still sorta open, but OpenSolaris distro is dead" Archived September 5, 2017, at the Wayback Machine. Ars Technica.
- ^ Garrett D'Amore (August 3, 2010). "Illumos - Hope and Light Springs Anew - Presented by Garrett D'Amore" (PDF). illumos.org. Archived (PDF) from the original on April 7, 2025. Retrieved August 3, 2010.
- ^ "Whither OpenSolaris? Illumos Takes Up the Mantle". November 20, 2012. Archived from the original on September 26, 2015.
- ^ Garrett D'Amore (August 13, 2010). "The Hand May Be Forced". Archived from the original on August 17, 2010. Retrieved November 14, 2013.
- ^ a b c "While under Sun Microsystems' control, there were bi-weekly snapshots of Solaris Nevada (the codename for the next-generation Solaris OS to eventually succeed Solaris 10) and this new code was then pulled into new OpenSolaris preview snapshots available at Genunix.org. The stable releases of OpenSolaris are based off of [sic] these Nevada builds." Larabel, Michael. "It Looks Like Oracle Will Stand Behind OpenSolaris". Phoronix Media. Archived from the original on November 29, 2016. Retrieved November 21, 2012.
- ^ Ljubuncic, Igor (May 23, 2011). "OpenIndiana — there's still hope". DistroWatch. Archived from the original on October 27, 2012. Retrieved November 21, 2012.
- ^ "Welcome to Project OpenIndiana!". Project OpenIndiana. September 10, 2010. Archived from the original on November 27, 2012. Retrieved September 14, 2010.
- ^ "ZFS Pool Versions". Oracle Corporation. 2022. Archived from the original on December 21, 2022. Retrieved January 1, 2023.
Bibliography
- Watanabe, Scott (November 23, 2009). Solaris ZFS Essentials (1st ed.). Prentice Hall. p. 256. ISBN 978-0-13-700010-4. Archived from the original on October 1, 2012.
External links
- Fork Yeah! The Rise and Development of Illumos—Slide show covering much of the history of Solaris, the decision to open source by Sun, the creation of ZFS, and the events causing it to be closed-sourced and forked after Oracle's acquisition.
- The best cloud File System was created before the cloud existed (archived on Dec. 15, 2018)
- Comparison of SVM mirroring and ZFS mirroring
- EON ZFS Storage (NAS) distribution
- End-to-end Data Integrity for File Systems: A ZFS Case Study
- ZFS – The Zettabyte File System (archived on Feb. 28, 2013)
- ZFS and RAID-Z: The Über-FS?
- ZFS: The Last Word In File Systems, by Jeff Bonwick and Bill Moore (archived on Aug. 29, 2017)
- Visualizing the ZFS intent log (ZIL) at the Wayback Machine (archived August 28, 2023), April 2013, by Aaron Toponce
- Features of illumos including OpenZFS
- Previous wiki page with more links: Getting Started with ZFS, Sep. 15, 2014 (archived on Dec. 30, 2018), part of the illumos documentation
Overview
Definition and Core Concepts
ZFS is an open-source file system and logical volume manager that integrates both functionalities into a single, unified system, originally engineered with 128-bit addressing to handle storage capacities up to 256 quadrillion zettabytes (2^128 bytes). Developed by Sun Microsystems and initially named the Zettabyte File System to reflect its capacity ambitions, it is now known simply as ZFS and maintained as an open-source project under the Common Development and Distribution License (CDDL) by the OpenZFS community. This design addresses limitations in traditional storage systems by combining file system semantics with volume management, enabling efficient handling of massive datasets without the complexity of separate layers.[1]

Several key concepts define ZFS's storage organization. A zpool (ZFS pool) is the top-level storage construct: it aggregates physical devices into a single manageable entity that serves as the root of the ZFS hierarchy and provides raw storage capacity.[7] Within a zpool, storage is logically divided into datasets, which encompass file systems, block volumes, and similar entities; datasets dynamically share the pool's space, supporting quotas and reservations while eliminating fixed-size allocations.[1] The fundamental building blocks of a zpool are vdevs (virtual devices), which group one or more physical storage devices, such as disks or partitions, into configurations that provide redundancy, performance, or expansion.[1]

This pooled storage model simplifies administration by removing the need for traditional disk partitioning and volume slicing, since space is allocated on demand from the shared pool across all datasets.[7] A further benefit is end-to-end data integrity, achieved through 256-bit checksums on all data and metadata, coupled with a copy-on-write transactional model that ensures atomic updates and prevents silent corruption.
This approach allows ZFS to verify and repair data proactively, providing robust protection in environments prone to hardware faults.[1]

Design Principles and Goals
ZFS was developed with three core goals: strong data integrity to prevent corruption, simplified storage administration to reduce complexity for users, and immense scalability through 128-bit addressing, supporting capacities up to 256 quadrillion zettabytes (2^128 bytes). These objectives addressed longstanding limitations in traditional file systems, aiming to create a robust solution for modern storage needs without relying on hardware-specific assumptions.[8]

Central to ZFS's design is the pooled storage model, which eliminates the traditional concept of fixed volumes and allows dynamic allocation of storage across disks, treating them much as a virtual memory system treats memory modules.[9] This promotes flexibility by letting storage be shared and expanded seamlessly, while software-based redundancy ensures reliability independent of specific hardware configurations.[10] The system also provides transactional consistency through a copy-on-write mechanism, ensuring atomic updates and maintaining a consistent on-disk state even across failures.

The design drew on lessons from earlier file systems such as the Unix File System (UFS), particularly fragmentation, which led to inefficient space utilization, and bit rot, the silent data corruption that accumulates over time through media degradation or transmission errors.[8] By verifying data end to end and distrusting hardware components, ZFS aimed to mitigate these risks proactively.[9] ZFS was targeted primarily at enterprise servers and network-attached storage (NAS) environments, especially data centers managing petabyte-scale datasets, where reliability and ease of management are paramount.

History
Origins at Sun Microsystems (2001–2010)
ZFS development commenced in the summer of 2001 at Sun Microsystems, led by file system architect Jeff Bonwick, who formed a core team including Matthew Ahrens and Bill Moore to create a next-generation pooled storage system.[3] The initiative stemmed from Sun's recognition of the growing complexity of managing large-scale enterprise storage on SPARC systems running Solaris, where traditional file systems like UFS required cumbersome volume managers to handle capacities beyond the terabyte scale, leading to administrative overhead and reliability problems in data centers.[11] Bonwick, drawing on prior experience with slab allocators and storage challenges, envisioned ZFS as a unified solution that would simplify administration while scaling for Sun's high-end server market.[12]

The project was publicly announced on September 14, 2004, highlighting its approach to storage pooling and data integrity, though implementation continued in parallel with Solaris enhancements.[13] Key early milestones included the introduction of pooled storage, which replaced rigid volume-based partitioning with dynamic allocation across devices. In June 2006, ZFS was first integrated into the Solaris 10 6/06 update release, marking its production availability and enabling users to create ZFS file systems alongside legacy options.[14] The ZFS source code had been released as open-source software under the Common Development and Distribution License (CDDL) in November 2005 as part of the OpenSolaris project, fostering community contributions while the feature remained absent from commercial Solaris releases until the 2006 integration.[15]

Initial adoption was confined to Solaris platforms, primarily on SPARC and x86 architectures, where ZFS gained traction among enterprise users for simplifying storage management in Sun's server ecosystems.
By the late 2000s, experimental ports emerged: a FreeBSD integration shipped in FreeBSD 7.0 in 2008, and initial Linux porting efforts began around the same time, though these remained non-production efforts while development stayed Solaris-centric during Sun's tenure.[3]

Oracle Acquisition and OpenZFS Emergence (2010–Present)
In January 2010, Oracle Corporation completed its acquisition of Sun Microsystems for $7.4 billion, gaining control over Solaris and ZFS.[16] Following the acquisition, ZFS became the default file system in Oracle Solaris 11, released in November 2011, providing advanced data management capabilities including built-in redundancy and scalability.[17] However, Oracle moved ZFS development behind closed doors, which slowed the flow of new features to the community and prompted concern among open-source developers about the technology's future.[18]

In response to Oracle's shift, the open-source community forked ZFS, culminating in the official announcement of the OpenZFS project in September 2013.[19] This collaborative effort, led by developers from the illumos, FreeBSD, and Linux ecosystems, aimed to unify and advance ZFS development independently of Oracle, maintaining on-disk compatibility with the last open-source Solaris ZFS pool format, version 28.[3] The fork addressed the fragmentation caused by the acquisition, and the first stable release of ZFS on Linux followed in 2013 with the 0.6 series, enabling broader platform adoption.[3]

Subsequent OpenZFS releases marked significant advancements.
OpenZFS 2.0, released in late 2020, unified development across Linux and FreeBSD and introduced persistent L2ARC, sequential resilvering, and other performance improvements.[20] OpenZFS 2.1, released in 2021, introduced dRAID (distributed RAID) for faster rebuilds with distributed spares, along with support for CPU/memory hotplugging.[21] OpenZFS 2.2, released in 2023, introduced block cloning for efficient file duplication, corrective zfs receive for healing corrupted data, and support for Linux 6.5.[22]

The OpenZFS 2.3 series (first released in January 2025, with point releases through 2.3.5 as of November 2025) introduced RAIDZ expansion, which allows disks to be added to existing RAID-Z vdevs without downtime or rebuilding, along with fast deduplication, direct I/O for improved NVMe performance, and support for longer filenames.[23][24] Preparations for OpenZFS 2.4 include a first release candidate with enhancements such as default user/group/project quotas and uncached I/O improvements.[25]

Licensing tensions persist: ZFS's Common Development and Distribution License (CDDL) is incompatible with the GNU General Public License (GPL) of the Linux kernel, so ZFS must be distributed as separate modules rather than integrated into the kernel.[26]

Architecture
Pooled Storage and Datasets
ZFS employs a pooled storage model that aggregates multiple physical storage devices into a single logical unit known as a storage pool, eliminating the need for traditional volume managers and fixed-size partitions.[27] This approach allows all datasets within the pool to share the available space dynamically, with no predefined allocations limiting individual file systems or volumes.[27] Storage pools are created with the zpool create command, which combines whole disks or partitions into virtual devices (vdevs) without requiring slicing or formatting in advance.[28]
Virtual devices, or vdevs, form the building blocks of a ZFS pool and define its physical organization.[27] Common vdev types include stripes for simple aggregation of devices, mirrors for duplicating data across disks, and RAID-Z variants for parity-based redundancy across multiple disks.[6] Once created, the pool presents a unified namespace from which datasets can draw storage as needed, supporting flexible growth without disrupting operations.[27]
Datasets in ZFS represent the logical containers for data and include several types: file systems for POSIX-compliant hierarchical storage, volumes (zvols) that emulate block devices for use with legacy applications, and snapshots that capture point-in-time read-only views of other datasets.[29][30] ZFS file systems, in particular, mount directly and support features like quotas and reservations to manage space allocation within the pool.[6]
Each dataset inherits properties from its parent but can override them for customization, such as setting mountpoints to control where file systems appear in the directory hierarchy or enabling compression to reduce storage footprint.[31] These properties facilitate administrative control, allowing operators to apply settings like compression=on across hierarchies for efficient data handling.[32]
Pools support online expansion by adding new vdevs with the zpool add command, which immediately increases available capacity without downtime or data migration.[6] Hot spares can also be designated using zpool add pool spare device, enabling automatic replacement of failed components to maintain availability.[32] This expandability ensures that storage can scale incrementally as needs grow.
During pool creation, the ashift property specifies the alignment shift value, determining the minimum block size (e.g., 512 bytes for ashift=9 or 4 KiB for ashift=12) for optimal alignment with modern disk sector sizes and efficient capacity utilization. As the foundational layer, ZFS pools enable advanced features like data integrity verification and redundancy mechanisms by organizing storage in a way that supports end-to-end checksumming and fault-tolerant layouts.[33]
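The arithmetic behind ashift can be illustrated with a short sketch (illustrative Python, not ZFS code; the function names are hypothetical): the ashift value fixes the vdev's minimum block size as a power of two, and any smaller allocation is rounded up to that boundary.

```python
def min_block_size(ashift: int) -> int:
    """Minimum allocatable block size implied by a vdev's ashift."""
    return 1 << ashift

def aligned_size(nbytes: int, ashift: int) -> int:
    """Round an allocation up to the next sector boundary, as a pool
    must do when a logical write is smaller than the device sector."""
    sector = 1 << ashift
    return ((nbytes + sector - 1) // sector) * sector

assert min_block_size(9) == 512     # ashift=9  -> 512-byte sectors
assert min_block_size(12) == 4096   # ashift=12 -> 4 KiB sectors
```

This is why an ashift that is too small for a 4 KiB-sector drive hurts performance (the device must read-modify-write), while an ashift that is too large wastes capacity on small blocks.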
Copy-on-Write Transactional Model
ZFS employs a copy-on-write (COW) transactional model to apply updates atomically, so the on-disk file system state remains consistent at all times. Any modification to data or metadata allocates new blocks on disk rather than overwriting existing ones; the original blocks are preserved until the entire transaction completes successfully. A crash during an update therefore leaves the prior consistent state intact rather than a partially written one.[34]

Writes are organized into transaction groups (TXGs), which batch multiple file system operations into cohesive units synced to stable storage approximately every five seconds. Each TXG directs incoming writes to unused space on disk, updates in-memory metadata structures, and commits the group only if all components succeed; failed operations within a TXG are discarded, maintaining atomicity across the batch. The ZFS intent log (ZIL) captures synchronous writes for immediate durability, while the TXG mechanism handles the bulk of asynchronous updates.[35][36]

Atomic commitment of a TXG occurs via uberblocks, which act as root pointers to the pool's metadata trees and are written at the end of each group. A new uberblock references the updated locations of modified blocks and metadata, while older uberblocks in a fixed ring buffer (typically 128 entries) remain until overwritten by later cycles; on import, ZFS scans this ring and selects the uberblock with the highest TXG number as the valid root. Old data persists until the new uberblock takes effect, avoiding any window of inconsistent metadata.[37]

In implementation, ZFS structures metadata as trees of block pointers, where each pointer embeds the target block's location, birth TXG, and checksum.
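The TXG/uberblock commit cycle can be sketched with a toy model (hypothetical Python, greatly simplified; real ZFS persists these structures on disk and keeps multi-level trees):

```python
# Toy model of copy-on-write commit: blocks are never overwritten; an
# update writes new leaf and root blocks, and only the final uberblock
# append (highest TXG) makes them the live state of the pool.

class Pool:
    def __init__(self):
        self.blocks = {}        # block id -> payload (immutable once written)
        self.uberblocks = []    # ring of (txg, root block id)
        self.next_id = 0

    def _write(self, payload):
        bid = self.next_id
        self.next_id += 1
        self.blocks[bid] = payload   # always allocate fresh space
        return bid

    def commit(self, leaves):
        """Write new leaves, a new root pointing at them, then an uberblock."""
        leaf_ids = [self._write(data) for data in leaves]
        root = self._write({"children": leaf_ids})
        txg = (self.uberblocks[-1][0] + 1) if self.uberblocks else 1
        self.uberblocks.append((txg, root))   # the atomic commit point

    def active_root(self):
        """On import, the uberblock with the highest TXG wins."""
        return max(self.uberblocks)[1]

pool = Pool()
pool.commit([b"old data"])               # TXG 1: leaf 0, root 1
pool.commit([b"new data"])               # TXG 2: leaf 2, root 3
assert pool.blocks[pool.active_root()]["children"] == [2]
assert pool.blocks[0] == b"old data"     # prior state still intact on "disk"
```

If a crash occurred before the second uberblock append, the scan in active_root would still find TXG 1 and the pool would come back in its previous consistent state.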
Modifying a leaf block involves writing a new version with its checksum, then recursively copying and updating parent pointers up the tree, committing via the uberblock only once all levels are safely persisted. This hierarchical COW propagation ensures end-to-end consistency without traditional locking for reads during writes.[38]

The model guarantees crash consistency: reboots always resume from a complete prior TXG, eliminating the need for file system check or repair tools, and partial-write scenarios that could corrupt data cannot occur. Because unmodified blocks are retained after a change, the approach also enables lightweight snapshots that reference the state at a specific TXG without halting I/O.[34]

Core Features
Data Integrity and Self-Healing
ZFS ensures data integrity through end-to-end checksums computed for every block of data and metadata. These checksums, typically 256 bits long, use the Fletcher-4 algorithm by default or the cryptographically stronger SHA-256 option, letting administrators choose based on performance and security needs.[6] The checksum for a block is computed from its contents and stored in the parent block pointer within ZFS's Merkle tree structure, rather than alongside the data itself, enabling verification across the entire I/O path from application to storage device.[33] This separation detects silent data corruption, such as bit rot, misdirected writes, or hardware faults, that traditional filesystems might miss.[39]

Self-healing activates when a checksum mismatch is detected during a read or a scan: ZFS automatically repairs the affected block using redundant copies available through configurations such as mirroring or RAID-Z. If one copy is corrupt, ZFS retrieves verified data from a healthy redundant source, reconstructs the block, and overwrites the erroneous version, preventing propagation and maintaining pool consistency without user intervention.[33][6] This process relies on the underlying redundancy to guarantee that a correct copy exists, providing proactive protection against degradation over time.[40]

Scrubbing complements self-healing by performing periodic, comprehensive scans of the entire storage pool to verify the checksums of all blocks.
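The Fletcher-4 update rule is simple enough to sketch directly. The following model (illustrative Python, not the optimized ZFS implementation) consumes a block as 32-bit little-endian words feeding four 64-bit running sums, which together form the 256-bit checksum:

```python
import struct

MASK64 = (1 << 64) - 1

def fletcher4(data: bytes):
    """Fletcher-4 sketch: each 32-bit word updates four cascading
    64-bit accumulators; the tuple (a, b, c, d) is the checksum."""
    assert len(data) % 4 == 0          # blocks are word-aligned
    a = b = c = d = 0
    for (w,) in struct.iter_unpack("<I", data):
        a = (a + w) & MASK64
        b = (b + a) & MASK64
        c = (c + b) & MASK64
        d = (d + c) & MASK64
    return a, b, c, d

block = b"some data..."
stored = fletcher4(block)              # kept in the parent block pointer
assert fletcher4(block) == stored      # clean read verifies
assert fletcher4(b"x" + block[1:]) != stored   # a flipped byte is detected
```

On a mismatch like the last line, ZFS would re-issue the read against a mirror copy or reconstruct the block from RAID-Z parity, then rewrite the bad copy.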
During a scrub, ZFS traverses the metadata tree, reads each block, recomputes its checksum, and compares it to the stored value; mismatches trigger self-healing repairs where redundancy allows, with the work prioritized low to minimize impact on normal I/O.[33][40] Scrubs are essential for catching latent errors not encountered in routine access patterns, ensuring long-term data reliability across the pool.[39]

Metadata receives extra protection to safeguard the filesystem's structural integrity: all metadata is kept in at least two copies via ditto blocks, distributed across different devices where possible. Pool-wide metadata uses three ditto blocks and filesystem metadata two, allowing recovery from single-block corruption without pool-wide failure.[39][41]

Redundancy with RAID-Z and Mirroring
ZFS implements redundancy through virtual devices (vdevs) configured as either mirrors or RAID-Z groups, providing fault tolerance without hardware RAID controllers. These configurations let ZFS detect and repair corruption using its built-in checksums and self-healing mechanisms, with redundant copies or parity used to reconstruct lost data. By managing I/O directly in software, ZFS preserves end-to-end data integrity and avoids the pitfalls of hardware RAID, such as inconsistent metadata or unverified parity.

Mirroring creates exact copies of data across multiple devices within a vdev, similar to traditional RAID-1 but extended to three-way (or wider) replication for higher fault tolerance. A two-way mirror withstands one device failure and a three-way mirror two, with usable capacity limited to the size of a single device regardless of the number of copies. Data is written synchronously to all devices in the mirror, giving fast reads through parallel access and quick rebuilds through simple block copies rather than parity computation, which makes mirrors particularly suitable for solid-state drives (SSDs).

To create a mirrored pool, the zpool create command takes the mirror keyword followed by the device paths, such as zpool create tank mirror /dev/dsk/c1t0d0 /dev/dsk/c1t1d0; multiple mirror vdevs can be added to stripe data across them for increased capacity and performance. Different vdev types can be combined in a single pool, but nesting is not supported for standard vdevs, and the pool's overall redundancy is determined by its least redundant vdev. Vdev types cannot be converted after creation, limiting certain post-creation modifications.[42][43][44]
RAID-Z extends parity-based redundancy inspired by RAID-5, but with dynamic stripe widths and integrated safeguards against the "write hole", in which a partial write during a power failure can desynchronize data and parity. In a RAID-Z vdev, data blocks are striped across multiple devices with distributed parity computed using finite field arithmetic, allowing reconstruction of lost data without the fixed stripe sizes that plague traditional RAID.

The variants are RAID-Z1 with single parity (tolerating one device failure), RAID-Z2 with double parity (tolerating two), and RAID-Z3 with triple parity (tolerating three), suited to large deployments where capacity efficiency is prioritized over mirroring's simplicity. For example, a RAID-Z1 vdev of three devices provides the capacity of two while protecting against one failure; creation uses the raidz, raidz1, raidz2, or raidz3 keywords in zpool create, such as zpool create tank raidz /dev/dsk/c1t0d0 /dev/dsk/c1t1d0 /dev/dsk/c1t2d0.

ZFS supports wide stripes in RAID-Z, accommodating up to 1024 devices per vdev, though practical limits are usually far lower due to hardware constraints. Like mirrors, RAID-Z vdevs integrate with ZFS's copy-on-write model for atomic updates, and once established, a vdev's type cannot be converted.[45][46]
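The single-parity case can be illustrated with a toy sketch (hypothetical Python; real RAID-Z adds dynamic stripe widths, and RAID-Z2/Z3 compute additional parity over a Galois field). The parity column is the XOR of the data columns, so any one lost column can be rebuilt from the survivors:

```python
def xor_columns(columns):
    """XOR equal-length byte strings together, byte by byte."""
    out = bytearray(len(columns[0]))
    for col in columns:
        for i, byte in enumerate(col):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # stripes on three data disks
parity = xor_columns(data)           # stripe on the parity disk

# Disk 1 fails: rebuild its column from parity plus the surviving disks.
rebuilt = xor_columns([parity, data[0], data[2]])
assert rebuilt == data[1]
```

Because ZFS checksums every block independently, reconstruction can also be verified: a rebuilt block whose checksum does not match is known to be wrong, which traditional RAID cannot detect.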
Advanced Features
Snapshots, Clones, and Replication
ZFS snapshots provide read-only, point-in-time images of datasets, capturing the state of a filesystem or volume at a specific moment. Snapshots are created atomically, ensuring consistency without interrupting ongoing operations, and can be taken manually with the zfs snapshot command or automatically via scheduled tasks; dataset properties such as snapshot_limit can cap how many snapshots a dataset may accumulate.[6][47]
Leveraging ZFS's copy-on-write (COW) transactional model, snapshots are highly space-efficient, initially consuming minimal additional storage as they share unchanged blocks with the active dataset; space usage only increases for blocks modified after the snapshot is taken.[48] This design allows multiple snapshots to coexist with low overhead, enabling features like rapid recovery from errors or versioning of data changes. Snapshots are accessible via the .zfs/snapshot directory within the dataset, facilitating file-level restores without full dataset rollbacks.[49]
Clones extend snapshot functionality by creating writable copies that initially share the same blocks as the source snapshot, promoting efficient duplication for development or testing environments. A clone is generated using the zfs clone command, specifying a snapshot as the origin, and behaves as a full dataset until modifications occur, at which point it allocates new space for altered data via COW.[50] Clones depend on their origin snapshot, preventing its deletion until the clone is destroyed or promoted; promotion via zfs promote reverses the parent-child relationship, making the clone the primary dataset and independent of the snapshot, while the original dataset becomes a clone dependent on the origin snapshot. This allows the original dataset to be renamed or destroyed if no longer needed. However, the demoted original dataset now depends on the snapshot, so attempting to destroy the snapshot fails with an error such as "snapshot has dependent clones". To resolve this, verify dependencies using zfs list -o name,origin,clones or zfs get clones <snapshot>. If the demoted dataset is unneeded, destroy it first with zfs destroy <original_dataset>, then destroy the snapshot. Use -r or -R with caution for recursive destruction, as these can remove unintended dependents. This mechanism supports use cases such as branching datasets for software testing or creating isolated environments without duplicating storage.[49][51]
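The snapshot, clone, and promote workflow above can be sketched as follows. The dataset names (tank/app, tank/app-test) are hypothetical, and the commands assume an existing pool named tank.

```shell
# Hypothetical dataset names; requires an existing pool "tank".
zfs snapshot tank/app@golden               # atomic, read-only point-in-time image
zfs clone tank/app@golden tank/app-test    # writable clone sharing unchanged blocks
zfs promote tank/app-test                  # reverse the clone/origin relationship
zfs list -o name,origin                    # inspect remaining dependencies
zfs destroy tank/app                       # the demoted original can now be removed
```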
Replication in ZFS utilizes the zfs send and zfs receive commands to stream snapshot data, enabling efficient backup and synchronization across pools or systems, including over networks via tools like SSH. Full streams replicate an entire snapshot, while incremental streams transmit only changes between two snapshots, reducing bandwidth and time for ongoing replication tasks.[52] These streams can recreate snapshots, clones, or entire hierarchies on the receiving end, supporting disaster recovery and remote mirroring; for example, zfs send -i pool/data@older pool/data@newer | ssh remote zfs receive pool/data performs an incremental update.[53] Since OpenZFS 2.2, block cloning enhances replication efficiency for file-level copies, though it requires careful configuration to avoid known issues.[54]
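A full-plus-incremental replication cycle might look like the following sketch. The dataset tank/data, the snapshot names, and the remote host backup are illustrative assumptions; the receiving side must have a pool able to hold the stream.

```shell
# Hypothetical names; "backup" is a remote host reachable over SSH.
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backup zfs receive pool/data    # full stream

zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | \
    ssh backup zfs receive pool/data                            # changes only
```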
Common use cases for these features include data backup through periodic snapshots and incremental sends, application testing via disposable clones, and versioning to track changes in critical datasets like databases or user files. By combining snapshots with replication, ZFS enables resilient workflows, such as rolling back to previous states or maintaining offsite copies with minimal resource overhead.[6][55]
Compression, Deduplication, and Encryption
ZFS supports inline compression to reduce storage requirements by transparently compressing data blocks during writes, with the default algorithm being LZ4 for its balance of speed and moderate compression ratios.[56] Other supported algorithms include gzip (levels 1-9 for varying ratios at the cost of higher CPU usage) and zstd (levels 1-19, offering gzip-like ratios with LZ4-like performance, integrated into OpenZFS for enhanced flexibility).[56][57] Compression is applied at the dataset level via the compression property and operates on fixed-size blocks, providing space savings particularly effective for text, logs, and databases while adding minimal overhead on modern hardware.[58]
Deduplication in ZFS eliminates redundant data at the block level by computing a 256-bit SHA-256 checksum for each block and storing unique blocks only once, using the Deduplication Table (DDT) as an on-disk hash table implemented via the ZFS Attribute Processor (ZAP).[59] The DDT resides in the pool's metadata and requires significant RAM for caching to avoid performance degradation, making it suitable for environments with high redundancy like virtual machine storage where identical OS images or application blocks are common.[59] Enabled per-dataset with the dedup property (e.g., sha256), it integrates with the copy-on-write model but demands careful consideration of memory resources, as the table can grow substantially with unique blocks.[60]
Native encryption, introduced in OpenZFS 0.8.0 and matured in version 2.2.0, provides at-rest protection at the dataset or zvol level using AES algorithms, specifically AES-128-CCM, AES-256-CCM, or AES-256-GCM for authenticated encryption.[61] Keys are managed per-dataset, with a user-supplied master key (passphrase-derived or raw) wrapping child keys for inheritance, stored encrypted in the pool's metadata to enable seamless access across mounts without re-prompting.[62] Encryption is transparent and hardware-accelerated where available, supporting features like snapshots while ensuring data confidentiality without impacting the self-healing checksums.[61]
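The three features can each be enabled per dataset through properties, as sketched below. The property names are real ZFS properties; the dataset names are illustrative, and the encrypted dataset prompts interactively for its passphrase.

```shell
# Dataset names are placeholders; each feature is a per-dataset property.
zfs set compression=zstd tank/logs         # transparent inline compression
zfs get compressratio tank/logs            # observed space savings so far

zfs set dedup=sha256 tank/vmimages         # block-level dedup; needs ample RAM

zfs create -o encryption=aes-256-gcm \
    -o keyformat=passphrase tank/secure    # natively encrypted dataset (prompts)
```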
These features interact sequentially during writes: data is first compressed (if enabled), then checked for deduplication against the DDT using the post-compression checksum, and finally encrypted before storage, optimizing efficiency by applying reductions before security layers.[63] In OpenZFS 2.2.4 (released May 2024), fast deduplication enhancements reduced legacy overhead for inline processing.[64]
Performance and Optimization
Caching Mechanisms
ZFS employs a multi-tiered caching strategy to enhance I/O performance by minimizing access times to frequently used data and optimizing write operations. The primary tier is the Adaptive Replacement Cache (ARC), which operates in main memory as an in-RAM cache for filesystem and volume data. Unlike traditional Least Recently Used (LRU) policies, ARC uses an adaptive algorithm that maintains four lists—a most-recently-used list, a most-frequently-used list, and a ghost list for each that tracks recently evicted entries—to better predict future accesses and reduce cache misses. This design improves hit rates for read-heavy workloads by dynamically adjusting based on access patterns.[59]

Extending ARC beyond available RAM, the Level 2 Adaptive Replacement Cache (L2ARC) provides secondary read caching on fast solid-state drives (SSDs), acting as an overflow for hot data evicted from ARC. L2ARC prefetches data likely to be reused, storing it on SSDs to bridge the speed gap between RAM and spinning disks, thereby accelerating subsequent reads without redundant disk seeks. It employs an adaptive eviction policy similar to ARC's, ensuring only valuable blocks are retained, though it lacks redundancy and relies on the primary pool for data persistence.[65]

For write optimization, the ZFS Intent Log (ZIL) records synchronous write operations to ensure durability. A Separate Log device (SLOG) can offload the ZIL to a dedicated fast storage medium, such as an SSD, NVRAM, or a small partition on an NVMe drive, accelerating acknowledgment of synchronous writes by committing ZIL transactions to low-latency storage while the main data proceeds asynchronously to slower pool devices under the default sync=standard behavior. The ZIL temporarily holds transactions until they are committed to the main pool, reducing latency for applications requiring immediate persistence, such as databases; without a SLOG, the ZIL resides on the pool's slower devices, so adding one can dramatically cut synchronous write times by isolating log I/O.
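The cache tiers above map onto auxiliary vdev types that can be attached to an existing pool, as in this sketch. The pool name and NVMe device paths are placeholders, and the commands assume OpenZFS on a system where those devices are unused.

```shell
# Placeholders: pool "tank" and NVMe device paths are illustrative.
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1      # mirrored SLOG for the ZIL
zpool add tank cache /dev/nvme2n1                        # L2ARC read cache (no redundancy needed)
zpool add tank special mirror /dev/nvme3n1 /dev/nvme4n1  # metadata/small-block class

# Route data blocks of 32 KB or less to the special vdev as well.
zfs set special_small_blocks=32K tank
```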
SLOG devices support mirroring for redundancy but are not striped across multiple logs for performance.[66] Introduced in OpenZFS 2.0, the special virtual device (VDEV) class dedicates fast storage, typically SSDs in a mirrored configuration, to metadata and small blocks, improving access to critical filesystem structures and tiny files that would otherwise burden slower HDDs. Metadata, including block pointers and directory entries, is always allocated to special VDEVs, while data blocks up to a configurable size (via the special_small_blocks property) can also be placed there, enhancing overall pool responsiveness for metadata-intensive operations without affecting larger file storage. This class integrates seamlessly with existing pools and requires redundancy to maintain data integrity. As of OpenZFS 2.4.0 (released in 2025), hybrid allocation classes enhance special VDEVs for better integration in pools with mixed data types.[67][24]

Underpinning these mechanisms, ZFS transaction groups (TXGs) batch multiple write transactions into cohesive units, syncing them to stable storage approximately every 5 seconds to amortize disk I/O overhead. Each TXG collects changes in memory during an open phase, quiesces for validation, and then commits atomically, leveraging copy-on-write to ensure consistency while minimizing random writes and enabling efficient checkpointing. This grouping reduces the frequency of physical disk commits, boosting throughput for asynchronous workloads.[35]

Read/Write Efficiency and Dynamic Striping
ZFS employs variable block sizes to optimize storage efficiency and performance for diverse workloads. Block sizes range from 512 bytes up to 16 MB and are dynamically selected based on the size of data written, with the maximum determined by the dataset's recordsize property (default 128 KB, configurable up to 16 MB via the zfs_max_recordsize module parameter).[68] Administrators can set it to any power-of-two value within the supported range to better suit specific applications, such as databases that benefit from fixed-size records.[69] This adaptive sizing reduces fragmentation and improves I/O throughput by aligning blocks with typical read/write operations, unlike fixed-block systems that may waste space on small files or underutilize larger ones.[9]

Dynamic striping in ZFS enables flexible expansion and balanced data distribution without predefined RAID stripe widths. Data is automatically striped across all top-level virtual devices (vdevs) in a storage pool at write time, allowing the system to allocate blocks based on current capacity, performance needs, and device health.[70] When new vdevs are added, subsequent writes incorporate them into the striping pattern, while existing data remains in place until naturally reallocated through the copy-on-write mechanism, ensuring seamless pool growth without downtime or data migration.[9] This approach contrasts with traditional RAID arrays by eliminating fixed stripe sets, providing better scalability for large pools where vdevs may vary in type, such as mirrors or RAID-Z configurations.[70]

To enhance read performance for sequential workloads, ZFS implements prefetching and scanning algorithms that predictively fetch data blocks.
The zfetch mechanism analyzes read patterns at the file level, detecting linear access sequences—forward or backward—and initiating asynchronous reads for anticipated blocks, often in multiple independent streams.[71] This prefetching caches data in the Adaptive Replacement Cache (ARC) before it is requested, reducing latency for streaming applications like video playback or high-performance computing tasks, such as matrix operations.[9] Scanning complements this by evaluating access stride and length to adjust prefetch aggressiveness, ensuring efficient handling of both short bursts and long sequential scans without excessive unnecessary I/O.[71]

ZFS supports endianness adaptation to ensure portability across heterogeneous architectures, including big-endian and little-endian systems. During writes, data is stored in the host's native endianness, with a flag embedded in the block pointer indicating the format.[9] On reads, ZFS checks this flag and performs byte-swapping only if the current host's endianness differs, allowing seamless access to pools created on platforms like SPARC (big-endian) from x86 (little-endian) systems without format conversion tools.[9] This host-neutral on-disk layout maintains data integrity and simplifies cross-architecture migrations in enterprise environments.[9]

Benchmarking Performance
Accurate assessment of ZFS pool performance requires careful benchmarking practices to avoid distortions introduced by features such as the Adaptive Replacement Cache (ARC), recordsize misalignment, compression, and asynchronous write handling. The fio (Flexible I/O Tester) tool is widely recommended for precise simulation of workloads on ZFS, as it provides granular control over I/O parameters to mitigate common pitfalls. Key recommended practices include using --direct=1 for direct I/O to bypass operating system page caching, --end_fsync=1 or --fsync=1 to flush writes to stable storage, aligning --bs with the dataset's recordsize (default 128 KB) to prevent amplification, testing with large files exceeding available RAM using uncompressible data to negate caching and compression effects, specifying unique --offset for multi-job tests to avoid overlapping regions, and validating results with zpool iostat or iostat to confirm actual disk activity.[72][73]
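The practices above can be combined into a single fio invocation along these lines. The target directory and the 32 GB file size are illustrative assumptions; size the test files to comfortably exceed installed RAM.

```shell
# Sketch of a ZFS-aware fio run; target path and sizes are illustrative.
# --bs matches a 128 KB recordsize; --size exceeds RAM to defeat ARC caching;
# --direct/--end_fsync bypass the page cache and force flushes to stable storage.
fio --name=zfs-randwrite --directory=/tank/bench \
    --rw=randwrite --bs=128k --size=32G \
    --direct=1 --end_fsync=1 \
    --numjobs=4 --offset_increment=8G

# Confirm real disk activity while the test runs.
zpool iostat -v tank 1
```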
The dd command can serve for quick sequential throughput tests (e.g., dd if=/dev/zero of=/pool/test bs=1M conv=fdatasync), but it is less accurate than fio because it offers limited control over caching and synchronization, and because its all-zero input collapses to almost nothing when compression is enabled, inflating apparent throughput.
iperf is appropriate for benchmarking network performance when evaluating ZFS shares over protocols such as NFS, SMB, or iSCSI, though it does not measure local pool I/O.
Benchmarks should be tailored to the intended workload, for example using 4 KB random I/O for databases or 64 KB to 128 KB for virtual machines.
Management and Administration
Pools, Devices, and Quotas
ZFS storage pools, known as zpools, serve as the fundamental unit of storage management, aggregating one or more virtual devices (vdevs) into a unified namespace for datasets. Vdevs can include individual disks, mirrors, or RAID-Z configurations, where RAID-Z provides redundancy similar to traditional RAID levels but integrated natively into ZFS. Pools support dynamic expansion by adding new vdevs using the zpool add command, which increases capacity without downtime; since OpenZFS 2.3, RAID-Z vdevs can also be expanded by attaching disks directly to existing groups with zpool attach, allowing incremental growth without full vdev replacement.[74] Vdevs cannot otherwise be removed once added, except for specific types such as hot spares, cache devices, or log devices via zpool remove. Cache devices, such as those used for L2ARC, can be removed even if faulted after successful pool import; administrators first verify the pool state with zpool status (expecting ONLINE or DEGRADED), detach the faulted device using zpool remove <pool-name> <faulted-gptid>, clear residual errors via zpool clear <pool-name>, and optionally export then re-import the pool for a clean state.[75][42][77]
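Pool expansion and auxiliary-device removal can be sketched as follows. The pool name and device paths are placeholders; note that only spares, cache, and log devices are removable in this way on standard pools.

```shell
# Placeholders: pool "tank" and device paths are illustrative.
zpool add tank mirror /dev/sdh /dev/sdi   # grow the pool with an additional mirror vdev
zpool add tank cache /dev/nvme0n1         # attach an L2ARC cache device

zpool remove tank /dev/nvme0n1            # cache/log/spare devices can be removed again
zpool status tank                         # verify the new topology
```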
Device management in ZFS emphasizes resilience and flexibility, allowing administrators to designate hot spares—idle disks reserved for automatic replacement of failed devices in the pool. Hot spares are added pool-wide with zpool add pool spare device and activate automatically via the ZFS Event Daemon (ZED) upon detecting a faulted vdev component, initiating a resilvering process to reconstruct data.[78][42][75] Failed drives can be replaced online using zpool replace pool old-device new-device, which detaches the faulty device and attaches the replacement, preserving pool availability during the transition. This approach ensures minimal disruption, as ZFS handles device failures at the pool level without requiring full pool recreation. The pool property autoreplace=on (default off) allows ZED to automatically replace a failed device with a new device found in the same physical location.[79]
Quotas in ZFS enforce space limits at the dataset level, preventing any single filesystem, user, or group from monopolizing pool resources. The quota property sets a total limit on the space consumable by a dataset and its descendants, including snapshots, while refquota applies only to the dataset itself, excluding snapshot overhead. User and group quotas, enabled via userquota@user or groupquota@group properties, track and cap space usage by file ownership, with commands like zfs userspace providing detailed accounting. Reservations complement quotas by guaranteeing minimum space allocation; the reservation property reserves space exclusively for a dataset, ensuring availability even under pool pressure, whereas refreservation excludes snapshots from the guarantee. These mechanisms support fine-grained control, such as setting a 10 GB quota on a user dataset with zfs set quota=10G pool/user, promoting efficient resource distribution across multi-tenant environments.[80][6][81]
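The quota and reservation properties above are set with ordinary zfs set commands, as in this sketch. The dataset and user names are illustrative.

```shell
# Dataset and user names are placeholders.
zfs set quota=10G tank/home/alice        # cap dataset + descendants, snapshots included
zfs set refquota=8G tank/home/alice      # cap the dataset itself, excluding snapshots
zfs set userquota@alice=5G tank/shared   # per-user limit based on file ownership
zfs set reservation=2G tank/db           # guarantee minimum space under pool pressure

zfs userspace tank/shared                # per-user space accounting
```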
ZFS properties provide tunable configuration for datasets, influencing behavior like performance and storage efficiency, and support hierarchical inheritance to simplify administration. Properties are set using the zfs set command, such as zfs set compression=lz4 pool/dataset to enable inline compression, which reduces stored data size transparently without application changes. The recordsize property defines the maximum block size for files in a dataset, defaulting to 128 KB and tunable for workloads like databases (e.g., 8 KB for optimal alignment), affecting I/O patterns and compression ratios. Inheritance occurs automatically from parent datasets unless overridden locally; the zfs inherit command restores a property to its inherited value, propagating changes efficiently across the hierarchy—for instance, setting compression at the pool level applies to all child datasets unless explicitly unset. This model allows centralized tuning while permitting dataset-specific adjustments, enhancing manageability in large-scale deployments.[82][83]
Dataset creation in ZFS is lightweight and instantaneous, requiring no pre-formatting or space allocation, as the filesystem metadata is generated on-the-fly atop the existing pool. The zfs create command instantiates a new dataset—such as a filesystem or volume—immediately mountable and usable, with properties inherited from the parent; for example, zfs create pool/home/user establishes a new filesystem without consuming additional blocks until data is written. This design enables rapid provisioning of numerous datasets, ideal for scenarios like user home directories or project spaces, where administrative overhead is minimized compared to traditional filesystems.[84][85][6]
Scrubbing, Resilvering, and Maintenance
Scrubbing is a proactive maintenance operation in ZFS that involves a command-initiated full scan of all data and metadata within a storage pool to verify checksum integrity. The zpool scrub command initiates or resumes this process, reading every block and comparing its checksum against stored values to detect silent data corruption.[86] If discrepancies are found and redundant copies exist, ZFS automatically repairs the affected blocks through self-healing mechanisms.[87] Administrators can pause an ongoing scrub with zpool scrub -p to minimize resource impact during peak loads, resuming it later without restarting from the beginning; stopping it entirely uses zpool scrub -s.[86] The progress and any errors detected during scrubbing are monitored via the zpool status command, which displays scan completion percentage, throughput, and error counts.[77]
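The scrub lifecycle above amounts to a handful of commands, sketched here with a placeholder pool name; in practice, scrubs are usually scheduled (for example, monthly via cron or a systemd timer).

```shell
# Pool name "tank" is a placeholder.
zpool scrub tank        # start (or resume) a full integrity scan
zpool scrub -p tank     # pause during peak load; resumes where it left off
zpool scrub -s tank     # stop the scrub entirely
zpool status -v tank    # completion %, throughput, and any repaired errors
```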
To control the performance impact of scrubbing, ZFS employs an I/O scheduler that prioritizes scrub operations separately from user workloads, classifying them into distinct queues for async reads and writes.[20] In earlier implementations, module parameters like zfs_scrub_delay allowed manual throttling of scrub speed, but modern OpenZFS versions (2.0 and later) rely on dynamic I/O prioritization and queue management for rate limiting, reducing interference with foreground tasks.[20] Scrubs are recommended monthly for production pools to ensure ongoing data integrity, though they can significantly load the system, especially on large pools.[6]
Resilvering is the reactive process of rebuilding data onto a replacement device following a failure in a redundant pool configuration, such as RAID-Z or mirrors. It is automatically triggered when using zpool replace old_device new_device or zpool attach device new_device, copying data from surviving vdevs to the new device while verifying checksums. In OpenZFS 2.0 and later, sequential resilvering mode—enabled via the -s flag on zpool replace or attach for mirrored vdevs—optimizes the process by performing reads and writes in a linear fashion, significantly speeding up rebuild times on large or sequential-access drives like SMR HDDs.[88] The operation ensures pool redundancy is restored, with progress trackable via zpool status, which reports the estimated time remaining and bytes processed.[77]
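A disk replacement with resilvering can be sketched as below. The device paths are placeholders; the -s flag requests the sequential resilver mode described above, available for mirrored vdevs on OpenZFS 2.0 and later.

```shell
# Placeholders: pool "tank" and device paths are illustrative.
zpool replace tank /dev/sdd /dev/sdj      # swap the failed disk; resilver starts automatically
zpool replace -s tank /dev/sdd /dev/sdj   # alternative: sequential (linear) rebuild on mirrors

zpool status tank                         # estimated time remaining and bytes processed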
Routine maintenance of ZFS pools includes exporting and importing for safe relocation or troubleshooting, as well as ongoing status monitoring. The zpool export poolname command unmounts all datasets, clears pool state from the system, and prepares it for physical transfer to another host, preventing accidental access during moves.[77] Importing follows with zpool import poolname, which scans for available pools (optionally specifying a device directory with -d) and brings them online; missing log devices can be forced with -m if non-critical.[89] The zpool status command provides comprehensive health overviews, detailing vdev states, error histories, scrub/resilver progress, and configuration, with the -v option for verbose output including per-device errors.[77] Regular use of these commands helps administrators track pool performance and preempt issues.
ZFS handles errors through states like "degraded," where the pool remains operational but with reduced fault tolerance due to one or more faulted devices, provided sufficient replicas prevent data loss.[87] In this state, I/O continues using available redundancy, but further failures risk unrepairable corruption; zpool status flags such conditions with warnings to restore redundancy promptly.[77] For automated mitigation, hot spares designated via zpool add poolname spare device activate automatically when ZED detects faults, initiating resilvering without manual intervention. This requires ZED to be running and configured, ensuring proactive replacement in enterprise environments.
Limitations
Resource Consumption and Scalability
ZFS requires a minimum of 768 MB of RAM for installing a system with a ZFS root file system, though 1 GB is recommended for improved overall performance. In practical deployments, at least 8 GB of RAM is advised to support the Adaptive Replacement Cache (ARC), ZFS's primary in-memory cache, which dynamically allocates up to half of available system memory by default. The ARC reduces disk I/O by caching frequently accessed blocks, but its overhead can strain systems with limited RAM, potentially leading to swapping and degraded performance if memory pressure is high.[90][91]

When enabling deduplication, RAM demands escalate substantially, as the deduplication table (DDT) must reside in memory for efficient operation; approximately 5 GB of RAM is needed per terabyte of pool data, assuming a 64 KB average block size. This memory-intensive nature makes deduplication suitable only for datasets with high duplication ratios and ample RAM, often limiting its use in resource-constrained environments. Without sufficient memory, deduplication can cause excessive cache misses and performance bottlenecks.[92]

Theoretically, ZFS supports pool sizes up to 256 zebibytes (ZiB), enabling massive scalability for data centers and enterprise storage. However, practical limits arise from the number of virtual devices (vdevs) in a pool; while there is no enforced maximum vdev count, exceeding dozens can introduce overhead in metadata management, I/O parallelism, and resilvering times, potentially bottlenecking performance on systems with limited CPU or bus bandwidth.
Optimal scalability is achieved by balancing vdev count with hardware capabilities, typically favoring a larger number of narrow vdevs over fewer wide ones for better throughput.[93][94]

Synchronous writes represent a key performance bottleneck in ZFS, particularly on HDD-based pools without a Separate Log (SLOG) device, as they require immediate persistence to stable storage, resulting in latencies of tens to hundreds of milliseconds per operation. Adding an SLOG—usually a fast SSD dedicated to the ZFS Intent Log (ZIL)—mitigates this by offloading sync writes to low-latency media, improving throughput by orders of magnitude for workloads like databases. High I/O demands on mechanical drives further exacerbate bottlenecks in large pools, where sequential access patterns may still underutilize bandwidth compared to SSDs.[95][96]

ZFS lacks fully native, automatic TRIM support in older or certain implementations, where it can be unstable and lead to I/O stalls; instead, manual or periodic trimming via the zpool trim command is available to notify underlying SSDs of unused blocks, aiding garbage collection and longevity. In large-scale pools comprising numerous HDDs, power consumption rises significantly—often exceeding hundreds of watts at idle—due to ZFS's pool-level management, which hinders individual drive spin-down and keeps multiple devices active even during low-activity periods.[97][98][99]
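The deduplication memory rule of thumb quoted above (roughly 5 GB of RAM per terabyte of pool data, assuming a 64 KB average block size) can be turned into a quick back-of-envelope shell calculation; the 10 TB pool size is an arbitrary example.

```shell
# Back-of-envelope DDT RAM estimate using the ~5 GB per TB rule of thumb
# (assumes a 64 KB average block size; pool size below is arbitrary).
pool_tb=10
ddt_ram_gb=$(( pool_tb * 5 ))
echo "A ${pool_tb} TB pool needs roughly ${ddt_ram_gb} GB of RAM for the DDT"
```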
Compatibility and Licensing Constraints
ZFS's licensing under the Common Development and Distribution License (CDDL) creates significant barriers to integration with the Linux kernel, which is governed by the GNU General Public License (GPL). The CDDL and GPL are incompatible, preventing ZFS from being included as a native module in the mainline Linux kernel, as combining them would violate both licenses' terms on derivative works. This incompatibility stems from the CDDL's requirement for source code availability in certain distributions, which conflicts with the GPL's copyleft provisions, leading organizations like the Free Software Foundation to deem such combinations a potential copyright infringement. As of 2025, Linux kernel versions 6.12 and later introduce enhanced protections for kernel symbols, complicating the loading of non-GPL out-of-tree modules like ZFS, though DKMS remains a viable workaround for supported kernels.[100][26][101] Despite these licensing hurdles, ZFS exhibits strong portability across implementations due to its adaptive endianness, allowing pools to be read on systems with different byte orders—big-endian or little-endian—since the endianness is explicitly stored with the data objects. This enables seamless migration of ZFS datasets between architectures, such as from x86 to PowerPC systems, without data reformatting. However, version mismatches between ZFS implementations can arise if newer features (e.g., those enabled via pool properties) are used that are not supported in older versions, potentially rendering pools unimportable on legacy systems unless compatibility modes are set. 
OpenZFS maintains backward compatibility for pools at version 28 or higher across supported platforms, ensuring interoperability where feature flags align.[102][103]

Platform constraints further limit ZFS deployment: it lacks native support on mobile operating systems like Android or iOS, where kernel architectures and resource models do not accommodate ZFS's requirements for block device management and advanced features. On Windows, support is restricted to third-party experimental ports, such as early efforts in the OpenZFS project, which remain immature and unsuitable for production use without significant caveats. These limitations stem from ZFS's origins in Solaris and its evolution within Unix-like ecosystems, making adaptation to non-POSIX environments challenging.

Workarounds for Linux deployment include using Dynamic Kernel Module Support (DKMS) to compile ZFS modules against the running kernel, bypassing mainline inclusion while distributing binaries separately to avoid GPL conflicts. Alternatively, the zfs-fuse implementation runs ZFS entirely in user space via the FUSE framework, offering a GPL-compatible path but at the cost of reduced performance compared to kernel-level integration. The illumos distribution serves as the primary reference implementation for ZFS development, providing a stable base for testing and ensuring consistency across forks like OpenZFS.[104][105]

Data Recovery
Built-in Recovery Tools
ZFS provides several integrated mechanisms for data recovery, leveraging its copy-on-write architecture and redundancy features to restore integrity without external intervention. These tools enable administrators to recover from device failures, corruption, or accidental changes while minimizing downtime. Central to this capability is the ability to import pools from disk labels, which contain metadata about the pool's configuration and state, allowing ZFS to reconstruct the storage topology even if the system has crashed or devices have been moved.[89]

The zpool import command facilitates recovery by scanning available devices for pool labels and importing the pool into the system namespace. In standard operation, it identifies and mounts healthy pools automatically; for damaged configurations, options like -f (force) override import restrictions, such as mismatched pool GUIDs or temporary outages, while -d specifies alternate search directories for labels. For severely compromised pools, recovery mode (-F) attempts to salvage by discarding recent transactions, potentially restoring importability at the cost of recent data. Exporting a pool via zpool export before maintenance complements this by cleanly unmounting datasets and updating labels, aiding subsequent imports on different systems or after hardware changes. Pools with missing devices, such as log mirrors, can be force-imported using -m to bypass validation and resume operations, though full redundancy should be restored promptly.[89][106][107]
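The import options above form an escalating recovery sequence, sketched here with a placeholder pool name. Each step is more aggressive than the last; -F in particular discards recent transactions and should be a last resort.

```shell
# Pool name "tank" is a placeholder; escalate through these options with care.
zpool export tank                      # cleanly unmount before moving hardware
zpool import                           # scan attached devices for importable pools
zpool import -d /dev/disk/by-id tank   # search an alternate device directory
zpool import -f tank                   # force past a stale "pool in use" marker
zpool import -m tank                   # proceed despite a missing log device
zpool import -F tank                   # last resort: roll back recent transactions
```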
Scrub-based repair is a proactive recovery process that detects and corrects data corruption through end-to-end checksum verification. Initiated via the zpool scrub command, it traverses all allocated blocks in the pool, comparing checksums against stored values; discrepancies trigger self-healing in redundant configurations like mirrors or RAID-Z, where ZFS reconstructs valid data from parity or copies and rewrites it to the affected block. This automatic healing occurs during the scrub without interrupting I/O, as ZFS prioritizes reads from healthy replicas. Post-scrub, the zpool status output details repaired errors, recommending follow-up scrubs after any recovery to verify ongoing integrity. While effective for silent corruption, scrubbing requires sufficient redundancy, such as RAID-Z vdevs, to enable repairs.[108][109]
Snapshot rollback offers a point-in-time recovery option for file systems and volumes affected by user errors or malware. ZFS snapshots capture instantaneous, read-only states, and the zfs rollback command reverts a dataset to a specified snapshot by discarding all subsequent changes, effectively restoring the prior configuration. This operation is atomic and preserves the snapshot hierarchy if -r is used for recursive rollbacks across clones or children, though it destroys newer snapshots unless promoted first. Rollback is particularly useful for quick recovery from deletions or modifications, as it leverages the copy-on-write mechanism to avoid full data rewrites. Administrators must weigh the destructive nature of rollback, which permanently loses post-snapshot data, against alternatives like cloning snapshots for selective restores.[110][111]
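The rollback options above, and the non-destructive clone alternative the text mentions, can be sketched as follows; the dataset and snapshot names are illustrative.

```shell
# Illustrative names; rollback permanently discards changes made after the snapshot.
zfs rollback tank/home@before-upgrade      # revert; fails if newer snapshots exist
zfs rollback -r tank/home@before-upgrade   # also destroy the intervening snapshots

# Non-destructive alternative: clone the snapshot and copy files out selectively.
zfs clone tank/home@before-upgrade tank/home-restore
```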
Device replacement supports seamless recovery from hardware failures through online resilvering, where a faulty drive is swapped without pool downtime. Using zpool replace, administrators detach a degraded or failed device and attach a new one, prompting ZFS to copy valid data from remaining replicas to the replacement via the resilvering process. This traversal prioritizes used blocks and can complete in minutes for hot-swappable scenarios or hours for large pools, depending on I/O bandwidth and data volume. The operation maintains pool availability, with zpool status monitoring progress and errors; upon completion, the old device can be removed if still attached. This feature extends to partial failures, like sector errors, where zpool online reactivates a device for targeted resilvering.[112][113]
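A replacement and resilver sequence, with illustrative pool and device names:

```shell
# Replace a failed disk with a new one;
# resilvering from the surviving replicas starts automatically
zpool replace tank /dev/sdc /dev/sdd

# Monitor resilver progress and error counts
zpool status tank

# Bring a device with transient errors back online,
# triggering a targeted resilver of missed writes
zpool online tank /dev/sdc

# After resilvering completes, reset the error counters
zpool clear tank
```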
External Recovery Methods
When built-in ZFS tools such as zpool import or zdb fail to recover a pool due to severe metadata corruption or device loss, external methods become necessary for data salvage. These approaches often involve third-party software or manual forensic techniques to reconstruct pool structures and extract files without relying on native ZFS commands. Such methods are typically employed in scenarios where the pool is unmountable, devices are physically damaged, or the metadata object set (MOS) is irreparably altered, including forensic investigations where chain-of-custody preservation is critical.[114][115] Third-party tools like UFS Explorer Professional Recovery and R-Studio provide specialized support for ZFS recovery by reconstructing RAID-Z configurations, scanning for lost partitions, and recovering files from corrupted or degraded pools. For instance, UFS Explorer allows users to connect available ZFS disks, automatically detect pool parameters, and perform sector-by-sector scans to rebuild virtual volumes and extract data, even from partially failed RAID-Z arrays. Other tools, such as ReclaiMe Pro and DiskInternals RAID Recovery, offer comparable capabilities, including automated ZFS pool detection and metadata repair for scenarios involving lost devices or formatting errors. These software solutions are particularly useful for non-experts, as they abstract low-level operations like hex editing of vdev labels or block reconstruction.[116][117] Manual recovery techniques target ZFS's on-disk structures, starting with uberblock scanning to identify valid pool states and progressing to MOS parsing for metadata reconstruction. The uberblocks, stored in arrays within the four vdev labels (two at the beginning and two at the end of each device), serve as entry points containing pointers to the MOS, which holds pool-wide configuration objects like datasets and properties; tools like zdb can scan these uberblocks with the -u flag to locate the most recent consistent version, allowing import of the pool if a viable uberblock exists.
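A hedged sketch of the kind of zdb inspection described above; device and pool names are illustrative, and these commands are read-only but still require a ZFS installation:

```shell
# Read the vdev labels on a device, including the pool GUID
# and cached configuration nvlist
zdb -l /dev/sda1

# Repeating -l increases verbosity; three times also dumps
# the uberblock arrays stored in the labels
zdb -lll /dev/sda1

# Display the active uberblock of an imported pool
zdb -u tank

# Display the pool configuration (the MOS entry point)
zdb -C tank

# Examine an exported or unimportable pool, searching a
# specific device directory, without importing it
zdb -e -p /dev/disk/by-id tank
```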
For deeper analysis, hex editors or zdb's -C option can parse the MOS directly from raw device images, revealing object sets and enabling selective file extraction by traversing indirect blocks, though this requires expertise in ZFS's copy-on-write layout to avoid further data loss. In forensic contexts, these methods preserve evidence integrity by imaging devices first and using read-only analysis to recover artifacts from destroyed pools or overwritten metadata.[118][115][114] Common scenarios for external recovery include corrupted pools where checksum mismatches prevent import, lost devices in multi-vdev setups requiring manual vdev reconstruction, and forensic cases involving tampered or partially wiped storage. In corrupted pool recovery, external tools rebuild the MOS from surviving replicas across devices, while lost device scenarios may involve attaching spares and forcing import after label verification with zdb -l. Forensic applications extend to legal or incident response, where MOS parsing uncovers historical snapshots or deleted files without altering the original media.[119][120][114] Best practices to facilitate external recovery emphasize proactive measures like performing regular zpool exports before hardware changes to ensure clean metadata states, and maintaining offsite backups following the 3-2-1 rule—three copies of data on two different media types, with one offsite—to enable restoration independent of pool failures. However, limitations arise in encrypted pools, where native ZFS encryption requires valid keys for any metadata access, potentially rendering external tools ineffective without them and necessitating key recovery or decryption prior to salvage attempts.[121][122][123]
Implementations
Operating System Support
ZFS originated as a native component of the Solaris operating system, where it provides full integration with the kernel and extensive administrative tools for storage management. The illumos project, an open-source fork of OpenSolaris, maintains native ZFS support, inheriting Solaris's core features while enabling community-driven enhancements through OpenZFS. FreeBSD has incorporated ZFS as a kernel module since version 7.0, released in 2008, allowing seamless use for root filesystems, snapshots, and RAID configurations directly within the operating system.[6] On Linux, ZFS is supported via the ZFS on Linux (ZoL) project, which compiles kernel modules using DKMS to ensure compatibility with kernels from version 4.18 to 6.17 as of OpenZFS 2.3.5 in 2025.[124] This implementation is readily available in distributions such as Ubuntu, where it can be installed during setup for root-on-ZFS configurations, and Proxmox VE, which leverages ZFS for virtual machine storage and clustering. Due to licensing constraints that prevent direct inclusion in the Linux kernel—detailed further in the Limitations section—ZoL relies on external module building, though this has not hindered its widespread adoption. Support extends to other platforms with varying degrees of integration. 
On macOS, OpenZFS on OS X provides a ported implementation up to version 2.3.0 as of 2025, enabling ZFS pools and datasets but with limitations on features like native boot support and performance optimizations due to Apple's kernel restrictions.[125] NetBSD integrates OpenZFS starting from version 9, rebasing on FreeBSD's implementation for stable pool management, with root-on-ZFS available since version 10 via booting from an FFS root and pivoting to ZFS.[126] OpenZFS employs compatibility layers to maintain interoperability, allowing ZFS pools created on one supported operating system to be imported and used on another without data loss, provided version alignments are observed across platforms.
Commercial and Open-Source Products
TrueNAS stands out as a leading open-source storage operating system that fully leverages ZFS for enterprise-grade data protection and management. TrueNAS Core, built on FreeBSD, employs ZFS as its primary filesystem to deliver features such as unlimited snapshots, inline compression, deduplication, and RAID-Z configurations for redundancy. Similarly, TrueNAS Scale, based on Debian Linux with OpenZFS, extends these capabilities to support scalable pools, thin provisioning, and integration with containerized applications, making it suitable for both home labs and production environments. Proxmox VE, an open-source platform for virtualization and container management, integrates ZFS natively for local storage backends, allowing administrators to create efficient ZFS pools for VM disks, container root filesystems, and backups with support for snapshots, clones, and compression.[127] Unraid, a flexible commercial NAS solution, introduced native ZFS support in version 7.0 and beyond, enabling users to configure ZFS pools or hybrid setups alongside its parity-based array for optimized media serving and data redundancy without relying solely on plugins.[128] In the commercial space, Oracle Solaris embeds ZFS deeply into its operating system, providing robust, scalable storage solutions with features like self-healing data integrity, encryption, and high-performance caching tailored for mission-critical applications in data centers. Delphix, a data virtualization platform, utilizes a customized ZFS implementation derived from Illumos to enable rapid provisioning of virtual databases, leveraging copy-on-write snapshots and efficient space management for development and testing workflows.[129] Network-attached storage (NAS) appliances have increasingly adopted ZFS for enhanced reliability.
QNAP's QuTS hero operating system powers select enterprise NAS models, harnessing ZFS for bit-rot protection, quasi-RAID configurations, and self-healing to ensure data durability in hybrid HDD/SSD setups.[130] pfSense, the open-source firewall distribution from Netgate, supports ZFS installations in its Plus edition, including boot environments for safe upgrades and rollback, making it ideal for secure, resilient routing appliances with storage needs.[131] As of 2025, RHEL-compatible distributions such as Rocky Linux and AlmaLinux facilitate OpenZFS deployment through official repositories, supporting root-on-ZFS installations and advanced pool management for server and cloud workloads. Hybrid cloud products continue to incorporate ZFS elements, with solutions like those from Vast Data offering disaggregated storage that parallels ZFS principles for massive scalability, though proprietary in implementation.
Development and Roadmap
Version History
ZFS was initially developed by Sun Microsystems and first integrated into Solaris 10 Update 2 (6/06) in June 2006, introducing the core file system with pool version 1, which supported basic features like snapshots, clones, and RAID-Z redundancy.[132] Following Oracle's acquisition of Sun in 2010, ZFS development continued under Oracle, with Solaris 11 released in November 2011 featuring pool version 34 and initial support for native ZFS encryption, allowing datasets to be encrypted at creation using AES algorithms integrated with the Solaris Cryptographic Framework.[133] The OpenZFS project, formed in 2013 to unify open-source ZFS development across platforms like illumos, FreeBSD, and Linux, began releasing coordinated versions starting with the 0.6 series in 2014.[3] OpenZFS 0.7.0, released in July 2017, added features such as compressed ARC caching, multihost (MMP) protection against concurrent imports, and improved compatibility with Linux kernels up to 4.12. Native encryption, implemented independently of Oracle's closed-source version, was introduced in OpenZFS 0.8.0 in May 2019, alongside device removal for top-level vdevs, enabling per-dataset AES encryption with support for raw and encrypted sends, though early versions had performance limitations that were later addressed.[61] OpenZFS 2.2.0, released in October 2023, brought significant enhancements including block cloning for efficient file duplication without full copies, Zstandard (zstd) early abort for faster compression of incompressible data, BLAKE3 checksums for improved security and speed, corrective zfs receive for healing corruption during restores, and quick scrubs that verify only modified blocks.
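Many of the on-disk format changes in these releases ship as per-pool feature flags rather than monolithic version bumps. A sketch of inspecting and enabling them, with illustrative pool, dataset, and device names:

```shell
# List the state of every feature flag on a pool
# (disabled, enabled, or active once used on disk)
zpool get all tank | grep feature@

# Enable a single feature; it becomes "active" only when
# data using the new on-disk format is first written
zpool set feature@large_blocks=enabled tank

# With large_blocks active, datasets may use record sizes
# above the 128 KiB default
zfs set recordsize=1M tank/media

# Create a pool restricted to a known feature set so older
# or cross-platform implementations can still import it
zpool create -o compatibility=openzfs-2.1-linux tank2 mirror sdb sdc
```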
This release also supported Linux kernel 6.5 and introduced better container integration for unprivileged access.[22] OpenZFS 2.3.0, released in January 2025, focused on storage flexibility and performance, introducing RAIDZ expansion to add devices to existing RAIDZ vdevs without rebuilding the pool, fast deduplication using a new on-disk table for quicker lookups, and direct I/O paths bypassing the ARC cache for NVMe-optimized workloads.[134] To maintain backward compatibility without rigid version numbering, OpenZFS uses feature flags as per-pool properties that enable specific on-disk format changes only when activated, allowing pools to remain readable on older implementations. Notable examples include the large_blocks flag, which permits block sizes up to 16 MB for improved sequential I/O on large files, and embedded_data, which stores highly compressible small blocks directly in metadata pointers to save space and reduce fragmentation.[135] Recent OpenZFS releases have deprecated legacy adaptive replacement cache (ARC) tuning parameters in favor of automatic balancing, which dynamically adjusts metadata and data caching without manual intervention.[59] For instance, zfs_arc_meta_limit_percent was removed in OpenZFS 2.2 due to a full ARC rewrite that automates metadata prioritization, simplifying configuration while improving hit rates in diverse workloads.[136]
Future Innovations and OpenZFS Directions
As of November 20, 2025, the OpenZFS project continues to prioritize enhancements in performance, flexibility, and integration with emerging hardware and cloud environments. OpenZFS 2.4.0 is in release candidate stage (RC4 released November 17, 2025), with support for Linux kernels 4.18 to 6.17 and FreeBSD 13.3+. Key features in this release include default user/group/project quotas, light-weight uncached I/O fallbacks for better direct I/O handling, unified allocation throttling to balance I/O across vdev types, and AVX2-accelerated AES-GCM encryption for up to 80% performance gains on compatible hardware such as AMD Zen 3 and similar CPUs. Additional enhancements encompass support for ZIL placement on special vdevs, extending the special_small_blocks property to non-power-of-two sizes (including for ZVOL writes), and various gang block improvements. These updates focus on stability across evolving environments.[137][138][139] Key innovations in the roadmap include the AnyRaid vdev type, which enables pooling of disks with varying sizes to maximize usable capacity without the rigidity of traditional RAID-Z configurations, while maintaining data redundancy through mirroring or parity options like AnyRaid-Z. This addresses long-standing requests for organic storage growth in heterogeneous setups. Additionally, efforts are underway to redesign pool labels to support larger sector sizes up to 128 KiB, expanding the rewind window for recovery and embedding pool configurations directly in metadata for easier management. Cloud integration is advancing through optimizations for AWS EBS, such as spread writes to mitigate hotspots and early asynchronous flushing for reduced latency, alongside explorations of ZFS atop object storage like S3 via tools such as ZeroFS. 
Deduplication sees refinements with Fast Dedup, an inline mechanism using log-structured tables to identify duplicates during writes, improving efficiency over legacy hash lookups without requiring full scans.[140][141][142] TRIM support remains a core feature, with the autotrim pool property allowing automatic discards of unused blocks to maintain SSD performance on compatible devices, though ongoing refinements target better handling in mixed-media pools. While AI-driven management tools have been discussed in community contexts, current priorities emphasize manual tuning due to risks of misconfiguration from automated systems lacking ZFS-specific nuances. Potential future additions include support for Shingled Magnetic Recording (SMR) drives and Block Reference Trees (BRT) for faster cloning operations.[20][143][140] The OpenZFS Developer Summit in October 2025, held in Portland, Oregon, highlighted community-driven progress, including discussions on AnyRaid implementation and performance funding from contributors like Klara Systems, which sponsored label redesign and AWS features. These events foster collaboration on high-impact areas, with outcomes emphasizing scalable architectures for AI/ML workloads and unstructured data growth. Funding efforts support developer time for performance optimizations, such as unified allocation throttling to balance I/O across vdevs.[144][145] Challenges persist in kernel compatibility, as OpenZFS must adapt to frequent Linux upstream changes, with 2.4 extending support to kernel 6.17 while deprecating older modules. Hardware evolution, including NVMe for high-throughput arrays and Compute Express Link (CXL) for memory disaggregation, requires targeted optimizations to avoid I/O bottlenecks and firmware incompatibilities seen in some NVMe drives. These issues drive roadmap items like enhanced special vdevs for metadata and logs on fast media.[142][146][140]
References
- https://docs.freebsd.org/en/books/handbook/zfs/
