Non-RAID drive architectures
from Wikipedia

The most widespread standard for configuring multiple hard disk drives is RAID (redundant array of inexpensive/independent disks), which comes in a number of standard and non-standard configurations. Non-RAID drive architectures also exist, and are referred to by acronyms with tongue-in-cheek similarity to RAID:

  • JBOD (just a bunch of disks): multiple hard disk drives operated as individual, independent drives.
  • SPAN or BIG: a method of combining the free space on multiple hard disk drives from a JBOD to create a spanned volume. Such a concatenation is sometimes also called BIG or SPAN. A SPAN or BIG is generally a spanned volume only, as it often contains mismatched types and sizes of hard disk drives.[1]
  • MAID (massive array of idle drives): an architecture using hundreds to thousands of hard disk drives for providing nearline storage of data, primarily designed for write once, read occasionally (WORO) applications, in which increased storage density and decreased cost are traded for increased latency and decreased redundancy.

JBOD


JBOD (just a bunch of disks or just a bunch of drives) is an architecture using multiple hard drives exposed as individual devices. Hard drives may be treated independently or may be combined into one or more logical volumes using a volume manager like LVM or mdadm, or a device-spanning filesystem like btrfs; such volumes are usually called "spanned" or "linear" (also SPAN or BIG).[2][3][4] A spanned volume provides no redundancy, so failure of a single hard drive amounts to failure of the whole logical volume. Unlike a RAID 0 (striped) volume, the capacity of a linear volume is not limited to the smallest member drive multiplied by the number of member drives; instead, the capacities simply add up. However, throughput does not scale as it does with RAID 0.[5][6] Redundancy for resilience and/or bandwidth improvement may be provided in software at a higher level.

Concatenation (SPAN, BIG)

Diagram of a SPAN/BIG (JBOD) setup.

Concatenation or spanning of drives is not one of the numbered RAID levels, but it is a popular method for combining multiple physical disk drives into a single logical disk. It provides no data redundancy. Drives are merely concatenated together, the end of one to the beginning of the next, so they appear to be a single large disk, known as SPAN or BIG.

In the adjacent diagram, data are concatenated from the end of disk 0 (block A63) to the beginning of disk 1 (block A64), and from the end of disk 1 (block A91) to the beginning of disk 2 (block A92). If RAID 0 were used, then disk 0 and disk 2 would be truncated to 28 blocks, the size of the smallest disk in the array (disk 1), for a total size of 84 blocks.
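The capacity arithmetic above can be checked with a minimal Python sketch. Disk 0 (64 blocks) and disk 1 (28 blocks) follow from the diagram; the 36-block size for disk 2 is an assumption for illustration, since the text does not state it:

```python
def span_capacity(disk_blocks):
    """Concatenation (SPAN/BIG): capacities simply add up."""
    return sum(disk_blocks)

def raid0_capacity(disk_blocks):
    """RAID 0: every member is truncated to the smallest drive,
    so usable capacity is min(size) * number of drives."""
    return min(disk_blocks) * len(disk_blocks)

# Disk 0 holds 64 blocks (A0-A63), disk 1 holds 28 (A64-A91);
# disk 2 is assumed to hold 36 blocks for illustration.
disks = [64, 28, 36]
print(span_capacity(disks))   # 128 blocks: the full sum
print(raid0_capacity(disks))  # 84 blocks: 28 * 3, matching the text
```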

What makes a SPAN or BIG different from RAID configurations is the freedom in selecting drives. While RAID usually requires all drives to be of similar capacity,[a] and the same or similar drive models are preferred for performance reasons, a spanned volume has no such requirements.[1][7]

Implementations


The initial release of Microsoft's Windows Home Server employs drive extender technology, whereby an array of independent drives is combined by the OS to form a single pool of available storage. This storage is presented to the user as a single set of network shares. Drive extender technology expands on the normal features of concatenation by providing data redundancy through software – a shared folder can be marked for duplication, which signals to the OS that a copy of the data should be kept on multiple physical drives, whilst the user will only ever see a single instance of their data.[8] This feature was removed from Windows Home Server in its subsequent major release.[9]

The btrfs filesystem can span multiple devices of different sizes, including RAID 0/1/10 configurations, storing one to four redundant copies of both data and metadata.[10] (A flawed RAID 5/6 implementation also exists, but it can result in data loss.)[10] For RAID 1, the devices must have complementary sizes. For example, a filesystem spanning two 500 GB devices and one 1 TB device could provide RAID 1 for all data, while a filesystem spanning a 1 TB device and a 500 GB device could provide RAID 1 for only 500 GB of data.
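As a rough illustration of the "complementary sizes" constraint, the sketch below approximates usable two-copy (RAID 1) capacity across mixed-size devices. The formula is a simplification of how a two-copy chunk allocator behaves, not btrfs's actual code:

```python
def raid1_usable(sizes_gb):
    """Approximate usable capacity when every chunk must be mirrored on
    two distinct devices: capped both by half the raw total and by what
    the remaining devices can pair against the largest one."""
    total = sum(sizes_gb)
    return min(total // 2, total - max(sizes_gb))

print(raid1_usable([500, 500, 1000]))  # 1000 -> RAID 1 for all data
print(raid1_usable([1000, 500]))       # 500  -> RAID 1 for only 500 GB
```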

The mdadm and LVM tools likewise allow spanning to be combined with RAID.

The ZFS filesystem can likewise pool multiple devices of different sizes and implement RAID, though it is less flexible, requiring the creation of virtual devices of fixed size on each device before pooling.[11]

In enterprise environments, JBOD enclosures[12] are used to expand a server's data storage. This is often a convenient way to scale up storage when needed by daisy-chaining additional disk shelves.[13]

MAID


MAID (massive array of idle drives) is an architecture using hundreds to thousands of hard drives for providing nearline storage of data. MAID is designed for write once, read occasionally (WORO) applications. Hard drives are not spun up until they are needed.[14][15][16]

Compared to RAID technology, MAID has increased storage density, and decreased cost, electrical power, and cooling requirements. However, these advantages come at the cost of much increased latency, significantly lower throughput, and decreased redundancy. Drives designed for many spin-up/down cycles (e.g. laptop drives) are significantly more expensive.[17] Latency may be as high as tens of seconds.[18] MAID can supplement or replace tape libraries in hierarchical storage management.[15]

To allow a more gradual tradeoff between access time and power savings, some MAIDs such as Nexsan's AutoMAID incorporate drives capable of spinning down to a lower speed.[19] Large scale disk storage systems based on MAID architectures allow dense packaging of drives and are designed to have only 25% of disks spinning at any one time.[18]
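To put the 25%-spinning figure in perspective, here is a back-of-the-envelope power estimate in Python; the per-drive wattages are illustrative assumptions, not measured values:

```python
def array_power_watts(n_drives, spinning_fraction,
                      active_w=8.0, idle_w=0.8):
    """Estimate total draw for a MAID-style array in which only a
    fraction of drives is spinning; the rest sit at standby power.
    The 8 W / 0.8 W figures are illustrative assumptions."""
    active = n_drives * spinning_fraction * active_w
    idle = n_drives * (1 - spinning_fraction) * idle_w
    return active + idle

# A 1,000-drive array with 25% of disks spinning vs. all spinning:
print(array_power_watts(1000, 0.25))  # 2600.0 W
print(array_power_watts(1000, 1.0))   # 8000.0 W
```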

from Grokipedia
Non-RAID drive architectures encompass storage configurations that connect multiple disk drives without employing the mirroring, striping, or parity features characteristic of RAID systems, prioritizing simplicity, capacity expansion, and cost efficiency over fault tolerance. These setups treat drives as independent units or aggregate them into logical volumes, allowing data to be written sequentially across disks without distributed processing or error correction at the hardware level. Unlike RAID, which virtualizes drives into arrays for performance and reliability gains, non-RAID approaches rely on the operating system or software for any additional redundancy.

The primary types of non-RAID drive architectures include JBOD (Just a Bunch of Disks), concatenation (such as spanned volumes), and MAID (Massive Array of Idle Disks). In JBOD, each drive functions autonomously within a shared enclosure, appearing as a separate volume to the system and enabling individual access or replacement without affecting others. Concatenation combines multiple drives into a single logical unit, where data fills one drive sequentially before overflowing to the next, effectively pooling capacity but without enhancing read/write speeds. MAID extends this by powering down idle disks to reduce energy consumption, making it suitable for archival applications. All configurations support drives of varying sizes and can scale by adding new disks, but they lack built-in protection mechanisms like mirroring or parity, making them unsuitable for environments requiring high availability.

Key advantages of non-RAID architectures lie in their simplicity and low cost: they incur no overhead from RAID controllers, utilize 100% of available storage without parity allocation, and offer flexibility for heterogeneous drive mixes. However, these benefits come at the cost of vulnerability; a single drive failure can result in total data loss for concatenated setups, and there is no automatic recovery or performance optimization, often necessitating external backups or software-based protection. In comparison to RAID levels such as RAID 0 (striping without redundancy) or RAID 5 (striping with parity), non-RAID options are simpler but provide neither the speed boosts nor the resilience, positioning them as complementary rather than competitive solutions.

Non-RAID drive architectures find application in scenarios where raw capacity and ease of management outweigh protection needs, such as archival storage, media libraries, backup repositories, and software-defined storage environments in data centers. For instance, JBOD enclosures are commonly used in data centers to expand storage pools dynamically, while concatenated volumes suit cold data with infrequent access, and MAID optimizes for power savings in large-scale archives. Although hardware support for JBOD is available in many SAS controllers, it typically omits features like write caching to maintain simplicity, underscoring the architecture's focus on unadorned drive aggregation.

Overview

Definition and Principles

Non-RAID drive architectures refer to configurations of multiple hard disk drives (HDDs) that operate without the striping, parity calculations, or mirroring mechanisms characteristic of RAID systems. In these setups, drives are managed either as independent units or combined linearly to extend storage capacity, prioritizing simplicity over performance optimization or fault tolerance. The fundamental principles of non-RAID architectures emphasize treating drives as discrete entities or sequentially appending their spaces to form larger volumes, without distributing data across drives for load balancing or error correction. This approach relies on basic concatenation or independent access, allowing the operating system to view multiple physical drives as separate logical devices or a single extended volume, but it provides no inherent protection against single-drive failures. Configuration is typically handled through host software, such as volume managers, or simple hardware controllers that expose drives directly to the system.

Concepts of non-RAID aggregation, such as concatenation, originated in the late 1980s with early volume management tools in Unix systems. These architectures emerged more prominently alongside tools like the Logical Volume Manager (LVM), initially developed in 1998 to enable flexible storage pooling without RAID complexities. Key characteristics include the absence of any built-in redundancy, making backups essential for reliability, and dependence on software or firmware for configuration, with modern implementations extending to solid-state drives (SSDs) for similar capacity-focused applications.

Comparison to RAID

Non-RAID drive architectures primarily aggregate storage capacity from multiple drives without incorporating redundancy or performance-enhancing techniques like striping or mirroring, in contrast to RAID systems, which are designed to provide fault tolerance through data duplication or parity (e.g., RAID 1 mirroring or RAID 5/6 parity) and improved throughput via parallel access (e.g., RAID 0 striping). In non-RAID setups, JBOD presents drives independently as separate volumes, while spanning concatenates data across drives in a linear fashion, sequentially filling one drive before moving to the next and maximizing usable space. Both leave stored data vulnerable to loss from a single drive failure, unlike RAID levels that can sustain one or more drive failures depending on the configuration.

Reliability in non-RAID architectures lacks the inherent fault tolerance of RAID, as there is no mechanism for failover or reconstruction if a drive fails; for instance, in a spanned volume, the failure of any single drive renders the entire logical unit inaccessible until manual intervention or backups are used, whereas independent JBOD limits loss to the failed drive. This contrasts sharply with RAID 1, 5, or 6, where redundancy allows continued operation and data rebuilding post-failure. Performance-wise, non-RAID configurations do not benefit from the parallel I/O operations enabled by striping, resulting in read/write speeds constrained to those of individual drives rather than scaled across the array, making them unsuitable for high-throughput workloads.

Non-RAID architectures are best suited for applications prioritizing cost and simplicity over data protection, such as archival storage, backups, or cold data tiers where external redundancy (e.g., offsite copies) mitigates risks, whereas RAID is favored in environments requiring high availability and rapid access, like databases or virtualized servers. JBOD, as a representative non-RAID example, pools capacity effectively for these low-access scenarios without the overhead of RAID controllers. As of 2025, non-RAID approaches continue to find application in budget NAS systems for home or small office use and certain cloud object storage setups emphasizing capacity over hardware-level redundancy, while RAID maintains dominance in enterprise servers for mission-critical reliability.

JBOD

Description and Operation

JBOD, or Just a Bunch Of Disks, is a non-RAID storage architecture that exposes multiple physical drives as separate logical devices directly to the operating system, allowing each drive to function independently without any data distribution or redundancy mechanisms. In this configuration, drives are connected through a controller such as a SAS expander or a host bus adapter (HBA), which facilitates connectivity but does not perform striping, mirroring, or any other data aggregation; instead, each drive operates autonomously, with the operating system handling all data access and management.

During setup, JBOD drives appear as individual volumes within operating system tools, such as Windows Disk Management or equivalent utilities on other platforms, enabling users to partition, format, and manage each drive separately without requiring array-level configuration. This independent presentation allows for straightforward integration, as the system recognizes each drive by its device node, such as /dev/sdX in Linux environments.

Technically, JBOD supports heterogeneous drive sizes and types, including mixtures of HDDs and SSDs, SAS and SATA interfaces, and varying capacities, providing flexibility in deployment without uniformity constraints. The total usable capacity equals the sum of the individual drive capacities, with no allocation for parity, mirroring, or other overhead, resulting in zero metadata footprint for array management. Common hardware implementations include JBOD enclosures equipped with dual SAS modules and power supplies for redundancy, as well as server backplanes that connect directly to an HBA, effectively bypassing traditional RAID controllers to pass drives through unaltered. As of 2025, high-density JBOD systems incorporate advanced technologies like Seagate's Mozaic hard drive platform for increased capacity per watt. Unlike concatenation, which linearly appends data across drives into a single logical volume, JBOD maintains complete separation of drives at the hardware and OS levels.

Advantages and Limitations

One key advantage of JBOD is its maximum storage utilization, as it employs the full capacity of all connected drives without the overhead of RAID schemes that reserve space for parity or mirroring. This approach also provides flexibility in mixing drives of varying sizes and speeds within the same system, allowing users to incorporate heterogeneous hardware without compatibility issues. Additionally, JBOD incurs low costs since it requires no specialized controllers or software, relying instead on standard drive interfaces. Expansion is straightforward, enabling the independent addition of drives to increase capacity without disrupting existing volumes.

However, JBOD offers no inherent redundancy: the failure of a single drive results in the loss of only that drive's data, but it introduces management complexities in tracking and recovering individual volumes. It provides no performance benefits from parallelism, such as striping, so speeds are limited to those of individual drives rather than aggregated throughput. Without proactive balancing, uneven wear can occur across drives due to disparate usage patterns in independent operations. Furthermore, managing multiple separate volumes generates administrative overhead, including separate backups and monitoring for each drive.

In practical terms, JBOD suits non-critical storage where capacity trumps resilience, such as archival data or temporary files. It remains common in home media servers for aggregating large media libraries, though it stays vulnerable to silent data corruption without regular backups or integrity checks. Compared to concatenation within non-RAID setups, JBOD's treatment of drives as independent units simplifies volume management by avoiding the single point of failure inherent in spanned configurations. In contrast to RAID's redundancy, which trades capacity for fault tolerance at the cost of added complexity, JBOD prioritizes simplicity and efficiency for low-risk applications.

Concatenation

Principles and Mechanism

Concatenation is a non-RAID technique that appends the free space of multiple physical drives sequentially to form one contiguous logical disk, enabling the creation of a larger storage volume than any single drive can provide. In its mechanism, data writes proceed linearly by filling the entire capacity of the first drive before proceeding to the second, continuing this process across all included drives until the logical volume is full. Read operations follow a similar linear path, mapping logical offsets to the appropriate physical drive and sector based on the cumulative capacities of preceding drives.

The core principles of concatenation emphasize simplicity and capacity expansion without any data duplication, parity computation, or performance optimization through striping, making it suitable for scenarios where redundancy is handled externally. It facilitates dynamic resizing, such as adding drives to extend the volume or removing them after data migration, relying on volume management software to adjust the configuration. Metadata structures, such as extent maps in logical volume managers or extended partition tables, record drive boundaries and mappings to maintain address translation integrity across the spanned space.

Technically, concatenation accommodates unequal drive sizes by sequentially utilizing the full capacity of each drive, yielding a total logical size equal to the sum of all participating drives without wasting space on larger ones in the chain. A drive failure renders the entire logical volume inaccessible, although data on unaffected drives remains physically intact and can potentially be recovered using specialized tools or by reconfiguring the volume. This approach builds on JBOD concepts by unifying independent drives into a shared addressable space rather than presenting them separately.
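The cumulative-capacity address translation described above can be sketched in a few lines of Python; this is an illustration of the mapping, not code from any particular volume manager:

```python
from bisect import bisect_right
from itertools import accumulate

def locate(logical_offset, drive_sizes):
    """Map a logical offset in a concatenated volume to the pair
    (drive_index, physical_offset). Drive 0 holds logical offsets
    [0, s0), drive 1 holds [s0, s0 + s1), and so on."""
    bounds = list(accumulate(drive_sizes))   # cumulative end offsets
    if not 0 <= logical_offset < bounds[-1]:
        raise ValueError("offset outside the logical volume")
    drive = bisect_right(bounds, logical_offset)
    start = bounds[drive - 1] if drive > 0 else 0
    return drive, logical_offset - start

# A volume spanning drives of 100, 50, and 200 units:
print(locate(30, [100, 50, 200]))   # (0, 30): still on the first drive
print(locate(120, [100, 50, 200]))  # (1, 20): spilled onto the second
```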

Implementations

Software implementations of concatenation are prevalent in Unix-like operating systems, particularly Linux, where tools like mdadm and LVM enable linear array formation and volume spanning. The mdadm utility supports a linear mode that concatenates multiple block devices into a single logical device by sequentially appending their storage spaces, without parity or mirroring, allowing data to overflow from one device to the next as needed. Similarly, the Logical Volume Manager (LVM) facilitates concatenation by combining extents from multiple physical volumes into a single logical volume, providing flexible resizing and snapshot capabilities for the spanned storage. Filesystems such as Btrfs and ZFS extend this functionality with multi-device support; Btrfs allows spanning across devices in a single profile (non-RAID) configuration, enabling snapshots and subvolumes on the concatenated storage pool. ZFS achieves similar spanning through pools composed of multiple single-disk vdevs, supporting snapshots while treating the aggregate as a unified pool without redundancy.

In Windows environments, concatenation is realized at the filesystem and disk management levels. NTFS on dynamic disks supports spanned volumes, which sequentially combine unallocated space from multiple physical disks into one logical volume, managed via the Disk Management or DiskPart utilities. This approach allows transparent extension of storage capacity across heterogeneous drives. On macOS, Core Storage, deprecated since macOS High Sierra, previously supported concatenation through logical volume groups spanning multiple physical volumes, configurable via Disk Utility for creating unified storage spaces.

Hardware realizations of concatenation emerged in the 1990s with RAID controller firmware supporting non-RAID modes. Adaptec's AAC-series RAID controllers featured "Volume Set" and spanning modes, enabling firmware-level concatenation of drives into a single logical unit for bootable or data volumes. Modern enterprise JBOD enclosures, such as the Dell PowerVault ME5 series and HPE D3000, support spanned modes through SAS expander firmware in passthrough (JBOD) configuration, allowing host software to concatenate drives for large-scale storage pools. These enclosures provide high-density connectivity without built-in RAID processing. A notable deprecated implementation was Windows Home Server's Drive Extender, active from 2007 to 2011, which automatically spanned multiple internal drives into a pooled storage space with optional duplication for redundancy, simplifying home NAS setups. Microsoft discontinued it due to reliability concerns, shifting focus to Storage Spaces.

As of 2025, concatenation finds adaptations in containerized and cloud environments. In Docker, multi-device volumes can be concatenated using underlying LVM or device-mapper layers for persistent storage across host disks. Cloud providers such as AWS enable concatenation of multiple Elastic Block Store (EBS) volumes via LVM on EC2 instances, offering scalable non-RAID block storage for applications requiring simple capacity extension. Similar approaches apply to Azure Managed Disks and Google Cloud Persistent Disks, where users attach and span volumes at the OS level for cost-effective capacity scaling.

MAID

Concept and Design

Massive Array of Idle Disks (MAID) is a storage architecture comprising hundreds to thousands of hard disk drives (HDDs), where the majority of drives remain powered down and spun down in an idle state until data access is required, thereby prioritizing power efficiency for large-scale, persistent storage. The design principles of MAID revolve around hierarchical data access, employing a small subset of always-active cache drives to service frequent read and write requests, while deferring less urgent operations to idle drives that spin up only on demand. This approach introduces a spin-up latency of 10-15 seconds for idle drives, balancing power savings against acceptable delays for infrequent access patterns. MAID systems often incorporate concatenation to aggregate capacity across drives linearly, forming the basis for scalable storage pools.

Key architecture components include an intelligent controller that oversees drive power states, spin-up scheduling, and data movement between cache and idle tiers; integration with nearline storage environments for seamless archival handling; and a metadata subsystem to track data locations and optimize access routing across the array. MAID architectures typically scale to 100 or more drives per array, with potential for thousands in enterprise configurations, and are optimized for write-once-read-occasionally (WORO) workloads such as archival storage.

The concept was introduced in 2002 through research at the University of Colorado at Boulder, proposing it as an energy-efficient alternative to traditional RAID for archival applications. It was subsequently commercialized in the mid-2000s by vendors including Copan Systems, with broader adoption by major providers like EMC, Fujitsu, and Nexsan incorporating MAID principles into their storage offerings.
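As a rough illustration of the controller behavior described above, the sketch below models the spin-up/spin-down policy in Python. The class name, timings, and policy are assumptions for illustration, not a real vendor API:

```python
import time

class MaidController:
    """Toy model of a MAID controller: a small always-on cache tier,
    with bulk drives spun up on demand and spun down again after an
    idle timeout. All names and numbers are illustrative assumptions."""

    SPIN_UP_SECONDS = 12   # within the 10-15 s latency range cited above
    IDLE_TIMEOUT = 600     # spin a bulk drive down after 10 idle minutes

    def __init__(self, n_drives, n_cache_drives=4):
        self.cache = set(range(n_cache_drives))   # always-active tier
        self.spinning = set(self.cache)
        self.last_access = {d: 0.0 for d in range(n_drives)}

    def access(self, drive):
        """Touch a drive, paying spin-up latency if it is idle."""
        if drive not in self.spinning:
            time.sleep(self.SPIN_UP_SECONDS)      # on-demand spin-up
            self.spinning.add(drive)
        self.last_access[drive] = time.monotonic()
        # ... actual read/write against the drive would happen here ...

    def housekeeping(self):
        """Periodically spin down bulk drives that have gone idle."""
        now = time.monotonic()
        for d in list(self.spinning - self.cache):
            if now - self.last_access[d] > self.IDLE_TIMEOUT:
                self.spinning.discard(d)
```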

Applications and Benefits

MAID finds primary applications in nearline storage within data centers, where it supports the retention of infrequently accessed data such as historical records and compliance archives. It is also employed for media archiving, enabling efficient storage of large volumes of video, images, and audio files that require occasional retrieval. Additionally, MAID serves as a backup repository for enterprise environments, providing a disk-based alternative to tape for secondary copies of data. In hierarchical storage management systems, it supplements tape libraries by offering faster access times for cold data while maintaining lower operational overhead than always-on solutions.

The architecture delivers significant benefits, particularly in power efficiency, achieving up to 87% reduction in energy consumption by idling the majority of drives during periods of inactivity. This enables high storage density, with configurations supporting over 1 petabyte in compact footprints. Consequently, operational costs for cooling and maintenance are lowered, with annual savings estimated at thousands of dollars per large array due to reduced power draw and heat generation. Unlike RAID, which prioritizes higher throughput for active workloads, MAID excels in scenarios emphasizing capacity and efficiency over speed.

On the other hand, MAID's high access latency, stemming from drive spin-up delays, renders it unsuitable for real-time or high-performance workloads. It provides no inherent redundancy, necessitating external backup mechanisms to ensure data integrity against drive failures. While its relevance has somewhat declined by 2025 with the widespread adoption of SSDs for nearline roles, as solid-state drives offer lower latency and power use without mechanical idling, efforts to revive MAID continue, such as using host-managed shingled magnetic recording (HM-SMR) drives as green storage alternatives to tape.

Enterprise case studies highlight MAID's role in cost-effective petabyte-scale archival; for instance, Copan Systems' Revolution platform was adopted for high-density backup and archiving in data centers, minimizing energy costs through dense drive packaging. Similarly, Fujitsu's ETERNUS systems with MAID ECO mode have been deployed for nearline storage, yielding 20% or greater power reductions in configurations with hundreds of drives. Looking ahead, MAID is evolving through hybrid HDD/SSD variants that combine SSDs for low-latency caching with idled HDDs for bulk storage, enhancing suitability for environments where space and power constraints are acute, as explored in recent research on tiered SSD+MAID models aimed at improved efficiency and reduced carbon footprint.
