Direct-access storage device
from Wikipedia
IBM 2311 DASD, introduced in 1964

A direct-access storage device (DASD) (pronounced /ˈdæzd/) is a secondary storage device in which "each physical record has a discrete location and a unique address". The term was coined by IBM to describe devices that allowed random access to data, the main examples being drum memory and hard disk drives.[1] Later, optical disc drives and flash memory units were also classified as DASD.[2][3]

The term DASD contrasts with sequential-access storage devices, such as magnetic tape drives, and with unit record equipment, such as punched-card devices. A record on a DASD can be accessed without having to read through intervening records, whereas reading anything other than the "next" record on a tape or a deck of cards requires skipping over intervening records and takes time proportional to the distance to the target. Access methods for DASD include sequential, partitioned, indexed, and direct.

The DASD storage class includes both fixed and removable media.

Architecture


IBM mainframes access I/O devices, including DASD, through channels, a type of subordinate mini-processor. Channel programs write to, read from, and control the given device.[4] IBM direct-access storage devices prior to System/360 had a variety of architectures, as do newer devices outside the S/360 line, but DASD for the S/360 through IBM Z use only three DASD architectures.

CTR (CHR)


The operating system uses a four-byte relative track and record (TTR) address for some access methods, and for others an eight-byte extent-bin-cylinder-track-record block address, or MBBCCHHR.[5] Channel programs address DASD using a six-byte seek address[6] (BBCCHH) and a five-byte record identifier[7] (CCHHR), with:

  • M representing the relative extent number,
  • BB representing the Bin (from 2321 data cells),
  • CC representing the Cylinder,
  • HH representing the Head (or track), and
  • R representing the Record (block) number.

For devices prior to extended address volumes, the seek address is:

Device      Bytes 0-1    Bytes 2-3            Bytes 4-5
Disk        0, 0         Cylinder (2 bytes)   Head (2 bytes)
Drum        0, 0         0, Cylinder          0, Head
Data cell   0, cell      subcell, strip       Cylinder, Head
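
As an illustration of these channel-program addresses, the following sketch (Python; a hypothetical packing, not an IBM interface, with field widths taken from the text above) builds and decodes the six-byte BBCCHH seek address and the five-byte CCHHR record identifier:

    import struct

    def pack_seek_address(bin_no: int, cylinder: int, head: int) -> bytes:
        """Build the 6-byte BBCCHH seek address as big-endian halfwords."""
        return struct.pack(">HHH", bin_no, cylinder, head)

    def pack_record_id(cylinder: int, head: int, record: int) -> bytes:
        """Build the 5-byte CCHHR record identifier."""
        return struct.pack(">HHB", cylinder, head, record)

    def unpack_seek_address(bbcchh: bytes) -> tuple[int, int, int]:
        """Recover (bin, cylinder, head) from a 6-byte seek address."""
        return struct.unpack(">HHH", bbcchh)

    # For a disk the bin bytes are always zero (see the table above).
    seek = pack_seek_address(0, cylinder=200, head=9)
    rid = pack_record_id(cylinder=200, head=9, record=1)
    assert unpack_seek_address(seek) == (0, 200, 9)
    print(seek.hex(), rid.hex())  # 000000c80009 00c8000901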

When the 2321 data cell was discontinued in January 1975,[8] the addressing scheme came to be referred to as CHR or CTR (cylinder-track-record), since the bin number was always 0.

IBM refers to the data records that programmers work with as logical records, and to the format on DASD[a] as blocks or physical records. One block may contain several logical (or user) records or, in schemes called spanned records, partial logical records.

Physical records can have any size up to the limit of a track, and some devices have a track-overflow feature that allows a large block to be broken into track-size segments within the same cylinder.

The queued access methods, such as QSAM, block and deblock logical records as they are written to or read from external media; the basic access methods, such as BSAM, leave blocking and deblocking to the user program.
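
This division of labor can be pictured with a small sketch (hypothetical Python using the conventional LRECL/BLKSIZE names; illustrative, not an IBM interface) that blocks fixed-length logical records into physical blocks and deblocks them again:

    LRECL = 80     # logical record length
    BLKSIZE = 800  # physical block size: 10 records per block here

    def block(records: list[bytes]) -> list[bytes]:
        """Group logical records into blocks of up to BLKSIZE bytes,
        as a queued access method does on behalf of the program."""
        per_block = BLKSIZE // LRECL
        return [b"".join(records[i:i + per_block])
                for i in range(0, len(records), per_block)]

    def deblock(blocks: list[bytes]) -> list[bytes]:
        """Split physical blocks back into LRECL-byte logical records."""
        return [blk[i:i + LRECL]
                for blk in blocks
                for i in range(0, len(blk), LRECL)]

    recs = [f"{n:08d}".encode().ljust(LRECL) for n in range(25)]
    blks = block(recs)  # 3 blocks: 10 + 10 + 5 records
    assert deblock(blks) == recs

A program using a basic access method such as BSAM would perform the equivalent of block and deblock itself.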

CKD


CKD is an acronym for count key data, the physical layout of a block on a DASD device; it should not be confused with BBCCHH and CCHHR, which are the addresses used by the channel program. CTR in this context may refer to either type of address, depending on the channel command.[citation needed]

FBA


In 1979 IBM introduced fixed-block architecture (FBA) for mainframes. At the programming level, these devices do not use the traditional CHR addressing but reference fixed-length blocks by number, much like sectors on minicomputer disks. More precisely, the application programmer remains unaware of the underlying storage arrangement, which stores the data in fixed physical blocks of 512, 1024, 2048, or 4096 bytes, depending on the device type. As part of the FBA interface IBM introduced new channel commands for asynchronous operation, very similar to those introduced for ECKD.
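
The addressing contrast with CHR can be sketched as follows (illustrative Python; the geometry constants are assumptions, since FBA deliberately hides them from the programmer):

    BLOCK_SIZE = 512        # one of the fixed block lengths named above
    BLOCKS_PER_TRACK = 32   # assumed device geometry, invisible to programs

    def rbn_to_byte_offset(rbn: int) -> int:
        """The program's view: a flat array of fixed-length blocks."""
        return rbn * BLOCK_SIZE

    def rbn_to_physical(rbn: int) -> tuple[int, int]:
        """The device's internal mapping of a relative block number."""
        return divmod(rbn, BLOCKS_PER_TRACK)  # (track, block within track)

    print(rbn_to_byte_offset(100))  # 51200
    print(rbn_to_physical(100))     # (3, 4)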

For some applications, FBA not only offers simplicity, but an increase in throughput.

FBA is supported by VM/370 and DOS/VSE, but not MVS[b] or successor operating systems in the OS/360 line.

FCP attached SCSI


Processors with FICON channels can access SCSI drives using Fibre Channel Protocol (FCP). While z/VM and z/VSE fully support FCP, z/OS provides only limited support through IOSFBA.

Access


Sets of programming-interface macros and routines are collectively referred to as access methods, with names ending in "Access Method".

DOS/360 and successors


DOS/360 through z/VSE support datasets on DASD with the following access methods:[citation needed]

  • Sequential Access Method (SAM)
  • Direct Access Method (DAM)
  • Indexed Sequential Access Method (ISAM)
  • Virtual Storage Access Method (VSAM), from DOS/VS on

OS/360 and successors


OS/360 through z/OS support datasets on DASD with the following access methods:[citation needed]

  • Basic Direct Access Method (BDAM)
  • Basic Partitioned Access Method (BPAM)
  • Basic Sequential Access Method (BSAM)
  • Basic Indexed Sequential Access Method (BISAM)
  • Queued Indexed Sequential Access Method (QISAM)
  • Queued Sequential Access Method (QSAM)
  • Virtual Storage Access Method (VSAM), from OS/VS on

In MVS, starting with OS/VS2 Release 2 and continuing through z/OS, all of the access methods, including EXCP[VR], use the privileged STARTIO macro.

Terminology


In the 1964 first edition of the "IBM System/360 System Summary", IBM used the term file to describe collectively the devices now called DASD; files provided "random-access storage".[4] At the same time, IBM's product reference manual described such devices as "direct-access storage devices",[9] without any acronym.

An early public use of the acronym DASD is in IBM's March 1966 manual, the "Data File Handbook".[10] The earliest non-IBM use of the acronym DASD to refer to storage devices found by the Google Ngram Viewer dates from 1968.[11] From then on, use of the term grew exponentially until 1990, after which its usage declined substantially.[12]

Both drums and data cells have disappeared as products, so DASD remains a synonym for disk, flash, and optical devices. Modern mainframe DASD only very rarely consists of single disk drives; most commonly, "DASD" means large disk arrays employing RAID schemes. Current devices emulate CKD on FBA hardware.


Notes


References

from Grokipedia
A direct-access storage device (DASD) is a type of secondary storage that enables data to be retrieved or modified by specifying its exact physical or logical address, allowing random access without sequentially scanning prior records, in contrast to sequential-access media such as magnetic tapes. This capability makes DASDs essential for efficient data management in computing systems, particularly in environments requiring quick access to large volumes of information.

The concept of DASD originated with IBM's development of the Random Access Method of Accounting and Control (RAMAC) system in the early 1950s, culminating in the introduction of the IBM 350 Disk Storage Unit in 1956 as the world's first commercial hard disk drive. This unit, a stack of fifty 24-inch disks capable of storing 5 million characters, marked a significant advancement over punch cards and tapes by providing direct access at speeds up to 8,800 characters per second. With the launch of the IBM System/360 mainframe family in 1964, DASD technology became standardized, integrating rotating magnetic disk drives as core components for online transaction processing and data management in enterprise computing.

Over the decades, DASD evolved from early head-per-track designs to multi-platter Winchester technology, with notable models including the 3370 (introduced in 1979, offering 571 megabytes per unit) and the 3380 (1980, providing up to 2.52 billion characters of storage with reduced energy consumption). Later advancements, such as the 3390 in 1989, increased capacities to several gigabytes while improving reliability through error-correcting codes and faster access times.

Today, the term DASD persists primarily in IBM (mainframe) contexts, encompassing both traditional rotating disk drives and modern solid-state drives, which serve as fixed or removable volumes in logical volume managers for operating systems such as IBM i and AIX. These devices typically use block-oriented access methods, with block sizes ranging from 512 to 22,000 bytes depending on the format (e.g., Count Key Data or Fixed Block Architecture).

Overview

Definition and Characteristics

A direct-access storage device (DASD) is a type of secondary storage that enables data to be accessed by specifying a unique address for each physical record, allowing retrieval without the need to scan preceding records sequentially. This contrasts with sequential-access devices like magnetic tapes, which require reading from the beginning to reach a specific record. The term DASD was coined by IBM in 1964 to describe storage compatible with its System/360 mainframe architecture, initially referring to technologies such as hard disk drives, magnetic drums, and data cells.

Key characteristics of DASD include non-volatility, meaning data persists without power, and block-oriented organization, in which data is stored and accessed in fixed-size blocks for efficient handling. These devices support high-speed access through mechanical or electronic mechanisms involving seek time (the time to position the read/write head) and, for spinning media, rotational latency, enabling rapid data location compared to linear media. Over time, DASD capacities have evolved dramatically, from several megabytes per unit in the 1960s to terabytes in contemporary implementations, reflecting advances in recording density and materials.

Representative examples of DASD encompass hard disk drives (HDDs) using magnetic platters, solid-state drives (SSDs) based on flash memory, magnetic drum memory as an early form, and even optical discs for read-intensive applications. By providing random-access capabilities, DASD has been foundational in enabling multitasking operating systems and database management systems, which rely on quick, addressable data retrieval to support concurrent operations and indexed queries.

Comparison with Sequential-access Devices

Direct-access storage devices (DASD) fundamentally differ from sequential-access devices in their data retrieval mechanisms, enabling efficient random access that transformed computing applications. DASD use absolute addressing, such as cylinder-head-sector (CHS) or logical block addressing (LBA), to position the read/write head directly at the target location on the medium. This approach yields a roughly constant average access time, approximating O(1) time complexity for retrieving any record, independent of its position relative to previously accessed data. In contrast, sequential-access devices like magnetic tapes require linear traversal from the current head position to the desired record, resulting in O(n) time complexity for non-sequential operations, where n scales with the physical distance on the medium.

Performance characteristics underscore these access-pattern disparities. For DASD implemented as hard disk drives (HDDs), typical seek times (the duration to move the head to the correct track) range from 5 to 10 milliseconds, while rotational latency, the average wait for the target sector to rotate under the head, is 4.2 milliseconds (maximum 8.3 ms) on 7200 RPM drives. Sequential devices, by contrast, prioritize linear efficiency: modern Linear Tape-Open (LTO) generations deliver high sustained throughput of 300 to 400 MB/s for sequential reads and writes, but random access is severely limited, with average positioning times of 50 to 60 seconds due to the need for rewinding or fast-forwarding across potentially gigabytes of tape.

These differences directly influence suitable use cases. DASD excel in environments demanding frequent random I/O, such as database management systems for query processing and operating-system file management, where direct record retrieval supports real-time operations without scanning intervening records. Sequential devices are better suited for ordered data streams, including backups, archival storage, and audit logs, leveraging their high throughput for bulk transfers while tolerating slower repositioning. The shift to DASD enabled pivotal advancements in computing, particularly by supporting random I/O in interactive workloads like online transaction processing (OLTP), which replaced tape-driven batch processing with responsive, record-oriented systems capable of handling concurrent user requests efficiently.
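
A back-of-the-envelope calculation with the figures quoted above shows the gap (a Python sketch; the midpoints chosen are assumptions):

    RPM = 7200
    avg_rotation_ms = 60_000 / RPM / 2  # half a revolution ~= 4.2 ms
    avg_seek_ms = 7.5                   # midpoint of the 5-10 ms range

    hdd_access_ms = avg_seek_ms + avg_rotation_ms
    print(f"HDD random access: ~{hdd_access_ms:.1f} ms")  # ~11.7 ms

    tape_reposition_ms = 55 * 1000      # midpoint of the 50-60 s range
    ratio = tape_reposition_ms / hdd_access_ms
    print(f"Tape repositioning: ~{ratio:,.0f}x slower")   # ~4,714x

The disk's near-constant cost per access, versus repositioning time that grows with distance on tape, is exactly the O(1)-versus-O(n) distinction described above.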

History

Origins in IBM Systems

While the concept of direct-access storage predated the System/360 (as detailed in the introduction), the term "DASD" and the devices it describes emerged as critical components of IBM's System/360, announced on April 7, 1964, to address the fragmentation caused by IBM's prior incompatible product lines, such as the commercial IBM 1401 and the scientific IBM 7090, by standardizing storage and I/O architectures across a unified family of machines. This initiative aimed to enable seamless upgrades and compatibility, replacing disparate systems that had hindered data interchange and program portability in the early 1960s.

Pivotal to this ecosystem was the IBM 2311 Disk Storage Drive, introduced in 1964 alongside the System/360, which provided 7.25 megabytes of storage per removable disk pack spindle using six 14-inch disks. Another early innovation, the 2321 Data Cell Drive, announced in 1964 and shipped starting in 1965, utilized removable cells containing 200 short magnetic strips each to achieve up to 400 megabytes of capacity in a single unit, marking an ambitious attempt at high-density, cartridge-based mass storage. However, the 2321 faced persistent reliability challenges, including frequent tape jams in its complex retrieval mechanism (derisively called the "noodle picker"), leading to its withdrawal from marketing in January 1975.

The term "direct-access storage device" originated in IBM's technical documentation for the System/360 era, with the acronym DASD first appearing in the March 1966 Data File Handbook, which described these devices as enabling random record retrieval on magnetically coated disks or strips, in contrast to sequential media like punched cards and magnetic tapes. This terminology underscored the need for random-access capabilities to support emerging multiprogramming environments on the System/360, where multiple programs could concurrently access shared data without the delays inherent in sequential storage methods such as tape reels or card decks.

Evolution and Modern Usage

In the 1970s, IBM advanced DASD technology by transitioning from removable disk packs to non-removable, sealed assemblies, exemplified by the IBM 3350 introduced in 1976, which offered 317.5 MB per spindle in a fixed head-disk assembly (HDA) to enhance reliability and reduce maintenance. This shift addressed limitations of earlier removable-pack drives, such as the IBM 3330, by eliminating pack handling and improving reliability through sealed environments that minimized contamination. By the late 1970s and into the 1980s, DASD usage in mainframe documentation proliferated, with the term becoming standard for high-capacity storage systems.

During the 1980s and 1990s, DASD evolved toward array-based configurations, as seen with the IBM 3990 Storage Control, which supported multiple disk drives with caching and improved data paths to handle growing I/O demands in enterprise environments. This era marked the peak of the DASD term in mainframe contexts, where it encompassed controllers managing arrays that incorporated early redundancy features akin to RAID precursors, enabling scalable storage for business-critical applications. By the early 1990s, annual DASD capacity growth had moderated from 60% in the prior decade, reflecting maturation of the technology.

Entering the 2000s, DASD integrated solid-state drives (SSDs) and flash storage, particularly in IBM z Systems, with the 2012 introduction of Flash Express (generally available December 2012) providing a high-performance tier using flash cards to accelerate paging and I/O operations. Contemporary z Systems, including the IBM z17 announced in April 2025, support hybrid configurations combining HDDs and SSDs, allowing dynamic allocation for workloads requiring low latency, with enterprise drives exceeding 30 TB per unit as of 2025. In cloud environments, DASD now often refers to virtualized block storage, including AWS Elastic Block Store (EBS) volumes used in mainframe emulation environments to provide DASD-like volumes for testing and development in hybrid clouds.

The DASD term has declined outside IBM ecosystems, largely replaced by generic "disk storage" or "block storage" in broader computing, though it endures in mainframe documentation and JCL for defining datasets and volumes. This evolution has underpinned enterprise infrastructures by delivering scalable, direct-access capabilities; modern enterprise drives routinely surpass 20 TB per unit, supporting exabyte-scale analytics.

Architectures and Data Formats

Count Key Data (CKD)

Count Key Data (CKD) is a variable-length record format for direct-access storage devices (DASD) developed by IBM, serving as the foundational data-organization method for mainframe storage since its introduction with the System/360 family in 1964. This format enables efficient direct access to records on disk tracks, optimized for the parallel channel architecture of early mainframe systems. CKD remains the standard for DASD volumes in OS/360, MVS, and z/OS environments, where it supports core functions like volume table of contents (VTOC) management and direct access methods such as BDAM and BSAM.

The structure of a CKD track begins with a home address, which identifies the track's physical location in CCHH (cylinder and head) form, followed by a track descriptor record (Record 0) that provides metadata without a key or data field. Subsequent data records on the track each consist of three primary fields: the Count field, an optional Key field, and the Data field. The Count field is an 8-byte area containing the full CCHHR (cylinder, head, record) identifier for precise record addressing, along with the lengths of the Key and Data fields to enable hardware-level navigation and gap management between records. The Key field, if present, ranges from 1 to 255 bytes and holds an identifier (such as an account number or part identifier) used by applications for record selection. The Data field carries the variable-length user data, allowing records to vary in size up to the remaining track capacity.

Addressing in CKD relies on the CCHHR scheme embedded in the Count field of each record, which specifies the cylinder, head (surface), and record number within the track, facilitating direct physical access optimized for mainframe I/O channels. For records exceeding the available space on a single track, such as large variable-length entries, CKD supports track overflow, where the Count field includes an alternate track address to continue the data on the next available track, ensuring continuity without fragmentation. This mechanism is particularly useful in handling oversized records in sequential or indexed processing.

CKD excels in managing indexed sequential files through the Indexed Sequential Access Method (ISAM), where the Key field acts as a hardware-accelerated index for rapid direct retrieval and updates in prime, index, and overflow areas. ISAM datasets on CKD DASD organize records sequentially with track, cylinder, and master indexes, enabling efficient QISAM sequential scans or BISAM direct access via macros like GET and PUT.

The advantages of CKD lie in its flexibility for variable-length data, which minimizes storage waste by accommodating records of differing sizes without fixed padding, making it ideal for diverse mainframe applications like partitioned data sets and EXCP-level I/O. A key limitation arises in contemporary environments, however, where underlying disk drives employ fixed-block architecture (FBA): CKD must be emulated in microcode or software in storage controllers, introducing complexity in mapping variable-length records to fixed blocks and potential overhead in performance and space utilization. This emulation preserves compatibility but contrasts with the simpler native access of FBA systems.
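
The three fields can be modeled compactly (a Python sketch; field widths follow the description above, and the class is illustrative rather than an IBM-defined structure):

    import struct
    from dataclasses import dataclass

    @dataclass
    class CKDRecord:
        cylinder: int
        head: int
        record: int
        key: bytes   # optional Key field, 0-255 bytes
        data: bytes  # variable-length Data field

        def count_field(self) -> bytes:
            """8-byte Count field: CCHHR address, key length, data length."""
            return struct.pack(">HHBBH", self.cylinder, self.head,
                               self.record, len(self.key), len(self.data))

    rec = CKDRecord(cylinder=200, head=9, record=1,
                    key=b"ACCT0001", data=b"x" * 4000)
    assert len(rec.count_field()) == 8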

Fixed Block Architecture (FBA)

Fixed Block Architecture (FBA) is a data format for direct-access storage devices that organizes storage into fixed-length blocks, serving as a simpler alternative to the variable-length Count Key Data (CKD) format that preceded it. Introduced in 1979 with the IBM 3310 and 3370 drives, FBA was designed to streamline data access by eliminating key fields and variable record handling, thereby reducing complexity in certain mainframe environments. Subsequent devices, including the 3375 and 3380 models, also supported FBA formatting, expanding its applicability while maintaining compatibility through software emulation of CKD features where required.

In FBA, each track is divided into a fixed number of uniform blocks, typically ranging from 512 to 4096 bytes in size, with no dedicated key field per block, simplifying the format and minimizing overhead. Data addressing relies on sequential relative block numbers (RBNs) starting from block 0, rather than record-specific identifiers, which enables efficient linear access and aligns well with block-oriented protocols. This block-based numbering reduces seek and transfer overhead, particularly for drives resembling minicomputer architectures, and supports features like an indexed Volume Table of Contents (VTOC) formatted similarly to VSAM relative record datasets for space management.

FBA gained native support in operating systems such as VM/370 and DOS/VSE and their successors z/VM and z/VSE, where it facilitated direct use for minidisks and volumes without extensive reformatting. In contrast, MVS and z/OS do not support FBA natively and require software emulation to interface with such devices, limiting its adoption in larger-scale systems. The architecture's advantages include easier integration with open-systems standards due to its block-oriented design, making it suitable for smaller mainframes and hybrid environments where compatibility with non-IBM peripherals is beneficial.

Fibre Channel Protocol (FCP) and Other Protocols

The Fibre Channel Protocol (FCP) enables the attachment of SCSI storage devices to IBM z Systems mainframes using Fibre Connection (FICON) channels, allowing these devices to function as direct-access storage devices (DASD). FCP, which transports SCSI commands over Fibre Channel networks, was introduced in the late 1990s alongside FICON channels in 1998, providing a high-speed serial interface that superseded earlier parallel channel technologies. This protocol supports access to industry-standard disks, including those in storage arrays, through single- or multi-channel switches, facilitating integration of commodity hardware into mainframe environments.

In IBM z Systems, FCP receives full support in operating systems such as z/VM and z/VSE, where SCSI disks can be directly accessed or emulated as traditional DASD volumes for guest and system use. For example, z/VM employs an FBA emulation layer to present FCP-attached logical units (LUNs) as Fixed Block Architecture (FBA) devices, compatible with a range of applications. In contrast, z/OS provides only partial support for FCP, primarily through emulation to mimic Count Key Data (CKD) volumes, as native SCSI handling is limited and not optimized for its traditional DASD workflows. This emulation allows z/OS to use FCP devices but imposes constraints, such as reduced performance for certain access patterns and restrictions on device sharing without N_Port ID Virtualization (NPIV).

Technically, FCP devices on z Systems are addressed using an 8-byte MBBCCHHR format, which specifies the model, bin, cylinder, head, and record for emulated tracks, enabling precise mapping of SCSI LUNs to mainframe device numbers (e.g., 0.0.xxxx subchannels). This addressing scheme supports the use of commodity drives as DASD by assigning them unique worldwide port names (WWPNs) and LUNs via zoning and masking in the storage area network (SAN). FCP's integration with RAID-configured storage, such as the IBM DS8000 series, further enhances reliability by presenting redundant-array volumes over FICON/FCP links, with ports configurable for either FCP or FICON protocols.

Preceding FCP, the Enterprise Systems Connection (ESCON) protocol, introduced in 1990, served as a fiber-optic serial interface for connecting mainframe channels to DASD and tape devices, offering up to 17 MB/s bandwidth over distances of 9 km with switches. ESCON acted as a bridge to modern Fibre Channel-based protocols like FICON and FCP but was phased out in favor of higher-speed alternatives. For non-mainframe DASD environments, contemporary protocols such as Serial ATA (SATA) and NVM Express (NVMe, including NVMe over PCIe) provide direct attachment for hard disk drives and solid-state drives, emphasizing plug-and-play connectivity in distributed systems.

Despite its advantages, FCP in z/OS environments is often limited to cost-effective storage pools where CKD emulation is applied, as the operating system prioritizes native ECKD (Extended CKD) for performance-critical workloads, avoiding the overhead of direct SCSI access. This preference stems from z/OS's legacy optimization for mainframe-specific DASD geometries, making FCP more suitable for auxiliary or z/VM-based storage tiers rather than primary production volumes.

Access Methods

In DOS/360 and Successors

DOS/360, introduced by IBM in 1966, was designed for smaller System/360 configurations with limited memory and processing capability, serving as a disk-based operating system for batch-oriented environments on direct-access storage devices (DASD). It supported single-tasking operation, with multitasking introduced in later variants like DOS/VS in the 1970s, emphasizing efficient I/O for applications running in constrained storage environments of up to 256 KB. Successors such as DOS/VS and DOS/VSE extended these capabilities to System/370 hardware, incorporating virtual storage management while retaining core DASD handling for smaller-scale systems compared to OS/360 equivalents.

Access to DASD in DOS/360 relied on basic macros for sequential and direct operations, with BSAM providing unbuffered sequential read/write access to records on DASD volumes, suitable for simple file processing without advanced queuing. QSAM enhanced this by introducing buffering and label processing, allowing queued I/O requests to improve throughput for sequential datasets on DASD, though it required careful buffer allocation to avoid storage overflows in low-memory setups. For low-level direct access, the EXCP macro enabled programmers to issue custom channel programs directly to DASD controllers, bypassing higher-level access methods for optimized control over seek and transfer operations, particularly useful in performance-critical batch jobs.

DOS/360 organized DASD data into partitioned datasets using the Partitioned Access Method (PAM), where multiple sequential files or members (e.g., load modules) shared a single DASD extent, managed via a directory for member lookup. Addressing within these datasets employed the 3-byte relative track (TTR) scheme, specifying a relative track number (TT) and record number (R) from the dataset's starting extent, facilitating direct positioning without full cylinder-head-record calculations.

Primarily, DOS/360 and its early successors supported the Count Key Data (CKD) format for DASD, where records included count fields for length and optional keys for indexing, enabling variable-length blocks on tracks. Later versions, such as DOS/VSE with Advanced Functions in the late 1970s, introduced Fixed Block Architecture (FBA) support for compatible devices, using fixed 512-byte blocks addressed by relative numbers to simplify I/O in virtual storage contexts while maintaining compatibility with CKD volumes. Limited multitasking in these systems restricted concurrent DASD access to a single task or supervisor-managed queues, prioritizing reliability over complex multiprogramming.

In OS/360 and Successors

OS/360, introduced by IBM in 1966, represented a major advance in operating systems for large-scale mainframes, providing multiprogramming capabilities and robust support for direct-access storage devices (DASD). This system and its successors, including MVS (Multiple Virtual Storage) from 1974 and z/OS from 2000, emphasized efficient DASD access through specialized methods tailored for high-volume, shared environments. Access methods in this lineage built on channel-based I/O, enabling concurrent processing and device independence while natively supporting Count Key Data (CKD) formats on devices like the IBM 2311 and 2314 disk packs.

Key access methods included the Basic Sequential Access Method (BSAM) and Queued Sequential Access Method (QSAM) for sequential processing of DASD data sets. BSAM offered low-level control using READ and WRITE macros for block transfers, requiring programmer-managed buffering and supporting fixed (F), variable (V), and undefined (U) record formats on direct-access volumes. QSAM extended BSAM with automated buffering (simple or exchange modes) via GET and PUT macros, optimizing overlap of I/O and computation in multiprogrammed settings while maintaining compatibility for update-in-place operations. For indexed access, the Indexed Sequential Access Method (ISAM) organized records by keys (1-255 bytes) across prime and overflow areas, using cylinder and track indexes for rapid retrieval; it supported direct key-based GET and PUT operations, though reorganization was needed to manage overflow chains.

DASD interactions relied on channel programs built from chains of Channel Command Words (CCWs), initiated through the EXCP macro for direct device control or integrated into higher-level methods. Addressing used a 3-byte relative track address (TTR) scheme, specifying a relative track number and record position relative to the start of the data set, enabling precise block location on CKD volumes; for example, the NOTE and POINT macros in BSAM facilitated TTR-based repositioning. While CKD was native, Fixed Block Architecture (FBA) support emerged in successor systems through emulation layers, allowing compatibility with fixed-block devices via VSAM or utilities without altering core addressing.

In the evolution to OS/VS1, OS/VS2, MVS, and z/OS, the Virtual Storage Access Method (VSAM), introduced in 1973, superseded ISAM by providing advanced indexing and clustering for DASD. VSAM organizes data into clusters, combining index and data components managed by an integrated catalog, and supports key-sequenced (KSDS), entry-sequenced (ESDS), and relative record (RRDS) datasets with balanced-tree indexes for efficient random and sequential access, akin to B-tree structures. It uses control intervals (512 bytes to 32 KB) and control areas for space management, including CI/CA splits to handle insertions dynamically, reducing reorganization needs compared to ISAM.

Error handling incorporated error-correcting codes (ECC) on DASD tracks, detecting and correcting single-burst errors in count, key, and data fields via check bytes; in MVS environments, correctable errors were handled transparently by storage directors like the 3880, while uncorrectable ones triggered retries and recording in SYS1.LOGREC for recovery. This framework ensured compatibility across the OS/360 lineage, scaling to modern features like extended addressability.
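
The TTR-to-absolute conversion that access methods perform can be sketched for the simple case of a single-extent data set (a Python sketch; the geometry constant and extent origin are assumptions, and real data sets may span several extents, which the system resolves through the DEB):

    TRACKS_PER_CYLINDER = 15  # e.g., 3390-style geometry

    def ttr_to_cchhr(tt: int, r: int, start_cc: int, start_hh: int):
        """Resolve relative track tt, record r to an absolute (CC, HH, R)."""
        absolute_track = start_cc * TRACKS_PER_CYLINDER + start_hh + tt
        cc, hh = divmod(absolute_track, TRACKS_PER_CYLINDER)
        return cc, hh, r

    # A data set whose extent begins at cylinder 100, head 0:
    print(ttr_to_cchhr(20, 1, start_cc=100, start_hh=0))  # (101, 5, 1)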

In Contemporary z Systems

In contemporary IBM z Systems, direct-access storage devices (DASD) are integral to the z/OS and z/VM operating environments of the 2020s, providing high-performance storage for mission-critical workloads through virtualization and support for Fibre Channel Protocol (FCP)-attached solid-state drives (SSDs) and hard disk drives (HDDs). These systems leverage the DS8000 series, which emulates traditional DASD volumes while enabling FCP connectivity for open-systems storage integration, allowing z/OS to access SSD-based arrays for enhanced I/O throughput in hybrid configurations.

Access methods in these environments emphasize scalability and automated management, with extended-format VSAM providing scalable dataset handling and VSAM Record Level Sharing (RLS) enabling concurrent read/write access across multiple systems in a Parallel Sysplex. VSAM RLS, introduced to support multisystem sharing, integrates with the Coupling Facility for lock management, allowing applications like CICS transactions and batch jobs to share VSAM datasets without serialization overhead. Complementing this, System Managed Storage (SMS) automates DASD allocation by defining storage groups, classes, and management policies, dynamically selecting volumes based on data attributes and performance needs to optimize space utilization and I/O efficiency.

Addressing in modern z Systems uses an 8-byte extended format (MBBCCHHR), where M denotes the model, BB the bin, CC the cylinder, HH the head, and R the record, supporting volumes exceeding 65,520 cylinders via Extended Address Volumes (EAV) for capacities up to 1.18 exabytes. This scheme facilitates precise record location in virtualized environments, particularly for database integration; for instance, Db2 for z/OS stores table spaces and indexes on DASD volumes managed through DFSMS, leveraging FBA or CKD formats for efficient query processing and recovery. Performance tuning often incorporates zHyperWrite, which enables parallel synchronous writes to primary and secondary storage in Peer-to-Peer Remote Copy (PPRC) setups, significantly reducing log-write latency for Db2 transactions in mirrored configurations.

Emerging capabilities extend DASD functionality to hybrid cloud architectures, where z/OS supports bursting workloads to remote storage pools via DFSMS integration with cloud object storage, enabling automatic tiering of less frequently accessed data sets while maintaining Sysplex-wide visibility. Security is bolstered by FIPS 140-2 (and later 140-3) compliant encryption on DS8000 arrays, providing data-at-rest protection through hardware-based Advanced Encryption Standard (AES) keys managed at the drive level, ensuring compliance for regulated industries without impacting I/O performance.

Terminology and Addressing

Key Concepts and Terms

In direct-access storage devices (DASD), a logical record represents the basic unit of user data as perceived by applications and operating systems, consisting of related fields processed as a single entity, such as a line of text or a database entry. In contrast, a physical block is the smallest unit of data transfer between the device and the host system, grouping one or more logical records along with any necessary headers or metadata for storage efficiency on the physical medium.

A track is the circular path along the surface of a rotating disk platter where data is magnetically recorded, spanning a full 360° revolution and capable of holding multiple records or blocks. A cylinder is formed by the vertical alignment of tracks at the same radial position across all platters in a multi-platter assembly, allowing access by multiple read/write heads without repositioning, which minimizes seek times. Key physical components of DASD include the spindle, the motorized shaft that rotates the stack of disk platters at a constant speed, typically measured in revolutions per minute (RPM), and the head assembly, the read/write heads mounted on actuator arms that position over specific tracks to perform data transfer, operating in close proximity to the platter surfaces within a sealed environment to prevent contamination. In cases where a record exceeds the capacity of a single track, it may use track overflow, extending the data across subsequent tracks while maintaining logical continuity through indexing or chaining mechanisms.

In IBM mainframe environments, a dataset serves as the equivalent of a file in other systems: a named collection of related logical records stored and retrieved via an assigned identifier, supporting various organizations such as sequential or indexed. A volume, on the other hand, denotes the physical storage unit, such as a disk pack or head-disk assembly (HDA), which can contain multiple datasets and is mounted as a single addressable entity for system access.

Performance in DASD is characterized by latency, the time delay before data transfer begins, comprising seek time (positioning the head to the target cylinder and track) plus rotational delay (waiting for the desired sector to rotate under the head, averaging half a revolution). Throughput for random operations is commonly measured in I/O operations per second (IOPS), quantifying the device's capacity to handle non-sequential reads and writes, which is critical for workloads involving scattered data accesses.
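
These latency terms combine into a rough ceiling on random-access throughput, as the following sketch shows (Python; the seek and transfer figures are assumed values for a single rotating drive):

    RPM = 7200
    avg_seek_ms = 8.0                   # assumed average seek time
    avg_rotation_ms = 60_000 / RPM / 2  # half a revolution on average
    transfer_ms = 0.1                   # assumed per-operation transfer

    latency_ms = avg_seek_ms + avg_rotation_ms + transfer_ms
    iops = 1000 / latency_ms
    print(f"~{latency_ms:.1f} ms per random I/O -> ~{iops:.0f} IOPS")
    # ~12.3 ms per random I/O -> ~82 IOPS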

Addressing Schemes

In IBM's early direct-access storage device (DASD) systems, addressing schemes evolved to accommodate varying levels of device complexity and capacity. In DOS/360, locations on DASD volumes were primarily specified using a 3-byte relative track address known as TTR, where the first two bytes (TT) gave the relative track number and the third byte (R) the record position within that track. This scheme facilitated efficient access in smaller-scale environments by abstracting the physical geometry into positions relative to the start of a dataset.

With the introduction of OS/360, addressing shifted to a 4-byte absolute track format called CCHH, which directly encoded the cylinder number (two bytes) and head number (two bytes), enabling precise specification of physical locations on multi-cylinder, multi-head devices. This absolute format supported larger volumes and was integral to channel programs that interacted with DASD hardware. As DASD capacities grew, particularly with Extended Address Volumes (EAV) in z/OS, the addressing scheme expanded to an 8-byte absolute format, MBBCCHHR (model byte, bin bytes, cylinder, head, record), to handle volumes exceeding 65,520 cylinders by incorporating device-model identification, device-specific bytes, and extended addressing. This format is employed in modern environments using the Fibre Channel Protocol (FCP) for SCSI-based DASD, allowing compatibility with both traditional CKD (Count Key Data) and FBA (Fixed Block Architecture) devices while supporting up to 2.1 billion tracks per volume.

IBM DASD addressing distinguishes between absolute and relative modes to balance precision and portability. Absolute addressing, using formats like CCHH or MBBCCHHR, specifies exact physical locations on a volume, which is essential for low-level I/O operations but ties programs to specific hardware geometries. Relative addressing, such as TTR, expresses positions as offsets from the beginning of a dataset or extent, promoting device independence by converting to absolute addresses at run time via system routines. The Volume Table of Contents (VTOC), a specialized dataset on each DASD volume, manages allocation by maintaining Data Set Control Blocks (DSCBs) that record extent locations in absolute or relative terms, enabling the Direct Access Device Space Management (DADSM) routines to search, allocate, and deallocate space without duplicating physical tracks.

Outside IBM's ecosystem, non-IBM DASD and general disk drives typically employ logical block addressing (LBA), a linear scheme that identifies data blocks by sequential integers starting from 0, with each block traditionally sized at 512 bytes, abstracting underlying physical structures like sectors and tracks for broader compatibility across protocols such as SCSI and ATA.
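
The relationship between geometric and linear addressing can be illustrated with the classic CHS-to-LBA formula (a Python sketch; the geometry constants are assumed values):

    HEADS = 16               # heads (tracks) per cylinder
    SECTORS_PER_TRACK = 63   # sectors per track; CHS sectors number from 1

    def chs_to_lba(c: int, h: int, s: int) -> int:
        """Map a cylinder/head/sector triple to a linear block number."""
        return (c * HEADS + h) * SECTORS_PER_TRACK + (s - 1)

    def lba_to_chs(lba: int) -> tuple[int, int, int]:
        """Invert the mapping back to cylinder/head/sector."""
        c, rem = divmod(lba, HEADS * SECTORS_PER_TRACK)
        h, s = divmod(rem, SECTORS_PER_TRACK)
        return c, h, s + 1

    assert lba_to_chs(chs_to_lba(2, 3, 4)) == (2, 3, 4)
    print(chs_to_lba(2, 3, 4))  # 2208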

