iSCSI
Internet Small Computer Systems Interface (iSCSI; /aɪˈskʌzi/ eye-SKUZ-ee) is an Internet Protocol-based storage networking standard for linking data storage facilities. iSCSI provides block-level access to storage devices by carrying SCSI commands over a TCP/IP network. It facilitates data transfers over intranets and the management of storage over long distances. It can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval.
The protocol allows clients (called initiators) to send SCSI commands (CDBs) to storage devices (targets) on remote servers. It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into storage arrays while providing clients (such as database and web servers) with the illusion of locally attached SCSI disks.[1] It mainly competes with Fibre Channel, but unlike traditional Fibre Channel, which usually requires dedicated cabling,[a] iSCSI can be run over long distances using existing network infrastructure.[2] iSCSI was pioneered by IBM and Cisco in 1998 and submitted as a draft standard in March 2000.[3]
Concepts
In essence, iSCSI allows two hosts to negotiate and then exchange SCSI commands using Internet Protocol (IP) networks. By doing this, iSCSI takes a popular high-performance local storage bus and emulates it over a wide range of networks, creating a storage area network (SAN). Unlike some SAN protocols, iSCSI requires no dedicated cabling; it can be run over existing IP infrastructure. As a result, iSCSI is often seen as a low-cost alternative to Fibre Channel, which requires dedicated infrastructure except in its FCoE (Fibre Channel over Ethernet) form. However, the performance of an iSCSI SAN deployment can be severely degraded if not operated on a dedicated network or subnet (LAN or VLAN), due to competition for a fixed amount of bandwidth.
Although iSCSI can communicate with arbitrary types of SCSI devices, system administrators almost always use it to allow servers (such as database servers) to access disk volumes on storage arrays. iSCSI SANs often have one of two objectives:
- Storage consolidation
- Organizations move disparate storage resources from servers around their network to central locations, often in data centers; this allows for more efficiency in the allocation of storage, as the storage itself is no longer tied to a particular server. In a SAN environment, a server can be allocated a new disk volume without any changes to hardware or cabling.
- Disaster recovery
- Organizations mirror storage resources from one data center to a remote data center, which can serve as a hot standby in the event of a prolonged outage. In particular, iSCSI SANs allow entire disk arrays to be migrated across a WAN with minimal configuration changes, in effect making storage "routable" in the same manner as network traffic.
Initiator
An initiator functions as an iSCSI client. An initiator typically serves the same purpose to a computer as a SCSI bus adapter would, except that, instead of physically cabling SCSI devices (like hard drives and tape changers), an iSCSI initiator sends SCSI commands over an IP network. Initiators fall into two broad types:
A software initiator uses code to implement iSCSI. Typically, this happens in a kernel-resident device driver that uses the existing network card (NIC) and network stack to emulate SCSI devices for a computer by speaking the iSCSI protocol. Software initiators are available for most popular operating systems and are the most common method of deploying iSCSI.
A hardware initiator uses dedicated hardware, typically in combination with firmware running on that hardware, to implement iSCSI. A hardware initiator mitigates the overhead of iSCSI and TCP processing and Ethernet interrupts, and therefore may improve the performance of servers that use iSCSI. An iSCSI host bus adapter (more commonly, HBA) implements a hardware initiator. A typical HBA is packaged as a combination of a Gigabit (or 10 Gigabit) Ethernet network interface controller, some kind of TCP/IP offload engine (TOE) technology and a SCSI bus adapter, which is how it appears to the operating system. An iSCSI HBA can include PCI option ROM to allow booting from an iSCSI SAN.
An iSCSI offload engine, or iSOE card, offers an alternative to a full iSCSI HBA. An iSOE "offloads" the iSCSI initiator operations for this particular network interface from the host processor, freeing up CPU cycles for the main host applications. iSCSI HBAs or iSOEs are used when the additional performance enhancement justifies the additional expense of using an HBA for iSCSI,[4] rather than using a software-based iSCSI client (initiator). iSOE may be implemented with additional services such as TCP offload engine (TOE) to further reduce host server CPU usage.
Target
The iSCSI specification refers to a storage resource located on an iSCSI server (more generally, one of potentially many instances of iSCSI storage nodes running on that server) as a target.
An iSCSI target is often a dedicated network-connected hard disk storage device, but may also be a general-purpose computer, since as with initiators, software to provide an iSCSI target is available for most mainstream operating systems.
Common deployment scenarios for an iSCSI target include:
Storage array
In a data center or enterprise environment, an iSCSI target often resides in a large storage array. These arrays can be in the form of commodity hardware with free-software-based iSCSI implementations, or as commercial products such as StorTrends, Pure Storage, HP StorageWorks, EqualLogic, Tegile Systems, Nimble Storage, IBM Storwize family, Isilon, NetApp filer, Dell EMC, Kaminario, NS-series, CX4, VNX, VNXe, VMAX, Hitachi Data Systems HNAS, or Pivot3 vSTAC.
A storage array usually provides distinct iSCSI targets for numerous clients.[5]
Software target
Nearly all modern mainstream server operating systems (such as BSD, Linux, Solaris or Windows Server) can provide iSCSI target functionality, either as a built-in feature or with supplemental software. Some specific-purpose operating systems implement iSCSI target support.
Logical unit number
In SCSI terminology, LU stands for logical unit, which is specified by a unique logical unit number. A LUN represents an individually addressable (logical) SCSI device that is part of a physical SCSI device (target). In an iSCSI environment, LUNs are essentially numbered disk drives. An initiator negotiates with a target to establish connectivity to a LUN; the result is an iSCSI connection that emulates a connection to a SCSI hard disk. Initiators treat iSCSI LUNs the same way as they would a raw SCSI or IDE hard drive; for instance, rather than mounting remote directories as would be done in NFS or CIFS environments, iSCSI systems format and directly manage filesystems on iSCSI LUNs.
In enterprise deployments, LUNs usually represent subsets of large RAID disk arrays, often allocated one per client. iSCSI imposes no rules or restrictions on multiple computers sharing individual LUNs; it leaves shared access to a single underlying filesystem as a task for the operating system.
Network booting
For general data storage on an already-booted computer, any type of generic network interface may be used to access iSCSI devices.[citation needed] However, a generic consumer-grade network interface is not able to boot a diskless computer from a remote iSCSI data source.[citation needed] Instead, it is commonplace for a server to load its initial operating system from a TFTP server or local boot device, and then use iSCSI for data storage once booting from the local device has finished.[citation needed]
A separate DHCP server may be configured to assist interfaces equipped with network boot capability to be able to boot over iSCSI. In this case, the network interface looks for a DHCP server offering a PXE or BOOTP boot image.[6] This is used to kick off the iSCSI remote boot process, using the booting network interface's MAC address to direct the computer to the correct iSCSI boot target[citation needed]. One can then use a software-only approach to load a small boot program which can in turn mount a remote iSCSI target as if it were a local SCSI drive and then start the boot process from that iSCSI target[citation needed]. This can be achieved using an existing Preboot Execution Environment (PXE) boot ROM, which is available on many wired Ethernet adapters. The boot code can also be loaded from CD/DVD, floppy disk (or floppy disk image) and USB storage, or it can replace existing PXE boot code on adapters that can be re-flashed.[7] The most popular free software to offer iSCSI boot support is iPXE.[8]
Most Intel Ethernet controllers for servers support iSCSI boot.[9]
Addressing
iSCSI uses TCP (typically TCP ports 860 and 3260) for the protocol itself, with higher-level names used to address the objects within the protocol. Special names refer to both iSCSI initiators and targets. iSCSI provides three name formats:
- iSCSI Qualified Name (IQN)
- Format: The iSCSI Qualified Name is documented in RFC 3720, with further examples of names in RFC 3721. Briefly, the fields are:
- literal iqn (iSCSI Qualified Name)
- date (yyyy-mm) that the naming authority took ownership of the domain
- reversed domain name of the authority (e.g. org.alpinelinux, com.example, to.yp.cr)
- Optional ":" prefixing a storage target name specified by the naming authority.
- From the RFC:[10]
| Type | . | Date | . | Naming Auth | : | String defined by example.com Naming Authority |
|---|---|---|---|---|---|---|
| iqn | . | 1992-01 | . | com.example | : | storage:diskarrays-sn-a8675309 |
| iqn | . | 1992-01 | . | com.example | | |
| iqn | . | 1992-01 | . | com.example | : | storage.tape1.sys1.xyz |
| iqn | . | 1992-01 | . | com.example | : | storage.disk2.sys1.xyz |
- Extended Unique Identifier (EUI)
- Format: eui.{EUI-64 bit address} (e.g. eui.02004567A425678D)
- T11 Network Address Authority (NAA)
- Format: naa.{NAA 64 or 128 bit identifier} (e.g. naa.52004567BA64678D)
IQN format addresses occur most commonly. They are qualified by a date (yyyy-mm) because domain names can expire or be acquired by another entity.
The IEEE Registration Authority provides EUIs in accordance with the EUI-64 standard. NAA identifiers incorporate an OUI, which is also assigned by the IEEE Registration Authority. NAA name formats were added to iSCSI in RFC 3980 to provide compatibility with naming conventions used in Fibre Channel and Serial Attached SCSI (SAS) storage technologies.
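As a rough illustration of the three formats above (not a full RFC 3722 string-profile validator), the following Python sketch classifies a node name by its prefix and basic shape; the regular expressions and the helper name are simplified assumptions.

```python
import re

# Simplified patterns for the three iSCSI name formats described above
# (iqn., eui., naa.); real implementations validate names more strictly
# per RFC 3720/3980 and the RFC 3722 string profile.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:[^\s]+)?$")
EUI_RE = re.compile(r"^eui\.[0-9A-Fa-f]{16}$")                     # EUI-64: 16 hex digits
NAA_RE = re.compile(r"^naa\.[0-9A-Fa-f]{16}([0-9A-Fa-f]{16})?$")   # 64- or 128-bit identifier

def name_format(name: str) -> str:
    """Classify an iSCSI node name by its prefix and basic shape."""
    if IQN_RE.match(name):
        return "IQN"
    if EUI_RE.match(name):
        return "EUI"
    if NAA_RE.match(name):
        return "NAA"
    return "invalid"

if __name__ == "__main__":
    for n in ("iqn.1992-01.com.example:storage:diskarrays-sn-a8675309",
              "eui.02004567A425678D",
              "naa.52004567BA64678D"):
        print(n, "->", name_format(n))
```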
Usually, an iSCSI participant can be defined by three or four fields:
- Hostname or IP Address (e.g., "iscsi.example.com")
- Port Number (e.g., 3260)
- iSCSI Name (e.g., the IQN "iqn.2003-01.com.ibm:00.fcd0ab21.shark128")
- An optional CHAP Secret (e.g., "secretsarefun")
iSNS
iSCSI initiators can locate appropriate storage resources using the Internet Storage Name Service (iSNS) protocol. In theory, iSNS provides iSCSI SANs with the same management model as dedicated Fibre Channel SANs. In practice, administrators can satisfy many deployment goals for iSCSI without using iSNS.
Security
Authentication
iSCSI initiators and targets prove their identity to each other using CHAP, which includes a mechanism to prevent cleartext passwords from appearing on the wire. By itself, CHAP is vulnerable to dictionary attacks, spoofing, and reflection attacks. If followed carefully, the best practices for using CHAP within iSCSI reduce the surface for these attacks and mitigate the risks.[11]
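For concreteness, the sketch below shows the underlying CHAP computation (RFC 1994, MD5 algorithm) that iSCSI carries in its login keys CHAP_I, CHAP_C, and CHAP_R. The challenge length and the secret (reused from the example earlier in this article) are illustrative, and a real deployment would typically also configure mutual authentication.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP (RFC 1994) response: MD5 over the identifier byte, the shared
    secret, and the challenge issued by the authenticator."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Target side: issue an identifier (CHAP_I) and a challenge (CHAP_C).
chap_i = 1
challenge = os.urandom(16)      # challenge length is implementation-chosen
secret = b"secretsarefun"       # shared secret provisioned on both sides

# Initiator side: compute CHAP_R from the received identifier and challenge.
chap_r = chap_response(chap_i, secret, challenge)

# Target side: recompute and compare to authenticate the initiator.
assert chap_r == chap_response(chap_i, secret, challenge)
print("CHAP_R =", chap_r.hex())
```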
Additionally, as with all IP-based protocols, IPsec can operate at the network layer. The iSCSI negotiation protocol is designed to accommodate other authentication schemes, though interoperability issues limit their deployment.
Logical network isolation
To ensure that only valid initiators connect to storage arrays, administrators most commonly run iSCSI only over logically isolated backchannel networks. In this deployment architecture, only the management ports of storage arrays are exposed to the general-purpose internal network, and the iSCSI protocol itself is run over dedicated network segments or VLANs. This mitigates authentication concerns; unauthorized users are not physically provisioned for iSCSI, and thus cannot talk to storage arrays. However, it also creates a transitive trust problem, in that a single compromised host with an iSCSI disk can be used to attack storage resources for other hosts.
Physical network isolation
While iSCSI can be logically isolated from the general network using VLANs only, the traffic still traverses the same kind of equipment as any other network and may use any cable or port as long as there is a complete signal path between source and target. A single cabling mistake by a network technician can compromise the barrier of logical separation, and an accidental bridging may not be immediately detected because it does not cause network errors.
In order to further differentiate iSCSI from the regular network and prevent cabling mistakes when changing connections, administrators may implement self-defined color-coding and labeling standards, such as only using yellow-colored cables for the iSCSI connections and only blue cables for the regular network, and clearly labeling ports and switches used only for iSCSI.
While iSCSI could be implemented as just a VLAN cluster of ports on a large multi-port switch that is also used for general network usage, the administrator may instead choose to use physically separate switches dedicated to iSCSI VLANs only, to further prevent the possibility of an incorrectly connected cable plugged into the wrong port bridging the logical barrier.
Authorization
Because iSCSI aims to consolidate storage for many servers into a single storage array, iSCSI deployments require strategies to prevent unrelated initiators from accessing storage resources. As a pathological example, a single enterprise storage array could hold data for servers variously regulated by the Sarbanes–Oxley Act for corporate accounting, HIPAA for health benefits information, and PCI DSS for credit card processing. During an audit, storage systems must demonstrate controls to ensure that a server under one regime cannot access the storage assets of a server under another.
Typically, iSCSI storage arrays explicitly map initiators to specific target LUNs; an initiator authenticates not to the storage array, but to the specific storage asset it intends to use. However, because the target LUNs for SCSI commands are expressed both in the iSCSI negotiation protocol and in the underlying SCSI protocol, care must be taken to ensure that access control is provided consistently.
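A minimal sketch of the mapping idea described above, with hypothetical initiator names and LUN numbers; real arrays enforce this masking in their own management layer rather than in application code.

```python
# Illustrative LUN-masking table: each initiator IQN is granted access only
# to the LUNs explicitly mapped to it. Names and LUN numbers are made up.
lun_masking = {
    "iqn.2003-01.com.example:web01": {0, 1},
    "iqn.2003-01.com.example:db01":  {2},
}

def may_access(initiator_iqn: str, lun: int) -> bool:
    """Return True only if the initiator is explicitly mapped to the LUN."""
    return lun in lun_masking.get(initiator_iqn, set())

assert may_access("iqn.2003-01.com.example:db01", 2)
assert not may_access("iqn.2003-01.com.example:web01", 2)
```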
Confidentiality and integrity
For the most part, iSCSI operates as a cleartext protocol that provides no cryptographic protection for data in motion during SCSI transactions. As a result, an attacker who can listen in on iSCSI Ethernet traffic can:[12]
- Reconstruct and copy the files and filesystems being transferred on the wire
- Alter the contents of files by injecting fake iSCSI frames
- Corrupt filesystems being accessed by initiators, exposing servers to software flaws in poorly tested filesystem code.
These problems do not occur only with iSCSI, but rather apply to any SAN protocol without cryptographic security. IP-based security protocols, such as IPsec, can provide standards-based cryptographic protection to this traffic.
Implementations
Operating systems
The dates in the following table denote the first appearance of a native driver in each operating system. Third-party drivers for Windows and Linux were available as early as 2001, specifically for attaching IBM's IP Storage 200i appliance.[13]
| OS | First release date | Version | Features |
|---|---|---|---|
| IBM i | 2006-10 | V5R4M0 (as i5/OS) | Target, Multipath |
| VMware ESX | 2006-06 | ESX 3.0, ESX 4.0, ESXi 5.x, ESXi 6.x | Initiator, Multipath |
| AIX | 2002-10 | AIX 5.3 TL10, AIX 6.1 TL3 | Initiator, Target |
| Windows | 2003-06 | 2000, XP Pro, 2003, Vista, 2008, 2008 R2, 7, 8, Server 2012, 8.1, Server 2012 R2, 10, Server 2016, 11, Server 2019 | Initiator, Target,[b] Multipath |
| NetWare | 2003-08 | NetWare 5.1, 6.5, & OES | Initiator, Target |
| HP-UX | 2003-10 | HP 11i v1, HP 11i v2, HP 11i v3 | Initiator |
| Solaris | 2002-05 | Solaris 10, OpenSolaris | Initiator, Target, Multipath, iSER |
| Linux | 2005-06 | 2.6.12, 3.1 | Initiator (2.6.12), Target (3.1), Multipath, iSER, VAAI[c] |
| OpenBSD | 2009-10 | 4.9 | Initiator |
| NetBSD | 2002-06 | 4.0, 5.0 | Initiator (5.0), Target (4.0) |
| FreeBSD | 2008-02 | 7.0 | Initiator (7.0), Target (10.0), Multipath, iSER, VAAI[c] |
| OpenVMS | 2002-08 | 8.3-1H1 | Initiator, Multipath |
| macOS | 2008-07 | 10.4— | N/A[d] |
Targets
Most iSCSI targets involve disk, though iSCSI tape and medium-changer targets are popular as well. So far, physical devices have not featured native iSCSI interfaces on a component level. Instead, devices with Parallel SCSI or Fibre Channel interfaces are bridged by using iSCSI target software, external bridges, or controllers internal to the device enclosure.
Alternatively, it is possible to virtualize disk and tape targets. Rather than representing an actual physical device, an emulated virtual device is presented. The underlying implementation can deviate drastically from the presented target as is done with virtual tape library (VTL) products. VTLs use disk storage for storing data written to virtual tapes. As with actual physical devices, virtual targets are presented by using iSCSI target software, external bridges, or controllers internal to the device enclosure.
In the security products industry, some manufacturers use an iSCSI RAID as a target, with the initiator being either an IP-enabled encoder or camera.
Converters and bridges
Multiple systems exist that allow Fibre Channel, SCSI and SAS devices to be attached to an IP network for use via iSCSI. They can be used to allow migration from older storage technologies, access to SANs from remote servers and the linking of SANs over IP networks. An iSCSI gateway bridges IP servers to Fibre Channel SANs. The TCP connection is terminated at the gateway, which is implemented on a Fibre Channel switch or as a standalone appliance.
See also
- ATA-over-Ethernet (AoE)
- Fibre Channel over Ethernet (FCoE)
- Fibre Channel over IP (FCIP)
- HyperSCSI, which runs SCSI over Ethernet frames instead of over IP (as iSCSI does)
- ISCSI Conformance Testing and Testing Tool Requirement
- iSCSI Extensions for RDMA (iSER)
- Internet Fibre Channel Protocol (iFCP)
- Internet Storage Name Service (iSNS)
- LIO Linux SCSI Target
- Network block device (NBD)
- The SCST Linux SCSI target software stack
- Service Location Protocol
Notes
- ^ Unless tunneled, such as in Fibre Channel over Ethernet or Fibre Channel over IP.
- ^ Target available only as part of Windows Unified Data Storage Server. Target available in Storage Server 2008 (except the Basic edition).[14] Target available for Windows Server 2008 R2 as a separate download. Windows Server 2012, 2012 R2 and 2016 have built-in Microsoft iSCSI target version 3.3.
- ^ a b vStorage APIs Array Integration
- ^ macOS has neither an initiator nor a target available directly from the vendor.[citation needed]
References
- ^ Rouse, Margaret (May 2011). "iSCSI (Internet Small Computer System Interface)". SearchStorage. Retrieved 21 January 2019.
- ^ "ISCSI SAN: Key Benefits, Solutions & Top Providers Of Storage Area Networking". Tredent Network Solutions. Archived from the original on 12 August 2014. Retrieved 3 November 2012.
- ^ "iSCSI proof-of-concept at IBM Research Haifa". IBM. Retrieved 13 September 2013.
- ^ "Chelsio Demonstrates Next Generation 40G iSCSI at SNW Spring". chelsio.com. 2013-04-03. Retrieved 2014-06-28.
- ^ Oppenheimer, David; Patterson, David A. (September–October 2002). "Architecture and Dependability of Large-Scale Internet Services". IEEE Internet Computing.
- ^ "Chainloading iPXE". ipxe.org. Retrieved 2013-11-11.
- ^ "Burning iPXE into ROM". ipxe.org. Retrieved 2013-11-11.
- ^ "iPXE - Open Source Boot Firmware". ipxe.org. Retrieved 2013-11-11.
- ^ "Intel Ethernet Controllers". Intel.com. Retrieved 2012-09-18.
- ^ J. Satran; K. Meth; C. Sapuntzakis; M. Chadalapaka; E. Zeidner (April 2004). Internet Small Computer Systems Interface (iSCSI). Network Working Group. doi:10.17487/RFC3720. RFC 3720. Obsolete. sec. 3.2.6.3.1, p. 32. Obsoleted by RFC 7143.
Type "iqn." (iSCSI Qualified Name)
- ^ J. Satran; K. Meth; C. Sapuntzakis; M. Chadalapaka; E. Zeidner (April 2004). Internet Small Computer Systems Interface (iSCSI). Network Working Group. doi:10.17487/RFC3720. RFC 3720. Obsolete. sec. 8.2.1. Obsoleted by RFC 7143.
- ^ "Protecting an iSCSI SAN". VMware. Archived from the original on 3 March 2016. Retrieved 3 November 2012.
- ^ "IBM IP storage 200i general availability". IBM. Retrieved 13 September 2013.
- ^ "Windows Storage Server | NAS | File Management". Microsoft. Retrieved 2012-09-18.
Further reading
- RFC 3720 - Internet Small Computer Systems Interface (iSCSI) (obsolete)
- RFC 3721 - Internet Small Computer Systems Interface (iSCSI) Naming and Discovery (updated)
- RFC 3722 - String Profile for Internet Small Computer Systems Interface (iSCSI) Names
- RFC 3723 - Securing Block Storage Protocols over IP (Scope: The use of IPsec and IKE to secure iSCSI, iFCP, FCIP, iSNS and SLPv2.)
- RFC 3347 - Small Computer Systems Interface protocol over the Internet (iSCSI) Requirements and Design Considerations
- RFC 3783 - Small Computer Systems Interface (SCSI) Command Ordering Considerations with iSCSI
- RFC 3980 - T11 Network Address Authority (NAA) Naming Format for iSCSI Node Names (obsolete)
- RFC 4018 - Finding Internet Small Computer Systems Interface (iSCSI) Targets and Name Servers by Using Service Location Protocol version 2 (SLPv2)
- RFC 4173 - Bootstrapping Clients using the Internet Small Computer System Interface (iSCSI) Protocol
- RFC 4544 - Definitions of Managed Objects for Internet Small Computer System Interface (iSCSI)
- RFC 4850 - Declarative Public Extension Key for Internet Small Computer Systems Interface (iSCSI) Node Architecture (obsolete)
- RFC 4939 - Definitions of Managed Objects for iSNS (Internet Storage Name Service)
- RFC 5048 - Internet Small Computer System Interface (iSCSI) Corrections and Clarifications (obsolete)
- RFC 5047 - DA: Datamover Architecture for the Internet Small Computer System Interface (iSCSI)
- RFC 5046 - Internet Small Computer System Interface (iSCSI) Extensions for Remote Direct Memory Access (RDMA)
- RFC 7143 – Internet Small Computer System Interface (iSCSI) Protocol (consolidated)
iSCSI
Fundamentals
Definition and Purpose
iSCSI, or Internet Small Computer Systems Interface, is a transport protocol for the Small Computer System Interface (SCSI) that operates on top of TCP/IP networks.[5] It was originally defined in RFC 3720 as a proposed standard by the Internet Engineering Task Force (IETF) in April 2004 and later consolidated and updated in RFC 7143 in April 2014.[6][5] The protocol encapsulates SCSI commands, data, and status within iSCSI protocol data units (PDUs) for transmission over standard IP networks, ensuring compatibility with the SCSI Architecture Model.[5] The primary purpose of iSCSI is to enable initiators, such as servers, to access remote storage targets as if they were locally attached block devices, thereby facilitating block-level storage networking over Ethernet without requiring dedicated storage area network (SAN) hardware.[7] This approach contrasts with traditional SCSI, which relies on direct physical connections or specialized transports like Fibre Channel, by leveraging ubiquitous TCP/IP for extending storage access across local area networks (LANs) or wide area networks (WANs).[7]

Key benefits of iSCSI include its cost-effectiveness, as it utilizes existing Ethernet infrastructure and avoids the expense of Fibre Channel switches and host bus adapters, making it suitable for small to medium-sized enterprises.[7] It also offers scalability for large data centers by supporting high-speed Ethernet links and integration with virtualization environments, where multiple virtual machines can share remote storage resources efficiently.[8]

Historically, iSCSI originated from a proof-of-concept developed by IBM in 1998, with the initial draft submitted to the IETF in 2000 and approved as a proposed standard in February 2003.[7][9]
Protocol Architecture
The iSCSI protocol employs a layered architecture that maps SCSI operations onto TCP/IP networks, enabling block-level storage access over IP. At the core is the SCSI layer, which generates and processes Command Descriptor Blocks (CDBs) for commands and responses in compliance with the SCSI architecture model as defined in SAM-2.[10] The iSCSI layer then encapsulates these SCSI elements into Protocol Data Units (PDUs) suitable for transmission, handling tasks such as session management, command sequencing, and error recovery.[11] Underlying this is the TCP/IP transport layer, which provides reliable, connection-oriented delivery of the PDUs without awareness of their SCSI or iSCSI semantics.[11] This layering ensures that iSCSI maintains SCSI semantics while leveraging the ubiquity of TCP/IP for network transport.[12]

Central to the iSCSI layer are PDUs, which structure all communications between initiators and targets. Each PDU begins with a Basic Header Segment (BHS) of 48 bytes, including key fields such as the opcode (specifying the PDU type, e.g., 0x01 for SCSI Command or 0x21 for SCSI Response) and the Initiator Task Tag (a unique identifier for tracking individual tasks across the session).[13] Following the BHS are optional Additional Header Segments (AHS) for extended information and one or more Data Segments, which carry SCSI payloads, text parameters, or other data, always padded to 4-byte boundaries for alignment.[14] For login-related PDUs, the structure incorporates specific formats during phases like security negotiation (for authentication parameters) and operational negotiation (for session settings such as maximum connections or error recovery levels).[15]

Session establishment in iSCSI occurs through a multi-phase login process on TCP connections, distinguishing between normal sessions for full SCSI operations and discovery sessions limited to target enumeration.[16] Normal sessions begin with a leading login connection using a Target Session Identifying Handle (TSIH) of 0, progressing through three phases: security negotiation to authenticate parties and establish security parameters, login operational negotiation to agree on session-wide settings, and the full feature phase to enable SCSI command execution.[17] Discovery sessions, by contrast, restrict operations to SendTargets commands and similar discovery functions, omitting full data transfer capabilities.[18] Initiators and targets collaboratively negotiate these phases to form a session, which may span multiple TCP connections for enhanced reliability.[19]

Error handling in iSCSI emphasizes integrity and recovery at the protocol level, with support for optional digests to verify PDU components. Header and data digests use CRC32C (or none) to detect corruption during transit, applied independently to the BHS/AHS and data segments.[20] Recovery mechanisms address connection failures and data loss through task reassignment (transferring active tasks to a new connection), Selective Negative Acknowledgment (SNACK) requests for retransmitting lost PDUs, and hierarchical levels including within-command recovery for partial errors and session-wide recovery via logout.[21] These features ensure robust operation over potentially unreliable networks while preserving SCSI task integrity.[22]
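As an illustration of the BHS layout described above, the following Python sketch packs a 48-byte header; the helper and its parameters are assumptions for demonstration, and digests, AHS, and most opcode-specific fields are left to the caller.

```python
import struct

def pack_bhs(opcode: int, flags: int, lun: int, init_task_tag: int,
             data_segment_length: int, opcode_specific: bytes = b"",
             immediate: bool = False) -> bytes:
    """Pack a 48-byte iSCSI Basic Header Segment:
    byte 0     : immediate bit + opcode
    byte 1     : opcode-specific flags (e.g. the Final bit)
    byte 4     : TotalAHSLength (0 here: no Additional Header Segments)
    bytes 5-7  : DataSegmentLength (24-bit, big-endian)
    bytes 8-15 : LUN (or opcode-specific)
    bytes 16-19: Initiator Task Tag
    bytes 20-47: opcode-specific fields (CmdSN, ExpStatSN, CDB, ...)
    """
    byte0 = (0x40 if immediate else 0x00) | (opcode & 0x3F)
    hdr = bytes([byte0, flags, 0, 0, 0])              # bytes 0-4
    hdr += data_segment_length.to_bytes(3, "big")     # bytes 5-7
    hdr += struct.pack(">QI", lun, init_task_tag)     # bytes 8-19
    hdr += opcode_specific.ljust(28, b"\x00")[:28]    # bytes 20-47
    assert len(hdr) == 48
    return hdr

# Example: a SCSI Command PDU header (opcode 0x01) with the Final and Read
# flags set and no immediate data attached.
bhs = pack_bhs(opcode=0x01, flags=0xC0, lun=0, init_task_tag=0x1234,
               data_segment_length=0)
print(bhs.hex())
```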
Core Components
Initiators
iSCSI initiators serve as client-side agents on host servers that originate SCSI commands to remote targets, encapsulating them within TCP/IP packets to access storage over IP networks.[23] These components map iSCSI logical units (LUNs) presented by targets as local block devices, such as /dev/sdX in Linux systems, enabling applications to treat remote storage as if it were directly attached.[23] By establishing sessions via a login phase, initiators facilitate reliable data transfer while handling identification through unique iSCSI names and initiator session identifiers (ISIDs).[23]

Initiators are available in software and hardware forms, with software variants integrated into operating systems to leverage the host CPU for protocol processing, making them the most common deployment method.[24] Hardware initiators, typically implemented as host bus adapters (HBAs) or TCP offload engines (TOEs), offload iSCSI and TCP/IP tasks from the CPU to dedicated silicon for reduced latency.[24] A prominent example of a software initiator is the Microsoft iSCSI Initiator service, a built-in Windows component that manages connections to iSCSI targets without requiring additional hardware.

In operation, initiators issue SCSI read and write commands via Command Descriptor Blocks (CDBs) embedded in SCSI-Command Protocol Data Units (PDUs), using initiator task tags and command sequence numbers (CmdSN) to ensure ordered delivery.[23] They manage sessions by negotiating parameters during login, such as maximum burst length (default 262144 bytes), and support multiple TCP connections per session for enhanced throughput and redundancy through multi-path I/O (MPIO).[23] Error recovery involves levels from connection-only (level 0) to session-wide (level 2), incorporating mechanisms like selective negative acknowledgments (SNACKs) for retransmissions, task reassignment, and failover across portal groups to maintain data integrity during network disruptions.[23]

Performance of initiators varies by type: software implementations introduce CPU overhead for encapsulation and error handling, potentially consuming 10-20% of cycles at high throughput, whereas hardware offloads minimize this to under 10% while providing lower latency for latency-sensitive workloads.[25] To optimize availability and performance, initiators integrate with multipathing frameworks, such as Device Mapper in Linux, which aggregates multiple paths into a single logical device for load balancing and failover.[26]
Targets and Logical Units
In iSCSI, a target serves as the server-side endpoint that exposes storage resources to initiators over IP networks using TCP connections. It receives SCSI commands encapsulated within iSCSI Protocol Data Units (PDUs), executes the associated I/O operations on underlying storage, and returns responses or status information to the initiator.[27] Targets operate primarily in the Full Feature Phase following successful login negotiation, managing tasks such as command ordering via Command Sequence Numbers (CmdSN) and ensuring connection allegiance where related PDUs stay on the same TCP connection.[10] Each target is uniquely identified by an iSCSI Qualified Name (IQN), a globally unique string formatted according to RFC 3721, such as iqn.2001-04.com.example:storage:diskarrays-sn-a8675309, which combines a timestamp, naming authority, and vendor-specific identifier.[28] Targets support multiple network portals—combinations of IP addresses and TCP ports—grouped into portal groups to enable load balancing, failover, and multi-connection sessions for improved performance and reliability.[29]
iSCSI targets can be implemented in hardware or software configurations. Hardware targets are typically integrated into enterprise storage area network (SAN) arrays, where dedicated controllers handle protocol processing and storage exposure at high speeds. Software targets, in contrast, run on general-purpose servers using operating system tools to emulate storage providers; for example, the targetcli administration shell in Linux environments allows configuration of iSCSI targets backed by local block devices or file I/O on commodity hardware.[30] These implementations process incoming SCSI-Command PDUs (opcode 0x01), which include details like the Expected Data Transfer Length for I/O size, and respond with Data-In PDUs for reads or Ready to Transfer (R2T) PDUs to solicit data for writes.[31]
Logical units (LUs) represent the fundamental addressable storage entities within an iSCSI target, corresponding to SCSI logical units that appear as block devices to initiators. Each LU is identified by a 64-bit Logical Unit Number (LUN), formatted per the SCSI Architecture Model (SAM) and included in PDUs such as the SCSI Command PDU (bytes 8-15) to specify the target LU for operations.[32] LUNs are mapped to physical or virtual storage volumes on the target, enabling abstraction of underlying hardware like disks or RAID arrays, and access is scoped to the target's IQN combined with the LUN, as in iqn.1993-08.org.debian:01:abc123/lun/0. LUN masking restricts visibility and access to authorized initiators based on their IQNs, while mapping associates LUNs with specific backend storage resources to control data placement and availability.[33]
Target operations center on handling I/O workflows initiated by commands from iSCSI initiators. Upon receiving a SCSI command, the target processes it in CmdSN order, transferring data bidirectionally—sending output via Data-In PDUs for reads or requesting input via R2T PDUs for writes—before concluding with a SCSI Response PDU containing status, such as GOOD or CHECK CONDITION.[34] Task management functions, like ABORT TASK or CLEAR TASK SET, allow termination of specific LUN operations, with the LUN field specifying the affected unit.[35] Many iSCSI targets support advanced features at the LUN level, including thin provisioning to allocate storage on demand for efficient capacity utilization and snapshots to create point-in-time copies for backup or recovery. These capabilities enhance scalability in environments like virtualized data centers, where LUNs may represent thinly provisioned volumes over-allocated relative to physical backing store.
Discovery and Connectivity
Addressing Mechanisms
iSCSI employs standardized naming conventions to uniquely identify initiators and targets across IP networks, ensuring persistent and location-independent identification. The primary format is the iSCSI Qualified Name (IQN), structured as iqn.yyyy-mm.reversed-domain:unique-id, where yyyy-mm denotes the year and month of domain registration, the reversed domain follows standard DNS conventions (e.g., com.example), and the unique identifier is vendor-specific (e.g., iqn.2001-04.com.example:storage:diskarrays-sn-a8675309).[6] Alternatively, the EUI-64 format uses eui. followed by a 16-hex-digit IEEE EUI-64 identifier (e.g., eui.02004567A425678D), providing a compact alias for nodes based on hardware or software identifiers.[6] These names are globally unique, permanent, and not tied to specific hardware, with optional aliases for human-readable reference.[5]
Portal addressing facilitates endpoint connectivity by specifying targets via IP address and TCP port, with the default port being 3260 for iSCSI sessions.[6] The TargetAddress parameter in login operations supports formats such as domain name (e.g., example.com:3260,1), IPv4 (e.g., 10.0.1.1:3260,1), or IPv6 (e.g., [2001:db8::1]:3260,1), optionally including a comma-separated portal group tag for session coordination.[5] This addressing scheme enables initiators to establish TCP connections to targets over standard IP networks, abstracting SCSI commands into iSCSI Protocol Data Units (PDUs).[6]
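A small parsing sketch, assuming only the TargetAddress forms quoted above (domain name, IPv4, or bracketed IPv6, each with an optional port and portal group tag); the function name and defaults are illustrative.

```python
def parse_target_address(value: str):
    """Split a TargetAddress value of the form host[:port][,portal-group-tag]
    into (host, port, tag). Handles bracketed IPv6 literals; defaults are the
    standard iSCSI port 3260 and no portal group tag."""
    tag = None
    if "," in value:
        value, tag_str = value.rsplit(",", 1)
        tag = int(tag_str)
    if value.startswith("["):                  # IPv6 literal, e.g. [2001:db8::1]:3260
        host, _, rest = value[1:].partition("]")
        port = int(rest[1:]) if rest.startswith(":") else 3260
    else:                                      # domain name or IPv4 address
        host, sep, port_str = value.partition(":")
        port = int(port_str) if sep else 3260
    return host, port, tag

for addr in ("example.com:3260,1", "10.0.1.1:3260,1", "[2001:db8::1]:3260,1"):
    print(parse_target_address(addr))
```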
Connection management in iSCSI organizes multiple IP/port combinations into portal groups, identified by a 16-bit portal group tag (0-65535), allowing sessions to span several network portals while maintaining consistent SCSI logical unit access.[5] During login, initiators select routes based on discovered or configured target addresses, with targets returning the servicing portal group tag in the initial response to ensure session affinity.[6] Redirection occurs if a target issues a login response with status class 0101h (Redirect), providing an alternative TargetAddress (e.g., omitting the portal group tag in redirects) to guide the initiator to another portal for load balancing or failover.[6] This mechanism supports multiple connections per session within the same portal group, enhancing reliability without requiring hardware-specific adaptations.[5]
Initial addressing security integrates with authentication protocols during login, where CHAP provides in-band verification of initiator and target identities using directional secrets, ensuring secure name resolution and connection establishment.[6] For broader protection, IKEv2 enables IPsec encapsulation, supporting IPv6 identification types like ID_IPV6_ADDR to secure addressing in dual-stack environments.[5]
iSCSI supports both IPv4 and IPv6 natively over TCP, with dual-stack compatibility allowing seamless transitions in addressing formats from early specifications.[6] RFC 3720 laid the foundation for IP-agnostic transport, while RFC 7143 refined IPv6 integration, mandating bracketed notation in TargetAddress and IKE identification for modern networks, evolving from IPv4-centric examples to full IPv6 interoperability without protocol changes.[5]
iSNS Protocol
The Internet Storage Name Service (iSNS) is an IETF standard defined in RFC 4171 that serves as a directory service for iSCSI and related storage devices on IP networks, enabling automated discovery and management akin to the Domain Name System (DNS) but tailored for storage resources.[36] It allows initiators to locate available targets dynamically without prior manual configuration of all device details, facilitating integration of iSCSI initiators, targets, and management nodes into a centralized database.[36] As a client-server protocol, iSNS operates over TCP (mandatory) or UDP (optional), using the default port 3205 for communications between iSNS servers and clients.[36]

Key functions of iSNS include registration, where targets register their iSCSI Qualified Names (IQNs) and portal addresses (IP and port combinations) with the iSNS server using Device Attributes Registration (DevAttrReg) messages; discovery, where initiators query the server via Device Attributes Query (DevAttrQry) or Device Get Next (DevGetNext) messages to retrieve lists of available targets and their attributes; and state change notifications (SCNs), which alert registered clients to dynamic events such as target availability changes or failover scenarios through SCN messages.[36] These notifications support real-time updates, with message types encoded in a Type-Length-Value (TLV) format for attributes like entity identifiers and portal details.[36] For example, an SCN might notify of an object addition or removal, enabling seamless session management in storage networks.[36]

iSNS offers benefits such as reduced manual configuration in large-scale environments by centralizing device information and automating target discovery, which simplifies deployment compared to static setups.[36] However, it is optional for iSCSI implementations, with alternatives including the Service Location Protocol (SLP) per RFC 2608, static configuration of target addresses, or the SendTargets method for direct queries.[37]

Security considerations in iSNS, as outlined in RFC 4171, address threats like unauthorized access and message replay through recommended IPsec ESP (SHOULD per RFC 4171) for authentication and integrity (with optional confidentiality), timestamps in messages, and support for digital signatures or X.509 certificates in multicast scenarios.[36] As of 2025, while iSNS remains in use in various storage systems such as NetApp ONTAP and Broadcom fabrics, Microsoft has deprecated support for iSNS in Windows Server 2025, recommending the Server Message Block (SMB) feature as an alternative for similar functionality.[38]
Deployment Features
Network Booting
Network booting with iSCSI allows diskless clients to load and execute operating systems from remote storage devices over an IP network, treating the remote iSCSI logical unit number (LUN) as a local block device during the boot sequence. This process integrates with standard network boot mechanisms like the Preboot Execution Environment (PXE), where the client's firmware—either BIOS or UEFI—initiates the connection to an iSCSI target. The initiator, embedded in the firmware or loaded via PXE, establishes an iSCSI session to access the bootable LUN containing the operating system image.[39] The boot process begins with the client broadcasting a DHCP request to obtain network configuration, including the IP address of the boot server and details for locating the iSCSI target. In PXE-enabled setups, the DHCP server responds with option 67 specifying the boot file name, which may chainload an enhanced firmware like iPXE to handle iSCSI-specific operations. Once network parameters are acquired, the client uses additional DHCP options—such as vendor-specific option 43 or the iSCSI root path in option 17 (format: "iscsi:"servername":"protocol":"port":"LUN":"targetname"")—to identify the iSCSI target. If details are incomplete, the client queries a discovery service like iSNS or SLP to resolve the target name to an IP address and port.[39][40][41] Following discovery, the iSCSI initiator in the firmware logs into the target using the obtained credentials, establishing a session over TCP. The boot firmware then reads the LUN as a block device, loading the master boot record and mounting the root filesystem to continue the operating system boot. For UEFI systems, the process aligns with EFI boot services, while BIOS uses INT13h extensions to present the remote disk; multiple boot paths can be handled by prioritizing interfaces or targets based on firmware configuration, allowing failover if the primary path fails. In advanced setups, chainloading via iPXE enables scripting for dynamic target selection or authentication, such as CHAP, before passing control to the OS loader.[39][42][40] Key requirements include firmware support for iSCSI, such as Intel iSCSI Remote Boot integrated into the NIC option ROM or BIOS, and an iSCSI target exporting a bootable LUN formatted with a compatible partition scheme (e.g., GPT for UEFI). The network must provide reliable TCP connectivity, with the target configured to allow initiator access; no local storage is needed on the client, though fallback options may be provisioned.[42][41] Common use cases include stateless computing environments, where multiple clients boot identical OS images from a central repository for simplified management and rapid deployment, and diskless workstations in educational or lab settings to reduce hardware costs and enable centralized updates. For instance, a single 40 GB master image can boot hundreds of clients using differencing virtual hard disks, saving over 90% on storage compared to local duplicates.[43][44] Challenges arise in wide area network (WAN) booting due to increased latency from geographic distance and potential packet loss, which can prolong the initial session establishment and OS loading. This is mitigated by enabling jumbo frames (MTU up to 9000 bytes) end-to-end to reduce overhead and improve throughput, though all network components must support consistent MTU sizes to avoid fragmentation. 
Local area network (LAN) deployments with 10 GbE or higher typically avoid these issues, emphasizing dedicated iSCSI VLANs for optimal performance.[45][43]
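As a sketch of the DHCP root-path convention described above, the following Python helper splits an option 17 value into its fields; the example value, defaults, and function name are hypothetical rather than taken from any particular firmware.

```python
def parse_iscsi_root_path(root_path: str):
    """Parse a DHCP option 17 root path of the form
    iscsi:<servername>:<protocol>:<port>:<LUN>:<targetname>
    as described above. Empty fields fall back to common defaults
    (TCP, port 3260, LUN "0"); the target name may itself contain ':'.
    """
    assert root_path.startswith("iscsi:")
    server, protocol, port, lun, target = root_path[len("iscsi:"):].split(":", 4)
    return {
        "server": server,
        "protocol": int(protocol) if protocol else 6,   # 6 = TCP
        "port": int(port) if port else 3260,
        "lun": lun or "0",                              # kept as a string here
        "target": target,
    }

# Hypothetical value handed out by a DHCP server:
print(parse_iscsi_root_path("iscsi:192.168.0.5::3260:0:iqn.2001-04.com.example:boot"))
```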
Configuration Basics
Configuring an iSCSI initiator and target begins with assigning unique iSCSI Qualified Names (IQNs) to each node, defining network portals as IP address and TCP port combinations (typically port 3260), and setting up authentication secrets using the Challenge Handshake Authentication Protocol (CHAP). On the target side, administrators create IQNs and associate them with portal groups, which manage access to logical unit numbers (LUNs), while enabling CHAP by specifying usernames and secrets (at least 96 bits recommended for security without IPsec, with implementations required to support up to 128 bits and potentially longer) for one-way or mutual authentication.[6][46] For the initiator, the IQN is defined in a configuration file such as /etc/iscsi/initiatorname.iscsi, and CHAP credentials are entered in /etc/iscsi/iscsid.conf to match the target's settings.[47]
Practical setup on Linux systems utilizes the iscsiadm utility for discovery and login operations. Discovery employs the SendTargets method via commands like iscsiadm --mode discoverydb --type sendtargets --portal <target-ip>:3260, which queries the target for available IQNs and portals without establishing a full session. Subsequent login is performed with iscsiadm -m node -T <target-iqn> -p <target-ip>:3260 --login, establishing a persistent session that can be automated at boot by marking nodes as such in the open-iSCSI database.[47][48]
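A hedged automation sketch wrapping the commands quoted above via subprocess; the portal and IQN values are placeholders, root privileges and the open-iscsi tools are assumed, and exact iscsiadm flags can vary between open-iscsi versions.

```python
import subprocess

def discover_and_login(portal: str, target_iqn: str) -> None:
    """Run SendTargets discovery against a portal, then log in to one
    discovered target, mirroring the commands shown above."""
    subprocess.run(["iscsiadm", "--mode", "discoverydb", "--type", "sendtargets",
                    "--portal", portal, "--discover"], check=True)
    subprocess.run(["iscsiadm", "-m", "node", "-T", target_iqn,
                    "-p", portal, "--login"], check=True)

# Placeholder portal address and target name:
discover_and_login("192.168.0.10:3260", "iqn.2003-01.com.example:target0")
```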
During the login phase, iSCSI negotiates session parameters using text key-value pairs to ensure compatibility and optimal performance. Key parameters include MaxConnections, which specifies the maximum number of concurrent TCP connections per session (default 1, range 1-65535, negotiated to the minimum value); HeaderDigest and DataDigest, which enable optional CRC32C checksumming for integrity (default None, per-connection negotiation); and ErrorRecoveryLevel, which defines recovery capabilities (0 for none, 1 for within-connection recovery like retransmissions, and 2 for full session-level task reassignment across connections).[6]
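To make the negotiation rules concrete, here is a simplified Python sketch assuming only the result rules summarized above (minimum for the numeric keys, first mutually supported value for the digest lists); it is not a full RFC 7143 negotiator and ignores defaults, declarative keys, and reject handling.

```python
def negotiate(initiator: dict, target: dict) -> dict:
    """Resolve a few login text keys from the two sides' offers."""
    result = {}
    # Numeric keys such as MaxConnections and ErrorRecoveryLevel resolve
    # to the minimum of the two offers.
    for key in ("MaxConnections", "ErrorRecoveryLevel"):
        result[key] = min(initiator[key], target[key])
    # List-valued keys such as HeaderDigest/DataDigest resolve to the first
    # value offered by the initiator that the target also supports.
    for key in ("HeaderDigest", "DataDigest"):
        result[key] = next(v for v in initiator[key] if v in target[key])
    return result

print(negotiate(
    {"MaxConnections": 4, "ErrorRecoveryLevel": 2,
     "HeaderDigest": ["CRC32C", "None"], "DataDigest": ["None"]},
    {"MaxConnections": 1, "ErrorRecoveryLevel": 0,
     "HeaderDigest": ["None", "CRC32C"], "DataDigest": ["None", "CRC32C"]},
))
```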
iSCSI supports multipathing through extensions like Multi-Path I/O (MPIO) to enhance redundancy and performance across multiple network paths. In Linux environments, the Device Mapper Multipath (DM-Multipath) subsystem aggregates paths to an iSCSI LUN into a single device, using policies such as round-robin for load balancing I/O across active paths. Persistent bindings ensure consistent LUN mapping by configuring multipath.conf with device-specific aliases, WWIDs, and path priorities, preventing device name changes on reboot and enabling failover without data disruption.
Monitoring iSCSI sessions involves tools to track performance metrics and diagnose issues. The iscsiadm command provides session statistics with --stats, reporting throughput in bytes per second, I/O operations per second (IOPS), and error counts for active connections. For troubleshooting portal failover, administrators use iscsiadm -m session to verify connection states and manually trigger failovers with --logout and --login on alternate portals, ensuring quick recovery in multipathed setups.[47][49]
