OSI model

from Wikipedia

The Open Systems Interconnection (OSI) model is a reference model developed by the International Organization for Standardization (ISO) that "provides a common basis for the coordination of standards development for the purpose of systems interconnection."[2]

In the OSI reference model, the components of a communication system are distinguished in seven abstraction layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application.[3]

The model describes communications from the physical implementation of transmitting bits across a transmission medium to the highest-level representation of data of a distributed application. Each layer has well-defined functions and semantics and serves a class of functionality to the layer above it and is served by the layer below it. Established, well-known communication protocols are decomposed in software development into the model's hierarchy of function calls.

The Internet protocol suite as defined in RFC 1122 and RFC 1123 is a model of networking developed contemporarily to the OSI model, and was funded primarily by the U.S. Department of Defense. It was the foundation for the development of the Internet. It assumed the presence of generic physical links and focused primarily on the software layers of communication, with a similar but much less rigorous structure than the OSI model.

In comparison, several networking models have sought to create an intellectual framework for clarifying networking concepts and activities,[citation needed] but none have been as successful as the OSI reference model in becoming the standard model for discussing and teaching networking in the field of information technology. The model allows transparent communication through equivalent exchange of protocol data units (PDUs) between two parties, through what is known as peer-to-peer networking (also known as peer-to-peer communication). As a result, the OSI reference model has become an important reference not only among professionals but also among non-professionals, due in large part to its commonly accepted user-friendly framework.[4]

Communication in the OSI model (example with layers 3 to 5)

History

The development of the OSI model started in the late 1970s to support the emergence of the diverse computer networking methods that were competing for application in the large national networking efforts in the world (see OSI protocols and Protocol Wars). In the 1980s, the model became a working product of the Open Systems Interconnection group at the International Organization for Standardization (ISO). While attempting to provide a comprehensive description of networking, the model failed to garner reliance during the design of the Internet, which is reflected in the less prescriptive Internet Protocol Suite, principally sponsored under the auspices of the Internet Engineering Task Force (IETF).

In the early- and mid-1970s, networking was largely either government-sponsored (NPL network in the UK, ARPANET in the US, CYCLADES in France) or vendor-developed with proprietary standards, such as IBM's Systems Network Architecture and Digital Equipment Corporation's DECnet. Public data networks were only just beginning to emerge, and these began to use the X.25 standard in the late 1970s.[5][6]

The Experimental Packet Switched System in the UK c. 1973–1975 identified the need for defining higher-level protocols.[5] The UK National Computing Centre publication, Why Distributed Computing, which came from considerable research into future configurations for computer systems,[7] resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977.[8][9]

Beginning in 1977, the ISO initiated a program to develop general standards and methods of networking. A similar process evolved at the International Telegraph and Telephone Consultative Committee (CCITT, from French: Comité Consultatif International Téléphonique et Télégraphique). Both bodies developed documents that defined similar networking models. The British Department of Trade and Industry acted as the secretariat, and universities in the United Kingdom developed prototypes of the standards.[10]

The OSI model was first defined in raw form in Washington, D.C., in February 1978 by French software engineer Hubert Zimmermann, and the refined but still draft standard was published by the ISO in 1980.[9]

The drafters of the reference model had to contend with many competing priorities and interests. The rate of technological change made it necessary to define standards that new systems could converge to rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards.[11] Although not a standard itself, it was a framework in which future standards could be defined.[12]

In May 1983,[13] the CCITT and ISO documents were merged to form The Basic Reference Model for Open Systems Interconnection, usually referred to as the Open Systems Interconnection Reference Model, OSI Reference Model, or simply OSI model. It was published in 1984 by both the ISO, as standard ISO 7498, and the renamed CCITT (now called the Telecommunications Standardization Sector of the International Telecommunication Union or ITU-T) as standard X.200.

OSI had two major components: an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols. The OSI reference model was a major advance in the standardisation of network concepts. It promoted the idea of a consistent model of protocol layers, defining interoperability between network devices and software.

The concept of a seven-layer model was provided by the work of Charles Bachman at Honeywell Information Systems.[14] Various aspects of OSI design evolved from experiences with the NPL network, ARPANET, CYCLADES, EIN, and the International Network Working Group (IFIP WG6.1). In this model, a networking system was divided into layers. Within each layer, one or more entities implemented its functionality. Each entity interacted directly only with the layer immediately beneath it and provided facilities for use by the layer above it.

The OSI standards documents are available from the ITU-T as the X.200 series of recommendations.[15] Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO/IEC standards for the OSI model were available from ISO. Not all are free of charge.[16]

OSI was an industry effort, attempting to get industry participants to agree on common network standards to provide multi-vendor interoperability.[17] It was common for large networks to support multiple network protocol suites, with many devices unable to interoperate with other devices because of a lack of common protocols. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.[9][18][19] However, while OSI developed its networking standards in the late 1980s,[20][page needed][21][page needed] TCP/IP came into widespread use on multi-vendor networks for internetworking.

The OSI model is still used as a reference for teaching and documentation;[22] however, the OSI protocols originally conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing.[23] Others say the original OSI model does not fit today's networking protocols and have suggested instead a simplified approach.[24][25]

Definitions

Communication protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions, like the OSI model, abstractly describe the functionality provided to a layer N by a layer N−1, where N is one of the seven layers of protocols operating in the local host (with N=1 being the most basic layer, often represented at the bottom of a list).

At each level N, two entities at the communicating devices (layer N peers) exchange protocol data units (PDUs) by means of a layer N protocol. Each PDU contains a payload, called the service data unit (SDU), along with protocol-related headers or footers.

Data processing by two communicating OSI-compatible devices proceeds as follows:

  1. The data to be transmitted is composed at the topmost layer of the transmitting device (layer N) into a protocol data unit (PDU).
  2. The PDU is passed to layer N−1, where it is known as the service data unit (SDU).
  3. At layer N−1 the SDU is concatenated with a header, a footer, or both, producing a layer N−1 PDU. It is then passed to layer N−2.
  4. The process continues until reaching the lowermost level, from which the data is transmitted to the receiving device.
  5. At the receiving device the data is passed from the lowest to the highest layer as a series of SDUs while being successively stripped from each layer's header or footer until reaching the topmost layer, where the last of the data is consumed.
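The numbered steps above can be sketched in code. The following Python toy (layer names are from the model; the header contents are invented purely for illustration) shows how each layer wraps the SDU it receives from above in its own header to form that layer's PDU on the way down, and strips it on the way back up:

```python
# Toy sketch of OSI-style encapsulation. Layer names follow the model;
# the readable "[...-hdr]" tags are hypothetical stand-ins for real headers.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link"]

def encapsulate(payload: bytes) -> bytes:
    """Pass data down the stack, adding one header per layer."""
    pdu = payload
    for layer in LAYERS:
        header = f"[{layer}-hdr]".encode()
        pdu = header + pdu  # the layer-N PDU becomes layer N-1's SDU
    return pdu

def decapsulate(pdu: bytes) -> bytes:
    """Pass data up the stack, stripping one header per layer."""
    for layer in reversed(LAYERS):
        header = f"[{layer}-hdr]".encode()
        assert pdu.startswith(header), f"malformed {layer} PDU"
        pdu = pdu[len(header):]
    return pdu

frame = encapsulate(b"hello")
print(frame)               # the data link header is outermost
print(decapsulate(frame))  # b'hello'
```

A real stack would add binary headers (and, at the data link layer, a trailer) rather than readable tags, but the nesting discipline is the same.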

Standards documents

The OSI model was defined in ISO/IEC 7498 which consists of the following parts:

  • ISO/IEC 7498-1 The Basic Model
  • ISO/IEC 7498-2 Security Architecture
  • ISO/IEC 7498-3 Naming and addressing
  • ISO/IEC 7498-4 Management framework

ISO/IEC 7498-1 is also published as ITU-T Recommendation X.200.

Layer architecture

Recommendation X.200 describes seven layers, labelled 1 to 7. Layer 1 is the lowest layer in this model.

OSI model

Layer              Protocol data unit (PDU)  Function[26]

Host layers
  7  Application   Data                  High-level protocols such as for resource sharing or remote file access, e.g. HTTP
  6  Presentation  Data                  Translation of data between a networking service and an application, including character encoding, data compression and encryption/decryption
  5  Session       Data                  Managing communication sessions, i.e. continuous exchange of information in the form of multiple back-and-forth transmissions between two nodes
  4  Transport     Segment               Reliable transmission of data segments between points on a network, including segmentation, acknowledgement and multiplexing

Media layers
  3  Network       Packet, Datagram[27]  Structuring and managing a multi-node network, including addressing, routing and traffic control
  2  Data link     Frame                 Transmission of data frames between two nodes connected by a physical layer
  1  Physical      Bit, Symbol           Transmission and reception of raw bit streams over a physical medium
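For quick reference, the table above can be captured as a small lookup structure; this is purely illustrative, with layer names and PDU terms taken from the table:

```python
# Layer number -> (name, PDU, host/media grouping), per the table above.
OSI_LAYERS = {
    7: ("Application",  "Data",             "host"),
    6: ("Presentation", "Data",             "host"),
    5: ("Session",      "Data",             "host"),
    4: ("Transport",    "Segment",          "host"),
    3: ("Network",      "Packet, Datagram", "media"),
    2: ("Data link",    "Frame",            "media"),
    1: ("Physical",     "Bit, Symbol",      "media"),
}

name, pdu, group = OSI_LAYERS[4]
print(f"Layer 4: {name}, PDU: {pdu} ({group} layer)")
```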


Layer 1: Physical layer

The physical layer is responsible for the transmission and reception of unstructured raw data between a device, such as a network interface controller, Ethernet hub, or network switch, and a physical transmission medium. It converts the digital bits into electrical, radio, or optical signals (analogue signals). Layer specifications define characteristics such as voltage levels, the timing of voltage changes, physical data rates, maximum transmission distances, modulation scheme, channel access method and physical connectors. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and frequency for wireless devices. Bit rate control is done at the physical layer and may define transmission mode as simplex, half duplex, and full duplex. The components of a physical layer can be described in terms of the network topology. Physical layer specifications are included in the specifications for the ubiquitous Bluetooth, Ethernet, and USB standards. An example of a less well-known physical layer specification would be for the CAN standard.

The physical layer also specifies how encoding occurs over a physical signal, such as electrical voltage or a light pulse. For example, a 1 bit might be represented on a copper wire by the transition from a 0-volt to a 5-volt signal, whereas a 0 bit might be represented by the transition from a 5-volt to a 0-volt signal. As a result, common problems occurring at the physical layer are often related to incorrect media termination, EMI or noise, and NICs and hubs that are misconfigured or do not work correctly.
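The transition scheme described above can be modelled in a few lines. This is a toy sketch of that hypothetical encoding, not a real line code such as Manchester encoding:

```python
# Toy model of the encoding above: each bit becomes a voltage transition.
# A 1 bit is a rising edge (0 V -> 5 V); a 0 bit is a falling edge (5 V -> 0 V).

def encode(bits):
    """Map each bit to a (start_voltage, end_voltage) pair."""
    return [(0, 5) if b else (5, 0) for b in bits]

def decode(transitions):
    """Recover bits: a rising edge decodes to 1, a falling edge to 0."""
    return [1 if end > start else 0 for (start, end) in transitions]

signal = encode([1, 0, 1, 1])
print(signal)          # [(0, 5), (5, 0), (0, 5), (0, 5)]
print(decode(signal))  # [1, 0, 1, 1]
```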

Layer 2: Data link layer

The data link layer provides node-to-node data transfer—a link between two directly connected nodes. It detects and possibly corrects errors that may occur in the physical layer. It defines the protocol to establish and terminate a connection between two physically connected devices. It also defines the protocol for flow control between them.

IEEE 802 divides the data link layer into two sublayers:[28]

  • Medium access control (MAC) layer – responsible for controlling how devices in a network gain access to a medium and permission to transmit data.
  • Logical link control (LLC) layer – responsible for identifying and encapsulating network layer protocols, and controls error checking and frame synchronization.

The MAC and LLC layers of IEEE 802 networks such as 802.3 Ethernet, 802.11 Wi-Fi, and 802.15.4 Zigbee operate at the data link layer.

The Point-to-Point Protocol (PPP) is a data link layer protocol that can operate over several different physical layers, such as synchronous and asynchronous serial lines.

The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective-repeat sliding-window protocol.

Security, specifically (authenticated) encryption, at this layer can be applied with MACsec.

Layer 3: Network layer

The network layer provides the functional and procedural means of transferring packets from one node to another connected in different networks. A network is a medium to which many nodes can be connected, on which every node has an address, and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver the message, possibly routing it through intermediate nodes. If the message is too large to be transmitted from one node to another on the data link layer between those nodes, the network may implement message delivery by splitting the message into several fragments at one node, sending the fragments independently, and reassembling the fragments at another node. It may, but does not need to, report delivery errors.

Message delivery at the network layer is not necessarily guaranteed to be reliable; a network layer protocol may provide reliable message delivery, but it does not need to do so.

A number of layer-management protocols, a function defined in the management annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them.[29]

Security, specifically (authenticated) encryption, at this layer can be applied with IPsec.

Layer 4: Transport layer

The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source host to a destination host, from one application to another, across a network, while maintaining quality-of-service functions. Transport protocols may be connection-oriented or connectionless.

This may require breaking large protocol data units or long data streams into smaller chunks called "segments", since the network layer imposes a maximum packet size called the maximum transmission unit (MTU), which depends on the maximum packet size imposed by all data link layers on the network path between the two hosts. The amount of data in a data segment must be small enough to allow for a network-layer header and a transport-layer header. For example, for data being transferred across Ethernet, the MTU is 1500 bytes, the minimum size of a TCP header is 20 bytes, and the minimum size of an IPv4 header is 20 bytes, so the maximum segment size is 1500−(20+20) bytes, or 1460 bytes. The process of dividing data into segments is called segmentation; it is an optional function of the transport layer. Some connection-oriented transport protocols, such as TCP and the OSI connection-oriented transport protocol (COTP), perform segmentation and reassembly of segments on the receiving side; connectionless transport protocols, such as UDP and the OSI connectionless transport protocol (CLTP), usually do not.
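The Ethernet arithmetic in the example above can be made explicit; the constants below assume option-free IPv4 and TCP headers, as in the text:

```python
# Maximum TCP payload (maximum segment size) over standard Ethernet,
# assuming minimal headers with no options.
ETHERNET_MTU = 1500  # bytes available to the network-layer packet
IPV4_HEADER = 20     # minimum IPv4 header (no options)
TCP_HEADER = 20      # minimum TCP header (no options)

max_segment_size = ETHERNET_MTU - (IPV4_HEADER + TCP_HEADER)
print(max_segment_size)  # 1460
```

IP or TCP options, or extra encapsulation such as VLAN tags or tunnel headers, shrink this figure further.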

The transport layer also controls the reliability of a given link between a source and destination host through flow control, error control, and acknowledgments of sequence and existence. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail delivery through the acknowledgment handshake system. The transport layer also acknowledges successful data transmission and sends the next data if no errors occurred.

Reliability, however, is not a strict requirement within the transport layer. Protocols like UDP, for example, are used in applications that are willing to accept some packet loss, reordering, errors or duplication. Streaming media, real-time multiplayer games and voice over IP (VoIP) are examples of applications in which loss of packets is not usually a fatal problem.

The OSI connection-oriented transport protocol defines five classes of connection-mode transport protocols, ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of TP0–4 classes are shown in the following table:[30]

Feature name                                               TP0  TP1  TP2  TP3  TP4
Connection-oriented network                                Yes  Yes  Yes  Yes  Yes
Connectionless network                                     No   No   No   No   Yes
Concatenation and separation                               No   Yes  Yes  Yes  Yes
Segmentation and reassembly                                Yes  Yes  Yes  Yes  Yes
Error recovery                                             No   Yes  Yes  Yes  Yes
Reinitiate connection (a)                                  No   Yes  No   Yes  No
Multiplexing / demultiplexing over single virtual circuit  No   No   Yes  Yes  Yes
Explicit flow control                                      No   No   Yes  Yes  Yes
Retransmission on timeout                                  No   No   No   No   Yes
Reliable transport service                                 No   Yes  No   Yes  Yes

(a) If an excessive number of PDUs are unacknowledged.

An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. A post office inspects only the outer envelope of mail to determine its delivery. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunnelling protocols operate at the transport layer, such as carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains complete Layer 2 frames or Layer 3 packets to deliver to the endpoint. L2TP carries PPP frames inside transport segments.

Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer 4 protocols within OSI.

Transport Layer Security (TLS) does not strictly fit inside the model either. It contains characteristics of the transport and presentation layers.[31][32]

Layer 5: Session layer

The session layer sets up, controls, and tears down the connections between two or more computers; such a connection is called a "session". Common functions of the session layer include user logon (establishment) and user logoff (termination) functions. Authentication methods are also built into most client software, such as FTP Client and NFS Client for Microsoft Networks. The session layer thus establishes, manages and terminates the connections between the local and remote applications. It also provides for full-duplex, half-duplex, or simplex operation,[citation needed] and establishes procedures for checkpointing, suspending, restarting, and terminating a session between two related streams of data, such as an audio and a video stream in a web-conferencing application. The session layer is therefore commonly implemented explicitly in application environments that use remote procedure calls.

Layer 6: Presentation layer

The presentation layer establishes data formatting and data translation into a format specified by the application layer during the encapsulation of outgoing messages as they are passed down the protocol stack, and reverses that conversion during the deencapsulation of incoming messages as they are passed up the protocol stack.

The presentation layer handles protocol conversion, data encryption, data decryption, data compression, data decompression, incompatibility of data representation between operating systems, and graphic commands. The presentation layer transforms data into the form that the application layer accepts, to be sent across a network. Since the presentation layer converts data and graphics into a display format for the application layer, the presentation layer is sometimes called the syntax layer.[33] For this reason, the presentation layer negotiates the transfer of syntax structure through the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML.[4]
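The EBCDIC-to-ASCII conversion mentioned above can be demonstrated with Python's standard codecs; "cp500" is one common EBCDIC variant, chosen here purely for illustration (real systems would negotiate the exact character set in use):

```python
# Round-trip a string through an EBCDIC encoding and back to ASCII,
# one of the presentation-layer conversions described above.
ebcdic_bytes = "HELLO".encode("cp500")  # EBCDIC (code page 500) bytes
print(ebcdic_bytes.hex())               # c8c5d3d3d6 -- not valid ASCII

ascii_bytes = ebcdic_bytes.decode("cp500").encode("ascii")
print(ascii_bytes)                      # b'HELLO'
```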

Layer 7: Application layer

The application layer is the layer of the OSI model that is closest to the end user, which means both the OSI application layer and the user interact directly with a software application that implements a component of communication between the client and server, such as File Explorer and Microsoft Word. Such application programs fall outside the scope of the OSI model unless they are directly integrated into the application layer through the functions of communication, as is the case with applications such as web browsers and email programs. Other examples of software are Microsoft Network Software for File and Printer Sharing and Unix/Linux Network File System Client for access to shared file resources.

Application-layer functions typically include file sharing, message handling, and database access, through the most common protocols at the application layer, known as HTTP, FTP, SMB/CIFS, TFTP, and SMTP. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. The most important distinction in the application layer is the distinction between the application entity and the application. For example, a reservation website might have two application entities: one using HTTP to communicate with its users, and one for a remote database protocol to record reservations. Neither of these protocols have anything to do with reservations. That logic is in the application itself. The application layer has no means to determine the availability of resources in the network.[4]

Cross-layer functions

Cross-layer functions are services that are not tied to a given layer, but may affect more than one layer.[34] Some orthogonal aspects, such as management and security, involve all of the layers (see ITU-T X.800 Recommendation[35]). These services are aimed at improving the CIA triad (confidentiality, integrity, and availability) of the transmitted data. Cross-layer functions are the norm, in practice, because the availability of a communication service is determined by the interaction between network design and network management protocols.

Specific examples of cross-layer functions include the following:

  • Security service (telecommunication)[35] as defined by ITU-T X.800 recommendation.
  • Management functions, i.e. functions that permit the configuration, instantiation, monitoring and termination of the communications of two or more entities. There is a specific application-layer protocol, the Common Management Information Protocol (CMIP), with a corresponding service, the Common Management Information Service (CMIS); these need to interact with every layer in order to deal with their instances.
  • OSI subdivides the network layer into three sublayers: 3a) subnetwork access, 3b) subnetwork-dependent convergence and 3c) subnetwork-independent convergence.[36] Multiprotocol Label Switching (MPLS), ATM, and X.25 are 3a protocols. MPLS was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram-based service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames. Because such protocols sit between the traditional definitions of layers 2 and 3, one sometimes sees reference to a Layer 2.5.
  • Cross MAC and PHY Scheduling is essential in wireless networks because of the time-varying nature of wireless channels. By scheduling packet transmission only in favourable channel conditions, which requires the MAC layer to obtain channel state information from the PHY layer, network throughput can be significantly improved and energy waste can be avoided.[37][page needed]

Programming interfaces

Neither the OSI Reference Model, nor any OSI protocol specifications, outline any programming interfaces, other than deliberately abstract service descriptions. Protocol specifications define a methodology for communication between peers, but the software interfaces are implementation-specific.

For example, the Network Driver Interface Specification (NDIS) and Open Data-Link Interface (ODI) are interfaces between the media (layer 2) and the network protocol (layer 3).

Comparison to other networking suites

The table below presents a list of OSI layers, the original OSI protocols, and some approximate modern matches. This correspondence is rough: the OSI model contains idiosyncrasies not found in later systems such as the IP stack in the modern Internet.[25]

Comparison with TCP/IP model

The design of protocols in the TCP/IP model of the Internet does not concern itself with strict hierarchical encapsulation and layering. RFC 3439 contains a section entitled "Layering considered harmful".[47] TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the host-to-host transport path; the internetworking range; and the scope of the direct links to other nodes on the local network.[48]

Despite using a different concept for layering than the OSI model, these layers are often compared with the OSI layering scheme in the following manner:

  • The Internet application layer maps to the OSI application layer, presentation layer, and most of the session layer.
  • The TCP/IP transport layer maps to the graceful close function of the OSI session layer as well as the OSI transport layer.
  • The internet layer performs functions corresponding to a subset of those of the OSI network layer.
  • The link layer corresponds to the OSI data link layer and may include similar functions as the physical layer, as well as some protocols of the OSI's network layer.

These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in the internal organization of the network layer.

The OSI protocol suite that was specified as part of the OSI project was considered by many as too complicated and inefficient, and to a large extent unimplementable.[49][page needed] Taking the "forklift upgrade" approach to networking, it specified eliminating all existing networking protocols and replacing them at all layers of the stack. This made implementation difficult and was resisted by many vendors and users with significant investments in other network technologies. In addition, the protocols included so many optional features that many vendors' implementations were not interoperable.[49][page needed]

Although the OSI model is often still referenced, the Internet protocol suite has become the standard for networking. TCP/IP's pragmatic approach to computer networking and to independent implementations of simplified protocols made it a practical methodology.[49][page needed] Some protocols and specifications in the OSI stack remain in use, one example being IS-IS, which was specified for OSI as ISO/IEC 10589:2002 and adapted for Internet use with TCP/IP as RFC 1142.[50]

from Grokipedia
The Open Systems Interconnection (OSI) model is a reference model that standardizes the functions of a telecommunication or computing system into seven abstraction layers, facilitating the exchange of information between different systems through a common set of protocols. Developed by the International Organization for Standardization (ISO) and the CCITT (now the ITU-T) in the late 1970s to address interoperability challenges among diverse computer networks, it was first published in 1984 as ISO 7498, with the current version codified as ISO/IEC 7498-1:1994. The model's layered architecture promotes modularity, allowing developers to design, implement, and troubleshoot network protocols independently at each level while ensuring seamless communication across the stack.

At its core, the OSI model organizes network operations from the physical transmission of bits to high-level application interactions, providing a universal reference for understanding how data moves through a network. Its primary purpose is to enable open systems (computers and devices from different vendors) to interconnect reliably, fostering global standardization in networking technologies. Unlike implementation-specific models like TCP/IP, the OSI framework is purely descriptive, serving as an educational and analytical tool rather than a rigid protocol suite, though it influences modern standards.

The seven layers of the OSI model, numbered from bottom to top, each handle distinct aspects of communication:
  • Layer 1: Physical – Responsible for the transmission and reception of raw bit streams over a physical medium, such as cables or wireless signals, defining electrical, mechanical, and procedural specifications.
  • Layer 2: Data Link – Provides node-to-node data transfer, error detection, and framing, using MAC addresses to manage access to the physical medium (e.g., Ethernet protocols).
  • Layer 3: Network – Manages logical addressing, , and forwarding of packets across multiple networks, enabling devices to find optimal paths (e.g., IP protocols like IPv4 and ).
  • Layer 4: – Ensures end-to-end delivery of data, including segmentation, flow control, and error recovery, with protocols like TCP for reliable transmission or UDP for faster, connectionless service.
  • Layer 5: Session – Establishes, maintains, and terminates communication sessions between applications, handling synchronization and dialog control for coordinated exchanges.
  • Layer 6: Presentation – Translates data between the application layer and the network, managing syntax, encryption, compression, and format conversion (e.g., converting data between character encodings such as ASCII and EBCDIC).
  • Layer 7: Application – Interfaces directly with end-user applications, providing network services such as email, file transfer, and web browsing (e.g., protocols like HTTP, SMTP, and FTP).
This structure not only simplifies complex network designs but also aids in diagnosing issues by isolating problems to specific layers, making it a foundational concept in computer networking education and practice.
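The layer list above can be condensed into a small lookup table. This is an illustrative sketch only: the layer names, PDU names, and example technologies are taken from the text, while the table structure and the helper function `layer_of` are invented here for demonstration.

```python
# Illustrative table of the seven OSI layers: number -> (name, typical PDU,
# example technologies mentioned in the surrounding text).
OSI_LAYERS = {
    1: ("Physical",     "bits",     "RS-232, IEEE 802.3 PHY"),
    2: ("Data Link",    "frames",   "Ethernet MAC, PPP, HDLC"),
    3: ("Network",      "packets",  "IPv4, IPv6, CLNP"),
    4: ("Transport",    "segments", "TCP, UDP"),
    5: ("Session",      "data",     "dialog control, sync points"),
    6: ("Presentation", "data",     "encryption, compression, format conversion"),
    7: ("Application",  "data",     "HTTP, SMTP, FTP"),
}

def layer_of(pdu_name):
    """Return the layer numbers whose characteristic PDU matches the name."""
    return [n for n, (_, pdu, _) in OSI_LAYERS.items() if pdu == pdu_name]

print(layer_of("frames"))  # [2]
```

A table like this is often used in troubleshooting to isolate a fault to the layer whose PDU is misbehaving, as the closing sentence above suggests.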

Overview

Purpose and Scope

The Open Systems Interconnection (OSI) model is a seven-layer reference model developed by the International Organization for Standardization (ISO) to enable open systems interconnection. Its primary purpose is to provide a common basis for coordinating the development of standards that facilitate communication between diverse computer systems by abstracting network functions into modular layers. This abstraction allows systems from different vendors to interoperate without proprietary dependencies, addressing the silos created by manufacturer-specific networking protocols prevalent in the 1970s. The scope of the OSI model encompasses both theoretical understanding of network communications and practical applications in protocol design, with an emphasis on vendor-neutral standards that promote global compatibility. It serves as a framework for standardizing how data is exchanged across networks, applicable to a wide range of technologies while remaining independent of specific implementations. Key benefits of the OSI model include enhanced interoperability among heterogeneous systems, modularity that simplifies protocol development and integration of new technologies, and easier troubleshooting by isolating issues to specific layers. Additionally, it establishes a common language for networking professionals, fostering consistent terminology and approaches in education, design, and maintenance.

Key Principles

The OSI reference model is founded on the principle of layering, which decomposes the complex process of open systems interconnection into a structured hierarchy of seven layers. Each layer is responsible for a distinct set of functions, providing standardized services to the layer immediately above it while utilizing the services offered by the layer below. This hierarchical arrangement enforces strict boundaries between layers, preventing direct dependencies and enabling modular design where each layer operates as an autonomous entity within the overall architecture. A fundamental aspect of this model is abstraction, which allows each layer to conceal the specific details of its internal mechanisms and protocols from adjacent layers. By presenting only a simplified interface of services, abstraction facilitates the independent evolution of individual layers, permitting updates or replacements in one layer's implementation without necessitating changes in others, thereby enhancing flexibility and adaptability in diverse network environments. Service access points (SAPs) define the critical interfaces between consecutive layers, serving as the designated points through which an upper layer requests and receives services from the lower layer. These access points encapsulate the interactions, ensuring that the exchange occurs in a controlled and standardized manner, with protocol data units passed across the boundary via service primitives such as request, indication, response, and confirm. The model employs peer-to-peer communication as a logical construct, wherein entities residing in the same layer on different communicating systems interact directly through their respective protocols to fulfill layer-specific objectives. This communication is abstracted from the underlying layers, relying on the services provided below to transfer protocol data units between peers, thus maintaining the integrity of the layered separation while enabling end-to-end functionality across interconnected systems.
Independence among layers is a guiding principle that underscores the model's robustness, stipulating that alterations to the internal operations or technologies within one layer should have no impact on the functionality of other layers, except in cases where the defined interfaces or service specifications are modified. This separation promotes interoperability among heterogeneous systems by isolating implementation choices and allowing for innovation at individual layers without disrupting the broader interconnection framework.

Historical Development

Origins in ISO Standardization

The development of the OSI model originated from efforts within the International Organization for Standardization (ISO) to establish a universal framework for network interoperability amid the rise of incompatible proprietary systems in the 1970s. In 1977, ISO's Technical Committee 97 (TC 97) formed Subcommittee 16 (SC 16) specifically to address "Open Systems Interconnection," tasked with creating an architectural model that would enable diverse computer systems from different vendors to communicate seamlessly. This initiative was driven by the need to counteract vendor-specific protocols, such as IBM's Systems Network Architecture (SNA) introduced in 1974 and Digital Equipment Corporation's (DEC) network architectures, which locked users into single-vendor ecosystems and hindered multivendor integration. The subcommittee's work drew inspiration from earlier networking experiments, including the ARPANET's packet-switching concepts developed in the late 1960s and Xerox Network Systems (XNS), a layered protocol stack pioneered by Xerox in the mid-1970s that emphasized modularity and interoperability. However, SC 16's focus remained on crafting vendor-neutral international standards rather than adopting any single prior implementation, with the first plenary meeting occurring in March 1978, where initial architectural principles were outlined. Key contributors included delegates from Europe and the United States, such as French engineer Hubert Zimmermann, who played a central role in drafting the layered structure; U.S. representatives like Charles Bachman (chairman of SC 16) from Honeywell and John Day from the University of Illinois; and British physicist Donald Davies, whose work on packet switching at the National Physical Laboratory influenced the model's foundational ideas.
Initial drafts of the reference model emerged in the late 1970s through collaborative sessions involving representatives from 23 member countries, culminating in the publication of ISO 7498, "Information Processing Systems—Open Systems Interconnection—Basic Reference Model," as an international standard in October 1984. This document formalized the seven-layer architecture, providing a conceptual blueprint for open networking that prioritized modularity, transparency, and global applicability over proprietary constraints.

Evolution and Key Milestones

Following the initial publication of the OSI Reference Model in 1984 as ISO 7498, subsequent developments focused on enhancing its applicability through specialized addenda and revisions. In 1989, the International Organization for Standardization (ISO) introduced ISO 7498-2, which defined a security architecture for the OSI model, outlining services such as authentication, access control, data confidentiality, and data integrity across the layers. Concurrently, the ITU (then CCITT) approved Recommendation X.290 in November 1988, establishing a conformance testing methodology and framework for OSI protocols, including general concepts for abstract test suites to ensure interoperability. These updates addressed critical gaps in security and verification, enabling more robust implementations of OSI-compliant systems. The model underwent a significant revision process starting in 1988, culminating in the publication of ISO/IEC 7498-1:1994, which refined the basic reference model by incorporating prior addenda (such as connectionless modes), clarifying layer interactions, and aligning with emerging standards like ISO 9545 for the application layer structure. Additionally, ISO/IEC 7498-4:1989 provided a dedicated management framework, defining OSI management concepts, including fault, configuration, accounting, performance, and security management functions to support ongoing network operations. This 1994 edition emphasized stability and coordination for standards development, serving as a foundational update without major structural overhauls. Adoption of the OSI model gained momentum in the late 1980s through international policies and collaborations. The European Economic Community (precursor to the European Union) endorsed OSI via its Open Systems policy, with national governments across Europe mandating its use in procurements to promote vendor neutrality by the mid-1980s. Simultaneously, the model was integrated into CCITT recommendations, building on the 1984 alliance between ISO and CCITT (now ITU-T), which harmonized OSI principles with telecommunication protocols for global consistency. By the 1990s, direct implementations of OSI protocols declined sharply due to the rise of the simpler, royalty-free TCP/IP suite, which dominated Internet growth and U.S. government priorities after 1992. Despite this, the OSI model endured as a vital educational and conceptual tool, providing a structured framework for teaching network principles and influencing protocol design in academia and industry training programs. As of 2025, the OSI model retains relevance in contemporary standards and architectures. It informs cybersecurity frameworks like ISO/IEC 27001 for layered security analysis in industrial control systems and broader networks. In 5G and emerging 6G discussions, OSI layers—particularly the physical (Layer 1), session (Layer 5), and presentation (Layer 6) layers—guide interoperability, security, and AI-integrated connectivity in non-terrestrial networks and spectrum innovations.

Definitions and Standards

Core Definitions

In the OSI model, a protocol is defined as a set of rules and formats (semantic and syntactic) that governs the interaction between peer entities at the same layer to perform specific functions. These rules ensure consistent communication behavior across open systems, enabling interoperability without regard to underlying hardware differences. A service, in contrast, refers to the capabilities provided by a given layer (N) and all layers below it to the adjacent higher layer (N+1) through a defined interface at their boundary. This service abstraction allows the higher layer to request functions such as data transfer or error handling without needing to understand the implementation details of the lower layers. Services form the foundation for modular system design in the OSI architecture. The protocol data unit (PDU) is the fundamental unit of data exchanged by a protocol at a specific layer, comprising protocol control information and, optionally, user data passed from the higher layer. For instance, at the physical layer, the PDU consists of bits; at the data link layer, it takes the form of frames. PDUs facilitate peer-to-peer communication by encapsulating data as it traverses layers, preserving the integrity of layer boundaries. OSI services are categorized into connection-oriented and connectionless modes, which determine how data transfer occurs between layers. A connection-oriented service establishes an association, or connection, between entities before data exchange, providing explicit identification for the transfer and agreement on service parameters; this mode supports reliable, sequenced delivery akin to a telephone call. For example, it ensures data units are delivered in order and acknowledges receipt, suitable for applications requiring guaranteed transmission. Conversely, a connectionless service transmits data without establishing a prior connection or maintaining logical relationships between units, treating each as an independent datagram for efficient, best-effort delivery. This mode prioritizes speed over reliability, as in scenarios where occasional loss is tolerable.
Addressing schemes in the OSI model provide unambiguous identifiers for entities or service access points at each layer, enabling precise routing and delivery of PDUs within and across open systems. These schemes are layer-specific; for example, the data link layer uses hardware addresses like MAC addresses to identify devices on a local network segment. Such mechanisms support the model's goal of hierarchical communication without requiring global knowledge at lower layers.
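The contrast between the two service modes can be sketched with ordinary loopback sockets. This is a hedged illustration, not part of any OSI standard: TCP stands in for a connection-oriented service (explicit setup, ordered byte stream) and UDP for a connectionless one (independent datagrams, no setup); ports are chosen by the operating system.

```python
import socket
import threading

def tcp_echo_once(server):
    conn, _ = server.accept()          # connection established before any data
    with conn:
        conn.sendall(conn.recv(1024))  # reliable, ordered byte stream echoed back

# Connection-oriented: explicit connection setup precedes data transfer.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
t = threading.Thread(target=tcp_echo_once, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())   # the "association" between entities
client.sendall(b"hello")
reply = client.recv(1024)
client.close(); t.join(); server.close()

# Connectionless: each datagram is self-contained; no prior setup, and no
# delivery guarantee in general (loopback happens to be reliable).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", rx.getsockname())
datagram, _ = rx.recvfrom(1024)
tx.close(); rx.close()

print(reply == datagram == b"hello")
```

The design trade-off mirrors the text: the stream socket pays for setup and acknowledgment but guarantees ordering, while the datagram socket sends immediately and tolerates loss.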

Relevant Standards Documents

The OSI model's foundational framework is formalized in the ISO/IEC 7498 series of international standards, developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The core document, ISO/IEC 7498-1:1994, titled "Information technology — Open systems interconnection — Basic Reference Model — Part 1: The basic model," defines the seven-layer architecture and principles for open systems interconnection, providing a common basis for coordinating standards development in network protocols and services; it incorporates amendments from the original 1984 edition (ISO 7498) to refine concepts like layering and service definitions. Complementing this, ISO/IEC 7498-2:1989, "Information processing systems — Open Systems Interconnection — Basic Reference Model — Part 2: Security architecture," extends the model by specifying general security services (such as authentication and data confidentiality) and mechanisms applicable across layers, positioning them within the reference model to support secure communications between open systems. ISO/IEC 7498-3:1997, "Information technology — Open Systems Interconnection — Basic Reference Model — Part 3: Naming and addressing," establishes mechanisms for identifying and locating objects in the OSI environment, including definitions for names, addresses, naming domains, and authorities to ensure consistent resolution in distributed systems. Additionally, ISO/IEC 7498-4:1989, "Information processing systems — Open Systems Interconnection — Basic Reference Model — Part 4: Management framework," outlines a structure for OSI management activities, including scopes like fault, configuration, accounting, performance, and security management, to guide the development of related standards for monitoring and controlling interconnected systems.
Related standards from the ITU Telecommunication Standardization Sector (ITU-T) align closely with the ISO/IEC 7498 series, particularly in the X.200 recommendation series for data networks and open systems; for instance, ITU-T Recommendation X.200 (1994), "Information technology — Open Systems Interconnection — Basic Reference Model: The basic model," is identical to ISO/IEC 7498-1:1994 and serves as an overview for OSI conformance in telecommunication contexts. As of 2025, these editions remain the current versions with no major revisions published, reflecting the model's enduring conceptual stability; they are available for purchase in digital formats (PDF) through the official ISO online store, with previews often accessible via the ISO standards database.

Layered Architecture

Design Principles

The OSI model's layered architecture is founded on several principles that ensure its effectiveness as a framework for open systems interconnection. These principles guide the division of network functions into distinct layers, promoting interoperability and manageability in diverse environments. Central to this is the concept of layering, which structures communication functions hierarchically while maintaining independence between components. Modularity forms a foundational principle, treating each layer as a self-contained module responsible for specific functions, such as data transmission or error detection, without dependency on internal implementations of other layers. This allows for parallel development, where different teams or vendors can work on individual layers independently, facilitating easier updates, testing, and integration across heterogeneous systems. By encapsulating functionality within modules, the model reduces complexity and enhances reusability, as changes in one layer's implementation do not necessitate revisions elsewhere, provided the defined interfaces remain consistent. Hierarchy establishes a strict top-down dependency structure among layers, where each layer relies on services from the layer below and provides services to the layer above through well-defined interfaces. This vertical organization ensures orderly data flow and abstraction, with higher layers focusing on user-oriented tasks while lower layers handle transmission details. Interactions between layers are mediated by service primitives—request, indication, response, and confirm—which standardize communication: a request from a higher layer triggers an indication in the lower layer, potentially eliciting a response that leads to a confirm back to the originator. This primitive-based mechanism enforces reliable, sequenced service invocation, preventing direct interlayer bypassing and maintaining architectural integrity.
Openness is embedded in the model's design to promote compatibility across diverse systems from different manufacturers, achieved through internationally agreed-upon standards that specify protocols and interfaces without proprietary constraints. By defining open systems as those adhering to these standards, the OSI model enables seamless interoperability, allowing equipment and software from various sources to interoperate as if part of a unified network, a key enabler for global communication infrastructures. Completeness ensures the model encompasses all essential aspects of communication, spanning from physical transmission in the lowest layer to high-level application interactions in the uppermost layer. This comprehensive coverage addresses the full spectrum of network operations, including bit-level hardware concerns, routing, session coordination, and data representation, providing a holistic framework for implementing end-to-end connectivity without gaps in functionality. Flexibility is inherent in the architecture's support for both connection-oriented and connectionless operations across layers, accommodating varied communication needs such as reliable, sequenced delivery (connection-oriented) or efficient, datagram-style transmission (connectionless). This duality allows the model to adapt to different technologies and applications, from real-time streaming to file transfers, while the layered design permits evolution in individual layers without disrupting the overall structure.
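The request → indication → response → confirm cycle described above can be modeled in a few lines. The class and method names here are illustrative inventions, not any standard API: a service user at one system issues a request, the provider delivers it as an indication to the peer, and the peer's response returns to the originator as a confirm.

```python
class ServiceProvider:
    """Toy model of an (N)-layer carrying primitives between two (N+1)-entities."""

    def __init__(self, responder):
        self.responder = responder          # the peer entity's indication handler

    def request(self, params):
        # request at system A -> indication at system B
        response = self.responder(params)
        # response at system B -> confirm back at system A
        return f"CONFIRM({response})"

def peer_indication(params):
    # The peer entity receives the indication and issues a response primitive.
    return f"RESPONSE(accept {params})"

provider = ServiceProvider(peer_indication)
result = provider.request("CONNECT")
print(result)  # CONFIRM(RESPONSE(accept CONNECT))
```

The key property the sketch preserves is that the two (N+1)-entities never call each other directly; every exchange passes through the lower layer's service interface, as the principle of hierarchy requires.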

Encapsulation Process

In the OSI model, encapsulation refers to the process by which data is progressively wrapped with protocol-specific information as it travels downward through the layers from the application to the physical layer on the sending system. This wrapping adds headers (and sometimes trailers) to the original data at each layer, enabling each to perform its functions independently while ensuring reliable transmission across interconnected systems. The process is defined in the OSI Reference Model (ISO/IEC 7498-1), which standardizes how layers interact to facilitate open systems interconnection. During the downward journey, application-layer data begins as a Protocol Data Unit (PDU), typically called "data," and is passed to the presentation layer, where it is encapsulated with a header for tasks like encryption, compression, and format conversion. This presentation PDU is then handed to the session layer, which adds a header for session management, synchronization, and dialog control, forming a session PDU. The session PDU reaches the transport layer, which segments the data and adds a header with control information such as sequence numbers for reassembly, creating a segment (or Transport PDU). The transport segment is passed to the network layer, which adds a header with logical addressing details like source and destination IP addresses, transforming it into a packet. This packet reaches the data link layer, which appends a header (including MAC addresses) and possibly a trailer for error detection, resulting in a frame. Finally, the frame is converted into a bit stream at the physical layer for transmission over the medium, without additional encapsulation but with signal encoding. The overall PDU transformation sequence is: application data → presentation data → session data → transport segment → network packet → data link frame → physical bits. 
On the receiving system, de-encapsulation reverses this process in an upward journey from the physical to the application layer. The physical layer receives the bit stream and reconstructs the frame, which the data link layer strips of its header and trailer to yield the packet. The network layer removes its header to retrieve the segment, and the transport layer strips the segment header to reassemble the data, passing it to the session layer. The session layer removes its header to handle synchronization and dialog, then passes to the presentation layer, which strips its header to perform decryption, decompression, and format conversion before delivering the original data to the application layer. This layer-by-layer stripping ensures that control information is processed only by the appropriate layer, restoring the data for application use. A generic example of data packet traversal illustrates this: an application generates user data, which is encapsulated downward—adding presentation formatting and encryption, session control, transport sequencing, network addressing, and data link framing—into bits for transmission; upon arrival, the bits are de-encapsulated upward, with each layer stripping its additions (e.g., presentation conversion and session management) to deliver the intact data to the destination application. This bidirectional encapsulation maintains layer independence, allowing changes in one layer without affecting others, as outlined in the OSI model's layered architecture.
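The downward wrapping and upward stripping can be sketched as a pair of inverse functions. The header markers below are invented placeholders, not real protocol headers; the point is only the ordering: each layer wraps the PDU it receives, so the data link layer's framing ends up outermost and is removed first on receipt.

```python
# Layers that add control information on the way down (the physical layer
# encodes bits rather than adding a header, so it is omitted here).
LAYERS = ["presentation", "session", "transport", "network", "data-link"]

def encapsulate(data: str) -> str:
    """Wrap application data downward through the stack."""
    pdu = data
    for layer in LAYERS:                       # presentation first, data-link last
        pdu = f"<{layer}>{pdu}</{layer}>"      # header + (notional) trailer
    return pdu

def decapsulate(pdu: str) -> str:
    """Strip headers upward, outermost (data link) first."""
    for layer in reversed(LAYERS):
        assert pdu.startswith(f"<{layer}>") and pdu.endswith(f"</{layer}>")
        pdu = pdu[len(layer) + 2 : -(len(layer) + 3)]
    return pdu

frame = encapsulate("user data")
print(frame.startswith("<data-link>"))         # True: framing is outermost
print(decapsulate(frame))                      # user data
```

Because each function touches only its own layer's marker, changing one layer's "header format" in this sketch leaves every other layer untouched, which is the independence property the text describes.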

The Seven Layers

Physical Layer

The Physical Layer, Layer 1 of the OSI model, serves as the foundational component responsible for the transparent transmission of raw bit streams between communicating devices over a physical medium. It defines the electrical, mechanical, procedural, and functional specifications necessary to activate, maintain, and deactivate a bit-level physical connection, ensuring compatibility between open systems without regard to the underlying data representation or semantics. This layer operates independently of higher-layer protocols, focusing solely on the physical aspects of signal propagation to enable reliable bit delivery. Core functions of the Physical Layer include bit synchronization, which aligns the timing clocks of sender and receiver to accurately delineate individual bits within the stream, and bit-rate control, which governs the transmission speed to match the medium's capabilities. It also specifies transmission modes—simplex for unidirectional flow, half-duplex for bidirectional but not simultaneous exchange, or full-duplex for concurrent two-way exchange—and handles the activation or deactivation of physical circuits. These functions ensure the electrical, optical, or electromagnetic signals representing bits are generated and interpreted correctly, without any framing or error handling. The Physical Layer accommodates diverse transmission media, such as twisted-pair copper wiring for short-range connections, coaxial cables for broadband signals, fiber optic cables for high-speed long-distance optical transmission, and wireless media using radio frequencies for untethered communication. Supported network topologies include bus (linear shared medium), star (centralized hub connections), ring (circular daisy-chaining), and mesh (interconnected nodes), each influencing how signals propagate and collide on the medium. For instance, twisted-pair and fiber optics commonly underpin star topologies in modern deployments.
Key standards exemplify these specifications: RS-232 (now TIA/EIA-232-F) defines serial point-to-point interfaces with voltage levels of +3 V to +15 V for logic 0 (space) and -3 V to -15 V for logic 1 (mark), employing single-ended unbalanced signaling for distances up to 50 feet at rates to 20 kbps. Similarly, the IEEE 802.3 standard outlines physical layer (PHY) parameters, including encoding schemes and interfaces for twisted-pair, coaxial, and fiber media, supporting speeds from 1 Mb/s to 400 Gb/s via methods like Manchester encoding for synchronization. Signaling at this layer typically involves analog modulation techniques, such as amplitude or frequency modulation for wireless and optical links, to superimpose digital bits onto continuous carrier waves.

Data Link Layer

The Data Link Layer, the second layer in the OSI reference model, provides node-to-node data transfer services across a single physical link by organizing bits from the Physical Layer into logical frames and ensuring reliable delivery between directly connected devices. It handles the structuring of data transmission, insertion of control information for error management, and regulation of access to the shared medium, operating exclusively within local boundaries without involvement in end-to-end routing. Key functions of the Data Link Layer include framing, which involves encapsulating network-layer packets into frames by adding headers and trailers to delineate data boundaries and enable synchronization. Physical addressing is achieved through Media Access Control (MAC) addresses, 48-bit unique identifiers assigned to network interfaces for local delivery within the segment. Error detection and correction mechanisms, such as Cyclic Redundancy Check (CRC), append a checksum to frames to identify transmission errors like bit flips, with CRC using polynomial division to generate a remainder that verifies integrity upon receipt. Flow control regulates the rate of data transmission to prevent overwhelming the receiver, often through techniques like sliding window protocols that manage buffer capacities.
The Data Link Layer is subdivided into two sublayers: the Logical Link Control (LLC) sublayer and the Media Access Control (MAC) sublayer, as defined in the IEEE 802 standards to separate multiplexing and medium access functions. The LLC sublayer, specified in IEEE 802.2, provides multiplexing and demultiplexing of protocols above it using Service Access Points (SAPs), such as Destination SAP (DSAP) and Source SAP (SSAP), and supports connectionless or connection-oriented services for reliable data exchange. The MAC sublayer manages access to the physical medium, resolving contention in shared environments through methods like Carrier Sense Multiple Access with Collision Detection (CSMA/CD) for detecting and resolving simultaneous transmissions, or token passing, where a control token circulates to grant sequential access rights. Prominent standards governing the Data Link Layer include the IEEE 802 series, which map to OSI Layer 2 for local area networks (LANs). IEEE 802.3 defines Ethernet, incorporating MAC framing, CRC for error detection, and CSMA/CD for half-duplex operations on wired segments. IEEE 802.11 specifies the MAC for wireless LANs (Wi-Fi), using CSMA with Collision Avoidance (CSMA/CA) to mitigate hidden node problems and support frame acknowledgments for reliability. Common protocols at this layer include the Point-to-Point Protocol (PPP), a byte-oriented standard for establishing direct connections over serial links, providing framing, authentication, and multilink capabilities without assuming a specific physical medium. High-Level Data Link Control (HDLC), a bit-oriented ISO protocol, supports synchronous transmission with flags for framing, control fields, and optional error correction via retransmission. These protocols operate in half-duplex mode, allowing bidirectional communication but not simultaneously, or full-duplex mode, enabling simultaneous transmit and receive without collisions, as in modern switched Ethernet networks.
Overall, the Data Link Layer ensures hop-by-hop reliability in a single network segment, transforming raw physical signals into structured, error-checked frames for efficient local communication.
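CRC-based error detection can be demonstrated with a short sketch. For brevity this uses an 8-bit polynomial (0x07, as in CRC-8); real Ethernet frames carry a 32-bit CRC with additional bit-ordering and inversion rules, so this is an illustration of the polynomial-division idea, not the 802.3 algorithm.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """MSB-first CRC-8 over `data` using the given generator polynomial."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift out one bit; XOR in the polynomial when a 1 falls off.
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

payload = b"hello"
fcs = crc8(payload)                    # sender computes the frame check sequence
frame = payload + bytes([fcs])         # ...and appends it as a trailer

# Receiver recomputes the CRC over the payload and compares with the trailer.
ok = crc8(frame[:-1]) == frame[-1]

# A single flipped bit is always caught by a CRC whose polynomial has >1 term.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
bad = crc8(corrupted[:-1]) == corrupted[-1]

print(ok, bad)  # True False
```

The receiver never corrects the error here; as in Ethernet, a failed check simply marks the frame for discard, leaving recovery to higher layers.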

Network Layer

The Network Layer, designated as layer 3 in the OSI reference model, provides the functional and procedural means of transferring variable-length data sequences (packets) from a source host on one network to a destination host on a potentially different network, while maintaining quality of service characteristics for the established connection. This layer establishes the foundation for internetwork communication by abstracting the underlying subnetwork technologies, enabling end-to-end data delivery across multiple interconnected networks without regard to the specific routing or switching mechanisms employed. Unlike the Data Link Layer, which operates within a single physical link, the Network Layer extends scope to multi-network environments, marking the onset of true end-to-end addressing and path selection. Key functions of the Network Layer include logical addressing, routing, fragmentation and reassembly, and basic congestion control. Logical addressing assigns unique identifiers (such as network service access points or NSAPs) to hosts and networks, facilitating packet identification and delivery independent of physical locations. Routing involves path determination through the use of routing tables and algorithms, such as those computing the shortest path between nodes, to forward packets toward their destination across intermediate systems (routers). Fragmentation breaks down oversized packets to conform to subnetwork maximum transmission unit (MTU) limits, with reassembly performed at the destination, ensuring compatibility across diverse network types. Congestion control mechanisms monitor network load and adjust traffic flow to prevent overload, though this is typically best-effort rather than guaranteed. The Network Layer supports two primary operational approaches: connectionless (datagram) mode, where each packet is routed independently without prior setup, and connection-oriented (virtual circuit) mode, which establishes a logical path before data transfer for sequenced delivery.
The connectionless mode, exemplified by protocols like the Connectionless Network Protocol (CLNP) defined in ISO/IEC 8473, treats each packet as a self-contained unit, promoting flexibility but offering no inherent reliability or ordering. In contrast, connection-oriented operation pre-allocates resources along the path, akin to virtual circuits, to support applications requiring consistent performance. These conceptual roles, outlined in ISO/IEC 7498-1, emphasize the layer's independence from specific implementations, allowing diverse protocols to interoperate within the OSI framework.
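Fragmentation and reassembly can be sketched in a few lines. The tiny MTU and the (offset, more-fragments flag) tuple are simplified stand-ins for the richer headers of real protocols such as IPv4 or CLNP; the sketch only shows the mechanism of splitting to fit an MTU and reassembling at the destination, even when fragments arrive out of order.

```python
MTU = 8  # bytes of payload per fragment; deliberately tiny for the example

def fragment(packet: bytes):
    """Split a packet into (offset, more_flag, payload) fragments."""
    frags = []
    for off in range(0, len(packet), MTU):
        piece = packet[off:off + MTU]
        more = off + MTU < len(packet)     # "more fragments" flag, cleared on last
        frags.append((off, more, piece))
    return frags

def reassemble(frags):
    """Rebuild the original packet; fragments may arrive in any order."""
    frags = sorted(frags)                  # order by offset
    assert not frags[-1][1]                # last fragment must clear the flag
    return b"".join(piece for _, _, piece in frags)

packet = b"a variable-length network-layer packet"
frags = fragment(packet)
restored = reassemble(reversed(frags))     # simulate out-of-order arrival
print(restored == packet)                  # True
```

Real datagram networks add timers and identifiers so that fragments of different packets are never mixed; those details are omitted here.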

Transport Layer

The Transport Layer, designated as layer 4 in the OSI model, provides transparent end-to-end data transfer services between peer entities in different systems, ensuring reliable communication independent of the underlying network characteristics. This layer bridges the gap between the network layer's host-to-host delivery and the higher layers' need for process-to-process communication, focusing on host systems rather than intermediate nodes. Its primary role is to segment application data into transport protocol data units (TPDUs) for transmission and reassemble them at the destination, while offering multiplexing to allow multiple applications to share the same network connection via transport service access points (TSAPs). The Transport Layer supports two main service types: connection-oriented and connectionless. In connection-oriented service, it establishes a virtual connection before data transfer, enabling reliable delivery through mechanisms like sequence numbering for ordering TPDUs, acknowledgments to confirm receipt, retransmissions for lost or corrupted data, and windowing to manage flow and prevent congestion. This mode is specified in ISO/IEC 8073, which defines five protocol classes tailored to network reliability: Class 0 for simple, error-free networks with minimal functions; Class 1 for basic recovery on networks prone to signal loss; Class 2 for multiplexing with optional flow control; Class 3 combining multiplexing and recovery; and Class 4 for full end-to-end error detection and recovery using checksums on unreliable networks. In contrast, connectionless service transfers data without prior setup, prioritizing simplicity and speed for applications tolerating potential loss, with functions limited to delivery via addressing and optional error detection via checksums, but without segmentation, reassembly, or recovery mechanisms. This mode is defined in ISO/IEC 8602, which operates over either connectionless or connection-oriented network services.
End-to-end error recovery and flow control in the Transport Layer ensure reliable and efficient transmission across diverse network conditions, distinguishing it from the network layer's focus on routing between intermediate systems. For instance, in connection-oriented protocols, flow control uses credit-based windowing, where the receiver advertises available buffer space through acknowledgment TPDUs, preventing overload. These mechanisms collectively provide the reliability guarantees needed for upper-layer services, such as those in the session layer, without assuming specific network paths.
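The credit-based windowing just described can be illustrated with a minimal sketch, assuming an in-memory receiver buffer and a synchronous sender loop; class and function names are hypothetical, and real transport protocols carry the credit inside acknowledgment TPDUs rather than reading it directly.

```python
from collections import deque

class Receiver:
    def __init__(self, buffer_size: int):
        self.buffer = deque()
        self.buffer_size = buffer_size

    def credit(self) -> int:
        """Credit advertised to the sender: free buffer space, in TPDUs."""
        return self.buffer_size - len(self.buffer)

    def deliver(self, tpdu) -> None:
        self.buffer.append(tpdu)

    def consume(self):
        """Stand-in for the receiving application reading buffered data."""
        return self.buffer.popleft()

def send_all(data, receiver):
    """Sender transmits only as many TPDUs as the receiver has credit for."""
    sent, i = [], 0
    while i < len(data):
        window = receiver.credit()
        if window == 0:            # no credit: wait until the peer frees space
            receiver.consume()
            continue
        for tpdu in data[i:i + window]:
            receiver.deliver(tpdu)
            sent.append(tpdu)
        i += window
    return sent

rx = Receiver(buffer_size=3)
assert send_all(list(range(8)), rx) == list(range(8))
```

Because the sender never exceeds the advertised credit, the receiver's buffer cannot overflow, which is the overload-prevention property the prose attributes to windowing.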

Session Layer

The session layer, the fifth layer in the Open Systems Interconnection (OSI) reference model, is responsible for establishing, managing, and terminating communication sessions between applications on different devices, ensuring coordinated and reliable dialogue over potentially unreliable transport connections. It provides mechanisms for dialog control and synchronization, allowing applications to maintain stateful interactions without directly handling lower-layer complexities. Key functions of the session layer include session establishment via the CONNECT Session Protocol Data Unit (SPDU), which initiates a connection between session entities; maintenance through ongoing data transfer SPDUs that support continuous communication; and termination using the DISCONNECT SPDU to cleanly end the session. Dialog control is achieved by regulating the direction and mode of communication, supporting simplex (one-way), half-duplex (bidirectional but alternating), or full-duplex (simultaneous bidirectional) operations, primarily through token-based mechanisms that determine which entity may transmit at a given time. Synchronization features enable recovery from interruptions in long-running sessions by defining checkpoints, such as minor sync points (via the MINOR SYNC POINT SPDU) for lightweight pauses and major sync points (via the MAJOR SYNC POINT SPDU) for more robust markers, allowing resynchronization with the RESYNCHRONIZE SPDU to resume from the last agreed point without full restart. In multi-party sessions, token management coordinates access by using GIVE TOKENS and PLEASE TOKENS SPDUs to transfer control tokens among participants, preventing conflicts and ensuring orderly interaction.
The protocol operates in two modes: token mode, which enforces strict control via token possession for activities like data sending, and no-token mode, which permits freer data exchange without requiring token ownership, alongside activity management through ACTIVITY START and ACTIVITY END SPDUs to delineate logical units of work within the session. These functions are standardized in ISO/IEC 8327-1:1996, which specifies the connection-oriented session protocol for OSI environments, identical to ITU-T Recommendation X.225.
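The checkpoint-and-resynchronize behavior can be sketched as a toy session object; the class, its in-memory transcript, and the serial-number scheme are illustrative assumptions, standing in for the MINOR SYNC POINT and RESYNCHRONIZE SPDU exchanges described above.

```python
class Session:
    """Toy model of session-layer synchronization points (not ISO/IEC 8327)."""
    def __init__(self):
        self.records = []       # data SPDUs sent so far in this session
        self.sync_points = {}   # sync-point serial number -> transcript position
        self.next_serial = 1

    def send(self, record) -> None:
        self.records.append(record)

    def minor_sync(self) -> int:
        """Mark a lightweight checkpoint (cf. MINOR SYNC POINT SPDU)."""
        serial = self.next_serial
        self.sync_points[serial] = len(self.records)
        self.next_serial += 1
        return serial

    def resynchronize(self, serial: int):
        """Roll the dialogue back to a prior checkpoint (cf. RESYNCHRONIZE SPDU)."""
        self.records = self.records[:self.sync_points[serial]]
        return self.records

s = Session()
s.send("page-1")
cp = s.minor_sync()
s.send("page-2"); s.send("page-3")
s.resynchronize(cp)      # interruption: resume from the last agreed point
print(s.records)         # ['page-1']
```

The point of the sketch is that a checkpoint records only a position in the dialogue, so recovery discards work after the checkpoint instead of restarting the whole session.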

Presentation Layer

The Presentation Layer, the sixth layer in the Open Systems Interconnection (OSI) model, ensures that data exchanged between applications on different systems is in a compatible format by handling translation, formatting, and representation differences. It acts as an intermediary between the Application Layer and the Session Layer, providing independence from variations in data syntax and semantics to enable seamless interoperability across heterogeneous environments. This layer transforms data into a standardized form suitable for network transmission while preserving its integrity and meaning. Key functions of the Presentation Layer include data syntax translation, compression, and encryption/decryption. Syntax translation involves converting between different data representations, such as character encodings from ASCII to EBCDIC, to accommodate diverse system architectures. Compression reduces data volume for efficient transmission without loss of information, while encryption/decryption applies basic primitives to secure data confidentiality during transfer, with more advanced security mechanisms addressed elsewhere. These operations ensure that the receiving system can correctly interpret the data regardless of originating platform differences. The layer employs Abstract Syntax Notation One (ASN.1) to define abstract data structures, types, values, and constraints independently of specific machine or language implementations, facilitating the description of information for protocol exchanges. ASN.1 supports the creation of an abstract syntax that outlines the logical structure of data. To enable actual transmission, the Presentation Layer converts this abstract syntax into a transfer syntax using standardized encoding rules, which map internal representations to network-compatible formats.
Relevant standards include ISO/IEC 8824, which specifies ASN.1's basic notation for defining abstract syntax, and ISO/IEC 8825, which outlines encoding rules such as the Basic Encoding Rules (BER) for deriving transfer syntaxes from ASN.1 definitions. These standards, developed under the OSI framework, promote consistent data handling across open systems.
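As a concrete illustration of how an abstract type becomes a transfer syntax, the following sketch BER-encodes an ASN.1 INTEGER (universal tag 0x02): a tag octet, a short-form length octet, and a minimal two's-complement content field. It covers only small positive and negative integers, not the full BER rules.

```python
def ber_encode_integer(value: int) -> bytes:
    """Encode an ASN.1 INTEGER per BER: tag 0x02, short-form length, content."""
    if value == 0:
        content = b"\x00"
    else:
        length = (value.bit_length() + 8) // 8   # extra bit reserved for sign
        content = value.to_bytes(length, "big", signed=True)
        # Drop redundant leading 0x00/0xFF octets (BER minimal-octet rule).
        while len(content) > 1 and (
            (content[0] == 0x00 and content[1] < 0x80)
            or (content[0] == 0xFF and content[1] >= 0x80)
        ):
            content = content[1:]
    return bytes([0x02, len(content)]) + content

print(ber_encode_integer(5).hex())    # 020105
print(ber_encode_integer(-1).hex())   # 0201ff
```

The same value always yields the same octets on the wire, regardless of how the sending machine represents integers internally, which is exactly the abstract-to-transfer-syntax separation the layer provides.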

Application Layer

The Application Layer, designated as Layer 7 in the OSI reference model, serves as the interface between end-user applications and the underlying network services, enabling distributed applications to communicate effectively across open systems. It provides a set of standardized services that allow application processes to access the OSI environment without needing to manage the intricacies of lower-layer protocols or data formatting. According to the OSI Basic Reference Model, this layer focuses on user-oriented functionalities, such as initiating and managing network-based tasks, while abstracting away the details of routing, data representation, and physical transmission. Key functions of the Application Layer include supporting common network services like file transfer, electronic messaging, and directory services, which facilitate interoperability among diverse systems. It does not include direct user interfaces—those are handled by the application software itself—but rather supplies the necessary primitives for applications to request and receive network resources. For instance, it enables applications to establish associations with remote peers, transfer data, and manage sessions at a high level of abstraction. The layer's design emphasizes modularity, allowing specific services to be combined to meet application needs, thereby promoting standardization in open interconnection environments. The structure and components of the Application Layer are formally defined in ISO/IEC 9545, which outlines the architectural framework, including the Association Control Service Element (ACSE) for managing application associations and common application service elements (CASE) that provide reusable functionalities across multiple applications. Specific Application Service Elements (SASE) tailor these services to particular tasks, ensuring that the layer remains conceptual yet adaptable to various implementations.
This standard establishes guidelines for how application processes interact with the OSI stack, focusing on service primitives like request, indication, response, and confirm to handle distributed operations efficiently. Representative protocols operating at the Application Layer include the File Transfer, Access, and Management (FTAM) protocol, standardized in ISO 8571, which provides mechanisms for initiating file transfers, accessing remote file stores, and performing management operations such as deletion and attribute retrieval across heterogeneous systems. Another example is the Message Handling System (MHS), developed by CCITT (now ITU-T) as the X.400 series, which defines a comprehensive framework for electronic messaging services, including message submission, transfer, and delivery, thereby supporting email-like functionalities in an OSI-compliant manner. These protocols exemplify the layer's role in delivering high-level, application-specific services while maintaining compatibility with the OSI model's principles of layered abstraction.

Interlayer Interactions

Cross-Layer Functions

Cross-layer functions in the OSI model encompass mechanisms that enable interactions and optimizations across multiple layers, diverging from the model's ideal of strict isolation to address real-world networking challenges such as varying channel conditions and resource constraints. These functions facilitate information sharing between layers, allowing for joint decision-making that enhances overall system performance, particularly in environments where traditional layering can introduce inefficiencies. By permitting higher layers to influence lower-layer operations or vice versa, cross-layer approaches optimize metrics like throughput and reliability, though they complicate protocol design and maintenance. Quality of Service (QoS) mechanisms exemplify cross-layer functions, as parameters such as delay and jitter inherently span multiple OSI layers to ensure reliable data delivery. For instance, propagation delay at the Physical layer due to signal transmission over media directly contributes to end-to-end latency experienced at the Transport layer, where congestion control protocols must mitigate accumulated delays. Jitter, or variation in packet arrival times, arises from interactions between the Data Link layer's error correction and the Network layer's routing decisions, requiring cross-layer signaling to prioritize real-time traffic like voice over IP. In wireless sensor networks, cross-layer frameworks address these by coordinating QoS metrics across the Physical, Data Link, and Network layers, reducing time-delay through adaptive resource allocation. A multilayered QoS architecture based on the OSI model employs cross-layer coordination to integrate feedback from lower layers into higher-layer policies, ensuring consistent performance in heterogeneous systems. Security functions, particularly encryption, operate across layers to protect data confidentiality throughout transmission.
Encryption processes typically initiate at the Presentation layer, where data is formatted and encrypted using algorithms like AES to abstract application-specific representations, but the protection extends through the Network layer by embedding encrypted payloads in packets that traverse intermediate routers without decryption. This spanning ensures that even if lower layers such as the Data Link layer are compromised during local hops, the core data remains secure from source to destination. In wireless security contexts, such cross-layer encryption addresses vulnerabilities at each OSI layer, with end-to-end mechanisms like IPsec at the Network layer complementing Presentation-layer encryption to counter threats like eavesdropping. Mobility management in wireless networks relies on cross-layer handoffs to maintain seamless connectivity as devices move between access points. Handoffs involve the Data Link layer for maintaining local associations and switching medium access control addresses, while the Network layer updates routing tables and care-of addresses to redirect traffic without session interruption. For example, in 5G networks, handoffs are enhanced by layer 1/2 triggered mobility, where beam management coordinates with scheduling and RRC signaling at higher layers to reduce handover latency and support ultra-reliable low-latency communications. This coordination minimizes signaling overhead and latency, affecting layers from Physical (radio link adaptation) to Transport (connection continuity). Representative examples of cross-layer functions include adaptive modulation, which adjusts Physical-layer transmission parameters based on feedback from higher layers to optimize transmission over dynamic channels. In cross-layer designs for wireless networks, adaptive modulation and coding (AMC) at the Physical layer integrates with Data Link-layer hybrid automatic repeat request (HARQ) to boost throughput by 20-50% under varying signal-to-noise ratios, effectively bridging to Network-layer QoS requirements.
Modern protocols like IEEE 802.11 in Wi-Fi employ cross-layer optimizations, where Physical-layer channel state information informs Data Link-layer scheduling, enhancing spectral efficiency in contention-based environments. Similar optimizations are employed in 5G NR, where cross-layer designs integrate PHY/MAC layer adaptations with higher-layer QoS to support ultra-reliable low-latency communications (URLLC). Criticisms of cross-layer functions highlight their violation of the OSI model's purity, as they undermine modularity and information hiding, potentially complicating interoperability and increasing design complexity in standardized systems. Despite this, such functions are necessary for efficiency in real-world deployments like Wi-Fi, where strict layering fails to handle wireless-specific issues like fading and mobility, leading to suboptimal performance without inter-layer coordination. In wireless networks, cross-layer approaches mitigate inefficiencies of traditional layered architectures, such as high error rates and handoff delays, by enabling adaptive optimizations that traditional OSI adherence cannot achieve.
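The AMC/HARQ interplay described above can be sketched as two cooperating decisions driven by the same channel-state feedback. The SNR thresholds, scheme names, and retransmission budgets below are illustrative assumptions, not values from any 802.11 or 5G NR specification.

```python
# (min SNR in dB, modulation scheme, bits per symbol) -- illustrative only.
MODULATION_TABLE = [
    (22.0, "64-QAM", 6),
    (15.0, "16-QAM", 4),
    (8.0,  "QPSK",   2),
    (0.0,  "BPSK",   1),
]

def select_modulation(snr_db: float):
    """Physical-layer decision driven by channel-state feedback."""
    for threshold, scheme, bits in MODULATION_TABLE:
        if snr_db >= threshold:
            return scheme, bits
    return "BPSK", 1

def harq_retransmission_budget(snr_db: float) -> int:
    """Data Link-layer HARQ budget: spend more retries on poor channels."""
    _, bits = select_modulation(snr_db)
    return 1 if bits >= 4 else 4

scheme, bits = select_modulation(17.5)
print(scheme, bits, harq_retransmission_budget(17.5))  # 16-QAM 4 1
```

The cross-layer aspect is that one piece of Physical-layer state (the SNR estimate) parameterizes decisions at two layers at once, which strict layering would forbid.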

Programming Interfaces

Programming interfaces in the OSI model enable software applications to interact with the underlying layers, primarily through standardized APIs that abstract the complexities of layer-specific protocols. These interfaces allow developers to access services at the transport and network layers without needing to implement low-level details, facilitating portable and efficient network programming. The most common mechanisms include socket-based APIs for transport layer operations and more structured interfaces that align closely with OSI layering for broader access. The Berkeley Software Distribution (BSD) sockets API provides a foundational interface for accessing the transport layer (layer 4) of the OSI model, enabling applications to communicate using protocols such as TCP for reliable, connection-oriented streams and UDP for connectionless datagrams. Originating in Unix systems, this API uses file descriptors to represent sockets, allowing operations like binding addresses, connecting to peers, sending data, and receiving notifications of incoming connections. It serves as the de facto standard for network access, bridging the application layer directly to transport services while hiding the details of lower layers. For more explicit alignment with the OSI model's layered architecture, the X/Open Transport Interface (XTI) offers a standardized programming interface that supports access to transport layer services across multiple protocols, including those beyond TCP/IP. Defined by the Open Group, XTI provides functions for connection establishment, data transfer, and disconnection, with options to select specific transport providers that map to OSI layer 4 behaviors. This interface emphasizes protocol independence, allowing applications to query and configure transport characteristics such as quality of service, making it suitable for environments requiring strict adherence to OSI principles. At the core of OSI layer interactions are service primitives, which define the standardized messages exchanged between adjacent layers to request, indicate, respond to, or confirm services.
These primitives include four main types: request primitives issued by a higher layer (N+1) to invoke a service from the lower layer (N); indication primitives sent upward from layer N to layer N+1 to notify of events or incoming service activations; response primitives from layer N+1 to complete a previously indicated service; and confirm primitives from layer N to layer N+1 to acknowledge a requested service outcome. For example, in the transport layer, a T-CONNECT request from the session layer initiates a connection, triggering a T-CONNECT indication at the remote peer's session layer, followed by responses and confirmations to establish the end-to-end link. This primitive-based model ensures reliable interlayer communication and supports both connection-oriented and connectionless services across the OSI stack. Implementations of these interfaces vary by operating system, integrating OSI concepts into kernel-level networking stacks. In Linux, the netfilter framework provides hooks for packet processing at the network layer (layer 3), allowing user-space applications to define rules for filtering, modification, and network address translation via tools like iptables or nftables, which inspect IP headers and enforce policies aligned with OSI network-layer functions. Similarly, the Windows Sockets API (Winsock), an extension of the BSD sockets model, enables transport-layer access through functions like socket(), connect(), and send(), with support for both IPv4 and IPv6 protocols, abstracting the OSI transport services for Windows applications. These OS-specific realizations make OSI-compliant programming practical in real-world environments. As of 2025, modern extensions to OSI programming interfaces incorporate software-defined networking (SDN) controllers, which enable programmable access to multiple layers through centralized APIs like those in OpenFlow or P4, allowing dynamic reconfiguration of network behaviors at layers 2 through 7 via southbound interfaces to switches and northbound APIs for applications.
This integration enhances flexibility in cloud and edge environments by decoupling control logic from data planes, supporting OSI-like layering while adding programmability for emerging use cases such as quantum networks.
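The socket-based transport access described in this section can be shown with Python's standard `socket` module, a direct descendant of the BSD sockets API. This minimal sketch runs a TCP echo exchange over the loopback interface; the port is ephemeral and the one-shot server is an illustrative simplification.

```python
import socket
import threading

def echo_server(server_sock):
    conn, _ = server_sock.accept()        # block until a peer connects
    with conn:
        conn.sendall(conn.recv(1024))     # echo one chunk back unchanged

# bind() to an ephemeral port: the (address, port) pair plays the role
# of a transport service access point (TSAP).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Client side: connect()/sendall()/recv() map loosely onto the
# connection-establishment, data-transfer, and release service primitives.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"ping")
    echoed = client.recv(1024)
server.close()
assert echoed == b"ping"
```

Note that the application never touches IP headers or frames: the socket API exposes only the layer-4 service, exactly the abstraction boundary the section describes.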

Comparisons with Other Models

TCP/IP Model

The TCP/IP model, developed as part of the ARPANET project and standardized by the Internet Engineering Task Force (IETF), organizes networking functions into four primary layers: the Network Access (or Link) layer, which handles physical transmission and data framing (mapping to OSI layers 1 and 2); the Internet layer, responsible for logical addressing and routing (OSI layer 3); the Transport layer, which provides end-to-end data delivery (OSI layer 4); and the Application layer, encompassing user interfaces, data formatting, and session management (OSI layers 5 through 7). This structure emerged from practical implementations in the 1970s and 1980s, prioritizing interoperability across diverse hardware. Key protocols in the TCP/IP suite align with these layers through direct mappings to OSI functions: the Internet Protocol (IP) operates at the Internet layer for packet routing and addressing, akin to the OSI network layer; the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) function at the Transport layer for reliable or unreliable data transfer, respectively; and application protocols such as Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP) reside in the Application layer, integrating session establishment, data presentation, and application-specific logic without separate OSI-style Session or Presentation layers. These mappings highlight how TCP/IP condenses upper-layer responsibilities, allowing protocols like HTTP to handle encryption and formatting inline. In contrast to the OSI model's abstract, seven-layer reference framework designed for theoretical standardization and vendor neutrality, the TCP/IP model adopts a protocol-centric, implementation-driven approach with fewer, more flexible layers, omitting strict boundaries for session and representation. This pragmatic design enabled rapid development and deployment, as evidenced by the U.S. Department of Defense's 1983 mandate for TCP/IP adoption across the ARPANET, leading to its dominance in global internetworking by the 1990s.
The OSI model's detailed layering, while comprehensive, proved overly rigid and resource-intensive, slowing commercial uptake despite international backing from the International Organization for Standardization (ISO). The TCP/IP model's simplicity facilitated its role as the foundation of the modern Internet, supporting scalable growth without the OSI's emphasis on modular protocol development. Today, hybrid approaches prevail in network analysis, where TCP/IP implementations are mapped onto the OSI model to aid troubleshooting, protocol debugging, and educational clarity, leveraging the OSI's structured lens for dissecting TCP/IP behaviors across functions like routing and application delivery.
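The layer correspondence laid out above can be stated compactly as a lookup table; this is the common textbook mapping rather than a normative equivalence defined by either standards body.

```python
# Conventional mapping of TCP/IP layers onto OSI layer numbers.
TCPIP_TO_OSI = {
    "Network Access (Link)": (1, 2),    # Physical + Data Link
    "Internet":              (3,),      # Network
    "Transport":             (4,),      # Transport
    "Application":           (5, 6, 7), # Session + Presentation + Application
}

# Sanity check: every OSI layer is covered exactly once.
covered = sorted(n for layers in TCPIP_TO_OSI.values() for n in layers)
assert covered == list(range(1, 8))
```

Tables like this underpin the hybrid analysis practice mentioned above: a packet capture annotated with TCP/IP protocols can be discussed in OSI terms by translating through the mapping.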

Other Networking Frameworks

Systems Network Architecture (SNA), developed by IBM in the 1970s, represents a hierarchical networking framework that contrasts sharply with the OSI model's layered approach. SNA organizes communication into five nested subnetworks—user, transaction, interprocess, subarea, and peripheral—emphasizing centralized control through elements like the System Services Control Point (SSCP) for network control and session establishment. This hierarchical structure, which evolved from an initial three-layer model in 1974, prioritizes IBM's unified product ecosystem over open interoperability, differing from OSI's seven independent layers designed for diverse systems. In SNA, functions such as addressing and session management are tightly coupled across hierarchies, whereas OSI enforces strict layer boundaries to enable modular protocol development. DECnet, Digital Equipment Corporation's proprietary networking suite introduced in the late 1970s, adopted a layered architecture that closely mirrored OSI's structure while remaining vendor-specific, influencing early OSI standardization efforts. Phase IV of DECnet featured eight layers—adding a user layer atop the seven OSI equivalents (Application, Presentation, Session, Transport, Network, Data Link, Physical)—supporting protocols like DDCMP for data link control and adaptive routing at the network layer. Digital's active participation in ISO committees since 1979, including contributions to OSI network layer protocols like ISO 8473, helped shape OSI's design by demonstrating practical layered implementations in proprietary contexts. Unlike OSI's open standards focus, DECnet's proprietary addressing (e.g., limited to 1023 nodes per area) and extensions like the DNA Naming Service optimized for DEC hardware, limiting cross-vendor compatibility until Phase V integrated full OSI compliance.
In modern telecommunications, the 5G New Radio (NR) architecture defined by 3GPP maps its protocol stack to OSI layers, adapting OSI principles for high-speed, low-latency wireless environments while introducing service-based paradigms. The physical layer (OSI Layer 1) handles mmWave and massive MIMO transmission; Layer 2 encompasses MAC, RLC, and PDCP for error correction and multiplexing; Layer 3 includes RRC for connection management and IP-based routing; higher layers (4-7) integrate NAS for mobility and application services via SDN-enabled control planes. This mapping, outlined in 3GPP TS 38.300, extends OSI's modularity to support network slicing but diverges by embedding virtualization natively, unlike OSI's hardware-centric assumptions. Cloud networking models like AWS Virtual Private Cloud (VPC) further illustrate this evolution, implementing OSI layers through isolated virtual networks: Layers 1/2 via Elastic Network Interfaces and VPC endpoints for physical/data link abstraction; Layer 3 with route tables and Internet Gateways for IP routing; Layers 4-7 via Elastic Load Balancing and API Gateway for transport and application handling. AWS VPC's software-defined overlays enable scalable isolation, addressing OSI's limitations in multi-tenant environments. Comparisons highlight OSI's universality as a framework against the domain-specific optimizations of alternatives: SNA and DECnet prioritized vendor ecosystems, while 5G NR and AWS VPC tailor layers for wireless and cloud scalability, respectively. OSI's influence persists in SDN and NFV, where control planes decouple from data planes across Layers 2-7, enabling programmable networks that OSI conceptually foreshadowed but did not explicitly support.
A key gap in OSI is its absence of native virtualization mechanisms, assuming dedicated hardware per layer; newer frameworks like 5G's network slicing and AWS VPC's overlays integrate hypervisor-based functions to virtualize entire stacks, enhancing flexibility for scalability and multi-tenancy.

References
