Packet switching

from Wikipedia

In telecommunications, packet switching is a method of grouping data into short messages in fixed format, i.e., packets, that are transmitted over a telecommunications network. Packets consist of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide.

During the early 1960s, American engineer Paul Baran developed a concept he called distributed adaptive message block switching as part of a research program at the RAND Corporation, funded by the United States Department of Defense. His proposal was to provide a fault-tolerant, efficient method for communication of voice messages using low-cost hardware to route the message blocks across a distributed network. His ideas contradicted then-established principles of pre-allocation of network bandwidth, exemplified by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of Welsh computer scientist Donald Davies at the National Physical Laboratory beginning in 1965. Davies developed the concept for data communication using software switches in a high-speed computer network and coined the term packet switching. His work inspired numerous packet switching networks in the decade following, including the incorporation of the concept into the design of the ARPANET in the United States and the CYCLADES network in France. The ARPANET and CYCLADES were the primary precursor networks of the modern Internet.

Concept

A simple definition of packet switching is:

The routing and transferring of data by means of addressed packets so that a channel is occupied during the transmission of the packet only, and upon completion of the transmission the channel is made available for the transfer of other traffic.[4][5]

Packet switching allows delivery of variable bit rate data streams, realized as sequences of short messages in fixed format, i.e. packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. As they traverse networking hardware, such as switches and routers, packets are received, buffered, queued, and retransmitted (stored and forwarded), resulting in variable latency and throughput depending on the link capacity and the traffic load on the network. Packets are normally forwarded by intermediate network nodes asynchronously using first-in, first-out buffering, but may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket. Packet-based communication may be implemented with or without intermediate forwarding nodes (switches and routers). In case of a shared physical medium (such as radio or 10BASE5), the packets may be delivered according to a multiple access scheme.
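
A minimal sketch can make the store-and-forward behavior concrete. The following Python fragment is illustrative only (the class and attribute names are invented for this example, not taken from any networking library): each node buffers whole packets first-in, first-out, then retransmits each one toward a next hop chosen from a routing map.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Packet:
    dst: str        # destination address carried in the header
    payload: bytes  # data extracted at the destination

class StoreAndForwardNode:
    """A node that receives whole packets, queues them first-in
    first-out, and retransmits each one toward its next hop."""
    def __init__(self, name):
        self.name = name
        self.buffer = deque()  # FIFO buffering, as described above
        self.routes = {}       # destination address -> next-hop node

    def receive(self, packet):
        self.buffer.append(packet)  # store: buffer the whole packet

    def forward_one(self):
        if self.buffer:
            packet = self.buffer.popleft()
            self.routes[packet.dst].receive(packet)  # forward downstream
```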

Packet switching contrasts with another principal networking paradigm, circuit switching, a method which pre-allocates dedicated network bandwidth specifically for each communication session, each having a constant bit rate and latency between nodes. In cases of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages.

A packet switch has four components: input ports, output ports, routing processor, and switching fabric.[6]
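
As a rough structural sketch of these four components (all names invented for illustration; a real switch implements the fabric in hardware, not Python lists):

```python
class PacketSwitch:
    """Skeleton of the four components; packets are (dst, payload) tuples."""
    def __init__(self, n_ports):
        self.input_ports = [[] for _ in range(n_ports)]   # where packets arrive
        self.output_ports = [[] for _ in range(n_ports)]  # where packets depart
        self.forwarding_table = {}                        # dst -> output port

    def routing_processor(self, dst, out_port):
        # Runs routing logic and maintains the forwarding table.
        self.forwarding_table[dst] = out_port

    def switching_fabric(self):
        # Moves each packet from its input port to the output port
        # selected by the forwarding table.
        for port in self.input_ports:
            while port:
                dst, payload = port.pop(0)
                self.output_ports[self.forwarding_table[dst]].append((dst, payload))
```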

History

Invention and development

The "message block", designed by Paul Baran in 1962 and refined in 1964, is the first proposal of a data packet.[7][8]
Packet-switching cost performance trends, 1960-1980.[9]

The concept of switching small blocks of data was first invented independently by Paul Baran at the RAND Corporation during the early 1960s in the US and Donald Davies at the National Physical Laboratory (NPL) in the UK in 1965.[1][2][3][10]

In the late 1950s, the US Air Force established a wide area network for the Semi-Automatic Ground Environment (SAGE) radar defense system. Recognizing vulnerabilities in this network, the Air Force sought a system that might survive a nuclear attack to enable a response, thus diminishing the attractiveness of the first strike advantage by enemies (see Mutual assured destruction). In the early 1960s, Baran invented the concept of distributed adaptive message block switching in support of the Air Force initiative.[11][12] The concept was first presented to the Air Force in the summer of 1961 as briefing B-265,[13] later published as RAND report P-2626 in 1962,[7] and finally in report RM 3420 in 1964.[8] The reports describe a general architecture for a large-scale, distributed, survivable communications network. The proposal was composed of three key ideas: use of a decentralized network with multiple paths between any two points; dividing user messages into message blocks; and delivery of these messages by store and forward switching.[11][14] Baran's network design was focused on digital communication of voice messages using hardware switches that were low-cost electronics.[15][16][17] The ideas were not entirely original to Baran and the task to design a store and forward type of network was formulated by Baran's boss at RAND.[18]

Christopher Strachey, who became Oxford University's first Professor of Computation, filed a patent application in the United Kingdom for time-sharing in February 1959.[19][20] In June that year, he gave a paper "Time Sharing in Large Fast Computers" at the UNESCO Information Processing Conference in Paris where he passed the concept on to J. C. R. Licklider.[21][22] Licklider (along with John McCarthy) was instrumental in the development of time-sharing. After conversations with Licklider about time-sharing with remote computers in 1965,[23][24] Davies independently invented a similar data communication concept.[25] His insight was to use short messages in fixed format with high data transmission rates to achieve rapid communications.[26] He went on to develop a more advanced design for a hierarchical, high-speed computer network including interface computers and communication protocols.[27][28][29] He coined the term packet switching, and proposed building a commercial nationwide data network in the UK.[30][31] He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence (MoD) told him about Baran's work.[32]

Roger Scantlebury, a member of Davies' team, presented their work (and referenced that of Baran) at the October 1967 Symposium on Operating Systems Principles (SOSP).[29][33][34][35][36] At the conference, Scantlebury proposed packet switching for use in the ARPANET and persuaded Larry Roberts that its economics were favorable compared with message switching.[37][38][39][40][41][42] Davies had chosen some of the same parameters for his original network design as did Baran, such as a packet size of 1024 bits. To deal with packet permutations (due to dynamically updated route preferences) and datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control",[29] thus inventing what came to be known as the end-to-end principle. Davies proposed that a local-area network should be built at the laboratory to serve the needs of NPL and prove the feasibility of packet switching. After a pilot experiment in early 1969,[43][44][45][46] the NPL Data Communications Network began service in 1970.[47] Davies was invited to Japan to give a series of lectures on packet switching.[48] The NPL team carried out simulation work on datagrams and congestion in networks on a scale to provide data communication across the United Kingdom.[46][49][50][51][52]

Larry Roberts made the key decisions in the request for proposal to build the ARPANET.[53] Roberts met Baran in February 1967, but did not discuss networks.[54][55] He asked Frank Westervelt to explore the questions of message size and contents for the network, and to write a position paper on the intercomputer communication protocol including "conventions for character and block transmission, error checking and retransmission, and computer and user identification."[56] Roberts revised his initial design, which was to connect the host computers directly, to incorporate Wesley Clark's idea to use Interface Message Processors (IMPs) to create a message switching network, which he presented at SOSP.[57][58][59][60] Roberts was known for making decisions quickly.[61] Immediately after SOSP, he incorporated Davies' and Baran's concepts and designs for packet switching to enable data communications on the network.[39][62][63][64]

A contemporary of Roberts' from MIT, Leonard Kleinrock had researched the application of queueing theory in the field of message switching for his doctoral dissertation in 1961–62 and published it as a book in 1964.[65] Davies, in his 1966 paper on packet switching,[27] applied Kleinrock's techniques to show that "there is an ample margin between the estimated performance of the [packet-switched] system and the stated requirement" in terms of a satisfactory response time for a human user.[66] This addressed a key question about the viability of computer networking.[67] Larry Roberts brought Kleinrock into the ARPANET project informally in early 1967.[68] Roberts and Taylor recognized the issue of response time was important, but did not apply Kleinrock's methods to assess this and based their design on a store-and-forward system that was not intended for real-time computing.[69] After SOSP, and after Roberts' direction to use packet switching,[62] Kleinrock sought input from Baran and proposed to retain Baran and RAND as advisors.[70][71][72] The ARPANET working group assigned Kleinrock responsibility to prepare a report on software for the IMP.[73] In 1968, Roberts awarded Kleinrock a contract to establish a Network Measurement Center (NMC) at UCLA to measure and model the performance of packet switching in the ARPANET.[70]

Bolt Beranek & Newman (BBN) won the contract to build the network. Designed principally by Bob Kahn,[74][75] it was the first wide-area packet-switched network with distributed control.[53] The BBN "IMP Guys" independently developed significant aspects of the network's internal operation, including the routing algorithm, flow control, software design, and network control.[76][77] The UCLA NMC and the BBN team also investigated network congestion.[74][78] The Network Working Group, led by Steve Crocker, a graduate student of Kleinrock's at UCLA, developed the host-to-host protocol, the Network Control Program, which was approved by Barry Wessler for ARPA,[79] after he ordered certain more exotic elements to be dropped.[80] In 1970, Kleinrock extended his earlier analytic work on message switching to packet switching in the ARPANET.[81] His work influenced the development of the ARPANET and packet-switched networks generally.[82][83][84]

The ARPANET was demonstrated at the International Conference on Computer Communication (ICCC) in Washington in October 1972.[85][86] However, fundamental questions about the design of packet-switched networks remained.[87][88][89]

Roberts presented the idea of packet switching to communication industry professionals in the early 1970s. Before the ARPANET was operating, they argued that the router buffers would quickly run out. After the ARPANET was operating, they argued that packet switching would never be economic without government subsidy. Baran had faced the same rejection in the 1960s and had thus failed to convince the military to construct a packet switching network.[9]

The CYCLADES network was designed by Louis Pouzin in the early 1970s to study internetworking.[90][91][92] It was the first to implement the end-to-end principle of Davies, making the host computers, rather than the network itself, responsible for the reliable delivery of data on a packet-switched network.[93] His team was thus the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service while using a best-effort network service, an early contribution to what would become the Transmission Control Protocol (TCP).[94]

Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking.[95]

In May 1974, Vint Cerf and Bob Kahn described the Transmission Control Program, an internetworking protocol for sharing resources using packet-switching among the nodes.[96] The specifications of the TCP were then published in RFC 675 (Specification of Internet Transmission Control Program), written by Vint Cerf, Yogen Dalal and Carl Sunshine in December 1974.[97]

The X.25 protocol, developed by Rémi Després and others, was built on the concept of virtual circuits. In the mid-to-late 1970s and early 1980s, national and international public data networks emerged using X.25, which was developed with participation from France, the UK, Japan, the US and Canada. It was complemented with X.75 to enable internetworking.[98]

Packet switching was shown to be optimal in the Huffman coding sense in 1978.[99][100]

In the late 1970s, the monolithic Transmission Control Program was layered as the Transmission Control Protocol (TCP), atop the Internet Protocol (IP). Many Internet pioneers developed this into the Internet protocol suite and the associated Internet architecture and governance that emerged in the 1980s.[101][102][103][104][105][106]

For a period in the 1980s and early 1990s, the network engineering community was polarized over the implementation of competing protocol suites, commonly known as the Protocol Wars. It was unclear which of the Internet protocol suite and the OSI model would result in the best and most robust computer networks.[107][108][109]

Leonard Kleinrock carried out theoretical work at UCLA during the 1970s analyzing throughput and delay in the ARPANET.[110][111][112] His theoretical work on hierarchical routing with student Farouk Kamoun became critical to the operation of the Internet.[113][114] Kleinrock published hundreds of research papers,[115][116] which ultimately launched a new field of research on the theory and application of queuing theory to computer networks.[81][117]

Complementary metal–oxide–semiconductor (CMOS) VLSI (very-large-scale integration) technology led to the development of high-speed broadband packet switching during the 1980s–1990s.[118][119][120]

The "paternity dispute"

Roberts claimed in later years that, by the time of the October 1967 SOSP, he already had the concept of packet switching in mind (although not yet named and not written down in his paper published at the conference, which a number of sources describe as "vague"), and that this originated with his old colleague, Kleinrock, who had written about such concepts in his Ph.D. research in 1961–62.[59][37][60][121][122] In 1997, along with seven other Internet pioneers, Roberts and Kleinrock co-wrote "Brief History of the Internet" published by the Internet Society.[123] In it, Kleinrock is described as having "published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964". Many sources about the history of the Internet began to reflect these claims as uncontroversial facts. This became the subject of what Katie Hafner called a "paternity dispute" in The New York Times in 2001.[124]

The disagreement about Kleinrock's contribution to packet switching dates back to a version of the above claim made on Kleinrock's profile on the UCLA Computer Science department website sometime in the 1990s. Here, he was referred to as the "Inventor of the Internet Technology".[125] The webpage's depictions of Kleinrock's achievements provoked anger among some early Internet pioneers.[126] The dispute over priority became a public issue after Donald Davies posthumously published a paper in 2001 in which he denied that Kleinrock's work was related to packet switching. Davies also described ARPANET project manager Larry Roberts as supporting Kleinrock, referring to Roberts' writings online and Kleinrock's UCLA webpage profile as "very misleading".[127][128] Walter Isaacson wrote that Kleinrock's claims "led to an outcry among many of the other Internet pioneers, who publicly attacked Kleinrock and said that his brief mention of breaking messages into smaller pieces did not come close to being a proposal for packet switching".[126]

Davies' paper reignited a previous dispute over who deserves credit for getting the ARPANET online between engineers at Bolt, Beranek, and Newman (BBN) who had been involved in building and designing the ARPANET IMP on the one side, and ARPA-related researchers on the other.[76][77] This earlier dispute is exemplified by BBN's Will Crowther, who in a 1990 oral history described Paul Baran's packet switching design (which he called hot-potato routing), as "crazy" and nonsensical, despite the ARPA team having advocated for it.[129] The reignited debate caused other former BBN employees to make their concerns known, including Alex McKenzie, who followed Davies in disputing that Kleinrock's work was related to packet switching, stating "... there is nothing in the entire 1964 book that suggests, analyzes, or alludes to the idea of packetization".[130]

Former IPTO director Bob Taylor also joined the debate, stating that "authors who have interviewed dozens of Arpanet pioneers know very well that the Kleinrock-Roberts claims are not believed".[131] Walter Isaacson notes that "until the mid-1990s Kleinrock had credited [Baran and Davies] with coming up with the idea of packet switching".[126]

A subsequent version of Kleinrock's biography webpage was copyrighted in 2009 by Kleinrock.[132] He was called on to defend his position over subsequent decades.[133] In 2023, he acknowledged that his published work in the early 1960s was about message switching and claimed he was thinking about packet switching.[134] Primary sources and historians recognize Baran and Davies for independently inventing the concept of digital packet switching used in modern computer networking including the ARPANET and the Internet.[1][2][39][135][136]

Kleinrock has received many awards for his ground-breaking applied mathematical research on packet switching, carried out in the 1970s, which was an extension of his pioneering work in the early 1960s on the optimization of message delays in communication networks.[81][137] However, Kleinrock's claims that his work in the early 1960s originated the concept of packet switching and that his work was a source of the packet switching concepts used in the ARPANET have affected sources on the topic, which has created methodological challenges in the historiography of the Internet.[124][126][128][133] Historian Andrew L. Russell said "'Internet history' also suffers from a ... methodological, problem: it tends to be too close to its sources. Many Internet pioneers are alive, active, and eager to shape the histories that describe their accomplishments. Many museums and historians are equally eager to interview the pioneers and to publicize their stories".[138]

Connectionless and connection-oriented modes

Packet switching may be classified into connectionless packet switching, also known as datagram switching, and connection-oriented packet switching, also known as virtual circuit switching. Examples of connectionless systems are Ethernet, IP, and the User Datagram Protocol (UDP). Connection-oriented systems include X.25, Frame Relay, Multiprotocol Label Switching (MPLS), and TCP.

In connectionless mode each packet is labeled with a destination address, source address, and port numbers. It may also be labeled with the sequence number of the packet. This information eliminates the need for a pre-established path to help the packet find its way to its destination, but means that more information is needed in the packet header, which is therefore larger. The packets are routed individually, sometimes taking different paths resulting in out-of-order delivery. At the destination, the original message may be reassembled in the correct order, based on the packet sequence numbers. Thus a virtual circuit carrying a byte stream is provided to the application by a transport layer protocol, although the network only provides a connectionless network layer service.
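
A minimal sketch of the reassembly step, assuming each datagram carries a sequence number in its header (the names are invented for illustration):

```python
def reassemble(received_packets):
    """Rebuild the original byte stream from datagrams that may have
    arrived out of order; each packet is a (sequence_number, payload) pair."""
    ordered = sorted(received_packets, key=lambda p: p[0])
    return b"".join(payload for _, payload in ordered)

# Packets 2 and 1 arrived out of order over different paths:
assert reassemble([(2, b"world"), (1, b"hello, ")]) == b"hello, world"
```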

Connection-oriented transmission requires a setup phase to establish the parameters of communication before any packet is transferred. The signaling protocols used for setup allow the application to specify its requirements and discover link parameters. Acceptable values for service parameters may be negotiated. The packets transferred may include a connection identifier rather than address information and the packet header can be smaller, as it only needs to contain this code and any information, such as length, timestamp, or sequence number, which is different for different packets. In this case, address information is only transferred to each node during the connection setup phase, when the route to the destination is discovered and an entry is added to the switching table in each network node through which the connection passes. When a connection identifier is used, routing a packet requires the node to look up the connection identifier in a table.[citation needed]
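
The table lookup described above can be sketched as follows; the link names and connection identifiers are hypothetical, and the per-hop rewriting of the identifier is one common convention, assumed here for illustration:

```python
# Per-node switching table installed during connection setup:
# (incoming link, connection id) -> (outgoing link, connection id).
switching_table = {
    ("link_A", 7): ("link_B", 12),  # entry added when the call was set up
}

def forward(in_link, packet):
    """Forward using only the short connection identifier in the header;
    no full destination address is needed after setup."""
    vc_id, payload = packet
    out_link, out_vc = switching_table[(in_link, vc_id)]
    return out_link, (out_vc, payload)  # identifier rewritten hop by hop

print(forward("link_A", (7, b"data")))  # ('link_B', (12, b'data'))
```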

Connection-oriented transport layer protocols such as TCP provide a connection-oriented service by using an underlying connectionless network. In this case, the end-to-end principle dictates that the end nodes, not the network itself, are responsible for the connection-oriented behavior.

Packet switching in networks

In telecommunication networks, packet switching is used to optimize the usage of channel capacity and increase robustness.[60] Compared to circuit switching, packet switching is highly dynamic, allocating channel capacity based on usage instead of explicit reservations. This can reduce wasted capacity caused by underutilized reservations at the cost of removing bandwidth guarantees. In practice, congestion control is generally used in IP networks to dynamically negotiate capacity between connections. Packet switching may also increase the robustness of networks in the face of failures. If a node fails, connections do not need to be interrupted, as packets may be routed around the failure.

Packet switching is used in the Internet and most local area networks. The Internet is implemented by the Internet Protocol Suite using a variety of link layer technologies. For example, Ethernet and Frame Relay are common. Newer mobile phone technologies (e.g., GSM, LTE) also use packet switching. Packet switching is associated with connectionless networking because, in these systems, no connection agreement needs to be established between communicating parties prior to exchanging data.

X.25, the international CCITT standard of 1976, is a notable use of packet switching in that it provides to users a service of flow-controlled virtual circuits. These virtual circuits reliably carry variable-length packets with data order preservation. DATAPAC in Canada was the first public network to support X.25, followed by TRANSPAC in France.[139]

Asynchronous Transfer Mode (ATM) is another virtual circuit technology. It differs from X.25 in that it uses small fixed-length packets (cells), and that the network imposes no flow control to users.

Technologies such as MPLS and the Resource Reservation Protocol (RSVP) create virtual circuits on top of datagram networks. MPLS and its predecessors, as well as ATM, have been called "fast packet" technologies. MPLS, indeed, has been called "ATM without cells".[140] Virtual circuits are especially useful in building robust failover mechanisms and allocating bandwidth for delay-sensitive applications.

Packet-switched networks

Donald Davies' work in the late 1960s on data communications and computer network design became well known in the United States, Europe and Japan.[48][141][142][143] It was the "cornerstone" that inspired numerous packet switching networks in the decade following.[144][145][146][147]

The history of packet-switched networks can be divided into three overlapping eras: early networks before the introduction of X.25; the X.25 era when many postal, telephone, and telegraph (PTT) companies provided public data networks with X.25 interfaces; and the Internet era which initially competed with the OSI model.[148][149][150]

Early networks

Research into packet switching at the National Physical Laboratory (NPL) began with a proposal for a wide-area network in 1965,[23] and a local-area network in 1966.[151] ARPANET funding was secured in 1966 by Bob Taylor, and planning began in 1967 when he hired Larry Roberts. The NPL network, followed by the ARPANET, became operational in 1969; they were the first two networks to use packet switching.[44][45] Larry Roberts said many of the packet switching networks built in the 1970s were similar "in nearly all respects" to Donald Davies' original 1965 design.[147] The ARPANET and Louis Pouzin's CYCLADES were the primary precursor networks of the modern Internet.[93] CYCLADES, unlike the ARPANET, was explicitly designed to research internetworking.[90]

Before the introduction of X.25 in 1976,[152] about twenty different network technologies had been developed. Two fundamental differences involved the division of functions and tasks between the hosts at the edge of the network and the network core. In the datagram system, operating according to the end-to-end principle, the hosts have the responsibility to ensure orderly delivery of packets. In the virtual call system, the network guarantees sequenced delivery of data to the host. This results in a simpler host interface but complicates the network. The X.25 protocol suite uses this network type.

AppleTalk

AppleTalk is a proprietary suite of networking protocols developed by Apple in 1985 for Apple Macintosh computers. It was the primary protocol used by Apple devices through the 1980s and 1990s. AppleTalk included features that allowed local area networks to be established ad hoc without the requirement for a centralized router or server. The AppleTalk system automatically assigned addresses, updated the distributed namespace, and configured any required inter-network routing. It was a plug-and-play system.[153][154]

AppleTalk implementations were also released for the IBM PC and compatibles, and the Apple IIGS. AppleTalk support was available in most networked printers, especially laser printers, some file servers and routers.

The protocol was designed to be simple, autoconfiguring, and not require servers or other specialized services to work. These benefits also created drawbacks, as AppleTalk tended not to use bandwidth efficiently. AppleTalk support was terminated in 2009.[153][155]

ARPANET

The ARPANET was a progenitor network of the Internet and one of the first networks, along with ARPA's SATNET, to run the TCP/IP suite using packet switching technologies.

BNRNET

BNRNET was a network which Bell-Northern Research developed for internal use. It initially had only one host but was designed to support many hosts. BNR later made major contributions to the CCITT X.25 project.[156]

Cambridge Ring

The Cambridge Ring was an experimental ring network developed at the Computer Laboratory, University of Cambridge. It operated from 1974 until the 1980s.

CompuServe

CompuServe developed its own packet switching network, implemented on DEC PDP-11 minicomputers acting as network nodes that were installed throughout the US (and later, in other countries) and interconnected. Over time, the CompuServe network evolved into a complicated multi-tiered network incorporating ATM, Frame Relay, IP and X.25 technologies.

CYCLADES

The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the early ARPANET design and to support network research generally. It was the first network to use the end-to-end principle and make the hosts responsible for reliable delivery of data, rather than the network itself. Concepts of this network influenced later ARPANET architecture.[157][158]

DECnet

DECnet is a suite of network protocols created by Digital Equipment Corporation, originally released in 1975 in order to connect two PDP-11 minicomputers.[159] It evolved into one of the first peer-to-peer network architectures, thus transforming DEC into a networking powerhouse in the 1980s. Initially built with three layers, it later (1982) evolved into a seven-layer OSI-compliant networking protocol. The DECnet protocols were designed entirely by Digital Equipment Corporation. However, DECnet Phase II (and later) were open standards with published specifications, and several implementations were developed outside DEC, including one for Linux.

DDX-1

DDX-1 was an experimental network from Nippon PTT. It mixed circuit switching and packet switching. It was succeeded by DDX-2.[160]

EIN

The European Informatics Network (EIN), originally called COST 11, was a project beginning in 1971 to link networks in Britain, France, Italy, Switzerland and Euratom. Six other European countries also participated in the research on network protocols. Derek Barber directed the project, and Roger Scantlebury led the UK technical contribution; both were from NPL.[161][162][163][164] The contract for its implementation was awarded to an Anglo-French consortium led by the UK systems house Logica and the French company SESA, and managed by Andrew Karney. Work began in 1973 and the network became operational in 1976, including nodes linking the NPL network and CYCLADES.[165] Barber proposed and implemented a mail protocol for EIN.[166] The transport protocol of the EIN helped to launch the INWG and X.25 protocols.[167][168][169] EIN was replaced by Euronet in 1979.[170]

EPSS

The Experimental Packet Switched Service (EPSS) was an experiment of the UK Post Office Telecommunications. It was the first public data network in the UK when it began operating in 1976.[171] Ferranti supplied the hardware and software. The handling of link control messages (acknowledgements and flow control) was different from that of most other networks.[172][173][174]

GEIS

As General Electric Information Services (GEIS), General Electric was a major international provider of information services. The company originally designed its network as an internal (albeit continent-wide) voice telephone network.

In 1965, at the instigation of Warner Sinback, a data network based on this voice-phone network was designed to connect GE's four computer sales and service centers (Schenectady, New York, Chicago, and Phoenix) to facilitate a computer time-sharing service.

After going international some years later, GEIS created a network data center near Cleveland, Ohio. Very little has been published about the internal details of their network. The design was hierarchical with redundant communication links.[175][176]

IPSANET

IPSANET was a semi-private network constructed by I. P. Sharp Associates to serve their time-sharing customers. It became operational in May 1976.[177]

IPX/SPX

The Internetwork Packet Exchange (IPX) and Sequenced Packet Exchange (SPX) are Novell networking protocols from the 1980s derived from Xerox Network Systems' IDP and SPP protocols, respectively, which date back to the 1970s. IPX/SPX was used primarily on networks running the Novell NetWare operating system.[178]

Merit Network

Merit Network, an independent nonprofit organization governed by Michigan's public universities,[179] was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development.[180] With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host-to-host connection was made between the IBM mainframe systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit.[181] In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP; additionally, public universities in Michigan joined the network.[181][182] All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.

NPL

Donald Davies of the National Physical Laboratory (United Kingdom) designed and proposed a national commercial data network based on packet switching in 1965.[183][184] The proposal was not taken up nationally but the following year, he designed a local network using "interface computers", today known as routers, to serve the needs of NPL and prove the feasibility of packet switching.[185]

By 1968 Davies had begun building the NPL network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions.[186][46][187] In 1969, the NPL network, followed by the ARPANET, became the first two networks to use packet switching.[188][45] By 1976, 12 computers and 75 terminal devices were attached,[189] and more were added until the network was replaced in 1986. NPL was the first to use high-speed links.[190][191][192]

Octopus

Octopus was a local network at Lawrence Livermore National Laboratory. It connected sundry hosts at the lab to interactive terminals and various computer peripherals including a bulk storage system.[193][194][195]

Philips Research

Philips Research Laboratories in Redhill, Surrey developed a packet switching network for internal use. It was a datagram network with a single switching node.[196]

PUP

PARC Universal Packet (PUP or Pup) was one of the two earliest internetworking protocol suites; it was created by researchers at Xerox PARC in the mid-1970s. The entire suite provided routing and packet delivery, as well as higher level functions such as a reliable byte stream, along with numerous applications. Further developments led to Xerox Network Systems (XNS).[197]

RCP

RCP was an experimental network created by the French PTT. It was used to gain experience with packet switching technology before the specification of the TRANSPAC public network was frozen. RCP was a virtual-circuit network in contrast to CYCLADES which was based on datagrams. RCP emphasised terminal-to-host and terminal-to-terminal connection; CYCLADES was concerned with host-to-host communication. RCP influenced the X.25 specification, which was deployed on TRANSPAC and other public data networks.[198][199][200]

RETD

Red Especial de Transmisión de Datos (RETD) was a network developed by Compañía Telefónica Nacional de España. It became operational in 1972 and was thus the first public packet-switched network.[201][202][203][204]

SCANNET

"The experimental packet-switched Nordic telecommunication network SCANNET was implemented in Nordic technical libraries in the 1970s, and it included first Nordic electronic journal Extemplo. Libraries were also among first ones in universities to accommodate microcomputers for public use in the early 1980s."[205]

SITA HLN

SITA is a consortium of airlines. Its High Level Network (HLN) became operational in 1969. Although organised to act like a packet-switching network,[23] it still used message switching.[206][18] As with many non-academic networks, very little has been published about it.

SRCnet/SERCnet

A number of computer facilities serving the Science Research Council (SRC) community in the United Kingdom developed beginning in the early 1970s. Each had its own star network (ULCC London, UMRCC Manchester, Rutherford Appleton Laboratory). There were also regional networks centred on Bristol (on which work was initiated in the late 1960s), followed in the mid-to-late 1970s by Edinburgh, the Midlands and Newcastle. These groups of institutions shared resources to provide better computing facilities than could be afforded individually. The networks were each based on one manufacturer's standards and were mutually incompatible and overlapping.[207][208][209] In 1981, the SRC was renamed the Science and Engineering Research Council (SERC). In the early 1980s a standardisation and interconnection effort started, hosted on an expansion of the SERCnet research network and based on the Coloured Book protocols, later evolving into JANET.[210][211][212]

Systems Network Architecture

Systems Network Architecture (SNA) is IBM's proprietary networking architecture created in 1974. An IBM customer could acquire hardware and software from IBM and lease private lines from a common carrier to construct a private network.[213]

Telenet

Telenet was the first FCC-licensed public data network in the United States. Telenet was incorporated in 1973 and started operations in 1975. It was founded by Bolt Beranek & Newman with Larry Roberts as CEO as a means of making packet switching technology public. Telenet initially used a proprietary virtual circuit host interface, but changed it to X.25 and the terminal interface to X.29 after their standardization in CCITT.[89] It went public in 1979 and was then sold to GTE.[214][215]

Tymnet

Tymnet was an international data communications network headquartered in San Jose, CA. In 1969, it began installing a network based on minicomputers to connect timesharing terminals to its central computers. The network used store-and-forward and voice-grade lines. Routing was not distributed; rather, it was established by a central supervisor on a call-by-call basis.[23]

X.25 era

There were two kinds of X.25 networks. Some, such as DATAPAC and TRANSPAC, were initially implemented with an X.25 external interface. Some older networks, such as TELENET and TYMNET, were modified to provide an X.25 host interface in addition to older host connection schemes. DATAPAC was developed by Bell-Northern Research, which was a joint venture of Bell Canada (a common carrier) and Northern Telecom (a telecommunications equipment supplier). Northern Telecom sold several DATAPAC clones to foreign PTTs, including the Deutsche Bundespost. X.75 and X.121 allowed the interconnection of national X.25 networks.

AUSTPAC

AUSTPAC was an Australian public X.25 network operated by Telstra. Established by Telstra's predecessor Telecom Australia in the early 1980s, AUSTPAC was Australia's first public packet-switched data network and supported applications such as on-line betting, financial applications (the Australian Taxation Office made use of AUSTPAC) and remote terminal access to academic institutions, which in some cases maintained their connections to AUSTPAC until the mid-to-late 1990s. Access was via a dial-up terminal to a PAD or by linking a permanent X.25 node to the network.[216]

ConnNet

ConnNet was a network operated by the Southern New England Telephone Company serving the state of Connecticut.[217][218] Launched on March 11, 1985, it was the first local public packet-switched network in the United States.[219]

Datanet 1

Datanet 1 was the public switched data network operated by the Dutch PTT Telecom (now known as KPN). Strictly speaking, Datanet 1 referred only to the network and the users connected via leased lines (using the X.121 DNIC 2041), but the name also referred to the public PAD service Telepad (using the DNIC 2049). And because the main Videotex service used the network and modified PAD devices as infrastructure, the name Datanet 1 was used for these services as well.[220]

DATAPAC

DATAPAC was the first operational X.25 network (1976).[221] It covered major Canadian cities and was eventually extended to smaller centers.[citation needed]

Datex-P

Deutsche Bundespost operated the Datex-P national network in Germany. The technology was acquired from Northern Telecom.[222]

Eirpac

Eirpac is the Irish public switched data network supporting X.25 and X.28. It was launched in 1984, replacing Euronet. Eirpac is run by Eircom.[223][224][225]

Euronet

Nine member states of the European Economic Community contracted with Logica and the French company SESA to set up a joint venture in 1975 to undertake the Euronet development, using X.25 protocols to form virtual circuits. It was to replace EIN and established a network in 1979 linking a number of European countries until 1984 when the network was handed over to national PTTs.[226][227]

HIPA-NET

Hitachi designed a private network system for sale as a turnkey package to multi-national organizations.[when?] In addition to providing X.25 packet switching, message switching software was also included. Messages were buffered at the nodes adjacent to the sending and receiving terminals. Switched virtual calls were not supported, but through the use of logical ports an originating terminal could have a menu of pre-defined destination terminals.[228]

Iberpac

Iberpac is the Spanish public packet-switched network, providing X.25 services. It was based on RETD, which had been operational since 1972. Iberpac was run by Telefonica.[229]

IPSS

In 1978, X.25 provided the first international and commercial packet-switching network, the International Packet Switched Service (IPSS).

JANET

JANET was the UK academic and research network, linking all universities, higher education establishments, and publicly funded research laboratories following its launch in 1984.[230] The X.25 network, which used the Coloured Book protocols, was based mainly on GEC 4000 series switches, and ran X.25 links at up to 8 Mbit/s in its final phase before being converted to an IP-based network in 1991. The JANET network grew out of the 1970s SRCnet, later called SERCnet.[231]

PSS

Packet Switch Stream (PSS) was the Post Office Telecommunications (later to become British Telecom) national X.25 network with a DNIC of 2342. British Telecom renamed PSS Global Network Service (GNS), but the PSS name has remained better known. PSS also included public dial-up PAD access, and various InterStream gateways to other services such as Telex.

REXPAC

REXPAC was the nationwide experimental packet switching data network in Brazil, developed by the research and development center of Telebrás, the state-owned public telecommunications provider.[232]

SITA Data Transport Network

SITA is a consortium of airlines. Its Data Transport Network adopted X.25 in 1981, becoming the world's most extensive packet-switching network.[233][234][235] As with many non-academic networks, very little has been published about it.

TRANSPAC

TRANSPAC was the national X.25 network in France.[139] It was developed locally at about the same time as DATAPAC in Canada. The development was done by the French PTT and influenced by its preceding experimental network RCP.[236] It began operation in 1978, and served commercial users and, after Minitel began, consumers.[237]

Tymnet

Tymnet utilized virtual call packet switched technology including X.25, SNA/SDLC, BSC and ASCII interfaces to connect host computers (servers) at thousands of large companies, educational institutions, and government agencies. Users typically connected via dial-up connections or dedicated asynchronous serial connections. The business consisted of a large public network that supported dial-up users and a private network business that allowed government agencies and large companies (mostly banks and airlines) to build their own dedicated networks. The private networks were often connected via gateways to the public network to reach locations not on the private network. Tymnet was also connected to dozens of other public networks in the U.S. and internationally via X.25/X.75 gateways.[238][239]

UNINETT

UNINETT was a wide-area Norwegian packet-switched network established through a joint effort between Norwegian universities, research institutions and the Norwegian Telecommunication administration. The original network was based on X.25; Internet protocols were adopted later.[240]

VENUS-P

VENUS-P was an international X.25 network that operated from April 1982 through March 2006. At its subscription peak in 1999, VENUS-P connected 207 networks in 87 countries.[241]

XNS

Xerox Network Systems (XNS) was a protocol suite promulgated by Xerox, which provided routing and packet delivery, as well as higher-level functions such as a reliable stream, and remote procedure calls. It was developed from PARC Universal Packet (PUP).[242][243]

Internet era

When Internet connectivity was made available to anyone who could pay for an Internet service provider subscription, the distinctions between national networks blurred. The user no longer saw network identifiers such as the DNIC. Some older technologies such as circuit switching have resurfaced with new names such as fast packet switching. Researchers have created some experimental networks to complement the existing Internet.[244]

CSNET

The Computer Science Network (CSNET) was a computer network funded by the NSF that began operation in 1981. Its purpose was to extend networking benefits for computer science departments at academic and research institutions that could not be directly connected to ARPANET due to funding or authorization limitations. It played a significant role in spreading awareness of, and access to, national networking and was a major milestone on the path to the development of the global Internet.[245][246]

Internet2

Internet2 is a not-for-profit United States computer networking consortium led by members from the research and education communities, industry, and government.[247] The Internet2 community, in partnership with Qwest, built the first Internet2 Network, called Abilene, in 1998 and was a prime investor in the National LambdaRail (NLR) project.[248] In 2006, Internet2 announced a partnership with Level 3 Communications to launch a brand new nationwide network, boosting its capacity from 10 to 100 Gbit/s.[249] In October 2007, Internet2 officially retired Abilene and now refers to its new, higher-capacity network as the Internet2 Network.

NSFNET

NSFNET Traffic 1991, NSFNET backbone nodes are shown at the top, regional networks below, traffic volume is depicted from purple (zero bytes) to white (100 billion bytes), visualization by NCSA using traffic data provided by the Merit Network.

The National Science Foundation Network (NSFNET) was a program of coordinated, evolving projects sponsored by the NSF beginning in 1985 to promote advanced research and education networking in the United States.[250] NSFNET was also the name given to several nationwide backbone networks, operating at speeds of 56 kbit/s, 1.5 Mbit/s (T1), and 45 Mbit/s (T3), that were constructed to support NSF's networking initiatives from 1985 to 1995. Initially created to link researchers to the nation's NSF-funded supercomputing centers, through further public funding and private industry partnerships it developed into a major part of the Internet backbone.

NSFNET regional networks

In addition to the five NSF supercomputer centers, NSFNET provided connectivity to eleven regional networks, and through these networks to many smaller regional and campus networks in the United States.[251][252]

National LambdaRail

The National LambdaRail (NLR) was launched in September 2003. It was a 12,000-mile high-speed national computer network owned and operated by the US research and education community that ran over fiber-optic lines. It was the first transcontinental 10 Gigabit Ethernet network. It operated with an aggregate capacity of up to 1.6 Tbit/s and a 40 Gbit/s bitrate.[257][258] NLR ceased operations in March 2014.

TransPAC2 and TransPAC3

TransPAC2 is a high-speed international Internet service connecting research and education networks in the Asia-Pacific region to those in the US.[259] TransPAC3 is part of the NSF's International Research Network Connections (IRNC) program.[260]

Very high-speed Backbone Network Service (vBNS)

The Very high-speed Backbone Network Service (vBNS) came on line in April 1995 as part of an NSF-sponsored project to provide high-speed interconnection between NSF-sponsored supercomputing centers and select access points in the United States.[261] The network was engineered and operated by MCI Telecommunications under a cooperative agreement with the NSF. By 1998, the vBNS had grown to connect more than 100 universities and research and engineering institutions via 12 national points of presence with DS-3 (45 Mbit/s), OC-3c (155 Mbit/s), and OC-12 (622 Mbit/s) links on an all OC-12 backbone, a substantial engineering feat for that time. The vBNS installed one of the first ever production OC-48 (2.5 Gbit/s) IP links in February 1999 and went on to upgrade the entire backbone to OC-48.[262]

In June 1999 MCI WorldCom introduced vBNS+ which allowed attachments to the vBNS network by organizations that were not approved by or receiving support from NSF.[263] After the expiration of the NSF agreement, the vBNS largely transitioned to providing service to the government. Most universities and research centers migrated to the Internet2 educational backbone. In January 2006, when MCI and Verizon merged,[264] vBNS+ became a service of Verizon Business.[265]

from Grokipedia
Packet switching is a method of digital communication in which data is divided into small, self-contained units called packets that are transmitted independently over a shared network medium and reassembled at the destination to reconstruct the original message. This approach contrasts with circuit switching by dynamically allocating bandwidth on demand, allowing multiple users to share the same transmission lines efficiently without dedicating a fixed path for the duration of a session. The concept originated in the early 1960s amid efforts to design resilient communication networks capable of surviving nuclear attacks. Paul Baran, working at the RAND Corporation, proposed the foundational ideas in his 1964 report On Distributed Communications, envisioning a distributed network where messages are broken into fixed-size blocks with headers containing routing information, enabling adaptive "hot-potato" routing to bypass damaged nodes. Independently, Donald Davies at the UK's National Physical Laboratory developed similar principles in 1965, coining the term "packet" and advocating store-and-forward techniques for efficient data handling. These ideas converged in the 1969 launch of the ARPANET, the precursor to the Internet, under the direction of Lawrence G. Roberts and influenced by both designs, marking the first operational packet-switched network. At its core, packet switching operates on a store-and-forward principle: each network node receives a complete packet, stores it briefly, checks for errors, and forwards it based on the header's destination address and routing tables. Packets from different sources may take varied paths, interleave on links, and arrive out of order, necessitating sequence numbers for reassembly and protocols like TCP for reliability in modern implementations. This method offers significant advantages over traditional circuit switching, including 3 to 100 times greater bandwidth efficiency, enhanced resilience through adaptive routing, and suitability for bursty traffic patterns common in data communications. Packet switching underpins the global Internet and most contemporary data networks, from local Ethernet to wide-area protocols like IP, enabling the seamless exchange of diverse content such as web pages, emails, and streaming media. Its evolution has included refinements in congestion control, quality-of-service mechanisms, and integration with optical and wireless technologies, ensuring robust performance amid growing data demands.

Fundamentals

Definition and Principles

Packet switching is a method of data transmission in which a message is divided into smaller units known as packets, each containing a header with source and destination addresses, control information such as sequence numbers, and a payload of user data. These packets are transmitted independently across a network from the source to the destination, potentially via different routes, and then reassembled at the receiving end to reconstruct the original message. This approach enables efficient transmission over shared digital networks by treating data as discrete, self-contained units that can be routed hop-by-hop through intermediate nodes. The core principles of packet switching revolve around statistical multiplexing, which allows multiple data streams to share network links dynamically based on current demand, maximizing bandwidth utilization without dedicating resources exclusively to any single connection. Packets from different sources are interleaved on links, with each packet routed individually based on its destination address, enabling the use of multiple possible paths through the network to reach the endpoint. This independence of packets enhances robustness, as the failure of a single link or node does not necessarily prevent delivery, since alternative routes can be utilized for unaffected packets. In principle, packet switching offers key benefits including superior resource utilization over methods that reserve dedicated paths, as bandwidth is allocated only when packets are present, reducing idle time on links. It is particularly well-suited to bursty traffic patterns common in data communications, where transmissions occur in irregular bursts interspersed with periods of inactivity, allowing the network to accommodate varying loads efficiently without wasting capacity during low-activity phases. To illustrate the packet flow, consider a simple example of transmitting a 1,000-byte message from host A to host B across a network with intermediate routers R1 and R2:
  • Segmentation: Host A breaks the message into fixed-size packets (e.g., four 250-byte packets), adding a header to each with source (A), destination (B), and sequence numbers (1 through 4) to enable reassembly.
  • Transmission: Each packet is sent independently. Packet 1 routes A → R1 → B; packet 2 routes A → R2 → B; packets 3 and 4 may follow similar or varied paths based on network conditions.
  • Forwarding: At each router, the packet's header is examined, queued if necessary, and forwarded to the next hop toward B without regard to other packets from the same message.
  • Reassembly: Host B receives the packets out of order, buffers them, sorts by sequence number, and combines the payloads to recover the original message, discarding headers once complete.
This process assumes no losses or errors for simplicity, highlighting the modularity and flexibility of packet handling.
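
A runnable Python sketch of these four steps, under the same no-loss assumption (the names and the fixed per-packet routes are illustrative):

```python
def segment(message, src, dst, size=250):
    """Step 1: split the message into fixed-size packets with headers."""
    return [{"src": src, "dst": dst, "seq": i + 1,
             "payload": message[i * size:(i + 1) * size]}
            for i in range((len(message) + size - 1) // size)]

def deliver(packet, route):
    """Steps 2-3: each packet travels its own route, hop by hop; every
    router reads packet["dst"] and forwards toward the destination."""
    for _router in route:
        pass  # queueing and next-hop lookup would happen here
    return packet

def reassemble(packets):
    """Step 4: buffer, sort by sequence number, and join the payloads."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = bytes(1000)                       # the 1,000-byte message
packets = segment(message, "A", "B")        # four 250-byte packets
routes = [["R1"], ["R2"], ["R1"], ["R2"]]   # packets may take different paths
received = [deliver(p, r) for p, r in zip(packets, routes)]
received.reverse()                          # simulate out-of-order arrival
assert reassemble(received) == message      # original message recovered
```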

Comparison to Circuit Switching

Circuit switching establishes a dedicated end-to-end communications path between two nodes before transmission begins, reserving the full bandwidth of that path for the entire duration of the session, regardless of whether the channel is actively used. This approach, exemplified by traditional public switched telephone networks (PSTN), ensures constant service quality suitable for constant-flow applications like voice calls, but it leads to inefficient resource utilization when traffic is intermittent or bursty, as reserved resources remain idle during silent periods. In contrast, packet switching divides data into independent packets that are routed dynamically through the network using shared links, employing statistical multiplexing to allocate bandwidth on demand rather than reserving fixed paths. This allows multiple conversations to share the same physical links efficiently, as packets from different sources are interleaved based on demand, reducing idle time and accommodating variable traffic patterns better than circuit switching's rigid allocation. However, packet switching introduces variable delays due to queuing at switches and the need for reassembly at the destination, which can affect real-time applications but is less critical for data transfer. The efficiency advantage of packet switching stems from its ability to handle bursty data common in computing environments, where utilization can reach 70-80% on shared links through statistical multiplexing, compared to 20-30% in circuit-switched systems for the same traffic due to overprovisioning for peak loads. For instance, if 10 users each require 100 kbps for brief bursts averaging a 10% duty cycle, a circuit-switched network would need to reserve 100 kbps per user (total 1 Mbps) to avoid blocking, whereas packet switching could carry the same traffic on a far smaller link: a 400 kbps link (room for four simultaneous senders) would be overloaded less than 1% of the time, since the probability that more than four of the ten users transmit at once is about 0.16% by binomial modeling. The shift toward packet switching in the 1960s and 1970s was driven by the growing need for data networks in computing, where voice-like constant bandwidth was inefficient for irregular, bursty transmissions; pioneers like Paul Baran proposed it for robust military communications, emphasizing survivability and resource sharing over dedicated circuits. Similarly, Donald Davies independently developed the concept at the UK's National Physical Laboratory to optimize computer-to-computer data exchange, highlighting its superiority for non-constant traffic over traditional paradigms.
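
The overload figure above can be checked with the binomial distribution. A short sketch using only the Python standard library (the numbers mirror the example in the text):

```python
from math import comb

def p_overload(n_users, p_active, max_simultaneous):
    """Probability that more than max_simultaneous of n_users transmit
    at once, each independently active with probability p_active."""
    return sum(comb(n_users, k) * p_active**k * (1 - p_active)**(n_users - k)
               for k in range(max_simultaneous + 1, n_users + 1))

# Ten users, each active 10% of the time, on a 400 kbps link with
# room for four simultaneous 100 kbps senders:
print(f"{p_overload(10, 0.1, 4):.4%}")  # ~0.1635%, well under 1%
```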

Operational Modes

Connectionless Mode

Connectionless mode, also known as the datagram approach, operates without establishing a dedicated path or prior connection between sender and receiver. In this mode, each packet is treated as an independent entity containing complete addressing information, including source and destination addresses, allowing it to be routed separately through the network. This contrasts with connection-oriented methods by avoiding any session setup, enabling immediate transmission of data units called datagrams. During operation, the source host transmits packets without any handshaking or acknowledgment process with the destination or intermediate routers. Routers examine the destination address in each packet's header and forward it toward the destination based on current routing tables, without maintaining state information for the entire flow. Delivery is best-effort, meaning the network attempts to route packets efficiently but provides no guarantees against loss, duplication, delay, or out-of-order arrival; packets may take different paths and arrive independently or not at all. The primary advantages of connectionless mode include its simplicity, as routers do not need to track connection states, reducing complexity in network devices. This stateless design enhances scalability for large, dynamic networks by supporting high volumes of traffic without the resource overhead of maintaining session details across multiple nodes. Additionally, the absence of setup or teardown phases eliminates initial latency and overhead, allowing packets to be sent immediately, which is ideal for bursty or intermittent data flows. Prominent examples of connectionless mode include the Internet Protocol (IP) at the network layer of the TCP/IP stack, where IP datagrams carry full addressing and are routed independently to enable internetworking across diverse networks. At the transport layer, the User Datagram Protocol (UDP) exemplifies this mode by providing a lightweight, connectionless service atop IP, suitable for applications like real-time streaming or DNS queries that prioritize speed over reliability. In such cases, any loss, reordering, or errors are detected and corrected by higher-layer protocols or application logic, rather than by the network layer itself. A key potential issue in connectionless mode is the lack of inherent guarantees for packet delivery or ordering, which can result in data loss or misordering during congestion or failures, necessitating end-to-end reliability mechanisms at higher layers. This best-effort nature may lead to variable performance in unreliable environments, where packets can be dropped silently without notification to the sender.
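A minimal demonstration of connectionless operation is a pair of UDP sockets: no handshake precedes the first datagram, and delivery is best-effort. The loopback address and port in this Python sketch are arbitrary choices for illustration.

```python
import socket

# Receiver binds to a destination endpoint but performs no connection setup.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

# Sender transmits immediately: no handshake, no acknowledgment required.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram 1", ("127.0.0.1", 9999))

# Best-effort delivery: on a real network this datagram could be lost.
data, addr = receiver.recvfrom(2048)
print(data, "from", addr)   # b'datagram 1' from ('127.0.0.1', <port>)

sender.close()
receiver.close()
```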

Connection-Oriented Mode

Connection-oriented mode in packet switching establishes a logical association, referred to as a virtual circuit, between the source and destination prior to transmitting data, ensuring that all packets associated with a session follow the same predetermined path through the network. This approach contrasts with connectionless modes by providing a structured pathway that mimics a dedicated connection without reserving physical resources exclusively. The operation of connection-oriented packet switching proceeds in distinct phases: a call setup phase, where signaling packets negotiate and establish the virtual circuit, including path selection and resource allocation; a data transfer phase, during which user packets are transmitted along the fixed route with sequence numbers for ordering and mechanisms for error control handled at the network layer; and a teardown phase that releases the circuit upon session completion. This phased structure enables reliable, ordered delivery while allowing multiple virtual circuits to share the same physical links efficiently. Key advantages include predictable performance due to the consistent routing path, which minimizes variability in delay and jitter; reduced overhead for extended sessions, as initial routing decisions eliminate the need for per-packet address resolution; and inherent reliability features such as packet sequencing and network-layer error recovery, enhancing data integrity without relying solely on higher-layer protocols. Prominent examples include the X.25 protocol suite, developed by the CCITT (now ITU-T), which implements connection-oriented service through its packet layer procedures for virtual circuits. X.25 supports two variants: permanent virtual circuits (PVCs), which are statically configured by the network provider for ongoing connections, and switched virtual circuits (SVCs), which are dynamically set up and cleared as needed. Early forms of Asynchronous Transfer Mode (ATM) also employed connection-oriented virtual paths and channels for cell-based packet switching, prioritizing quality of service in broadband networks. Despite these benefits, connection-oriented mode suffers from higher initial latency introduced by the setup phase, which can delay short or sporadic transmissions, and reduced flexibility under dynamic conditions, as changes in network topology or link failures require re-establishing circuits rather than adapting per-packet routes.
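The data transfer phase of a virtual circuit can be sketched as a label-swapping table lookup: after setup, each switch maps an (incoming port, incoming VC identifier) pair to an outgoing pair, so data packets carry only a short label rather than full addresses. The table entries in this Python sketch are invented for illustration.

```python
# Per-switch virtual-circuit table installed during call setup.
# (in_port, in_vcid) -> (out_port, out_vcid); entries are illustrative.
vc_table = {
    (1, 12): (3, 22),
    (2, 63): (1, 18),
}

def forward(in_port: int, packet: dict) -> tuple:
    """Swap the VC label and return (output port, rewritten packet)."""
    out_port, out_vcid = vc_table[(in_port, packet["vcid"])]
    packet = {**packet, "vcid": out_vcid}   # label swap, payload untouched
    return out_port, packet

print(forward(1, {"vcid": 12, "payload": b"hello"}))
# (3, {'vcid': 22, 'payload': b'hello'})
```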

Technical Implementation

Packet Structure and Transmission

In packet switching networks, a packet serves as the fundamental unit of data transmission, comprising three primary components: the header, the payload, and optionally a trailer. The header encapsulates essential control information to facilitate routing and delivery, including the source and destination addresses to identify the sender and receiver, sequence numbers to enable reassembly in the correct order, and a time-to-live (TTL) field that decrements at each hop to prevent packets from circulating indefinitely. For instance, in the IPv4 protocol, the header is a minimum of 20 bytes and includes fields such as version (4 bits), internet header length (4 bits), type of service (8 bits), total length (16 bits), identification (16 bits) for fragmentation, flags and fragment offset (16 bits), TTL (8 bits), protocol (8 bits), header checksum (16 bits), and 32-bit source and destination IP addresses. The payload carries the actual user data fragment, typically limited to a size that fits within the network's maximum transmission unit (MTU), while the trailer, when present (e.g., in link-layer frames), appends error-detection bits such as a cyclic redundancy check (CRC) to verify integrity during transmission over physical links. The transmission process begins with encapsulation at the source host, where application data is segmented into payloads and wrapped with appropriate headers at each protocol layer (e.g., transport, network, and link) to form complete packets or frames. These are then serialized (converted into a bit stream) and transmitted over the physical medium. If a packet's size exceeds the MTU of an outgoing link (commonly 1500 bytes for Ethernet), fragmentation occurs, splitting the packet into smaller fragments, each with a copy of the header modified to include offset and more-fragments flags for reassembly at the destination. This ensures compatibility across heterogeneous networks but introduces overhead and potential delays. Error handling in packet switching operates primarily at the link layer for per-hop integrity and extends to higher layers for end-to-end reliability. At the link layer, a CRC polynomial is computed over the frame (including header and payload) and appended as a trailer; the receiver recomputes the CRC and discards the frame if it mismatches, triggering retransmission via mechanisms like automatic repeat request (ARQ) if implemented (e.g., in protocols such as HDLC). Higher layers, such as the transport layer in TCP, handle packet-level errors through acknowledgments and selective retransmissions. For network-layer headers like IPv4's, a dedicated checksum field provides verification using one's complement arithmetic. The checksum is calculated as the one's complement of the one's complement sum of all 16-bit words in the header (with the checksum field itself set to zero during computation), ensuring detection of transmission errors; the receiver performs the same computation to validate. Packet overhead, the non-data portion introduced by headers and trailers, impacts efficiency and is quantified as the overhead percentage:
$$\text{Overhead Percentage} = \left( \frac{\text{Header Size} + \text{Trailer Size}}{\text{Total Packet Size}} \right) \times 100$$
For a typical IPv4 packet with a 20-byte header and no trailer filling a 1500-byte MTU, this yields approximately 1.33% overhead, though it rises significantly for smaller packets (e.g., 20% for a 100-byte total packet), emphasizing the importance of payload optimization in high-throughput networks.
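The one's complement header checksum described above can be reproduced in a few lines of Python. The 20-byte header below is an illustrative IPv4 header with the checksum field (bytes 10-11) zeroed before computation, as the algorithm requires.

```python
def ones_complement_checksum(header: bytes) -> int:
    """IPv4-style checksum: one's complement of the one's complement sum."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]   # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)    # fold carry back in
    return ~total & 0xFFFF                          # one's complement

header = bytes.fromhex(
    "4500003C1C4640004006" "0000" "AC100A63AC100A0C"  # checksum field zeroed
)
print(hex(ones_complement_checksum(header)))  # 0xb1e6; receiver recomputes to verify
```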
The structure of packets has evolved from the rudimentary formats of early networks like the ARPANET, where host-to-host packets under the Network Control Protocol (NCP) featured simple headers consisting of a 32-bit leader (message length and type fields) followed by 64-bit source and destination socket fields for basic addressing and control, to the more robust IPv4 design in TCP/IP (adopted in 1983). Subsequent advancements in IPv6 introduced a streamlined 40-byte fixed header with fields for version, traffic class, flow label, payload length, next header, hop limit (analogous to TTL), and 128-bit addresses, supplemented by optional extension headers chained via the "next header" field to support advanced features such as routing, fragmentation, and authentication without bloating the base header. This modular approach reduces processing overhead at routers compared to IPv4's variable options while enabling scalability for modern demands.

Routing and Switching Mechanisms

In packet switching networks, routing involves determining the path for packets from source to destination using routing tables that map destination addresses to next-hop interfaces or addresses. These tables are populated either statically, through manual configuration by network administrators for fixed paths in stable environments, or dynamically, via protocols that automatically exchange and update routing information to adapt to changes like link failures or congestion. Switching mechanisms handle the forwarding of packets at network nodes, with two primary approaches: store-and-forward and cut-through. In store-and-forward switching, the entire packet is received and buffered at the switch before error checking and forwarding to the output port, ensuring reliable transmission but introducing latency proportional to packet size. Cut-through switching begins forwarding the packet as soon as the destination address is read from the header, reducing latency at the cost of potentially propagating erroneous packets, as full error detection occurs later. Packet switching operates in datagram or virtual-circuit modes for forwarding decisions. Datagram switching treats each packet independently, routing it based on its header without prior setup, allowing flexible paths but risking reordering and variable delays. Virtual-circuit switching establishes a logical connection beforehand, reserving resources and using consistent paths for all packets in a flow, similar to circuit switching but with shared links, which simplifies ordering but adds setup overhead. Routing algorithms compute optimal paths, primarily through distance-vector and link-state methods. Distance-vector algorithms, exemplified by the Routing Information Protocol (RIP), have each router maintain a table of distances to destinations and periodically share it with neighbors; updates propagate iteratively using the Bellman-Ford approach, where the distance to a destination is the minimum over neighbors of (neighbor's distance + link cost). RIP uses hop count as the metric (1-15 hops, with 16 representing infinity) and sends updates every 30 seconds or on triggers, though it can suffer slow convergence and routing loops, mitigated by techniques like split horizon. Link-state algorithms, such as Open Shortest Path First (OSPF), flood complete topology information (link states and costs) to every router so each can build a global network graph, then independently compute shortest paths using Dijkstra's algorithm. OSPF groups routers into areas for scalability, with backbone area 0 connecting the others, and recalculates paths on changes via link-state advertisements. Dijkstra's algorithm finds the shortest path from a source to all nodes in a weighted graph by maintaining a priority queue of tentative distances, iteratively selecting the unvisited node with the smallest distance and relaxing the edges to its neighbors. High-level steps include (see the sketch after these steps):
  1. Initialize distances: source = 0, others = ∞; mark all nodes unvisited.
  2. While unvisited nodes remain: select the unvisited node u with minimum tentative distance; mark u visited.
  3. For each neighbor v of u: if dist(u) + weight(u,v) < dist(v), update dist(v) and set v's predecessor to u.
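The steps above translate directly into code. This Python sketch implements Dijkstra's algorithm with a binary heap as the priority queue; the four-router topology and link costs are invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted graph.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns (dist, prev): shortest distances and predecessors.
    """
    dist = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}
    dist[source] = 0
    pq = [(0, source)]                      # priority queue of (distance, node)
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue                        # skip stale queue entries
        visited.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:             # relax edge (u, v)
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

# Toy topology: routers A..D with symmetric link costs.
net = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 6)],
    "C": [("A", 4), ("B", 2), ("D", 3)],
    "D": [("B", 6), ("C", 3)],
}
print(dijkstra(net, "A")[0])  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```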
Hardware implements these mechanisms differently: layer-2 switches forward frames within a local network using MAC addresses held in a content-addressable memory (CAM) table for fast, hardware-based lookups via application-specific integrated circuits (ASICs), operating at the data link layer. Layer-3 routers interconnect networks using IP addresses, performing more complex lookups (e.g., longest-prefix match) in ternary CAM (TCAM) and updating headers, such as decrementing the time-to-live, often with dedicated forwarding engines to offload the CPU for high-speed processing. Modern layer-3 switches combine both, using hardware forwarding for intra-VLAN layer-2 switching and IP routing between VLANs; a longest-prefix-match lookup is sketched below. For scalability in large networks, hierarchical routing divides the topology into levels or areas, reducing the size of routing tables and computation by summarizing routes at boundaries. Routers within a level maintain detailed intra-level tables but use aggregated inter-level routes, as in OSPF areas where non-backbone areas advertise summary links to the core, limiting flooding and supporting thousands of nodes without overwhelming resources. This approach, analyzed in early work on store-and-forward networks, minimizes update traffic and table sizes while preserving path efficiency.
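As a software analogue of the TCAM lookup, the following Python sketch performs a linear longest-prefix match over a small routing table; real routers use specialized data structures or hardware, and the routes and interface names here are illustrative.

```python
import ipaddress

# Illustrative forwarding table: prefix -> next-hop interface.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",    # default route
}

def lookup(dst: str) -> str:
    """Return the interface of the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("10.1.2.3"))   # eth1: the /16 beats the /8 and the default
print(lookup("10.9.9.9"))   # eth0
```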

Congestion Management and Quality of Service

In packet-switched networks, congestion arises primarily from overloaded communication links and bursty traffic patterns, where sudden surges in data transmission exceed the capacity of network resources, leading to queue buildup at routers and switches and subsequent packet drops. To manage congestion, several techniques are employed. Traffic shaping regulates the rate of outgoing traffic by buffering excess packets and releasing them at a controlled pace, preventing bursts from overwhelming downstream links. In contrast, traffic policing enforces strict rate limits by discarding or marking packets that exceed the threshold, ensuring compliance without buffering. Backpressure mechanisms allow downstream nodes to signal upstream devices to reduce transmission rates when queues are filling, providing a decentralized form of flow control. Additionally, Explicit Congestion Notification (ECN) enables routers to mark packets indicating incipient congestion instead of dropping them, allowing endpoints to adjust sending rates proactively. Quality of Service (QoS) mechanisms further ensure reliable performance by prioritizing traffic. Packets are classified based on criteria such as source, destination, or application type, then marked with Differentiated Services Code Points (DSCPs) in the IP header to indicate handling priority, as defined in the Differentiated Services (DiffServ) architecture. Queuing disciplines manage contention at output ports; first-in, first-out (FIFO) queuing treats all packets equally but can lead to unfairness, whereas priority queuing assigns higher precedence to critical traffic, dequeuing it ahead of lower-priority packets during congestion. For more stringent guarantees, reservation protocols like the Resource Reservation Protocol (RSVP) enable end-to-end guarantees by signaling routers to reserve bandwidth and buffer space along a path before data transmission begins. A key algorithm for end-to-end congestion control is implemented in the Transmission Control Protocol (TCP), which dynamically adjusts the congestion window (cwnd) to probe network capacity. In the slow start phase, upon receiving an acknowledgment (ACK) for new data, the sender increases cwnd by one maximum segment size (MSS), effectively doubling the window every round-trip time to quickly ramp up transmission. This transitions to congestion avoidance once cwnd reaches the slow start threshold, where cwnd increases more gradually by one MSS per round-trip time (approximately cwnd += 1/cwnd per ACK) to avoid overload. Upon detecting loss, typically via duplicate ACKs or timeouts, TCP halves cwnd to back off aggressively; these dynamics are sketched below. Performance in congested packet-switched networks is evaluated using metrics such as throughput (data transfer rate), latency (end-to-end delay), and jitter (variation in packet arrival times). In unmanaged networks without these controls, congestion can cause severe degradation: throughput may collapse to near zero as retransmissions exacerbate queue buildup, latency can spike due to excessive queuing delays, and jitter increases, disrupting real-time applications like voice or video.
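The slow start, congestion avoidance, and multiplicative decrease behavior described above can be traced with a simple simulation. This Python sketch measures cwnd in units of MSS and injects a single illustrative loss event; it abstracts away per-ACK clocking and fast recovery.

```python
def simulate(rtts: int, ssthresh: float = 16, loss_at=(12,)):
    """Trace cwnd (in MSS) over `rtts` round trips with losses at `loss_at`."""
    cwnd, history = 1.0, []
    for rtt in range(rtts):
        history.append(cwnd)
        if rtt in loss_at:
            ssthresh = cwnd / 2          # multiplicative decrease on loss
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                    # slow start: double each RTT
        else:
            cwnd += 1                    # congestion avoidance: +1 MSS per RTT
    return history

print([round(c) for c in simulate(16)])
# [1, 2, 4, 8, 16, 17, 18, 19, 20, 21, 22, 23, 24, 12, 13, 14]
```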

Historical Development

Early Concepts and Invention

The concept of packet switching emerged in the mid-1960s as a response to the limitations of circuit-switched networks, which were optimized for synchronous voice traffic but inefficient for the asynchronous, bursty nature of computer data. In 1964, Paul Baran at the RAND Corporation proposed dividing messages into small "message blocks" transmitted independently across a distributed network to enhance survivability against nuclear attacks, emphasizing decentralized routing over dedicated circuits to avoid single points of failure. Baran's work, detailed in his multi-volume report series On Distributed Communications, laid the groundwork for resilient data transmission by advocating redundancy and adaptive rerouting of blocks rather than end-to-end connections. Independently, in late 1965, Donald Davies at the UK's National Physical Laboratory (NPL) developed the idea of "packet switching" to enable efficient resource sharing among computer systems, where multiple users intermittently accessed centralized mainframes. Davies coined the term "packet" for fixed-size data units, typically 1024 bits, to multiplex traffic over shared links, addressing the inefficiency of idle circuits in supporting interactive computing. His proposal envisioned a national network of switches for asynchronous data flows, contrasting with telephony's synchronous requirements, and was motivated by the need to handle variable-rate digital communications without wasting bandwidth. Key figures in propagating these ideas included Roger Scantlebury, a colleague of Davies, who presented the NPL concepts at the 1967 ACM Symposium on Operating Systems Principles in Gatlinburg, Tennessee, where he introduced the term "packet switching" to an international audience and influenced U.S. researchers like Lawrence Roberts. This presentation, based on a paper co-authored by Davies, Bartlett, Scantlebury, and Wilkinson, highlighted rapid-response networking for remote terminals. Early validation came in 1968 when Davies publicly presented packet switching principles at the IFIP Congress in Edinburgh. This presentation underscored the technique's potential for handling interactive data demands, marking one of the first public presentations of packet-based networking.

Key Milestones and Networks

The ARPANET, funded by the U.S. Department of Defense's Advanced Research Projects Agency (ARPA), became the first operational packet-switched network in 1969, connecting four university nodes and demonstrating resource sharing across geographically dispersed computers. In 1970, the UK's National Physical Laboratory (NPL) implemented its Mark I network under Donald Davies, marking an early practical deployment of packet switching for internal laboratory communications at speeds up to 768 kbit/s. Around the same time, the UK Post Office began developing the Experimental Packet Switched Service (EPSS), which went on to connect research institutions and provide the first commercial-like access to packet-switched services in the UK. By 1972, France's CYCLADES network, directed by Louis Pouzin at IRIA (now Inria), introduced innovative connectionless datagram switching, emphasizing end-to-end host responsibilities over network-level reliability to support flexible research applications. The European Informatics Network (EIN), initiated in 1973 under the COST 11 project, connected research centers across nine countries using packet switching, fostering international collaboration in data exchange. In 1974, Telenet emerged as the world's first commercial packet-switched network, founded as a BBN subsidiary (later acquired by GTE) in the U.S., offering public access via dial-up for businesses and extending ARPANET concepts to wide-area services. Spain's RETD (Red Especial de Transmisión de Datos), developed by Telefónica, began operations in the early 1970s, pioneering packet switching in Iberia for national data transmission. The International Telecommunication Union's CCITT standardized X.25 in 1976, defining interface protocols for public packet-switched data networks and enabling interoperable services worldwide. Canada's DATAPAC, launched that year by the Trans-Canada Telephone System, became the first operational X.25 network, covering major cities and supporting asynchronous terminal access at up to 9.6 kbit/s. Tymnet, developed by Tymshare in the U.S. during the early 1970s, expanded in the late 1970s as a specialized packet-switched system for remote terminal access, using a synchronous star topology to connect over 2,000 nodes globally by the decade's end. In the X.25 era, France's TRANSPAC network went public in 1978, operated by the Direction Générale des Télécommunications, providing nationwide X.25 services and handling millions of packets daily by integrating with international links. The International Packet Switched Service (IPSS), established in 1978 through collaboration between the UK Post Office, Western Union International, and Tymnet, formed the first global commercial packet-switched backbone, initially linking the UK and the U.S. before expanding to Canada, Hong Kong, and Australia by 1981. The UK's Packet Switch Stream (PSS), introduced in 1979 by the British Post Office (later British Telecom) as a successor to EPSS, offered X.25-based public access, supporting academic and commercial users with reliable data transfer up to 64 kbit/s. A key transition occurred on January 1, 1983, when the ARPANET fully adopted TCP/IP, a cutover known as "flag day," replacing the earlier Network Control Program and standardizing internetworking across diverse packet-switched systems. In the mid-1980s, local area innovations like AppleTalk, released by Apple in 1985, applied packet switching to local networks of personal computers, enabling ad-hoc connections among Macintosh computers without centralized servers.

Debates on Origins

The origins of packet switching have been the subject of a longstanding "paternity dispute" among historians and networking pioneers, primarily centering on independent contributions by Paul Baran in the United States in 1964 and Donald Davies in the United Kingdom in 1965, with occasional claims extending to Leonard Kleinrock's 1961 doctoral thesis proposal on queuing theory. Baran, working at the RAND Corporation, developed the concept of distributed adaptive message block switching as part of a study on robust military communications networks capable of surviving nuclear attacks, breaking messages into small blocks for transmission across a decentralized network. Davies, at the National Physical Laboratory (NPL), independently conceived a similar system for efficient data communication, explicitly introducing the term "packet" to describe fixed-size blocks of data routed independently through software-based switches in a high-speed computer network. Kleinrock's earlier work at MIT provided mathematical models for analyzing message-switching queues and decentralized network control, laying theoretical groundwork for delay and throughput in such systems, but it focused on whole-message transmission rather than subdividing messages into packets, leading critics to argue it did not encompass the full packet-switching paradigm. The arguments in the debate highlight distinctions in scope and emphasis. Baran's approach emphasized survivability through redundancy and adaptive routing in a "distributed communications" system, detailed in his 11-volume RAND report series On Distributed Communications, without using the word "packet" but describing equivalent block-based transmission. Davies, motivated by the need for economical data networks, proposed breaking messages into small "packets" to optimize line utilization and enable store-and-forward switching, influencing the design of the NPL's experimental network and coining the precise terminology that became standard. Kleinrock's contributions, while seminal for queueing modeling, published as Communication Nets: Stochastic Message Flow and Delay in 1964, were seen by contemporaries such as Davies as applying to broader message systems rather than specifically advocating packet subdivision for switching efficiency, prompting Davies to assert in later reflections that Kleinrock's models assumed fixed message sizes unsuitable for variable-length packets. Key events underscoring the convergence of these ideas include the October 1967 ACM Symposium on Operating Systems Principles in Gatlinburg, Tennessee, where British researcher Roger Scantlebury presented Davies' packet-switching concepts to ARPA program manager Larry Roberts, accelerating the adoption of the technique in U.S. projects. The debate gained public attention in the 1990s amid growing interest in Internet history, with Baran receiving the IEEE Alexander Graham Bell Medal in 1990 for his pioneering work in packet switching, recognizing his foundational role. Davies was similarly honored, including election as a Fellow of the Royal Society in 1987 and posthumous acclaim following his death in 2000, though the controversy intensified around 2001 when Kleinrock publicly sought greater credit, prompting responses from Davies' colleagues emphasizing the independent practical inventions by Baran and Davies. The resolution reflects a broad consensus among networking historians that packet switching emerged from multiple independent origins without a single inventor, with Baran and Davies credited for the core architectural innovations and Davies specifically for the terminology that shaped subsequent implementations.
This view, articulated in historical analyses and award citations, acknowledges Kleinrock's theoretical contributions but distinguishes them from the engineering breakthroughs in packetization and routing. The debates have significantly influenced the historiography of computer networking, prompting detailed archival reviews and ensuring balanced attribution in academic and institutional narratives of the Internet's development.

Evolution and Modern Applications

Transition to the Internet

The transition from early packet-switched networks to the Internet began with the ARPANET's adoption of the TCP/IP protocol suite on January 1, 1983, replacing the older Network Control Protocol (NCP) and enabling the interconnection of diverse networks into a unified system. This "flag day" cutover marked the operational birth of the Internet, as the ARPANET evolved from a Department of Defense (DoD)-centric research network to a broader platform supporting packet switching across heterogeneous environments. Prior to this, CSNET, established in 1981 with NSF funding, extended packet-switched networking benefits to non-DoD academic institutions by connecting over 180 sites through a mix of gateways, dial-up services, and relays. In 1985, the NSF launched the NSFNET as a national backbone to link supercomputing centers and regional networks, operating initially at 56 kbit/s using TCP/IP and serving as the primary infrastructure for non-military traffic. This network connected five initial supercomputing sites and expanded through 13 regional networks, such as MIDnet and NYSERNet, which aggregated traffic from universities and institutions, fostering widespread adoption of packet switching for scientific collaboration. The core protocols underpinning this evolution were the Internet Protocol (IP), which standardized connectionless packet switching for efficient, scalable routing without virtual circuits, and the Transmission Control Protocol (TCP), which ensured reliable, ordered delivery through end-to-end error detection and retransmission. These were informed by the end-to-end principle, articulated in the early 1980s by Jerome Saltzer, David Reed, and David Clark, which argued that communication functions like reliability should be implemented at network endpoints rather than in the core to enhance robustness and adaptability in heterogeneous systems. Key milestones in the 1980s included the 1989 introduction of the Border Gateway Protocol (BGP) as RFC 1105, enabling scalable inter-domain routing across autonomous systems and supporting the Internet's growth beyond a single backbone. That same year, commercialization accelerated as NSF regional networks began accepting non-academic traffic under revised Acceptable Use Policies, with providers like Performance Systems International (PSI) and Advanced Network Services (ANS) emerging to offer paid connectivity, bridging research and commercial use. The shift addressed scaling challenges from earlier X.25-based networks, which struggled with global traffic volumes due to per-connection state management, by leveraging IP's stateless approach for higher throughput and simpler expansion. This culminated in the NSFNET's privatization in 1995, when its backbone was decommissioned on April 30, transferring operations to commercial providers like MCI and Sprint while maintaining interconnection through network access points. Supporting this were NSFNET's regional networks, which handled localized aggregation; the Very high-speed Backbone Network Service (vBNS), deployed in 1995 by MCI under NSF sponsorship to deliver 155-622 Mbit/s links for high-performance research; and Internet2, formed in 1996 by 34 universities as a successor effort to advance next-generation networking beyond commoditized services.

Contemporary Networks and Protocols

In contemporary packet-switched networks, IPv6 has emerged as the predominant solution to the limitations of IPv4, featuring 128-bit addresses that enable approximately 3.4 × 10^38 unique identifiers to support the exponential growth in connected devices. This expansion is complemented by built-in security enhancements, including mandatory support for IPsec, which provides confidentiality, authentication, and integrity protection at the IP layer, reducing reliance on application-level security measures. As of October 2025, global IPv6 adoption has reached approximately 45%, with native IPv6 traffic to Google services at 45.26%, driven by widespread deployment in regions like the United States (over 50%) and parts of Europe and Asia. Advanced networking technologies have built upon packet switching to optimize performance in high-speed environments. Multiprotocol Label Switching (MPLS) enables efficient traffic engineering by assigning short labels to packets, allowing routers to forward data based on label values rather than deep IP header inspections, which supports explicit path control and bandwidth reservation for critical applications. Introduced in the late 1990s but widely adopted in the 2000s, MPLS is integral to service provider backbones for Virtual Private Networks (VPNs) and fast rerouting. Software-Defined Networking (SDN), which gained prominence in the 2010s, separates the control plane from the data plane to enable programmable network management; OpenFlow, a foundational SDN protocol standardized in 2011, allows centralized controllers to dynamically configure packet forwarding rules across switches. In mobile networks, the 5G core architecture relies on a fully packet-switched user plane within the 5G Core (5GC), as defined by 3GPP Release 15 onward, supporting ultra-reliable low-latency communications through service-based interfaces and network slicing for diverse traffic types. Modern protocols have evolved to address specific challenges in packet delivery and security. QUIC, initially developed by Google in 2012 as a UDP-based transport protocol, reduces connection establishment latency by integrating the TLS 1.3 handshake into the transport layer and multiplexing streams to avoid head-of-line blocking, forming the basis for HTTP/3, which is supported by approximately 36% of websites as of November 2025. Border Gateway Protocol (BGP) enhancements, particularly the Resource Public Key Infrastructure (RPKI), introduced in the 2010s, mitigate prefix hijacking by validating route announcements through cryptographic certificates, with route origin authorizations (ROAs) covering over 50% of IPv4 prefixes as of September 2024. An illustrative high-speed implementation is TransPAC3, a 100 Gbps packet-switched research and education network connecting Asia-Pacific institutions to the United States since the early 2010s, facilitating collaborative data-intensive projects like those in high-energy physics. Specialized packet-switched infrastructures cater to emerging ecosystems. In Internet of Things (IoT) deployments, LoRaWAN employs a low-power, wide-area packet-switching mechanism where end devices transmit small packets via chirp spread spectrum modulation to gateways, which forward them over IP networks to application servers, enabling long-range connectivity for sensors in smart cities and agriculture, with data rates up to 50 kbit/s.
Cloud interconnects like AWS Direct Connect provide dedicated, private packet-switched links between customer on-premises networks and AWS data centers, bypassing the public internet to achieve consistent low-latency performance at up to 100 Gbps, with encryption via MACsec for data in transit. Global research networks exemplify scalable packet switching in dedicated environments. National LambdaRail (NLR), launched in the mid-2000s as a U.S.-based optical infrastructure, delivered dynamic circuit- and packet-switched services over lambda wavelengths, supporting terabit-scale collaborations until its integration into broader ecosystems in the 2010s. In the United States, the modern ESnet, operated for the Department of Energy since the 1990s and upgraded to 400 Gbps Ethernet in the 2020s, interconnects universities and national laboratory facilities with hybrid packet-optical switching, enabling petabyte-scale data transfers for projects in AI and high-energy physics.

Advantages, Limitations, and Future Directions

Packet switching offers significant advantages in network robustness, allowing packets to be rerouted dynamically around failures in nodes or links, thereby enhancing overall network resilience compared to circuit-switched systems. This capability stems from its distributed architecture, where independent routing of packets enables alternative paths without disrupting the entire communication flow. Furthermore, packet switching improves efficiency by enabling statistical multiplexing, which utilizes available bandwidth more effectively, often achieving utilization rates exceeding 95% for larger packets, through shared resource allocation among multiple users and bursty traffic patterns. This efficiency gain, reported as 3 to 100 times greater than preallocation methods in early analyses, supports scalable connectivity essential for internet-scale networks by accommodating diverse and intermittent demands without dedicated end-to-end paths. Despite these strengths, packet switching has notable limitations, particularly its inherent variability in latency and jitter due to queueing dynamics, which can degrade performance for real-time applications like VoIP that require consistently low delays. Such variability arises from bursty traffic causing unpredictable queue buildup, often necessitating additional QoS mechanisms to mitigate buffer overruns in delay-sensitive scenarios. Security vulnerabilities represent another challenge, as the protocol-agnostic nature of packets facilitates DDoS amplification attacks, where spoofed requests exploit UDP-based services to generate overwhelming response traffic. Additionally, overhead from headers in small packets reduces effective throughput, particularly for short voice packets where processing delays can impair quality. Quantitative analysis of these limitations often employs the M/M/1 queueing model to estimate delays in packet networks, assuming Poisson arrivals at rate $\lambda$ and exponential service at rate $\mu$. The average queueing delay $D_q$ is given by:

$$D_q = \frac{\lambda}{\mu(\mu - \lambda)}$$

This formula highlights how arrival rates approaching the service capacity ($\lambda \to \mu$) cause delays to grow without bound, underscoring the need for congestion controls in packet-switched environments; a numeric illustration follows below. Looking ahead, packet switching is poised for integration with quantum networking, where hybrid circuit- and packet-based strategies may enable entanglement distribution in future quantum architectures, favoring packet methods for their flexibility in dynamic topologies. AI-optimized routing will further enhance performance by leveraging machine learning for adaptive path selection and bandwidth allocation, reducing latency in heterogeneous networks through predictive traffic management. In 6G systems, all-packet architectures are expected to dominate, reinventing network designs with integrated sensing, computing, and ultra-reliable low-latency communications to support immersive applications. Addressing IPv4 exhaustion remains critical, as the finite address space strains global connectivity, prompting accelerated IPv6 adoption to sustain packet-switched growth amid growing device proliferation. On a societal level, packet switching has democratized information access by powering the internet's efficient data dissemination, enabling widespread connectivity that fosters global knowledge sharing and economic inclusion. However, this ubiquity amplifies privacy challenges, as pervasive packet inspection and surveillance in networked environments erode user data protections, necessitating robust frameworks to balance connectivity with privacy.
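As a numeric illustration of the M/M/1 formula above, the following Python sketch evaluates the queueing delay for a link serving 1,000 packets per second at several illustrative arrival rates, showing how delay grows sharply as utilization approaches 1.

```python
def mm1_queueing_delay(lambda_: float, mu: float) -> float:
    """Average M/M/1 queueing delay D_q = lambda / (mu * (mu - lambda))."""
    if lambda_ >= mu:
        raise ValueError("queue is unstable when arrival rate >= service rate")
    return lambda_ / (mu * (mu - lambda_))

for lam in (500, 900, 990):          # service rate fixed at 1000 packets/s
    print(f"rho={lam/1000:.2f}  Dq={mm1_queueing_delay(lam, 1000)*1e3:.2f} ms")
# rho=0.50  Dq=1.00 ms
# rho=0.90  Dq=9.00 ms
# rho=0.99  Dq=99.00 ms
```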
