Message switching
from Wikipedia

In telecommunications, message switching involves routing messages in their entirety, one hop at a time. It evolved from circuit switching and was the precursor of packet switching.[1]

An example of message switching is email, in which the message is forwarded through intermediate servers before being stored on the destination mail server. Unlike packet switching, the message is not divided into smaller units and sent independently over the network.

History

Western Union operated a message switching system, Plan 55-A, for processing telegrams in the 1950s.[2] Leonard Kleinrock wrote a doctoral thesis at the Massachusetts Institute of Technology in 1962 that analyzed queueing delays in this system.[3]

Message switching systems were built by Collins Radio Company, Newport Beach, California, between 1959 and 1963 for sale to large airlines, banks and railroads.

The original design for the ARPANET was Wesley Clark's April 1967 proposal for using Interface Message Processors to create a message switching network.[4][5][6] After the seminal meeting at the first ACM Symposium on Operating Systems Principles in October 1967, where Roger Scantlebury presented Donald Davies' work and referenced the work of Paul Baran, Larry Roberts incorporated packet switching into the design.[7]

The SITA High-Level Network (HLN) became operational in 1969, handling data traffic for airlines in real time via a message-switched network over common carrier leased lines.[8][9] It was organised to act like a packet-switching network.[10]

Message switching systems are nowadays mostly implemented over packet-switched or circuit-switched data networks. Each message is treated as a separate entity. Each message contains addressing information, and at each switch this information is read and the transfer path to the next switch is decided. Depending on network conditions, a conversation of several messages may not be transferred over the same path. Each message is stored (usually on hard drive due to RAM limitations) before being transmitted to the next switch. Because of this, it is also known as a 'store and forward' network. Email is a common application for message switching: unlike real-time data transfer between two computers, a delay in delivering email is acceptable.

Examples

Hop-by-hop Telex forwarding and UUCP are examples of message switching systems.

When this form of switching is used, no physical path is established in advance between sender and receiver. Instead, when the sender has a block of data to be sent, it is stored in the first switching office (i.e. router) and then forwarded later, one hop at a time. Each block is received in its entirety, inspected for errors, and then forwarded or re-transmitted.

Message switching is a form of store-and-forward networking. Data is transmitted into the network and stored in a switch. The network transfers the data from switch to switch when it is convenient to do so, so the data is not transferred in real time. Blocking cannot occur, but long delays can. The source and destination terminals need not be compatible, since conversions are done by the message-switching network.

A message switch is "transactional": it can store data or change its format and bit rate, then convert the data back to its original form, or an entirely different form, at the receiving end. Message switching multiplexes data from different sources onto a common facility.

Store and forward delays

Since message switching stores each message at intermediate nodes in its entirety before forwarding, messages experience an end-to-end delay that depends on the message length and the number of intermediate nodes. Each additional intermediate node introduces a delay which is, at minimum, the value of the minimum transmission delay into or out of the node. Note that nodes may have different transmission delays for incoming and outgoing messages due to different technology used on the links. These transmission delays are in addition to any propagation delays experienced along the message path.

In a message-switching centre an incoming message is not lost when the required outgoing route is busy. It is stored in a queue with any other messages for the same route and retransmitted when the required circuit becomes free. Message switching is thus an example of a delay system or a queuing system. Message switching is still used for telegraph traffic and a modified form of it, known as packet switching, is used extensively for data communications.

Advantages

The advantages to message switching are:

  • Data channels are shared among communication devices, improving the use of bandwidth.
  • Messages can be stored temporarily at message switches when network congestion becomes a problem.
  • Priorities may be used to manage network traffic.
  • Broadcast addressing uses bandwidth more efficiently because a single message can be delivered to multiple destinations.
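The broadcast-addressing point above can be illustrated with a toy sketch, in which a special address fans one stored message out to several destinations; the group name and destinations are hypothetical, not from any real system.

```python
# Hypothetical group membership for a special broadcast address.
broadcast_groups = {"ALL-BRANCHES": ["NYC", "CHI", "LAX"]}

def expand_destinations(address: str) -> list[str]:
    # A broadcast address expands to its member destinations; a normal
    # address is delivered as-is. One stored copy serves every recipient.
    return broadcast_groups.get(address, [address])

deliveries = expand_destinations("ALL-BRANCHES")
print(deliveries)  # one message, three deliveries
```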

from Grokipedia
Message switching is a store-and-forward networking technique in which an entire message, treated as a single unit, is received, stored temporarily at each intermediate node, and then forwarded to the next node until it reaches its destination, without establishing a dedicated end-to-end path.[1][2] This method originated in 19th-century telegraph systems and evolved through early 20th-century telex networks, where operators manually relayed messages between stations; by the mid-20th century, it was computerized, as seen in Western Union's Plan 55-A system introduced in 1948 for automated telegram processing.[1] In the 1960s and 1970s, message switching influenced early data networks, including military and research applications, but it was largely supplanted by packet switching due to the latter's efficiency for smaller data units.[1]

Key characteristics of message switching include the use of general-purpose computers or specialized nodes with secondary storage (e.g., disk space) to buffer messages, appending of destination addresses to enable routing, and support for features like message prioritization and broadcasting via special addresses.[2][1] Unlike circuit switching, which reserves a fixed path for the duration of communication, or packet switching, which fragments messages into smaller packets for independent routing, message switching operates on whole messages, leading to higher latency but simpler implementation for non-real-time data.[2][1]

Among its advantages, message switching improves channel efficiency by allowing shared use of links without idle time during transmission gaps, reduces congestion through temporary storage during peak loads, and provides fault tolerance via alternative routing and error correction at each hop.[2][1] However, it incurs significant delays from storage and processing at each node, demands substantial storage capacity especially for large messages, and is ill-suited for interactive or real-time applications due to its batch-oriented nature.[2][1] Historically and technically, message switching laid foundational concepts for modern store-and-forward paradigms, finding niche applications in military communications and legacy telecommunications infrastructures where reliability trumps speed.[1]

Fundamentals

Definition and Overview

Message switching is a fundamental technique in telecommunications networks for routing data from a source to a destination through a series of interconnected nodes. In general, switching refers to the process of directing data traffic across a network by establishing paths or forwarding units of information between devices, enabling efficient sharing of communication resources among multiple users.[3] At its core, message switching is a store-and-forward method where an entire message is transmitted as a single, indivisible unit from the source to the destination, with intermediate nodes fully receiving, storing, and then forwarding the complete message to the next node in the path, one hop at a time.[4] This approach treats the message holistically, without segmenting it into smaller parts during transmission, distinguishing it from methods that divide data into fragments for independent routing.[5]

Key components of message switching include specialized nodes, often called switches or routers, equipped with storage capabilities such as buffers or disks to hold incoming messages until they can be forwarded. Each message typically comprises a header section containing essential routing information, like the destination address and control data, and a payload section carrying the substantive content, such as text or files. This technique is particularly suited to network topologies handling non-real-time, bursty data traffic, where delays from storage are tolerable, as in asynchronous communications.[6] The store-and-forward principle underpins the reliability of delivery by allowing nodes to verify message integrity before transmission.[4]
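The header/payload structure described above can be sketched as a simple data type; the field names here are illustrative, not drawn from any particular protocol.

```python
from dataclasses import dataclass

@dataclass
class Message:
    """A whole message as handled at a switch: one header, one payload."""
    source: str           # header: originating node address
    destination: str      # header: read by each switch to pick the next hop
    priority: int = 0     # header: optional priority flag for queuing
    payload: bytes = b""  # payload: the substantive content (text, a file, ...)

    def size_bits(self) -> int:
        # transmission delay over a link of capacity C bps is size_bits() / C
        return 8 * len(self.payload)

msg = Message(source="A", destination="D", payload=b"hello over the network")
print(msg.destination, msg.size_bits())
```

Because the message is never fragmented, the whole `payload` rides behind a single header from node to node.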

Key Principles

Message switching operates on the principle of complete message integrity, wherein messages are transmitted as indivisible units without fragmentation across the network. Each intermediate node must receive the entire message, verify its correctness through error detection mechanisms such as checksums, and store it fully before forwarding it to the next node, ensuring that the original content remains unaltered throughout the transit.[7] This store-and-forward approach contrasts with techniques that break data into smaller parts, prioritizing the preservation of the message's wholeness to simplify routing and reduce complexity at switching points.[8] A core aspect of error handling in message switching involves acknowledgment mechanisms between adjacent nodes to confirm successful reception and storage of the complete message. Upon receipt, the receiving node inspects the message for integrity; if errors are detected, it discards the message and prompts the sender to retransmit the entire unit, thereby guaranteeing reliable delivery hop-by-hop without propagating corrupted data.[7] End-to-end reliability in message switching is composed of hop-by-hop acknowledgments, where each node confirms receipt to the immediate sender, enabling retransmission of the entire message if needed.[7] Resource allocation in message switching eschews end-to-end dedicated paths, allowing network bandwidth to be shared dynamically among multiple flows. 
Transmission occurs only during actual data bursts, with links remaining idle otherwise, which optimizes utilization in environments where traffic is intermittent rather than continuous.[7] This on-demand usage eliminates the inefficiency of reserving resources for idle periods, enabling multiplexing of diverse message streams over the same infrastructure.[8] The technique is particularly suited to asynchronous communication scenarios involving bursty, non-time-sensitive data, such as electronic mail or file reports, where delays from queuing and processing at nodes are tolerable. By handling variable-length messages without real-time constraints, message switching supports efficient transfer of irregular traffic patterns that do not require synchronized delivery.[9] In this context, the message—as a self-contained unit encompassing both data and control information—facilitates flexible, connectionless operation across non-dedicated networks.[8]
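The hop-by-hop acknowledgment and whole-message retransmission described above can be sketched roughly as follows; the CRC-32 checksum and the simulated link are illustrative assumptions, not a real protocol.

```python
import zlib

def send_hop(message: bytes, deliver, max_retries: int = 3) -> bytes:
    """Transfer one whole message to the adjacent node, retransmitting the
    entire unit until the receiver's checksum matches (hop-by-hop ack)."""
    checksum = zlib.crc32(message)
    for _ in range(max_retries):
        received = deliver(message)           # simulated link transfer
        if zlib.crc32(received) == checksum:  # receiver verifies integrity
            return received                   # acknowledged: node may store and forward
    raise IOError("hop failed after retries")

# A lossy link that corrupts the first attempt, then behaves.
attempts = []
def flaky_link(data: bytes) -> bytes:
    attempts.append(data)
    return data[:-1] + b"?" if len(attempts) == 1 else data

out = send_hop(b"complete message", flaky_link)
print(out, len(attempts))  # delivered intact on the second attempt
```

Note that the corrupted first attempt forces retransmission of the entire message, never a fragment, which is exactly the cost of whole-unit integrity.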

Historical Development

Origins in Early Telecommunications

Message switching traces its conceptual roots to early communication systems that employed store-and-forward principles, where messages were received, stored temporarily, and then forwarded to their destination without requiring a continuous end-to-end connection. Postal relay systems, dating back to ancient civilizations such as the Persian Empire and the Roman cursus publicus, operated on this model by handing off physical letters between couriers or stations, ensuring reliable long-distance transmission despite intermittent disruptions.[10] Similarly, 18th- and 19th-century optical telegraph networks, like Claude Chappe's semaphore system in France (introduced in the 1790s) and Abraham Niclas Edelcrantz's shutter-based system in Sweden, relied on human operators at relay towers spaced approximately 10 kilometers apart to interpret and retransmit visual signals sequentially, mimicking store-and-forward to overcome line-of-sight limitations and weather constraints.[11] These analog precursors addressed the need for dependable messaging over vast distances in an era predating electrical transmission. The transition to electrical telegraphy in the mid-19th century further solidified these principles in telecommunications infrastructure. Samuel F. B. 
Morse's electromagnetic telegraph, patented in 1837 and publicly demonstrated in 1844, enabled rapid encoding of messages via Morse code over wires, but long-distance transmission often involved manual relaying at intermediate stations where operators would receive, decode, store briefly on paper tape, and re-encode for forwarding, preventing signal degradation over extended lines.[12] In the United States, companies like Western Union, which dominated the market by 1866 after consolidating over 500 firms, handled millions of such relayed messages annually—reaching 5.8 million in 1867 alone—facilitating economic integration and business coordination without dedicated circuits for each transmission.[12] This manual process, while error-prone due to multi-firm handoffs, underscored the efficiency of store-and-forward for scalable, non-real-time communication, particularly post-Civil War as telegraph lines expanded across North America. A pivotal milestone in formalizing message switching occurred in the 1930s with the advent of telex networks, which automated the store-and-forward of complete text messages using teleprinters connected over switched telephone lines. 
Originating in Germany between 1926 and 1933 as a military distribution tool, telex enabled direct-dial exchanges where messages were stored at central switching centers for semi-automated routing, eliminating the need for constant connections and reducing costs compared to dedicated telegraph wires.[13] In the United States, AT&T launched the first teletypewriter exchange service in 1931, while Western Union began integrating teletypewriters into its network in the early 1920s, expanding switching centers in major cities during the 1930s and 1940s where operators used reperforated tape to relay messages efficiently.[14] These developments were driven by post-World War I demands for reliable international business and governmental communication, as expanding global trade and diplomacy required robust systems for asynchronous messaging amid growing transoceanic cable and radio infrastructure. International coordination, led by bodies like the precursor to the CCITT (established in 1865 as the International Telegraph Union), began standardizing telex protocols in the early 1930s to interconnect national networks, fostering a worldwide system that peaked with commercial adoption by the late 1940s.[15]

Evolution and Transition to Packet Switching

Following World War II, message switching saw significant advancements as telecommunications infrastructure expanded and integrated with emerging computing technologies. In the 1950s and 1960s, store-and-forward message systems evolved from manual telegraph operations to automated processes leveraging early computers, enabling more efficient handling of data queries over long distances.[16] A prominent example was the Semi-Automated Business Research Environment (SABRE), developed by American Airlines in the early 1960s, which utilized message switching to route reservation requests from remote terminals to central mainframes, processing up to 30,000 messages daily across a network spanning the United States.[17][18] This integration marked a shift toward computer-assisted routing, reducing manual intervention while supporting bursty data traffic in commercial applications. Standardization efforts in the 1960s further propelled message switching for international networks, particularly through the International Telecommunication Union (ITU-T, formerly CCITT). During this period, ITU-T developed recommendations for telex and gentex services, emphasizing automated switching centers to interconnect national networks and handle global message traffic more reliably.[19] These standards, outlined in CCITT volumes from conferences in New Delhi (1960) and Geneva (1964), focused on fault detection, rapid clearing of incomplete messages, and interoperability for store-and-forward operations in public data networks.[20] By facilitating international telex exchanges, these developments supported the growth of message switching into a foundational technology for early data communications. The transition to packet switching in the 1970s was driven by message switching's inherent limitations in managing escalating data volumes and interactive applications. 
Large, variable-length messages required substantial storage at each node and introduced significant delays—often minutes or hours—making the approach inefficient for real-time computing needs amid the post-1960s data explosion.[16] Leonard Kleinrock's 1961 analysis demonstrated that breaking messages into fixed-size packets could optimize resource utilization and reduce congestion, influencing ARPANET's design as the first operational packet-switched network in 1969.[21] ARPANET, funded by the U.S. Department of Defense Advanced Research Projects Agency (DARPA), connected university computers using 50-kbit/s lines and proved packet switching's superiority for reliable, decentralized data transfer, accelerating the paradigm shift.[16] By the 1980s, message switching had largely declined in favor of packet-based systems, though its store-and-forward principles influenced subsequent protocols. The adoption of ITU-T's X.25 standard in 1976 standardized packet switching for public data networks like TELENET and DATAPAC, enabling virtual circuits and flow control that addressed message switching's storage and delay issues while supporting up to 4,095 simultaneous connections per node.[16] X.25 networks, operational by the late 1970s, interconnected globally and carried the legacy of message switching into modern WANs, though pure message systems were phased out as computing power favored finer-grained packet handling.[16]

Operational Mechanism

Store-and-Forward Process

In message switching, the store-and-forward process enables the transmission of complete messages across a network of nodes without establishing a dedicated end-to-end path, relying instead on intermediate nodes to receive, process, and relay each message in its entirety.[22] This node-level operation ensures reliable delivery by handling messages sequentially at each hop, contrasting with techniques that fragment data.[23]

The process begins with message arrival and storage. When a message reaches an intermediate node via an incoming link, the node fully receives the entire message before any further action, buffering it in local memory, disk, or dedicated storage until sufficient space is available to accommodate it without overflow.[22] In early implementations, node capabilities often included magnetic tapes for storage to manage large message sizes that exceeded available random-access memory, requiring physical media for temporary holding during high-load periods.[22]

Next, the node performs verification and queuing. The stored message undergoes error checking to detect transmission issues, such as corruption or incompleteness, ensuring integrity before proceeding; if errors are found, the message may be discarded or retransmitted upon request.[22] The node then appends or updates necessary routing information in the message header and places it into an outbound queue, prioritized based on factors like message urgency or network policy, where it awaits availability of the next link.[23] This step upholds key principles of message integrity by confirming the message's wholeness at each node.[23]

Finally, forwarding occurs when the selected outgoing link becomes free. The node transmits the complete message to the subsequent node at the full capacity of the link, typically employing acknowledgments to confirm successful reception and enable retransmission if needed, thereby completing the hop.[22] Node processing capabilities must support this transmission, including sufficient computational resources for header manipulation and queue management to avoid bottlenecks.[23]
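A minimal sketch of the node-level steps (store the complete message, verify and queue it, forward when the link is free); the class name, buffer limit, and the trivial verification check are illustrative assumptions.

```python
from collections import deque

class SwitchNode:
    """Minimal store-and-forward node; all names are illustrative."""
    def __init__(self, buffer_limit_bytes: int):
        self.buffer_limit = buffer_limit_bytes
        self.stored = 0        # bytes currently buffered
        self.queue = deque()   # FIFO outbound queue

    def receive(self, message: bytes) -> bool:
        # Step 1: accept the complete message only if buffer space allows.
        if not message or self.stored + len(message) > self.buffer_limit:
            return False       # empty or overflow: message refused
        # Step 2: verification passed (trivial here); enqueue for forwarding.
        self.stored += len(message)
        self.queue.append(message)
        return True

    def forward(self):
        # Step 3: transmit the whole message when the outgoing link is free.
        if not self.queue:
            return None
        message = self.queue.popleft()
        self.stored -= len(message)
        return message

node = SwitchNode(buffer_limit_bytes=64)
node.receive(b"first message")
node.receive(b"second message")
first = node.forward()
print(first)  # messages leave in arrival order
```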

Message Routing and Handling

In message switching networks, routing decisions are primarily destination-based, where the next hop for a message is determined by examining the destination address contained in the message header at each intermediate node. This approach allows for dynamic path selection based on current network conditions, enabling the message to be forwarded to the most appropriate outgoing link toward the final recipient. Adaptive routing techniques further enhance this process by continuously updating routing tables with real-time information on link delays and capacities, ensuring efficient path choices in store-and-forward environments.[24] For larger networks, hierarchical routing organizes switches into regions or levels, where intra-region routing uses local addressing and inter-region routing employs aggregated addresses to reduce the complexity of routing tables and computational overhead.[25]

Handling multiple messages at a switch involves queuing disciplines to manage contention for outgoing links, with first-in-first-out (FIFO) being a common method where messages are processed in the order of arrival to maintain simplicity and fairness. Priority queuing, however, assigns higher precedence to certain messages based on flags in their headers, allowing critical traffic to bypass lower-priority queues and reduce latency for urgent communications. Congestion control is achieved through backpressure mechanisms, where a congested switch signals upstream nodes to halt further message transmissions until buffer space is available, preventing overflow and propagating the control signal backward through the network.[26]

Message headers in these networks typically include essential fields such as source and destination identifiers to facilitate routing, sequence numbers to track message order and detect losses or duplicates, and priority flags to enable differentiated handling during queuing. These fields ensure that the entire message can be reliably directed and reconstructed at the destination without requiring end-to-end connections.[17]

Fault tolerance is incorporated through rerouting capabilities, where adaptive algorithms detect node or link failures via periodic status updates and automatically select alternate paths to maintain message delivery. This decentralized approach allows the network to reconfigure dynamically, using redundant routes predefined or computed on-the-fly to avoid failed components.[27]
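Destination-based next-hop lookup combined with priority queuing might be sketched as follows; the routing table, priority values, and message bodies are hypothetical.

```python
import heapq

# Hypothetical routing table at one switch: destination -> next hop.
routing_table = {"D": "C", "E": "C"}

def next_hop(destination: str) -> str:
    # Destination-based routing: only the header's destination field is consulted.
    return routing_table[destination]

# Outbound priority queue of (priority, arrival order, destination, body);
# a lower priority number is more urgent, and arrival order breaks ties (FIFO).
outbound = []
arrivals = [(2, "D", "bulk file"), (0, "E", "urgent alert"), (1, "D", "report")]
for order, (prio, dest, body) in enumerate(arrivals):
    heapq.heappush(outbound, (prio, order, dest, body))

dispatched = []
while outbound:
    prio, _, dest, body = heapq.heappop(outbound)
    dispatched.append((body, next_hop(dest)))
print(dispatched)  # urgent traffic bypasses the earlier-arriving bulk message
```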

Comparisons with Other Switching Techniques

Versus Circuit Switching

Message switching and circuit switching represent two distinct paradigms in telecommunications for managing data transmission across networks. In circuit switching, an end-to-end dedicated path is established and reserved for the entire duration of the communication session, ensuring exclusive use of resources like bandwidth and switches from sender to receiver.[4] This reservation occurs during a setup phase, after which data flows continuously over the fixed circuit without interruption from other traffic.[28] In contrast, message switching employs shared, on-demand links where no such end-to-end path is pre-reserved; instead, entire messages are routed hop-by-hop using a store-and-forward mechanism, allowing intermediate nodes to hold and forward messages as resources become available.[29] This approach avoids dedicating resources solely to one session, enabling dynamic allocation across multiple users. The transmission styles of these techniques further highlight their differences. Circuit switching operates in a synchronous manner, maintaining a continuous connection with a fixed data rate once established, which suits applications requiring predictable timing but results in idle resources during periods of silence or low activity.[4] Message switching, however, is asynchronous, transmitting data in discrete bursts where each complete message is stored at a node before being forwarded to the next, without requiring an ongoing connection.[28] This store-and-forward process, briefly referencing its core operational mechanism, introduces variability in transmission timing as messages queue at nodes until links are free.[29] Use cases for each method align with their structural strengths. 
Circuit switching is ideal for real-time, constant-bitrate applications such as voice telephony and video conferencing, where low latency and guaranteed bandwidth are essential to prevent disruptions.[4] Message switching, on the other hand, excels in non-real-time data transfer scenarios, such as file sharing or bulk email delivery, particularly in environments with bursty or intermittent traffic patterns.[28] Efficiency trade-offs underscore the suitability of each for specific traffic types. Message switching achieves better bandwidth utilization for bursty data by sharing links among multiple messages, avoiding the waste associated with reserved but unused circuits in circuit switching.[29] However, this comes at the cost of higher setup variability and potential queuing delays at nodes, whereas circuit switching provides consistent performance during active sessions but underutilizes resources for sporadic or uneven traffic.[4]

Versus Packet Switching

Message switching and packet switching both employ a store-and-forward paradigm but differ fundamentally in their approach to data granularity. In message switching, an entire message is treated as a single, indivisible unit that is stored completely at each intermediate node before being forwarded to the next hop along the route to the destination.[30] This contrasts with packet switching, where a source message is fragmented into smaller, fixed- or variable-size packets, each of which is routed independently through the network and reassembled only upon arrival at the final destination.[30] The granularity of whole-message handling in message switching ensures that no partial transmission occurs, but it requires sufficient storage at each node to accommodate potentially large messages without interruption.[30] Regarding overhead, message switching incurs lower per-message control overhead because a single header is attached to the entire message, containing routing and addressing information that suffices for the whole unit.[30] In packet switching, however, each individual packet requires its own header, which includes not only routing details but also sequence numbers for reassembly and error-checking fields, leading to proportionally higher overhead as the number of packets increases with message size.[30] This header redundancy in packet switching, while enabling more flexible routing, can consume a significant portion of bandwidth for short messages, whereas message switching's unified header approach minimizes such costs for complete transmissions.[30] In terms of scalability, message switching is well-suited for scenarios involving large, infrequent data transfers, such as bulk file exchanges or non-time-sensitive communications, where the full storage and forwarding of messages do not impose undue delays on network resources.[30] Packet switching, by contrast, excels in handling real-time applications and variable-size data streams, like voice or video 
traffic, due to its ability to interleave packets from multiple sources efficiently and adapt to dynamic network conditions without blocking entire paths.[30] This makes packet switching more scalable for high-volume, bursty traffic in modern interconnected systems, though it demands more sophisticated queue management at nodes compared to the simpler, message-centric buffering in message switching.[30] Message switching served as an evolutionary precursor to packet switching, introducing core store-and-forward principles that addressed limitations in earlier circuit-based methods but were refined in packet switching to achieve greater independence and efficiency in routing.[30] Early implementations of message switching, rooted in telegraph and early data networks, paved the way for packet switching's innovations, such as independent packet travel and reassembly, which became foundational to protocols like those in the ARPANET and subsequent internetworks.[30] This progression highlights how message switching's whole-unit handling evolved into packet switching's modular approach to better support diverse, scalable communication needs.[30]
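The header-overhead contrast above can be illustrated with a small calculation; the 40-byte header and packet payload size are arbitrary assumptions, not taken from any standard.

```python
def total_header_overhead_bits(message_bits, header_bits, packet_payload_bits=None):
    """Header overhead for one message: a single header in message switching,
    one header per fragment in packet switching. All sizes are illustrative."""
    if packet_payload_bits is None:                    # message switching
        return header_bits
    packets = -(-message_bits // packet_payload_bits)  # ceiling division
    return packets * header_bits                       # packet switching

MESSAGE = 1_000_000   # a 1 Mbit message
HEADER = 320          # assumed 40-byte header

print(total_header_overhead_bits(MESSAGE, HEADER))        # one header: 320 bits
print(total_header_overhead_bits(MESSAGE, HEADER, 8000))  # 125 headers: 40000 bits
```

Under these assumed sizes, fragmenting the message multiplies header overhead by the packet count, while message switching pays for a single header regardless of message size.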

Performance and Characteristics

Advantages

Message switching optimizes bandwidth usage by allowing communication channels to be shared among multiple devices and transmissions, rather than dedicating lines that would otherwise remain idle. In this store-and-forward process, resources are allocated only when needed for message transmission, enabling higher overall efficiency and reducing waste in network capacity.[31] The technique offers flexibility in routing, as complete messages can be directed through dynamic paths based on current network conditions and embedded routing information, allowing adaptation to failures or congestion in specific links without disrupting delivery. This inherent fault tolerance ensures messages can traverse alternative routes, maintaining connectivity in variable environments.[31][32] Message switching exhibits lower complexity than packet switching, as it avoids the need to fragment messages into smaller packets, manage multiple headers, or perform reassembly and ordering at the destination. Instead, each message is handled as a single unit with one header, simplifying processing at intermediate nodes and enabling straightforward full-message acknowledgments that enhance reliability by ensuring either complete reception or total loss detection.[33] It proves cost-effective for scenarios involving low-volume or bursty data over long distances, where establishing and maintaining dedicated connections would be inefficient; shared trunks and on-demand resource allocation minimize infrastructure and operational expenses.[32]

Disadvantages

One key limitation of message switching is its high latency, stemming from the store-and-forward process: each intermediate node must fully receive and store an entire message before forwarding it to the next hop. This cumulative delay across multiple nodes makes the technique unsuitable for real-time applications, such as voice or video communications, as transmission times can extend significantly for longer messages, potentially from seconds to minutes depending on network load and message size.[34][35] In contrast to faster switching methods, this approach exacerbates end-to-end delays, rendering it inefficient for time-sensitive data flows.[4]

Storage demands represent another major drawback, as intermediate nodes require substantial memory to hold complete messages of potentially unlimited size until forwarding is possible. This can lead to buffer overflow risks, where incoming messages are lost if storage capacity is exceeded during peak traffic periods, compromising reliability.[34][35] Such requirements strain resources in networks with variable message lengths, often necessitating oversized buffers that increase hardware costs and complexity.[4]

Scalability issues further hinder message switching in high-volume environments, as the technique struggles with congestion at busy nodes, where queuing for storage and processing creates bottlenecks. Without dedicated paths, multiple messages competing for node resources can result in widespread delays or drops, making it poorly suited for modern networks handling bursty or high-throughput traffic.[4] This inefficiency is particularly evident in large-scale systems, where the lack of fragmentation limits adaptability to varying loads.

Security vulnerabilities arise from the prolonged storage of messages at intermediate nodes, extending the window during which data is exposed to potential interception or tampering if a node is compromised. Unlike transient transmission methods, this persistence heightens risks of unauthorized access to sensitive content, as stored messages remain vulnerable longer than in end-to-end encrypted or direct routing schemes.[36][37]
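The latency penalty of forwarding whole messages can be illustrated with a back-of-the-envelope comparison against packet switching. This sketch ignores propagation, queuing, and header overhead, and the numbers are purely illustrative; it shows only why paying the full serialization time at every hop hurts.

```python
def message_switching_delay(msg_bits, link_bps, hops):
    # Each node must receive the whole message before forwarding,
    # so the full serialization time is paid on every hop.
    return hops * (msg_bits / link_bps)

def packet_switching_delay(msg_bits, pkt_bits, link_bps, hops):
    # Packets pipeline through the nodes: the first packet crosses all
    # hops, then the remaining packets follow back to back.
    n_packets = msg_bits / pkt_bits
    return hops * (pkt_bits / link_bps) + (n_packets - 1) * (pkt_bits / link_bps)

# 1 MB message, 1 KB packets, 1 Mb/s links, 4 hops (illustrative values)
M, P, C, N = 8e6, 8e3, 1e6, 4
print(message_switching_delay(M, C, N))    # 32.0 seconds
print(packet_switching_delay(M, P, C, N))  # about 8.02 seconds
```

The gap grows with message size and hop count, which is why fragmentation into packets became the dominant design.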

Delay Analysis

In message switching, where entire messages are stored and forwarded at intermediate nodes, the end-to-end delay experienced by a message arises from multiple components incurred at each hop along the path. These delays are inherent to the store-and-forward mechanism and accumulate across the network, making delay analysis critical for understanding performance in such systems. The primary types of delays include propagation, transmission, queuing, and processing delays, each contributing differently based on network conditions and message characteristics.[38]

Propagation delay represents the time required for the message's signal to travel the physical distance between two adjacent nodes, determined by the link length and the speed of propagation in the medium, typically around two-thirds the speed of light in fiber optics or copper.

Transmission delay is the time needed to serialize and push the entire message onto the outgoing link, calculated as $ D_{\text{trans}} = \frac{M}{C} $, where $ M $ is the message size in bits and $ C $ is the link bandwidth in bits per second; this delay is particularly significant in message switching due to the typically large message sizes.

Queuing delay occurs when the message waits in a buffer at a node because the outgoing link or storage is occupied by other messages, varying with the arrival rate and service rate at the node.

Processing delay encompasses the time for node operations such as error verification, routing decisions, and temporary storage of the complete message before forwarding, influenced by the node's computational capabilities.[38][33]

The total end-to-end delay $ D_{\text{total}} $ for a message traversing $ N $ hops in a message switching network can be approximated as
$ D_{\text{total}} = N \times (D_{\text{prop}} + D_{\text{trans}} + D_{\text{queue}} + D_{\text{proc}}), $
where each delay component is experienced roughly once per hop, though propagation is strictly per link and queuing and processing delays can vary from node to node. This formula assumes uniform conditions across hops for simplicity, as derived from analyses of store-and-forward systems. In practice, transmission and processing delays dominate for large messages, while queuing becomes prominent under high traffic.[38][33][39]

Several factors influence these delays in message switching networks. Message length directly scales the transmission delay, as longer messages require more time to serialize at each node, potentially amplifying overall latency in multi-hop paths. Network load, or the volume of concurrent messages, primarily affects queuing delay by increasing wait times when buffers fill up. Node storage and processing speeds impact the processing delay, with slower hardware leading to longer verification and forwarding times for bulky messages. For instance, reducing message length proportionally decreases the transmission delay component, highlighting a key trade-off in system design, though other delays may persist based on fixed propagation distances and variable loads.[38][33]
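The approximation $ D_{\text{total}} = N \times (D_{\text{prop}} + D_{\text{trans}} + D_{\text{queue}} + D_{\text{proc}}) $ can be evaluated directly. The numeric values below are illustrative choices, not figures from the cited analyses.

```python
def total_delay(hops, prop_s, msg_bits, link_bps, queue_s, proc_s):
    """Approximate end-to-end delay assuming uniform per-hop conditions:
    D_total = N * (D_prop + D_trans + D_queue + D_proc)."""
    d_trans = msg_bits / link_bps  # serialization of the whole message
    return hops * (prop_s + d_trans + queue_s + proc_s)

# Illustrative values: 3 hops, 5 ms propagation per link,
# 80 kb message over a 1 Mb/s link, 10 ms queuing, 2 ms processing.
print(round(total_delay(3, 0.005, 80e3, 1e6, 0.010, 0.002), 3))  # 0.291
```

Note how the transmission term (80 ms per hop here) dwarfs the others, consistent with the observation that serialization dominates for large messages.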

Applications

Historical Examples

Message switching originated from practices in early telegraphy, where operators manually stored incoming messages and forwarded them to their destinations via relay stations.[40] One prominent historical example is the Telex network, introduced in 1933 in Germany as a teleprinter-based system for distributing military messages and later expanding globally for commercial use.[15] The network employed store-and-forward techniques at switching exchanges to route typed messages over shared telephone lines, enabling efficient international business communications from the 1930s through the 1980s without requiring dedicated circuits. By connecting subscribers worldwide, Telex facilitated the transmission of complete messages that were buffered and relayed hop-by-hop, proving particularly valuable for time-insensitive exchanges in low-capacity environments.[41]

In military contexts during World War II, store-and-forward message switching was integral to command and control operations, with message centers using teleprinter networks like Telex to dispatch orders across front lines and disrupted infrastructures.[13] These systems allowed messages to be stored at intermediate nodes for retransmission, ensuring delivery even when direct paths were unavailable due to wartime conditions, and supported the coordination of Allied and Axis forces through reliable, buffered handling of dispatches.

Another key military example was the Automatic Digital Network (AUTODIN), a computerized store-and-forward system deployed by the U.S. Department of Defense starting in the 1960s for secure global message switching. It processed teletype and later digital messages across dedicated switches, providing reliable transmission for command dispatches in bandwidth-limited environments until its phase-out in the late 1990s.[42]

The PLATO educational network, developed in 1960 at the University of Illinois, represented an early computing application of message switching through batch exchanges for sharing instructional content and user interactions.[43] Users submitted messages via terminals connected to a central mainframe, which stored and forwarded them in batches to recipients, enabling asynchronous collaboration in educational settings during the 1960s and supporting features like early email and discussion boards.[44]

These examples underscored message switching's high reliability for complete message delivery in bandwidth-constrained eras, but by the 1990s, legacy systems such as the military's AUTODIN were phased out as packet switching offered greater efficiency and scalability.[45]

Contemporary Uses

Despite its decline in favor of more efficient packet-switching paradigms, message switching persists in select legacy systems where reliability in intermittent or low-bandwidth environments outweighs speed. In aviation, the Aeronautical Fixed Telecommunication Network (AFTN) remains operational for relaying international flight plans, meteorological reports, and safety messages across global air traffic services. This store-and-forward system, compliant with ICAO Annex 10 standards, handles messages through dedicated switching centers that buffer and route them to multiple recipients, ensuring delivery even during network disruptions.[46][47]

Similarly, in maritime operations, SITOR (Simplex Teletype Over Radio) continues to support critical communications, particularly for ship-to-shore weather broadcasts and distress alerts. The U.S. Coast Guard employs HF SITOR on frequencies such as 6314 kHz and 8416.5 kHz to disseminate National Weather Service marine products, leveraging message switching to transmit teletype-style messages reliably over long distances where real-time connectivity is unavailable.[48] This approach tolerates the high latency inherent in ocean-going transmissions, prioritizing complete message integrity.

Hybrid integrations extend message switching principles into modern messaging infrastructures, notably email and SMS gateways suited to low-bandwidth regions. Email via the Simple Mail Transfer Protocol (SMTP) operates on a store-and-forward model, where messages are queued at intermediate servers before final delivery, adapting well to variable network conditions in remote or developing areas.[32] SMS similarly uses short message service centers (SMSCs) for store-and-forward handling, enabling text delivery in bandwidth-constrained environments like rural mobile networks, where immediate end-to-end paths may not exist.[49]

In niche IoT deployments, message switching adaptations facilitate data collection from sensor networks in challenging terrains, such as environmental monitoring in deserts or oceans. Store-and-forward core networks in multi-LEO satellite systems buffer sensor data, ranging from temperature readings to pollution levels, on orbiting nodes until ground station links become available, enhancing coverage for infrequent but substantial data transmissions without constant connectivity.[50]

The principles of message switching also inform future-oriented applications in Delay-Tolerant Networking (DTN), particularly for space communications. NASA's DTN implementations enable store-and-forward bundling of data across interplanetary links, supporting missions like Artemis by accommodating extreme delays and disruptions in deep space, where traditional protocols fail.[51] This evolution positions message switching as a foundational element for resilient extraterrestrial networks.[52]
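The buffer-until-a-link-is-available behaviour described for satellite and DTN scenarios can be sketched as follows. The class, message contents, and contact timing are hypothetical; the sketch shows only the core store-and-forward pattern of holding complete messages until a contact window opens.

```python
from collections import deque

class StoreAndForwardNode:
    """Buffers complete messages until a contact (link) becomes
    available, then drains the queue, as in DTN-style forwarding."""

    def __init__(self):
        self.buffer = deque()

    def receive(self, message):
        # Store: hold the whole message until a downlink exists.
        self.buffer.append(message)

    def contact(self):
        # Forward: a link is now available, so drain the buffer.
        sent = list(self.buffer)
        self.buffer.clear()
        return sent

node = StoreAndForwardNode()
node.receive("temp=21.5")        # sensor readings arrive while offline
node.receive("pollution=low")
print(node.contact())            # ['temp=21.5', 'pollution=low']
print(node.contact())            # [] (nothing queued between contacts)
```

Real DTN implementations add persistence, custody transfer, and expiry, but the store-then-drain cycle is the shared core.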

References
