Data communication

Data communication is the transfer of data over a point-to-point or point-to-multipoint communication channel. It comprises data transmission and data reception and can be classified as either analog or digital communication.[1][2][3][4][5]
Analog transmission conveys voice, image, or video information using a continuous signal that varies in amplitude, phase, or some other property. In digital transmission, by contrast, messages are represented either by a sequence of pulses by means of a line code (baseband transmission) or by a limited set of continuously varying waveforms using a digital modulation method (passband transmission). Passband modulation and the corresponding demodulation are carried out by modem equipment.
Digital transmission and digital reception are the transfer of either a digitized analog signal or a born-digital bitstream.[1] Under the most common definition of a digital signal, both baseband and passband transmission of bit-streams count as digital transmission; under an alternative definition, only the baseband signal is digital, and passband transmission of digital data is regarded as a form of digital-to-analog conversion.
Data communication channels include copper wires, optical fibers, wireless links using radio spectrum, storage media and computer buses. The data are represented as an electromagnetic signal, such as an electrical voltage, radio wave, microwave, or infrared signal.
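The baseband line coding mentioned above can be illustrated with a small sketch (not from the article; the +1/−1 signal levels and the Manchester polarity convention are assumptions chosen for illustration):

```python
# Illustrative sketch of two common line codes. Symbol values are abstract
# signal levels (+1 / -1), not physical voltages.

def nrz(bits):
    """Non-return-to-zero: one signal level per bit period."""
    return [1 if b else -1 for b in bits]

def manchester(bits):
    """Manchester code: each bit becomes a mid-bit transition, so the
    receiver can recover the clock from the signal itself.
    Convention used here (one of two in use): 1 -> high-to-low, 0 -> low-to-high."""
    out = []
    for b in bits:
        out += [1, -1] if b else [-1, 1]
    return out

bits = [1, 0, 1, 1, 0]
print(nrz(bits))         # [1, -1, 1, 1, -1]
print(manchester(bits))  # [1, -1, -1, 1, 1, -1, 1, -1, -1, 1]
```

Manchester coding doubles the signalling rate for the same bit rate, which is the usual trade-off for its self-clocking property.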
Distinction between related subjects
Digital transmission or data transmission traditionally belongs to telecommunications and electrical engineering. Basic principles of data transmission may also be covered within the computer science or computer engineering topic of data communications, which also includes computer networking applications and communication protocols, for example routing, switching and inter-process communication. Although the Transmission Control Protocol (TCP) involves transmission, TCP and other transport layer protocols are covered in computer networking but typically not in a textbook or course about data transmission.
In most textbooks, the term analog transmission only refers to the transmission of an analog message signal (without digitization) by means of an analog signal, either as a non-modulated baseband signal or as a passband signal using an analog modulation method such as AM or FM. It may also include analog-over-analog pulse modulated baseband signals such as pulse-width modulation. In a few books within the computer networking tradition, analog transmission also refers to passband transmission of bit-streams using digital modulation methods such as FSK, PSK and ASK.[1]
The theoretical aspects of data transmission are covered by information theory and coding theory.
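One concrete instance of those theoretical limits is the Shannon–Hartley theorem, which bounds the error-free bit rate of a noisy channel. The sketch below evaluates it for a hypothetical voice-grade line; the 3 kHz bandwidth and 30 dB SNR figures are illustrative assumptions, not values from the article:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity in bits per second:
    C = B * log2(1 + S/N), with S/N as a linear power ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A voice-grade telephone line: ~3 kHz of bandwidth, 30 dB SNR (ratio 1000)
capacity = shannon_capacity(3000, 1000)
print(f"{capacity:.0f} bps")  # roughly 29.9 kbps
```

This limit is one reason voiceband modems plateaued at a few tens of kilobits per second regardless of engineering effort.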
Protocol layers and sub-topics
Courses and textbooks in the field of data transmission typically deal with the following OSI model protocol layers and topics:
- Layer 1, the physical layer:
  - Channel coding, including
    - Digital modulation schemes
    - Line coding schemes
    - Forward error correction (FEC) codes
  - Bit synchronization
  - Multiplexing
  - Equalization
  - Channel models
- Layer 2, the data link layer:
  - Channel access schemes, media access control (MAC)
  - Packet mode communication and frame synchronization
  - Error detection and automatic repeat request (ARQ)
  - Flow control
- Layer 6, the presentation layer:
  - Source coding (digitization and data compression), and information theory
  - Cryptography (may occur at any layer)
It is also common to deal with the cross-layer design of those three layers.[7]
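As an illustration of the error-detection codes listed under layer 2, a minimal cyclic redundancy check (CRC) can be sketched as binary long division over GF(2). The generator polynomial and message below are arbitrary textbook-style examples, not taken from the article:

```python
def crc_remainder(data_bits, generator_bits):
    """Binary long division over GF(2). The sender appends the remainder
    as check bits; the receiver repeats the division and accepts the
    frame only if the remainder is zero."""
    n_check = len(generator_bits) - 1
    data = list(data_bits) + [0] * n_check   # make room for the check bits
    for i in range(len(data_bits)):
        if data[i]:  # XOR the generator in wherever the leading bit is 1
            for j, g in enumerate(generator_bits):
                data[i + j] ^= g
    return data[-n_check:]

gen = [1, 0, 1, 1]                # generator polynomial x^3 + x + 1
msg = [1, 1, 0, 1, 0, 0, 1, 1]    # arbitrary 8-bit message
crc = crc_remainder(msg, gen)
frame = msg + crc                 # transmitted frame: message + check bits
assert crc_remainder(frame, gen) == [0, 0, 0]  # an intact frame divides evenly
```

A receiver that finds a nonzero remainder knows the frame was corrupted and, under an ARQ scheme, requests retransmission.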
Applications and history
Data (mainly but not exclusively informational) has been sent via non-electronic means (e.g. optical, acoustic, mechanical) since the advent of communication. Analog signals have been sent electronically since the advent of the telephone. However, the first electromagnetic data transmission applications in modern times were electrical telegraphy (1809) and teletypewriters (1906), both of which use digital signals. The fundamental theoretical work on data transmission and information theory by Harry Nyquist, Ralph Hartley, Claude Shannon and others during the early 20th century was done with these applications in mind.
In the early 1960s, Paul Baran invented distributed adaptive message block switching for digital communication of voice messages using switches that were low-cost electronics.[8][9] Donald Davies invented and implemented modern data communication during 1965–7, including packet switching, high-speed routers, communication protocols, hierarchical computer networks and the essence of the end-to-end principle.[10][11][12][13] Baran's work did not include routers with software switches and communication protocols, nor the idea that users, rather than the network itself, would provide the reliability.[14][15][16] Both were seminal contributions that influenced the development of computer networks.[17][18]
Data transmission is utilized in computers via computer buses and for communication with peripheral equipment through parallel ports and serial ports such as RS-232 (1969), FireWire (1995) and USB (1996). The principles of data transmission have also been utilized in storage media for error detection and correction since 1951. The first practical method of overcoming the problem of accurately receiving digitally coded data was the Barker code, invented by Ronald Hugh Barker in 1952 and published in 1953.[19] Data transmission is likewise utilized in computer networking equipment such as modems (1940), local area network (LAN) adapters (1964), repeaters, repeater hubs, microwave links and wireless network access points (1997).
In telephone networks, digital communication is utilized for transferring many phone calls over the same copper cable or fiber cable by means of pulse-code modulation (PCM) in combination with time-division multiplexing (TDM) (1962). Telephone exchanges have become digital and software controlled, facilitating many value-added services. For example, the first AXE telephone exchange was presented in 1976. Digital communication to the end user using Integrated Services Digital Network (ISDN) services became available in the late 1980s. Since the end of the 1990s, broadband access techniques such as ADSL, Cable modems, fiber-to-the-building (FTTB) and fiber-to-the-home (FTTH) have become widespread to small offices and homes. The current tendency is to replace traditional telecommunication services with packet mode communication such as IP telephony and IPTV.
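The PCM/TDM combination described above can be sketched as simple sample interleaving. This is a toy model (real systems add framing and signaling bits), and the byte values stand in for PCM samples from three hypothetical calls:

```python
# Illustrative sketch of time-division multiplexing: one PCM sample from
# each call is placed in each repeating frame on the shared link.

def tdm_mux(channels):
    """Interleave one sample per channel into each TDM frame."""
    return [s for frame in zip(*channels) for s in frame]

def tdm_demux(stream, n_channels):
    """Recover each channel by taking every n-th sample from the stream."""
    return [stream[i::n_channels] for i in range(n_channels)]

call_a = [10, 11, 12]
call_b = [20, 21, 22]
call_c = [30, 31, 32]

link = tdm_mux([call_a, call_b, call_c])
print(link)  # [10, 20, 30, 11, 21, 31, 12, 22, 32]
assert tdm_demux(link, 3) == [call_a, call_b, call_c]
```

Because every channel gets a fixed time slot in every frame, the receiver can separate the calls purely by position, with no per-call addressing.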
Transmitting analog signals digitally allows for greater signal processing capability. The ability to process a communications signal means that errors caused by random processes can be detected and corrected. Digital signals can also be sampled instead of continuously monitored, and the multiplexing of multiple digital signals is much simpler than the multiplexing of analog signals. Because of all these advantages, because of the vast demand to transmit computer data and the ability of digital communications to do so, and because advances in wideband communication channels and solid-state electronics have allowed engineers to realize these advantages fully, digital communications have grown quickly.
The digital revolution has also resulted in many digital telecommunication applications where the principles of data transmission are applied. Examples include second-generation (1991) and later cellular telephony, video conferencing, digital TV (1998), digital radio (1999), and telemetry.
While analog transmission is the transfer of a continuously varying analog signal over an analog channel, digital communication is the transfer of discrete messages over a digital or an analog channel. The messages are represented either by a sequence of pulses by means of a line code (baseband transmission) or by a limited set of continuously varying waveforms (passband transmission), using a digital modulation method. The passband modulation and corresponding demodulation (also known as detection) are carried out by modem equipment. According to the most common definition of a digital signal, both baseband and passband signals representing bit-streams are considered digital transmission, while an alternative definition considers only the baseband signal as digital, and regards passband transmission of digital data as a form of digital-to-analog conversion.[citation needed]
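As one concrete example of passband transmission, the sketch below modulates a bit-stream with binary phase-shift keying (BPSK) and coherently demodulates it. The carrier frequency (one cycle per bit) and sampling parameters are illustrative assumptions, not from the article:

```python
import math

# Illustrative sketch: BPSK, a simple digital modulation method.
# Each bit selects the carrier phase: 0 -> 0 rad, 1 -> pi rad.

def bpsk_modulate(bits, samples_per_bit=8):
    signal = []
    for b in bits:
        phase = math.pi if b else 0.0
        for n in range(samples_per_bit):
            # one carrier cycle per bit period
            signal.append(math.cos(2 * math.pi * n / samples_per_bit + phase))
    return signal

def bpsk_demodulate(signal, samples_per_bit=8):
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        # correlate against the reference carrier (coherent detection)
        corr = sum(s * math.cos(2 * math.pi * n / samples_per_bit)
                   for n, s in enumerate(signal[i:i + samples_per_bit]))
        bits.append(0 if corr > 0 else 1)
    return bits

bits = [1, 0, 0, 1, 1]
assert bpsk_demodulate(bpsk_modulate(bits)) == bits
```

The correlation step is what a modem's detector does: it projects the received waveform onto the known carrier and reads the bit off the sign of the result.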
Data transmitted may be digital messages originating from a data source, for example, a computer or a keyboard. It may also be an analog signal, such as a phone call or a video signal, digitized into a bit-stream for example using pulse-code modulation (PCM) or more advanced source coding (analog-to-digital conversion and data compression) schemes. This source coding and decoding is carried out by codec equipment.
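The PCM digitization step can be sketched as sampling followed by linear quantization. This is a simplification (telephone codecs actually use mu-law or A-law companding rather than linear steps), and the tone frequency and 8 kHz rate are illustrative:

```python
import math

# Illustrative sketch of pulse-code modulation: sample an analog waveform
# and quantize each sample to an 8-bit code word.

def pcm_encode(signal_fn, sample_rate, duration, bits=8):
    levels = 2 ** bits
    samples = []
    for n in range(int(sample_rate * duration)):
        x = signal_fn(n / sample_rate)                       # sample, x in [-1, 1]
        q = min(int((x + 1.0) / 2.0 * levels), levels - 1)   # linear quantization
        samples.append(q)                                    # 8-bit code word
    return samples

# A 1 kHz tone sampled at 8 kHz, the standard telephone-network rate
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
codes = pcm_encode(tone, sample_rate=8000, duration=0.001)
print(codes)  # eight code words for one millisecond, one per 125 us sample
```

At 8000 samples/s and 8 bits per sample, one voice channel becomes the familiar 64 kbps bit-stream that PCM telephony multiplexes with TDM.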
Serial and parallel transmission
In telecommunications, serial transmission is the sequential transmission of the signal elements of a group representing a character or other entity of data. Digital serial transmission is the sequential sending of bits over a single wire, frequency or optical path. Because it requires less signal processing and presents fewer opportunities for error than parallel transmission, the transfer rate of each individual path may be faster. Serial transmission can be used over longer distances, and a check digit or parity bit can easily be sent along with the data.
Parallel transmission is the simultaneous transmission of related signal elements over two or more separate paths. Multiple electrical wires are used that can transmit multiple bits simultaneously, which allows for higher data transfer rates than can be achieved with serial transmission. This method is typically used internally within the computer, for example, the internal buses, and sometimes externally for such things as printers. Timing skew can be a significant issue in these systems because the wires in parallel data transmission unavoidably have slightly different properties, so some bits may arrive before others, which may corrupt the message. This issue tends to worsen with distance, making parallel data transmission less reliable for long distances.
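The difference between the two methods can be sketched for a single byte. This is an illustrative model only; wire-level electrical details and clocking are omitted:

```python
# Illustrative sketch: the same byte presented "in parallel" (all eight bits
# at once, one per wire) versus "serially" (one bit per clock tick on a
# single wire, least significant bit first, as most serial links do).

def to_parallel(byte):
    """One clock tick: all eight bits presented simultaneously."""
    return [(byte >> i) & 1 for i in range(8)]

def to_serial(byte):
    """Eight clock ticks: bits shifted out one per tick, LSB first."""
    for i in range(8):
        yield (byte >> i) & 1

byte = 0x4B  # 0b01001011
print(to_parallel(byte))      # [1, 1, 0, 1, 0, 0, 1, 0] - one tick, eight wires
print(list(to_serial(byte)))  # same bits, delivered one per tick on one wire
```

The parallel version needs eight wires whose signals must arrive aligned (the skew problem described above); the serial version trades wires for clock ticks.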
Communication channels
Some communications channel types include:
Asynchronous and synchronous data transmission
Asynchronous serial communication uses start and stop bits to signify the beginning and end of transmission.[20] This method of transmission is used when data is sent intermittently as opposed to in a solid stream.
Synchronous transmission synchronizes transmission speeds at both the receiving and sending ends using clock signals. The clock may be a separate signal or embedded in the data. A continual stream of data is then sent between the two nodes. Because there are no start and stop bits, data transfer is more efficient.
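The start/stop framing used in asynchronous transmission can be sketched for one byte in the common 8-N-1 format (eight data bits, no parity, one stop bit). This is an illustrative model of the framing only:

```python
# Illustrative sketch: framing one byte for asynchronous serial transmission
# in the 8-N-1 format. The line idles high; the start bit's high-to-low edge
# tells the receiver when to begin sampling.

def frame_byte(byte):
    start = [0]                                  # start bit (logic low)
    data = [(byte >> i) & 1 for i in range(8)]   # data bits, LSB first
    stop = [1]                                   # stop bit (logic high)
    return start + data + stop

def deframe(bits):
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))

frame = frame_byte(ord('A'))  # 0x41
print(frame)                  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert deframe(frame) == ord('A')
```

Note the overhead: ten line bits carry eight data bits, so 20% of the channel is spent on framing, which is the efficiency cost synchronous transmission avoids.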
References
- A. P. Clark, Principles of Digital Data Transmission, Wiley, 1983.
- David R. Smith, Digital Transmission Systems, Kluwer International Publishers, 2003. ISBN 1-4020-7587-1.
- Sergio Benedetto, Ezio Biglieri, Principles of Digital Transmission: With Wireless Applications, Springer, 2008. ISBN 978-0-306-45753-1.
- Simon Haykin, Digital Communications, John Wiley & Sons, 1988. ISBN 978-0-471-62947-4.
- John Proakis, Digital Communications, 4th edition, McGraw-Hill, 2000. ISBN 0-07-232111-3.
- "X.225: Information technology – Open Systems Interconnection – Connection-oriented Session protocol: Protocol specification". Archived from the original on 1 February 2021. Retrieved 10 March 2023.
- F. Foukalas et al., "Cross-layer design proposals for wireless mobile networks: a survey and taxonomy".
- Baran, Paul (2002). "The beginnings of packet switching: some underlying concepts" (PDF). IEEE Communications Magazine. 40 (7): 42–48. doi:10.1109/MCOM.2002.1018006. Archived (PDF) from the original on 2022-10-10. "Essentially all the work was defined by 1961, and fleshed out and put into formal written form in 1962. The idea of hot potato routing dates from late 1960."
- "Paul Baran and the Origins of the Internet". RAND Corporation. Retrieved 2020-02-15.
- Yates, David M. (1997). Turing's Legacy: A History of Computing at the National Physical Laboratory 1945–1995. National Museum of Science and Industry. pp. 132–4. ISBN 978-0-901805-94-2. "Davies's invention of packet switching and design of computer communication networks ... were a cornerstone of the development which led to the Internet."
- Naughton, John (2000) [1999]. A Brief History of the Future. Phoenix. p. 292. ISBN 9780753810934.
- Campbell-Kelly, Martin (1987). "Data Communications at the National Physical Laboratory (1965–1975)". Annals of the History of Computing. 9 (3/4): 221–247. doi:10.1109/MAHC.1987.10023. "the first occurrence in print of the term protocol in a data communications context ... the next hardware tasks were the detailed design of the interface between the terminal devices and the switching computer, and the arrangements to secure reliable transmission of packets of data over the high-speed lines"
- Davies, Donald; Bartlett, Keith; Scantlebury, Roger; Wilkinson, Peter (October 1967). A Digital Communication Network for Computers Giving Rapid Response at Remote Terminals (PDF). ACM Symposium on Operating Systems Principles. Archived (PDF) from the original on 2022-10-10. Retrieved 2020-09-15. "all users of the network will provide themselves with some kind of error control"
- Kleinrock, L. (1978). "Principles and lessons in packet communications". Proceedings of the IEEE. 66 (11): 1320–1329. doi:10.1109/PROC.1978.11143. "Paul Baran ... focused on the routing procedures and on the survivability of distributed communication systems in a hostile environment, but did not concentrate on the need for resource sharing in its form as we now understand it; indeed, the concept of a software switch was not present in his work."
- Pelkey, James L. "6.1 The Communications Subnet: BBN 1969". Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968–1988. "As Kahn recalls: ... Paul Baran's contributions ... I also think Paul was motivated almost entirely by voice considerations. If you look at what he wrote, he was talking about switches that were low-cost electronics. The idea of putting powerful computers in these locations hadn't quite occurred to him as being cost effective. So the idea of computer switches was missing. The whole notion of protocols didn't exist at that time. And the idea of computer-to-computer communications was really a secondary concern."
- Waldrop, M. Mitchell (2018). The Dream Machine. Stripe Press. p. 286. ISBN 978-1-953953-36-0. "Baran had put more emphasis on digital voice communications than on computer communications."
- "The real story of how the Internet became so vulnerable". Washington Post. Archived from the original on 2015-05-30. Retrieved 2020-02-18. "Historians credit seminal insights to Welsh scientist Donald W. Davies and American engineer Paul Baran."
- A History of the ARPANET: The First Decade (PDF) (Report). Bolt, Beranek & Newman Inc. 1 April 1981. pp. 13, 53. Archived from the original on 1 December 2012. "Aside from the technical problems of interconnecting computers with communications circuits, the notion of computer networks had been considered in a number of places from a theoretical point of view. Of particular note was work done by Paul Baran and others at the Rand Corporation in a study 'On Distributed Communications' in the early 1960's. Also of note was work done by Donald Davies and others at the National Physical Laboratory in England in the mid-1960's. ... Another early major network development which affected development of the ARPANET was undertaken at the National Physical Laboratory in Middlesex, England, under the leadership of D. W. Davies."
- Barker, R. H. (1953). "Group Synchronisation of Binary Digital Systems". Communication Theory. Butterworth. pp. 273–287.
- "What is Asynchronous Transmission? Definition from Techopedia". Techopedia.com. Retrieved 2017-12-08.
Fundamentals and Distinctions
Definition and Scope
Data communication is the process of exchanging digital data between two or more computing devices through a transmission medium, such as wired or wireless channels, enabling the transfer of information in the form of binary signals.[4] This exchange typically involves a sender initiating the transmission, a receiver accepting the data, the physical or virtual medium carrying the signals, and the message itself, which represents the raw digital content being conveyed.[5] At its core, data communication relies on standardized protocols to ensure compatibility and orderly exchange between heterogeneous devices.[4]

The fundamental components of a data communication system include the source, which originates the data; the transmitter, which encodes the data into a suitable format for transmission; the transmission medium, which propagates the signal; the receiver, which decodes the incoming signal; and the destination, where the data is utilized by the end user or device.[6] Protocols serve as the essential ruleset governing the formatting, timing, and error-handling aspects of this exchange, ensuring interoperability across systems.[4]

Key principles underpinning effective data communication emphasize reliability through accurate and complete delivery to the intended recipient, efficiency via timely transmission to minimize delays, and security to protect against unauthorized access or tampering during transit.[4] A critical distinction exists between data (raw, unprocessed bits or symbols lacking inherent meaning) and information, which emerges when data is contextualized, processed, and interpreted to convey purposeful content.[7]

The importance of data communication lies in its foundational role within modern technological ecosystems, powering the interconnectivity of computer networks, the global Internet, Internet of Things (IoT) devices, and telecommunications infrastructures that facilitate seamless information sharing and resource collaboration.[1] Without robust data communication, applications ranging from real-time video streaming to remote device control in IoT would be infeasible, as it underpins the efficient dissemination of digital content across diverse scales, from local area networks to wide-area systems.[1] Data rates, a measure of transmission speed expressed in bits per second (bps), quantify the capacity and performance of these systems, with modern networks achieving rates from megabits to gigabits per second to support high-volume exchanges.[4]

Distinction from Related Fields
Data communication, while foundational to many digital systems, is distinct from computer networking in its scope and focus. Data communication primarily concerns the exchange of digital bits between two or more devices over a transmission medium, emphasizing the physical and data link layers for reliable transfer without delving into broader system architectures.[8] In contrast, computer networking encompasses the design, implementation, and management of interconnected systems, including network topologies, routing algorithms, and protocols for multi-device connectivity and resource sharing across larger scales.[9] This distinction highlights that data communication serves as a building block within networking, while networking extends to higher-level abstractions such as internetworking and scalability.

Telecommunications, a broader field, often integrates data communication but includes non-digital modalities and legacy infrastructures that data communication largely excludes. Data communication is inherently digital and typically packet-oriented, facilitating the transfer of discrete data units such as files or messages between computing devices. Telecommunications, however, traditionally encompasses analog signals for voice, video, and broadcast services, frequently relying on circuit-switched networks where dedicated paths are established for the duration of a session, unlike the dynamic, on-demand nature of data communication.[10] This separation is evident in applications: data communication powers email and file transfers, while telecommunications supports telephony and television distribution.[11]

In relation to information theory, data communication addresses the engineering challenges of actual data transmission, applying theoretical principles to real-world systems rather than deriving fundamental limits. Information theory, pioneered by Claude Shannon, mathematically models the maximum reliable transmission rate over noisy channels, as captured in the channel capacity theorem, but remains abstract and focused on entropy, coding efficiency, and noise bounds without specifying implementation details.[12] Data communication, by comparison, implements practical techniques like error detection and modulation to achieve viable throughput in physical media, bridging theory to deployment in devices and protocols.[13]

Data communication also differs fundamentally from data storage in purpose and mechanism, prioritizing transient movement over persistent retention. Transmission in data communication involves real-time propagation of data across media like cables or wireless links, subject to latency, bandwidth constraints, and potential loss during transit.[8] Data storage, conversely, entails recording information on media such as hard drives or cloud repositories for indefinite access, emphasizing durability, retrieval speed, and capacity without the immediacy of live exchange.[14] For instance, sending an email leverages data communication for delivery, while saving its content to a server relies on storage paradigms.[15]

A common misconception is that data communication equates to internet access or web browsing, overlooking its role as an underlying enabler rather than the end-user application. In reality, data communication provides the bit-level transport mechanisms that make internet services possible, but it operates independently in local or point-to-point scenarios without requiring global connectivity.[16] Another error is assuming data communication inherently guarantees security or error-free delivery, whereas it focuses on transfer efficiency, necessitating additional layers for protection and reliability.

Transmission Methods
Serial Transmission
Serial transmission involves the sequential sending of data bits, one at a time, over a single communication channel or wire. This process requires hardware, such as a universal asynchronous receiver-transmitter (UART) or similar converter, to transform parallel data from internal device buses into a serial stream for transmission, and vice versa upon reception. Data is typically framed into bytes or packets, with each bit represented as a voltage level (e.g., high for 1, low for 0) propagated along the medium. Unlike parallel transmission, which sends multiple bits simultaneously, serial transmission uses fewer conductors, making it suitable for extending signals over longer distances without significant skew issues.[17][18]

There are two primary types of serial transmission: asynchronous and synchronous. In asynchronous serial transmission, data is sent without a dedicated clock signal, relying instead on framing bits to synchronize the receiver. Each byte begins with a start bit (typically logic 0) to signal the onset, followed by 7 or 8 data bits (transmitted least significant bit first), an optional parity bit for error checking, and one or more stop bits (logic 1) to mark the end, allowing the receiver to sample the data at an agreed baud rate. This method accommodates irregular data flows with potential gaps between bytes. The RS-232 standard exemplifies asynchronous serial transmission, defining voltage levels (e.g., +3 V to +15 V for logic 0, -3 V to -15 V for logic 1) and supporting data rates up to 20 kbps over distances of about 50 feet at lower speeds.[17][19][18]

Synchronous serial transmission, in contrast, delivers a continuous stream of bits without start or stop bits per byte, using an external clock signal shared between sender and receiver to maintain precise timing. Data is organized into frames, often with header sequences or flags to delineate boundaries, enabling higher efficiency for steady, high-volume transfers. This type requires constant synchronization to avoid bit slippage, making it ideal for applications with predictable data rates.[17]

Serial transmission offers several advantages, particularly in cost and simplicity over extended ranges. It requires minimal wiring (often just a single pair of wires), reducing material costs and susceptibility to electromagnetic interference compared to multi-wire setups, while robust signaling (e.g., differential signaling in some implementations) supports reliable operation over hundreds of meters. However, it has disadvantages, including inherently lower throughput for bandwidth-intensive tasks due to sequential bit delivery, and potential timing challenges in asynchronous modes from baud rate mismatches. Synchronous variants demand ongoing clock alignment, adding complexity to hardware.[17][19][18]

Common use cases include legacy interfaces like RS-232 for connecting computers to peripherals such as modems, printers, or industrial controllers, where point-to-point links suffice at low to moderate speeds. The Universal Serial Bus (USB) employs serial transmission at its physical layer, using differential signaling over twisted pairs for plug-and-play device connectivity, supporting speeds from 1.5 Mbps (USB 1.0) to 480 Mbps (USB 2.0) and beyond, for peripherals like keyboards, drives, and cameras. In Ethernet networks, the physical layer (PHY) per IEEE 802.3 standards transmits serial bit streams over twisted-pair or fiber media, enabling local area networking at rates from 10 Mbps to 400 Gbps through serialized data encoding.[18][20][21]

Error handling in serial transmission commonly incorporates parity bits for basic fault detection. A parity bit is appended to the data frame, set so that the total number of 1s is either even (even parity) or odd (odd parity); the receiver recalculates this and flags a mismatch if an odd number of bits (typically a single bit) have flipped due to noise. While unable to correct errors or reliably detect multi-bit faults, parity provides a low-overhead check, often combined with framing validation in asynchronous protocols like RS-232.[19][22][18]

Parallel Transmission
Parallel transmission is a method in data communication where multiple bits of data are sent simultaneously across separate physical channels or wires, allowing for the concurrent transfer of an entire data unit, such as an 8-bit byte, using one wire per bit. This approach contrasts with sequential methods by enabling all bits to propagate in parallel, typically requiring a dedicated set of lines equal to the bit width of the data being transmitted. To ensure proper reception, the signals on these lines must be precisely timed, often by means of a shared clock line that coordinates the sender and receiver.[23][24]

One key advantage of parallel transmission is its ability to achieve significantly higher data rates over short distances, as throughput scales directly with the number of parallel channels; for example, an 8-bit parallel interface can theoretically transfer data eight times faster than a single-bit line operating at the same clock frequency. This makes it ideal for applications requiring rapid internal data movement, such as within computing hardware, where minimal propagation delay allows for efficient high-bandwidth operations without the overhead of serialization. However, this speed comes at the cost of increased hardware complexity, as more wires necessitate additional connectors and cabling.[25][23]

Despite these benefits, parallel transmission faces notable disadvantages, particularly related to signal integrity over distance. Skew arises from slight variations in wire lengths, materials, or electromagnetic propagation speeds, causing bits to arrive at the receiver out of alignment, which can lead to data errors if not compensated by advanced timing mechanisms. Crosstalk, the electromagnetic interference between adjacent wires, further degrades the signal, amplifying noise and reducing reliability as cable length increases. These issues, combined with higher susceptibility to attenuation and the economic burden of multi-wire setups, render parallel transmission unsuitable for long-distance applications, typically limiting it to spans under a few meters.[24][25][26]

Synchronization in parallel transmission poses significant challenges, as all bits must be aligned at the receiver to reconstruct the original data accurately; without a reliable clock signal or strobe to sample the bits simultaneously, desynchronization can corrupt entire bytes. This often requires additional control lines for handshaking or timing, increasing the overall pin count and design complexity of interfaces. In practice, these synchronization demands have contributed to the decline of parallel methods in favor of serial alternatives that avoid multi-line timing issues.[24][27]

Historically, parallel transmission found prominent use in peripheral connections like the Centronics parallel printer interface, developed in the 1970s and standardized under IEEE 1284, which enabled data transfer at rates up to 150 KB/s over short cables for efficient printing. Within computers, it powered internal buses such as the Peripheral Component Interconnect (PCI), a synchronous parallel bus operating at 32- or 64-bit widths to facilitate high-speed data exchange between the CPU and expansion cards on the motherboard. Although effective for these short-range, high-throughput needs, parallel transmission has largely been supplanted in contemporary systems by serial technologies like USB and PCIe, which offer better scalability for modern speeds while circumventing skew and crosstalk limitations.[28][29]

Synchronous Transmission
Synchronous transmission involves the transfer of data as a continuous stream of bits between a sender and receiver that operate under a shared timing mechanism, ensuring precise coordination without individual byte delimiters like start or stop bits.[30] This method relies on a common clock signal to dictate the rate at which bits are sent and received, allowing for efficient handling of large data volumes in real-time applications.[31] In terms of mechanics, synchronous transmission sends data as an unbroken bit stream, where the absence of framing bits per character minimizes overhead and maximizes throughput.[30] The clock signal can be provided via a separate line from the transmitter to the receiver (source synchronous), a shared system clock, or embedded within the data stream itself using techniques like Manchester encoding, which combines clock and data by representing each bit with a transition in the signal.[32] To delineate data blocks within this stream, protocols employ flags or headers; for instance, in bit-oriented protocols, specific bit patterns such as the flag sequence 01111110 signal the start and end of frames.[33] Synchronization is achieved by aligning the sender's and receiver's clocks to the same frequency, enabling the receiver to sample the data stream at exact intervals, typically on clock edges.[30] This shared timing reduces the likelihood of bit misalignment, with the receiver counting bits precisely against the clock to reconstruct the data.[31] In network contexts, such as SONET, synchronization extends across multiple nodes via a master clock, ensuring all elements maintain plesiochronous or fully synchronous operation for multiplexing streams.[34] Key advantages include higher efficiency due to the lack of per-character overhead, making it ideal for high-speed links where continuous transmission without pauses between bytes optimizes bandwidth usage.[31] It supports real-time communication and higher data rates, as seen in double 
data rate schemes that transfer bits on both rising and falling clock edges, and it minimizes timing errors in synchronized environments.[30] However, synchronous transmission demands precise clock synchronization, as any drift or loss of alignment can lead to bit errors that propagate until resynchronization occurs, potentially corrupting subsequent data.[31] Implementation is more complex and costly, requiring accurate clock distribution and receiver capabilities to handle timing violations without double-sampling or missing bits.[30] Common use cases encompass high-speed networks like SONET/SDH, where synchronous framing and clocking enable multiplexing of digital streams at rates up to 9.953 Gbps (OC-192), providing robust support for long-distance transmission with low error rates.[34] Similarly, the HDLC protocol utilizes synchronous transmission over serial links for reliable frame delivery, incorporating flags for block demarcation, error detection via CRC, and flow control to facilitate full-duplex operations in point-to-point or multipoint setups.[33]
Asynchronous Transmission
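The start/stop-bit framing described in this section can be sketched in a few lines of Python. The `uart_frame` helper below is hypothetical, assuming 8 data bits sent LSB first, optional even parity, and one stop bit:

```python
def uart_frame(byte, parity=None):
    """Bits for one asynchronous character: start bit (0), 8 data bits
    (LSB first), optional parity bit, stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]   # LSB-first data bits
    bits = [0] + data                            # start bit wakes the receiver
    if parity == "even":
        bits.append(sum(data) % 2)               # make the count of 1s even
    bits.append(1)                               # stop bit returns the line to idle
    return bits

frame = uart_frame(0x41)                         # ASCII 'A'
print(frame)                                     # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(f"overhead: {(len(frame) - 8) / len(frame):.0%}")  # 20% for 8-N-1 framing
```

The 20% figure for 8-N-1 framing (one start bit and one stop bit per 8 data bits) matches the 10-20% overhead range cited for asynchronous transmission.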
Asynchronous transmission is a method of serial data communication where characters are sent independently in irregular bursts without a shared clock between the sender and receiver. Each character, typically consisting of 5 to 8 data bits, is framed by a start bit at the beginning and one or more stop bits at the end to delineate the boundaries of the data unit.[35] The start bit, represented as a logic low (0), signals the receiver that a new character is incoming, while the stop bit(s), represented as logic high (1), indicate the end of the character and return the line to its idle state.[35] An optional parity bit may be included within the frame for basic error detection.[35] Upon detecting the falling edge of the start bit, the receiver synchronizes its internal clock locally to sample the data bits at the center of each bit period, ensuring accurate interpretation despite the absence of a continuous clock signal.[35] Timing is governed by a pre-agreed baud rate, which defines the bit duration (e.g., at 9600 baud, each bit lasts approximately 104 microseconds), with the sender and receiver clocks operating independently but required to stay within about 5% tolerance to avoid sampling errors.[35] This self-clocking per character allows for gaps between transmissions, accommodating bursty or intermittent data flows without needing precise global synchronization.[35] The primary advantages of asynchronous transmission lie in its simplicity and low implementation cost, as it eliminates the need for a dedicated clock line and complex synchronization hardware, making it ideal for low-speed, bursty data scenarios where timing variations can be tolerated up to the clock tolerance limit.[35] However, the inclusion of start and stop bits introduces overhead—typically 10-20% of the frame—reducing the effective data efficiency, and the method is generally limited to lower speeds (below 64 kbps) due to accumulating clock drift over longer transmissions.[36] Common use 
cases include RS-232 serial ports for connecting computers to peripherals over short distances, early modems for asynchronous dial-up networking, and keyboard interfaces where sporadic keypress data is transmitted to host systems.[35] This approach serves as a fundamental mode within serial transmission, particularly suited for point-to-point links requiring minimal setup.[37]
Communication Channels
Types of Channels
Communication channels in data communication are broadly classified into physical and logical types, where physical channels encompass the tangible or intangible media for signal propagation, and logical channels specify the directional flow of data over those media. Physical channels are further divided into guided and unguided categories based on whether they employ a physical conduit. Guided media, also known as wired media, constrain electromagnetic signals to follow a specific path, offering controlled transmission with characteristics influenced by the medium's material properties.[38] Unguided media, or wireless media, propagate signals through free space without physical guidance, relying on electromagnetic waves and remaining susceptible to environmental factors.[38] Among guided media, twisted pair cable consists of two insulated copper wires twisted together to minimize electromagnetic interference and crosstalk, providing a cost-effective option for short-distance applications.[38] It exhibits low attenuation of approximately 0.2 dB/km at 1 kHz but limited bandwidth (250 MHz in Cat 6, up to 500 MHz in Cat 6a), making it suitable for voice and moderate data rates.[38] Coaxial cable features a central copper conductor surrounded by an insulating layer, metallic shielding, and an outer jacket, enabling higher bandwidths up to 500 MHz with attenuation around 7 dB/km at 10 MHz, which supports applications like cable television.[38] Fiber optic cable transmits data via light pulses through a glass or plastic core with cladding, achieving very low attenuation of 0.2-0.5 dB/km and immense bandwidth in the terahertz range, far surpassing copper-based media like twisted pair due to reduced signal loss over distance.[38] Unguided media include radio waves, which operate in various frequency bands for omnidirectional broadcast over ranges up to thousands of kilometers, as seen in AM and FM radio.[38] Microwave transmission uses higher frequencies (2-45 GHz) for
line-of-sight point-to-point links, with ranges of 1.6-70 km depending on the band, offering high data rates but requiring clear paths.[38] Satellite communication employs unguided microwave signals relayed via orbiting satellites, enabling global coverage for applications like broadcasting and remote data links.[39] Logical channels define the communication directionality overlaid on physical media, independent of the underlying transmission method. Simplex mode allows data flow in one direction only, utilizing a single channel for unidirectional transmission, such as from a keyboard to a computer.[40] Half-duplex mode supports bidirectional communication but alternates directions, using one channel where only one party transmits at a time, exemplified by walkie-talkies.[40] Full-duplex mode enables simultaneous bidirectional transmission, typically requiring two separate channels or advanced techniques, as in modern telephone systems or Ethernet networks with dedicated transmit and receive paths.[40] To efficiently share physical channels among multiple users or signals, multiplexing techniques divide the channel capacity into logical sub-channels. Time-division multiplexing (TDM) allocates discrete time slots to each signal within a shared frequency band, allowing sequential transmission for digital systems like telephony.[41] Frequency-division multiplexing (FDM) partitions the channel's bandwidth into non-overlapping frequency bands, each assigned to a signal, with guard bands to prevent interference, commonly used in analog radio broadcasting.[41] Representative examples of these channels include twisted pair cabling in traditional telephone lines for voice communication and in Ethernet local area networks (LANs) for data connectivity, where four pairs of wires support speeds up to 1 Gbps in gigabit Ethernet.[42]
Channel Characteristics and Performance
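Several of the performance quantities covered in this section can be illustrated numerically. The sketch below uses illustrative figures (a 3.1 kHz voice channel, a 36,000 km satellite hop) rather than values from the cited sources:

```python
import math

def nyquist_max_rate(bandwidth_hz, levels):
    # Nyquist, noiseless channel: C = 2 * B * log2(M) bits per second
    return 2 * bandwidth_hz * math.log2(levels)

def attenuation_db(p_in, p_out):
    # power loss in decibels: 10 * log10(P_in / P_out)
    return 10 * math.log10(p_in / p_out)

def propagation_delay(distance_m, speed_m_per_s):
    # t = d / v
    return distance_m / speed_m_per_s

print(nyquist_max_rate(3100, 2))        # 6200 bps over a ~3.1 kHz voice channel
print(attenuation_db(10, 5))            # ~3.01 dB: half the power is lost
# Geostationary satellite ~36,000 km up: four hops per round trip
print(4 * propagation_delay(36_000_000, 3e8))   # ~0.48 s round-trip delay
```

The last figure shows why satellite round trips approach the half-second delays mentioned below: the signal must travel up and down twice per round trip, even at the speed of light.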
Channel characteristics refer to the inherent properties of a communication medium that determine its ability to transmit data reliably and efficiently, including bandwidth, noise levels, signal degradation, and effective data rates. These properties directly influence the quality and speed of data transmission, with performance metrics quantifying how well a channel meets application requirements. For instance, in twisted-pair copper cables used for Ethernet, characteristics like limited bandwidth and susceptibility to noise constrain achievable data rates to around 100 Mbps over 100 meters without repeaters.[43] Bandwidth is the range of frequencies a channel can support, measured in hertz (Hz), and it fundamentally limits the data rate. According to the Nyquist theorem for noiseless channels, the maximum signaling rate is twice the bandwidth, and with multiple signal levels, the maximum data rate is C = 2B log2 M bits per second, where B is the bandwidth and M is the number of discrete signal levels. This relation establishes the theoretical upper bound for binary signaling (M = 2) at 2B symbols per second, enabling higher rates through multilevel encoding, as demonstrated in early telegraph systems.[44] Noise and distortion impair signal integrity, leading to errors in received data. Common types include thermal noise, arising from random electron motion in conductors and modeled as additive white Gaussian noise, and crosstalk, where signals from adjacent channels interfere. The signal-to-noise ratio (SNR), defined as the ratio of signal power to noise power (often in decibels), critically affects error rates; higher SNR reduces the probability of bit misinterpretation by improving signal distinguishability.
For example, in digital systems, an SNR below 10 dB can increase error likelihood significantly, necessitating amplification or error detection.[45] Attenuation describes the progressive loss of signal strength over distance, typically exponential and frequency-dependent, expressed in decibels (dB) as 10 log10(P_in / P_out), where P denotes power. In wired channels like coaxial cable, attenuation rises with frequency, limiting usable bandwidth; wireless channels experience path loss proportional to distance raised to an exponent n (typically 2–5), as in free-space propagation where received power falls off as 1/d². Propagation delay is the time for a signal to traverse the channel, calculated as t = d/v, with d the distance and v the propagation speed (near light speed in fiber, slower in copper). This delay impacts real-time applications, such as in satellite links where round-trip delays exceed 500 ms.[43] Throughput represents the effective data rate after accounting for protocol overhead, retransmissions, and errors, always less than the channel's bandwidth capacity. For instance, while a 1 Gbps Ethernet link has a bandwidth of 1 Gbps, throughput might drop to 800 Mbps due to protocol overhead (preamble, headers, and inter-frame gaps) and contention. The bit error rate (BER), the ratio of erroneous bits to total bits transmitted (e.g., 10⁻⁹ or lower for reliable links), serves as a key performance metric, correlating inversely with SNR and indicating channel reliability. Low BER ensures minimal retransmissions, preserving throughput in noisy environments like wireless LANs.[46][45] To transmit digital data over analog channels, modulation techniques alter carrier wave parameters: amplitude modulation (AM) varies signal strength to encode bits, as in amplitude-shift keying (ASK); frequency modulation (FM) shifts the carrier frequency, used in frequency-shift keying (FSK) for robust short-range links; and phase modulation (PM) changes the phase angle, enabling phase-shift keying (PSK) variants like binary PSK for efficient spectrum use.
These methods map binary data to analog variations, with combined schemes like quadrature amplitude modulation (QAM) achieving higher rates by jointly modulating amplitude and phase.[47]
Protocol Layers
OSI Model Layers
The Open Systems Interconnection (OSI) model is a conceptual framework that standardizes the functions of a telecommunication or computing system into seven distinct layers, enabling modular design and interoperability across diverse network technologies. Developed by the International Organization for Standardization (ISO) and published as ISO/IEC 7498-1 in 1984 (with a revision in 1994), the model separates the complexities of data communication into hierarchical levels, where each layer provides services to the layer above and relies on the layer below for transmission. This layered approach ensures that changes in one layer do not affect others, promoting flexibility in protocol implementation.[48][49]
Layer 1: Physical Layer
The Physical layer is the foundational layer responsible for the transmission and reception of unstructured bit streams over a physical medium, such as cables, wireless signals, or optical fibers. It defines the electrical, mechanical, functional, and procedural characteristics required to establish, maintain, and terminate a physical connection, including specifications for voltage levels, bit rates, and connector types. For instance, the RS-232 standard specifies serial communication interfaces for short-distance data transfer between devices like computers and modems, using defined voltage levels (e.g., +3 to +15 V for logic 0 and -3 to -15 V for logic 1) to ensure reliable bit-level signaling. Similarly, the Physical layer in Ethernet, governed by IEEE 802.3, handles the conversion of digital data into electrical or optical signals for transmission over twisted-pair or fiber-optic media, supporting speeds up to 400 Gbps in modern implementations. This layer does not address error correction or addressing, focusing solely on raw bit delivery.[49][18]
Layer 2: Data Link Layer
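The CRC-based integrity check described in this section can be illustrated with Python's standard zlib CRC-32. This is a simplified sketch: real Ethernet computes the FCS over the header fields as well and has its own bit-ordering rules.

```python
import zlib

def add_fcs(payload: bytes) -> bytes:
    """Append a CRC-32 frame check sequence to a payload (simplified)."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def check_fcs(frame: bytes) -> bool:
    """Recompute the CRC at the receiver and compare with the trailer."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "little") == fcs

frame = add_fcs(b"hello, link layer")
print(check_fcs(frame))                             # True
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]    # flip one bit in transit
print(check_fcs(corrupted))                         # False
```

As the text notes, the Data Link layer detects the corruption (the recomputed CRC no longer matches) but does not itself correct it; recovery is left to higher layers.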
The Data Link layer provides node-to-node data transfer across a physical link, organizing raw bits from the Physical layer into manageable data units called frames and ensuring error-free delivery between directly connected devices. It performs framing by adding synchronization bits and delimiters, error detection using techniques like Cyclic Redundancy Check (CRC), and flow control to prevent overwhelming the receiver. The layer is divided into two sublayers: the Media Access Control (MAC) sublayer, which manages access to the shared physical medium and uses MAC addresses for device identification (as defined in IEEE 802 standards), and the Logical Link Control (LLC) sublayer, which provides multiplexing and flow/error control interfaces to the upper layers. For example, Ethernet frames at this layer include a 48-bit MAC address for source and destination, a CRC field for integrity verification, and support half-duplex or full-duplex operations to avoid collisions on local networks. This layer detects but does not correct errors, passing responsibility for retransmission to higher layers if needed.[49]
Layer 3: Network Layer
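The forwarding decision described in this section (a router matching a destination address against its table and preferring the most specific route) can be sketched with Python's standard ipaddress module; the table entries and next-hop names below are hypothetical:

```python
import ipaddress

# A toy forwarding table: prefix -> next hop. A real router selects the
# longest (most specific) matching prefix for each destination address.
table = {
    ipaddress.ip_network("10.0.0.0/8"):  "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gw",
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in table if addr in net]
    return table[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.1.2.3"))    # router-B  (the /16 beats the /8)
print(next_hop("10.9.9.9"))    # router-A
print(next_hop("192.0.2.1"))   # default-gw
```

The default route (0.0.0.0/0) matches every address, which is why any destination not covered by a more specific prefix falls through to the gateway.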
The Network layer facilitates the transfer of variable-length data sequences (packets) from a source host to a destination host across one or more networks, handling internetworking through routing and logical addressing. It determines optimal paths for packet forwarding using routing algorithms and protocols, manages congestion, and performs fragmentation/reassembly if packets exceed network limits. Logical addressing, such as IP addresses in internet protocols, enables end-to-end identification independent of physical locations, allowing packets to traverse routers that connect disparate networks. For instance, the layer supports packet switching where routers examine the destination address in the packet header to forward traffic, ensuring scalability in large-scale environments like wide-area networks. Unlike the Data Link layer's focus on local links, this layer provides global addressing and path determination for reliable inter-network communication.[49]
Layer 4: Transport Layer
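The checksum carried in TCP and UDP headers, mentioned in this section, is the Internet checksum of RFC 1071: a one's-complement sum of 16-bit words. A minimal sketch (omitting the pseudo-header that the real protocols also cover):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                 # pad to a whole number of 16-bit words
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                  # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF              # one's complement of the folded sum

# The worked example from RFC 1071:
segment = b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7"
print(hex(internet_checksum(segment)))  # 0x220d
```

The receiver runs the same sum over the segment including the checksum field; a result of all ones indicates the data arrived intact.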
The Transport layer ensures end-to-end delivery of data between hosts, providing reliable, connection-oriented or connectionless services while segmenting upper-layer data into smaller units for transmission. It handles error recovery, flow control, and multiplexing to distinguish between multiple applications on the same host, using port numbers for this purpose. Connection-oriented protocols like TCP establish virtual circuits, sequence segments, acknowledge receipt, and retransmit lost data to guarantee delivery and order, making it suitable for applications requiring reliability such as file transfers. In contrast, connectionless protocols like UDP offer faster, best-effort delivery without acknowledgments or retransmissions, ideal for real-time applications like video streaming where occasional loss is tolerable. Segmentation involves breaking data into transport protocol data units (segments or datagrams), with headers including source/destination ports and checksums for integrity. This layer abstracts the network's unreliability, providing process-to-process communication.[49]
Layers 5-7: Session, Presentation, and Application Layers
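The character-encoding translation attributed to the Presentation layer in this section can be seen directly in Python, where the same text has different byte representations on the wire depending on the encoding chosen:

```python
# One string, two wire representations: Latin-1 vs UTF-8.
text = "café"
latin1 = text.encode("latin-1")   # b'caf\xe9'     ('é' is the single byte 0xE9)
utf8 = text.encode("utf-8")       # b'caf\xc3\xa9' ('é' becomes two bytes)
print(latin1, utf8)

# Decoding with the matching encoding recovers the same characters.
assert latin1.decode("latin-1") == utf8.decode("utf-8") == text
```

A Presentation-layer translator performs exactly this kind of mapping, so that endpoints using different local representations still exchange the same logical characters.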
The Session layer (Layer 5) manages communication sessions between applications, establishing, maintaining, and terminating connections while handling dialog control, synchronization, and recovery from disruptions, such as resuming interrupted transfers. It provides services like checkpointing to allow session resumption after failures. The Presentation layer (Layer 6) translates data between the application layer and the network format, ensuring syntax compatibility through encryption, compression, and data formatting; for example, it converts between character encodings like ASCII (ISO 646) and Unicode (ISO/IEC 10646) to handle diverse data representations such as text, images, or multimedia. The Application layer (Layer 7), the highest level, interfaces directly with end-user applications, providing network services like file access or email; protocols such as HTTP enable web browsing by defining request-response mechanisms for resource retrieval over the network. These upper layers focus on user-facing functionality, with the Presentation layer acting as a translator and the Session layer as a coordinator, while the Application layer supports specific protocols for tasks like remote login or directory services.[49] In the OSI model, data encapsulation occurs as information traverses the layers from top to bottom, where each layer adds a header (and sometimes a trailer) to the data unit from the layer above, forming protocol data units (PDUs): application data becomes a segment at the Transport layer, a packet at the Network layer, a frame at the Data Link layer, and bits at the Physical layer. Upon reception, the process reverses, with headers stripped layer by layer to reconstruct the original data. This encapsulation mechanism standardizes data structuring, with PDUs ensuring proper handling at each level for efficient, error-managed communication.[49][48]
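The layer-by-layer encapsulation described above can be sketched with toy, fixed-format headers; the header strings below are purely illustrative (real PDUs use binary fields), but the nesting and the reverse stripping on reception follow the OSI pattern:

```python
def encapsulate(app_data: bytes) -> bytes:
    """Each layer prepends its own (toy) header to the PDU from above."""
    segment = b"TCP|" + app_data           # Transport layer: segment
    packet = b"IP|" + segment              # Network layer: packet
    frame = b"ETH|" + packet + b"|FCS"     # Data Link layer: header + trailer
    return frame

def decapsulate(frame: bytes) -> bytes:
    """The receiver strips headers layer by layer, in reverse order."""
    packet = frame.removeprefix(b"ETH|").removesuffix(b"|FCS")
    segment = packet.removeprefix(b"IP|")
    return segment.removeprefix(b"TCP|")

wire = encapsulate(b"GET / HTTP/1.1")
print(wire)                 # b'ETH|IP|TCP|GET / HTTP/1.1|FCS'
print(decapsulate(wire))    # b'GET / HTTP/1.1'
```

Note how the Data Link layer is the only one that adds a trailer (the FCS) as well as a header, matching the frame structure described for Layer 2.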