Telecommunications

Telecommunication, often used in its plural form or abbreviated as telecom, is the transmission of information over a distance using electrical or electronic means, typically through cables, radio waves, or other communication technologies. These means of transmission may be divided into communication channels for multiplexing, allowing for a single medium to transmit several concurrent communication sessions. Long-distance technologies invented during the 20th and 21st centuries generally use electric power, and include the electrical telegraph, telephone, television, and radio.
Early telecommunication networks used metal wires as the medium for transmitting signals. These networks were used for telegraphy and telephony for many decades. In the first decade of the 20th century, a revolution in wireless communication began with breakthroughs including those made in radio communications by Guglielmo Marconi, who won the 1909 Nobel Prize in Physics. Other early pioneers in electrical and electronic telecommunications include co-inventors of the telegraph Charles Wheatstone and Samuel Morse, numerous inventors and developers of the telephone including Antonio Meucci, Philipp Reis, Elisha Gray and Alexander Graham Bell, inventors of radio Edwin Armstrong and Lee de Forest, as well as inventors of television like Vladimir K. Zworykin, John Logie Baird and Philo Farnsworth.
Since the 1960s, the proliferation of digital technologies has meant that voice communications have gradually been supplemented by data. The physical limitations of metallic media prompted the development of optical fibre.[1][2][3] The Internet, a technology independent of any given medium, has provided global access to services for individual users and further reduced location and time limitations on communications.
Definition
At the 1932 Plenipotentiary Telegraph Conference and the International Radiotelegraph Conference in Madrid, the two organizations merged to form the International Telecommunication Union (ITU).[4] They defined telecommunication as "any telegraphic or telephonic communication of signs, signals, writing, facsimiles and sounds of any kind, by wire, wireless or other systems or processes of electric signaling or visual signaling (semaphores)."
The definition was later reconfirmed, according to Article 1.3 of the ITU Radio Regulations, which defined it as "Any transmission, emission or reception of signs, signals, writings, images and sounds or intelligence of any nature by wire, radio, optical, or other electromagnetic systems".
As such, slow communications technologies like postal mail and pneumatic tubes are excluded from the definition of telecommunication.[5][6]
The term telecommunication was coined in 1904 by the French engineer and novelist Édouard Estaunié, who defined it as "remote transmission of thought through electricity".[7] Telecommunication is a compound noun formed from the Greek prefix tele- (τῆλε), meaning distant, far off, or afar,[8] and the Latin verb communicare, meaning to share.[9][10] Communication was first used as an English word in the late 14th century. It comes from Old French comunicacion (14c., Modern French communication), from Latin communicationem (nominative communicatio), noun of action from past participle stem of communicare, "to share, divide out; communicate, impart, inform; join, unite, participate in," literally, "to make common", from communis.[11]
History
Many transmission media have been used for long-distance communication throughout history, from smoke signals, beacons, semaphore telegraphs, signal flags, and optical heliographs to wires and empty space made to carry electromagnetic signals.
Before the electrical and electronic era
Long distance communication was used long before the discovery of electricity and electromagnetism enabled the invention of telecommunications. A few of the many ingenious methods for communicating over distances prior to that are described here.
Homing pigeons have been used throughout history by different cultures. Pigeon post had Persian roots and was later used by the Romans to aid their military. Frontinus claimed Julius Caesar used pigeons as messengers in his conquest of Gaul.[12] The Greeks also conveyed the names of the victors at the Olympic Games to various cities using homing pigeons.[13] In the early 19th century, the Dutch government used the system in Java and Sumatra. And in 1849, Paul Julius Reuter started a pigeon service to fly stock prices between Aachen and Brussels, a service that operated for a year until the gap in the telegraph link was closed.[14]
In the Middle Ages, chains of beacons were commonly used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could only pass a single bit of information, so the meaning of the message such as "the enemy has been sighted" had to be agreed upon in advance. One notable instance of their use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London.[15]
In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system (or semaphore line) between Lille and Paris.[16] However, semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres (six to nineteen miles). As a result of competition from the electrical telegraph, the last commercial line was abandoned in 1880.[17]
Telegraph and telephone
On July 25, 1837, the first commercial electrical telegraph was demonstrated by English inventor Sir William Fothergill Cooke and English scientist Sir Charles Wheatstone.[18][19] Both inventors viewed their device as "an improvement to the [existing] electromagnetic telegraph" and not as a new device.[20]
Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on September 2, 1837. His code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was successfully completed on July 27, 1866, allowing transatlantic telecommunication for the first time.[21]
After early attempts to develop a talking telegraph by Antonio Meucci and a telefon by Johann Philipp Reis, a patent for the conventional telephone was filed by Alexander Bell in February 1876 (just a few hours before Elisha Gray filed a patent caveat for a similar device).[22][23] The first commercial telephone services were set up by the Bell Telephone Company in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London.[24][25]
Radio and television
In 1894, Italian inventor Guglielmo Marconi began developing wireless communication using the then-newly discovered phenomenon of radio waves, demonstrating, by 1901, that they could be transmitted across the Atlantic Ocean.[26] This was the start of wireless telegraphy by radio. On 17 December 1902, a transmission from the Marconi station in Glace Bay, Nova Scotia, Canada, became the world's first radio message to cross the Atlantic from North America. In 1904, a commercial service was established to transmit nightly news summaries to subscribing ships, which incorporated them into their onboard newspapers.[27]
World War I accelerated the development of radio for military communications. After the war, commercial AM radio broadcasting began in the 1920s and became an important mass medium for entertainment and news. World War II again accelerated the development of radio for the wartime purposes of aircraft and land communication, radio navigation, and radar.[28] Development of FM broadcasting began in the 1930s in the United States and the 1940s in the United Kingdom,[29] displacing AM as the dominant commercial standard in the 1970s.[30]
On March 25, 1925, John Logie Baird demonstrated the transmission of moving pictures at the London department store Selfridges. Baird's device relied upon the Nipkow disk by Paul Nipkow and thus became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning on 30 September 1929.[31]
Vacuum tubes
Vacuum tubes use thermionic emission of electrons from a heated cathode for a number of fundamental electronic functions such as signal amplification and current rectification.
The simplest vacuum tube, the diode invented in 1904 by John Ambrose Fleming, contains only a heated electron-emitting cathode and an anode. Electrons can only flow in one direction through the device—from the cathode to the anode. Adding one or more control grids within the tube enables the current between the cathode and anode to be controlled by the voltage on the grid or grids.[32] These devices became a key component of electronic circuits for the first half of the 20th century and were crucial to the development of radio, television, radar, sound recording and reproduction, long-distance telephone networks, and analogue and early digital computers. While some applications had used earlier technologies such as the spark gap transmitter for radio or mechanical computers for computing, it was the invention of the thermionic vacuum tube that made these technologies widespread and practical, leading to the creation of electronics.[33]
For most of the 20th century, televisions depended on a kind of vacuum tube — the cathode ray tube — invented by Karl Ferdinand Braun. The first version of such a television to show promise was produced by Philo Farnsworth and demonstrated to his family on 7 September 1927.[34] After World War II, interrupted experiments resumed and television became an important home entertainment broadcast medium.
Also in the 1940s, the invention of semiconductor devices made it possible to produce solid-state devices, which are smaller, cheaper, and more efficient, reliable, and durable than vacuum tubes. Starting in the mid-1960s, vacuum tubes were replaced with the transistor. Vacuum tubes still have some applications for certain high-frequency amplifiers.
Computer networks and the Internet
On 11 September 1940, George Stibitz transmitted problems for his Complex Number Calculator in New York using a teletype and received the computed results back at Dartmouth College in New Hampshire.[35] This configuration of a centralized computer (mainframe) with remote dumb terminals remained popular well into the 1970s. In the 1960s, Paul Baran and, independently, Donald Davies started to investigate packet switching, a technology that sends a message in portions to its destination asynchronously without passing it through a centralized mainframe. A four-node network emerged on 5 December 1969, constituting the beginnings of the ARPANET, which by 1981 had grown to 213 nodes.[36] ARPANET eventually merged with other networks to form the Internet. While Internet development was a focus of the Internet Engineering Task Force (IETF) who published a series of Request for Comments documents, other networking advancements occurred in industrial laboratories, such as the local area network (LAN) developments of Ethernet (1983), Token Ring (1984) and Star network topology.
Growth of transmission capacity
The effective capacity to exchange information worldwide through two-way telecommunication networks grew from 281 petabytes (PB) of optimally compressed information in 1986 to 471 PB in 1993 to 2.2 exabytes (EB) in 2000 to 65 EB in 2007.[37] This is the informational equivalent of two newspaper pages per person per day in 1986, and six entire newspapers per person per day by 2007.[38] Given this growth, telecommunications play an increasingly important role in the world economy and the global telecommunications industry was about a $4.7 trillion sector in 2012.[39][40] The service revenue of the global telecommunications industry was estimated to be $1.5 trillion in 2010, corresponding to 2.4% of the world's gross domestic product (GDP).[39]
Technical concepts
Modern telecommunication is founded on a series of key concepts that experienced progressive development and refinement over a period of well over a century.
Basic elements
Telecommunication technologies may primarily be divided into wired and wireless methods. Overall, a basic telecommunication system consists of three main parts that are always present in some form or another:
- A transmitter that takes information and converts it to a signal
- A transmission medium, also called the physical channel, that carries the signal (e.g., the "free space channel")
- A receiver that takes the signal from the channel and converts it back into usable information for the recipient
In a radio broadcasting station, the station's large power amplifier is the transmitter and the broadcasting antenna is the interface between the power amplifier and the free space channel. The free space channel is the transmission medium and the receiver's antenna is the interface between the free space channel and the receiver. Next, the radio receiver is the destination of the radio signal, where it is converted from electricity to sound.
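The three basic elements can be sketched as a simple pipeline; the function names and the byte-level "signal" below are illustrative choices for the sketch, not part of any real system:

```python
# A minimal model of the three basic elements of a telecommunication system.
# The message encoding (UTF-8 bytes) and the ideal, noiseless channel are
# simplifying assumptions for illustration only.

def transmitter(message: str) -> list[int]:
    """Take information and convert it to a signal (here, a list of byte values)."""
    return list(message.encode("utf-8"))

def channel(signal: list[int]) -> list[int]:
    """The transmission medium; an ideal channel passes the signal unchanged."""
    return list(signal)

def receiver(signal: list[int]) -> str:
    """Take the signal from the channel and convert it back into usable information."""
    return bytes(signal).decode("utf-8")

received = receiver(channel(transmitter("hello")))
```

A real channel would attenuate and add noise to the signal, which is why practical receivers need amplification and, for digital systems, decision thresholds.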
Telecommunication systems are occasionally "duplex" (two-way systems) with a single box of electronics working as both the transmitter and a receiver, or a transceiver (e.g., a mobile phone).[41] The transmission electronics and the receiver electronics within a transceiver are quite independent of one another. This can be explained by the fact that radio transmitters contain power amplifiers that operate with electrical powers measured in watts or kilowatts, but radio receivers deal with radio powers measured in microwatts or nanowatts. Hence, transceivers have to be carefully designed and built to isolate their high-power circuitry and their low-power circuitry from each other to avoid interference.
Telecommunication over fixed lines is called point-to-point communication because it occurs between a transmitter and a receiver. Telecommunication through radio broadcasts is called broadcast communication because it occurs between a powerful transmitter and numerous low-power but sensitive radio receivers.[41]
Telecommunications in which multiple transmitters and multiple receivers have been designed to cooperate and share the same physical channel are called multiplex systems. The sharing of physical channels using multiplexing often results in significant cost reduction. Multiplexed systems are laid out in telecommunication networks and multiplexed signals are switched at nodes through to the correct destination terminal receiver.
Analogue versus digital communications
Communications can be encoded as analogue or digital signals, which may in turn be carried by analogue or digital communication systems. Analogue signals vary continuously with respect to the information, while digital signals encode information as a set of discrete values (e.g., a set of ones and zeroes).[42] During propagation and reception, information contained in analogue signals is degraded by undesirable noise. Commonly, the noise in a communication system can be expressed as adding or subtracting from the desirable signal via a random process. This form of noise is called additive noise, with the understanding that the noise can be negative or positive at different instances.
Unless the additive noise disturbance exceeds a certain threshold, the information contained in digital signals will remain intact. Their resistance to noise represents a key advantage of digital signals over analogue signals. However, digital systems fail catastrophically when noise exceeds the system's ability to autocorrect. On the other hand, analogue systems fail gracefully: as noise increases, the signal becomes progressively more degraded but still usable. Also, digital transmission of continuous data unavoidably adds quantization noise to the output. This can be reduced, but not eliminated, only at the expense of increasing the channel bandwidth requirement.
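The additive-noise model and the digital decision threshold can be sketched as follows; the signal levels, noise magnitude, and 0.5 threshold are illustrative assumptions, not values from any particular system:

```python
import random

random.seed(0)  # make the sketch reproducible

def add_noise(samples, sigma=0.1):
    # Additive noise: a random value (positive or negative at different
    # instances) is added to each sample of the desirable signal.
    return [s + random.gauss(0.0, sigma) for s in samples]

digital = [1.0, 0.0, 1.0, 1.0]        # bits encoded as two discrete levels
noisy = add_noise(digital, sigma=0.1)

# Digital reception: thresholding at 0.5 restores the original bits exactly,
# as long as no noise sample exceeds the 0.5 decision margin.
decoded = [1 if s > 0.5 else 0 for s in noisy]
```

If `sigma` were raised until noise samples routinely crossed the 0.5 margin, decoding would fail abruptly, illustrating the catastrophic failure mode of digital systems described above.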
Communication channels
The term channel has two different meanings. In one meaning, a channel is the physical medium that carries a signal between the transmitter and the receiver. Examples of this include the atmosphere for sound communications, glass optical fibres for some kinds of optical communications, coaxial cables for communications by way of the voltages and electric currents in them, and free space for communications using visible light, infrared waves, ultraviolet light, and radio waves. Coaxial cable types are classified by RG type or radio guide, terminology derived from World War II. The various RG designations are used to classify the specific signal transmission applications.[43] This last channel is called the free space channel. The sending of radio waves from one place to another has nothing to do with the presence or absence of an atmosphere between the two. Radio waves travel through a perfect vacuum just as easily as they travel through air, fog, clouds, or any other kind of gas.
The other meaning of the term channel in telecommunications is seen in the phrase communications channel, which is a subdivision of a transmission medium so that it can be used to send multiple streams of information simultaneously. For example, one radio station can broadcast radio waves into free space at frequencies in the neighbourhood of 94.5 MHz (megahertz) while another radio station can simultaneously broadcast radio waves at frequencies in the neighbourhood of 96.1 MHz. Each radio station would transmit radio waves over a frequency bandwidth of about 180 kHz (kilohertz), centred at frequencies such as the above, which are called the "carrier frequencies". Each station in this example is separated from its adjacent stations by 200 kHz, and the difference between 200 kHz and 180 kHz (20 kHz) is an engineering allowance for the imperfections in the communication system.
In the example above, the free space channel has been divided into communications channels according to frequencies, and each channel is assigned a separate frequency bandwidth in which to broadcast radio waves. This system of dividing the medium into channels according to frequency is called frequency-division multiplexing. Another term for the same concept is wavelength-division multiplexing, which is more commonly used in optical communications when multiple transmitters share the same physical medium.
Another way of dividing a communications medium into channels is to allocate each sender a recurring segment of time (a time slot, for example, 20 milliseconds out of each second), and to allow each sender to send messages only within its own time slot. This method of dividing the medium into communication channels is called time-division multiplexing (TDM), and is used in optical fibre communication. Some radio communication systems use TDM within an allocated FDM channel. Hence, these systems use a hybrid of TDM and FDM.
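The channelisation arithmetic above can be made concrete in a short sketch. The 200 kHz spacing and 180 kHz bandwidth come from the FM-broadcast example; the specific carrier frequencies, sender names, and 20 ms slot length are hypothetical:

```python
# FDM: each channel occupies a fixed bandwidth centred on its carrier.
BANDWIDTH_KHZ = 180

def channel_edges(carrier_khz):
    """Return the (low, high) band edges of one FDM channel."""
    half = BANDWIDTH_KHZ / 2
    return (carrier_khz - half, carrier_khz + half)

a = channel_edges(94_500)   # hypothetical station at 94.5 MHz
b = channel_edges(94_700)   # adjacent station, 200 kHz higher

# Guard band: 200 kHz spacing minus 180 kHz bandwidth leaves a 20 kHz
# engineering allowance between adjacent channels.
guard_khz = b[0] - a[1]

# TDM: each sender owns the whole medium during a recurring time slot.
def tdm_owner(time_ms, senders, slot_ms=20):
    """Which sender may transmit at a given time under round-robin TDM."""
    return senders[(time_ms // slot_ms) % len(senders)]

owner = tdm_owner(45, ["A", "B", "C"])   # 45 ms falls in the third 20 ms slot
```

A hybrid TDM/FDM system, as used in some radio systems, would apply the `tdm_owner` schedule independently within each FDM channel.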
Modulation
The shaping of a signal to convey information is known as modulation. Modulation can be used to represent a digital message as an analogue waveform. This is commonly called "keying"—a term derived from the older use of Morse code in telecommunications—and several keying techniques exist, including phase-shift keying, frequency-shift keying, and amplitude-shift keying. The Bluetooth system, for example, uses phase-shift keying to exchange information between various devices.[44][45] Phase-shift keying and amplitude-shift keying can also be combined, a scheme known in the jargon of the field as quadrature amplitude modulation (QAM), which is used in high-capacity digital radio communication systems.
Modulation can also be used to transmit the information of low-frequency analogue signals at higher frequencies. This is helpful because low-frequency analogue signals cannot be effectively transmitted over free space. Hence the information from a low-frequency analogue signal must be impressed into a higher-frequency signal (known as the carrier wave) before transmission. There are several different modulation schemes available to achieve this [two of the most basic being amplitude modulation (AM) and frequency modulation (FM)]. An example of this process is a disc jockey's voice being impressed into a 96 MHz carrier wave using frequency modulation (the voice would then be received on a radio as the channel 96 FM).[46] In addition, modulation has the advantage that it may use frequency division multiplexing (FDM).
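The impressing of a low-frequency message onto a higher-frequency carrier can be sketched with amplitude modulation, the simpler of the two basic schemes named above. All frequencies, the sample rate, and the modulation depth here are arbitrary illustrative values, not broadcast parameters:

```python
import math

SAMPLE_RATE = 10_000   # samples per second (arbitrary for the sketch)
CARRIER_HZ = 1_000     # the higher-frequency carrier wave
MESSAGE_HZ = 50        # the low-frequency "voice" tone being impressed
MOD_INDEX = 0.5        # modulation depth

def am_sample(t):
    """One sample of an AM waveform: the carrier's amplitude follows the message."""
    message = math.sin(2 * math.pi * MESSAGE_HZ * t)
    carrier = math.cos(2 * math.pi * CARRIER_HZ * t)
    return (1.0 + MOD_INDEX * message) * carrier

# One full message period of the modulated wave.
wave = [am_sample(n / SAMPLE_RATE) for n in range(SAMPLE_RATE // MESSAGE_HZ)]

# The envelope swings between 1 - MOD_INDEX and 1 + MOD_INDEX around the carrier.
peak = max(abs(s) for s in wave)
```

Frequency modulation would instead vary the carrier's instantaneous frequency with the message while holding its amplitude constant, which is why FM is more robust to amplitude noise.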
Telecommunication networks
A telecommunications network is a collection of transmitters, receivers, and communications channels that send messages to one another. Some digital communications networks contain one or more routers that work together to transmit information to the correct user. An analogue communications network consists of one or more switches that establish a connection between two or more users. For both types of networks, repeaters may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation that can render the signal indistinguishable from the noise.[47] Another advantage of digital systems over analogue is that their output is easier to store in memory, i.e., two voltage states (high and low) are easier to store than a continuous range of states.
Societal impact
Telecommunication has a significant social, cultural and economic impact on modern society. In 2008, estimates placed the telecommunication industry's revenue at US$4.7 trillion, or just under three per cent of the gross world product (official exchange rate).[39] The following sections discuss the impact of telecommunication on society.
Microeconomics
On the microeconomic scale, companies have used telecommunications to help build global business empires. This is self-evident in the case of online retailer Amazon.com but, according to academic Edward Lenert, even the conventional retailer Walmart has benefited from better telecommunication infrastructure compared to its competitors.[48] In cities throughout the world, home owners use their telephones to order and arrange a variety of home services ranging from pizza deliveries to electricians. Even relatively poor communities have been noted to use telecommunication to their advantage. In Bangladesh's Narsingdi District, isolated villagers use cellular phones to speak directly to wholesalers and arrange a better price for their goods. In Côte d'Ivoire, coffee growers share mobile phones to follow hourly variations in coffee prices and sell at the best price.[49]
Macroeconomics
On the macroeconomic scale, Lars-Hendrik Röller and Leonard Waverman suggested a causal link between good telecommunication infrastructure and economic growth.[50][51] Few dispute the existence of a correlation although some argue it is wrong to view the relationship as causal.[52]
Because of the economic benefits of good telecommunication infrastructure, there is increasing worry about the inequitable access to telecommunication services amongst various countries of the world—this is known as the digital divide. A 2003 survey by the International Telecommunication Union (ITU) revealed that roughly a third of countries have fewer than one mobile subscription for every 20 people and one-third of countries have fewer than one land-line telephone subscription for every 20 people. In terms of Internet access, roughly half of all countries have fewer than one out of 20 people with Internet access. From this information, as well as educational data, the ITU was able to compile an index that measures the overall ability of citizens to access and use information and communication technologies.[53] Using this measure, Sweden, Denmark and Iceland received the highest ranking while the African countries Niger, Burkina Faso and Mali received the lowest.[54]
Social impact
Telecommunication has played a significant role in social relationships. Nevertheless, devices like the telephone system were originally advertised with an emphasis on the practical dimensions of the device (such as the ability to conduct business or order home services) as opposed to the social dimensions. It was not until the late 1920s and 1930s that the social dimensions of the device became a prominent theme in telephone advertisements. New promotions started appealing to consumers' emotions, stressing the importance of social conversations and staying connected to family and friends.[55]
Since then the role that telecommunications has played in social relations has become increasingly important. In recent years, the popularity of social networking sites has increased dramatically. These sites allow users to communicate with each other as well as post photographs, events and profiles for others to see. The profiles can list a person's age, interests, sexual preference and relationship status. In this way, these sites can play an important role in everything from organising social engagements to courtship.[56]
Prior to social networking sites, technologies like short message service (SMS) and the telephone also had a significant impact on social interactions. In 2000, market research group Ipsos MORI reported that 81% of 15- to 24-year-old SMS users in the United Kingdom had used the service to coordinate social arrangements and 42% to flirt.[57]
Entertainment, news, and advertising
| Medium | Respondents |
|---|---|
| Local TV | 59% |
| National TV | 47% |
| Radio | 44% |
| Local paper | 38% |
| Internet | 23% |
| National paper | 12% |

*Survey permitted multiple answers.*
In cultural terms, telecommunication has increased the public's ability to access music and film. With television, people can watch films they have not seen before in their own home without having to travel to the video store or cinema. With radio and the Internet, people can listen to music they have not heard before without having to travel to the music store.
Telecommunication has also transformed the way people receive their news. In a 2006 survey (see table) of slightly more than 3,000 Americans by the non-profit Pew Internet and American Life Project in the United States, the majority of respondents specified television or radio over newspapers as their news source.
Telecommunication has had an equally significant impact on advertising. TNS Media Intelligence reported that in 2007, 58% of advertising expenditure in the United States was spent on media that depend upon telecommunication.[59]
| Medium | Share | Spending |
|---|---|---|
| Internet | 7.6% | $11.31 billion |
| Radio | 7.2% | $10.69 billion |
| Cable TV | 12.1% | $18.02 billion |
| Syndicated TV | 2.8% | $4.17 billion |
| Spot TV | 11.3% | $16.82 billion |
| Network TV | 17.1% | $25.42 billion |
| Newspaper | 18.9% | $28.22 billion |
| Magazine | 20.4% | $30.33 billion |
| Outdoor | 2.7% | $4.02 billion |
| Total | 100% | $149 billion |
Regulation
Many countries have enacted legislation which conforms to the International Telecommunication Regulations established by the International Telecommunication Union (ITU), which is the "leading UN agency for information and communication technology issues".[60] In 1947, at the Atlantic City Conference, the ITU decided to "afford international protection to all frequencies registered in a new international frequency list and used in conformity with the Radio Regulation". According to the ITU's Radio Regulations adopted in Atlantic City, all frequencies referenced in the International Frequency Registration Board, examined by the board and registered on the International Frequency List "shall have the right to international protection from harmful interference".[61]
From a global perspective, there have been political debates and legislation regarding the management of telecommunication and broadcasting. The history of broadcasting discusses some debates in relation to balancing conventional communication such as printing and telecommunication such as radio broadcasting.[62] The onset of World War II brought on the first explosion of international broadcasting propaganda.[62] Countries, their governments, insurgents, terrorists, and militiamen have all used telecommunication and broadcasting techniques to promote propaganda.[62][63] Patriotic propaganda for political movements and colonization started in the mid-1930s. In 1936, the BBC broadcast propaganda to the Arab World to partly counter similar broadcasts from Italy, which also had colonial interests in North Africa.[62] Modern political debates in telecommunication include the reclassification of broadband Internet service as a telecommunications service (also called net neutrality),[64][65] regulation of phone spam,[66][67] and expanding affordable broadband access.[68]
Modern media
[edit]Worldwide equipment sales
According to data collected by Gartner[69][70] and Ars Technica,[71] worldwide sales of mainstream consumer telecommunication equipment, in millions of units, were as follows:
| Equipment / year | 1975 | 1980 | 1985 | 1990 | 1994 | 1996 | 1998 | 2000 | 2002 | 2004 | 2006 | 2008 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Computers | 0 | 1 | 8 | 20 | 40 | 75 | 100 | 135 | 130 | 175 | 230 | 280 |
| Cell phones | N/A | N/A | N/A | N/A | N/A | N/A | 180 | 400 | 420 | 660 | 830 | 1000 |
Telephone
In a telephone network, the caller is connected to the person to whom they wish to talk by switches at various telephone exchanges. The switches form an electrical connection between the two users and the setting of these switches is determined electronically when the caller dials the number. Once the connection is made, the caller's voice is transformed to an electrical signal using a small microphone in the caller's handset. This electrical signal is then sent through the network to the user at the other end where it is transformed back into sound by a small speaker in that person's handset.
As of 2015[update], the landline telephones in most residential homes are analogue—that is, the speaker's voice directly determines the signal's voltage.[72] Although short-distance calls may be handled from end-to-end as analogue signals, increasingly telephone service providers are transparently converting the signals to digital signals for transmission. The advantage of this is that digitized voice data can travel side by side with data from the Internet and can be perfectly reproduced in long-distance communication (as opposed to analogue signals that are inevitably impacted by noise).
Mobile phones have had a significant impact on telephone networks. Mobile phone subscriptions now outnumber fixed-line subscriptions in many markets. Sales of mobile phones in 2005 totalled 816.6 million with that figure being almost equally shared amongst the markets of Asia/Pacific (204 m), Western Europe (164 m), CEMEA (Central Europe, the Middle East and Africa) (153.5 m), North America (148 m) and Latin America (102 m).[73] In terms of new subscriptions over the five years from 1999, Africa has outpaced other markets with 58.2% growth.[74] Increasingly these phones are being serviced by systems where the voice content is transmitted digitally such as GSM or W-CDMA with many markets choosing to deprecate analog systems such as AMPS.[75]
There have also been dramatic changes in telephone communication behind the scenes. Starting with the operation of TAT-8 in 1988, the 1990s saw the widespread adoption of systems based on optical fibres. The benefit of communicating with optical fibres is that they offer a drastic increase in data capacity. TAT-8 itself was able to carry 10 times as many telephone calls as the last copper cable laid at that time and today's optical fibre cables are able to carry 25 times as many telephone calls as TAT-8.[76] This increase in data capacity is due to several factors: First, optical fibres are physically much smaller than competing technologies. Second, they do not suffer from crosstalk which means several hundred of them can be easily bundled together in a single cable.[77] Lastly, improvements in multiplexing have led to an exponential growth in the data capacity of a single fibre.[78][79]
Assisting communication across many modern optical fibre networks is a protocol known as Asynchronous Transfer Mode (ATM). The ATM protocol allows for the side-by-side data transmission mentioned in the second paragraph. It is suitable for public telephone networks because it establishes a pathway for data through the network and associates a traffic contract with that pathway. The traffic contract is essentially an agreement between the client and the network about how the network is to handle the data; if the network cannot meet the conditions of the traffic contract it does not accept the connection. This is important because telephone calls can negotiate a contract so as to guarantee themselves a constant bit rate, something that will ensure a caller's voice is not delayed in parts or cut off completely.[80] There are competitors to ATM, such as Multiprotocol Label Switching (MPLS), that perform a similar task and are expected to supplant ATM in the future.[81][82]
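The admission decision described above can be sketched as a toy model. The `Link` class, its capacity figures, and the kbit/s units below are illustrative assumptions, not part of any real ATM implementation; the point is only that a connection is rejected when its requested constant bit rate can no longer be guaranteed.

```python
# Toy model of connection admission control in the style of an ATM
# traffic contract: the network accepts a connection only if it can
# still honour the requested constant bit rate. Names and numbers are
# illustrative, not drawn from any real ATM stack.

class Link:
    def __init__(self, capacity_kbps):
        self.capacity_kbps = capacity_kbps
        self.reserved_kbps = 0

    def request_connection(self, bit_rate_kbps):
        """Accept the call only if the contract can be met."""
        if self.reserved_kbps + bit_rate_kbps > self.capacity_kbps:
            return False          # contract cannot be honoured: reject
        self.reserved_kbps += bit_rate_kbps
        return True               # bandwidth reserved for this call

link = Link(capacity_kbps=256)
for _ in range(4):
    print(link.request_connection(64))   # four 64 kbit/s calls fit exactly
print(link.request_connection(64))       # a fifth would exceed capacity
```

Rejecting the fifth call up front is what lets the first four keep their guaranteed bit rate, rather than all five degrading together.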
Radio and television
In a broadcast system, the central high-powered broadcast tower transmits a high-frequency electromagnetic wave to numerous low-powered receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The receiver is then tuned so as to pick up the high-frequency wave and a demodulator is used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analogue (signal is varied continuously with respect to the information) or digital (information is encoded as a set of discrete values).[41][83]
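The modulate/demodulate cycle can be illustrated with a small simulation. The frequencies, sample rate, and coherent-detection receiver here are arbitrary toy choices for the sketch, not a description of any particular broadcast standard.

```python
import math

# A low-frequency "audio" tone is amplitude-modulated onto a
# high-frequency carrier; the receiver recovers it by coherent
# detection (multiply by the carrier again, then low-pass filter by
# averaging over one carrier period). All parameters are toy values.

FS = 10_000          # samples per second
F_CARRIER = 1_000    # carrier frequency, Hz
F_AUDIO = 50         # message frequency, Hz
N = 2_000            # 0.2 s of signal

t = [n / FS for n in range(N)]
message = [math.sin(2 * math.pi * F_AUDIO * ti) for ti in t]
carrier = [math.cos(2 * math.pi * F_CARRIER * ti) for ti in t]

# Transmitter: AM with a DC offset so the envelope never inverts
transmitted = [(1.0 + 0.5 * m) * c for m, c in zip(message, carrier)]

# Receiver: mix with a synchronised carrier, then average over one
# carrier period to strip the double-frequency component
mixed = [s * c for s, c in zip(transmitted, carrier)]
period = FS // F_CARRIER
recovered = [sum(mixed[i:i + period]) / period for i in range(N - period)]
# recovered[i] is approximately (1 + 0.5 * message[i]) / 2,
# i.e. the original tone shifted and scaled but otherwise intact
```

Because mixing gives (1 + 0.5m)·cos², averaging removes the cos-squared ripple and leaves the slowly varying message term, which is the essence of retrieving the signal at the receiver.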
The broadcast media industry is at a critical turning point in its development, with many countries moving from analogue to digital broadcasts. This move is made possible by the production of cheaper, faster and more capable integrated circuits. The chief advantage of digital broadcasts is that they avoid a number of the problems common to traditional analogue broadcasts. For television, this includes the elimination of problems such as snowy pictures, ghosting and other distortion. These occur because of the nature of analogue transmission, which means that perturbations due to noise will be evident in the final output. Digital transmission overcomes this problem because digital signals are reduced to discrete values upon reception and hence small perturbations do not affect the final output. In a simplified example, if a binary message 1011 was transmitted with signal amplitudes [1.0 0.0 1.0 1.0] and received with signal amplitudes [0.9 0.2 1.1 0.9] it would still decode to the binary message 1011, a perfect reproduction of what was sent. The example also reveals a weakness of digital transmission: if the noise is great enough, it can significantly alter the decoded message. Using forward error correction a receiver can correct a handful of bit errors in the resulting message, but too much noise will lead to incomprehensible output and hence a breakdown of the transmission.[84][85]
In digital television broadcasting, there are three competing standards that are likely to be adopted worldwide. These are the ATSC, DVB and ISDB standards; the adoption of these standards thus far is presented in the captioned map. All three standards use MPEG-2 for video compression. ATSC uses Dolby Digital AC-3 for audio compression, ISDB uses Advanced Audio Coding (MPEG-2 Part 7) and DVB has no standard for audio compression but typically uses MPEG-1 Part 3 Layer 2.[86][87] The choice of modulation also varies between the schemes. In digital audio broadcasting, standards are much more unified with practically all countries choosing to adopt the Digital Audio Broadcasting standard (also known as the Eureka 147 standard). The exception is the United States which has chosen to adopt HD Radio. HD Radio, unlike Eureka 147, is based upon a transmission method known as in-band on-channel transmission that allows digital information to piggyback on normal AM or FM analog transmissions.[88]
However, despite the pending switch to digital, analog television continues to be transmitted in most countries. An exception is the United States, which ended analog television transmission (by all but the very low-power TV stations) on 12 June 2009[89] after twice delaying the switchover deadline. Kenya also ended analog television transmission in December 2014 after multiple delays. For analogue television, there were three standards in use for broadcasting colour TV (see a map on adoption here). These are known as PAL (German designed), NTSC (American designed), and SECAM (French designed). For analogue radio, the switch to digital radio is made more difficult by the higher cost of digital receivers.[90] The choice of modulation for analogue radio is typically between amplitude (AM) or frequency modulation (FM). To achieve stereo playback, an amplitude modulated subcarrier is used for stereo FM, and quadrature amplitude modulation is used for stereo AM or C-QUAM.
Internet
The Internet is a worldwide network of computers and computer networks that communicate with each other using the Internet Protocol (IP).[91] Any computer on the Internet has a unique IP address that can be used by other computers to route information to it. Hence, any computer on the Internet can send a message to any other computer using its IP address. These messages carry with them the originating computer's IP address allowing for two-way communication. The Internet is thus an exchange of messages between computers.[92]
It is estimated that 51% of the information flowing through two-way telecommunications networks in the year 2000 was flowing through the Internet (most of the rest (42%) through the landline telephone). By 2007 the Internet clearly dominated, carrying 97% of all the information in telecommunication networks (most of the rest (2%) through mobile phones).[37] As of 2008[update], an estimated 21.9% of the world population has access to the Internet, with the highest access rates (measured as a percentage of the population) in North America (73.6%), Oceania/Australia (59.5%) and Europe (48.1%).[93] In terms of broadband access, Iceland (26.7%), South Korea (25.4%) and the Netherlands (25.3%) led the world.[94]
The Internet works in part because of protocols that govern how the computers and routers communicate with each other. The nature of computer network communication lends itself to a layered approach where individual protocols in the protocol stack run more-or-less independently of other protocols. This allows lower-level protocols to be customized for the network situation while not changing the way higher-level protocols operate. A practical example of why this is important is that it allows a web browser to run the same code regardless of whether the computer it is running on is connected to the Internet through an Ethernet or Wi-Fi connection. Protocols are often talked about in terms of their place in the OSI reference model (pictured on the right), which emerged in 1983 as the first step in an unsuccessful attempt to build a universally adopted networking protocol suite.[95]
For the Internet, the physical medium and data link protocol can vary several times as packets traverse the globe. This is because the Internet places no constraints on what physical medium or data link protocol is used. This leads to the adoption of media and protocols that best suit the local network situation. In practice, most intercontinental communication will use the Asynchronous Transfer Mode (ATM) protocol (or a modern equivalent) on top of optic fibre. This is because for most intercontinental communication the Internet shares the same infrastructure as the public switched telephone network.
At the network layer, things become standardized with the Internet Protocol (IP) being adopted for logical addressing. For the World Wide Web, these IP addresses are derived from human-readable names using the Domain Name System (e.g., 72.14.207.99 is derived from the domain name google.com). At the moment, the most widely used version of the Internet Protocol is version four, but a move to version six is imminent.[96]
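Logical addressing for both IP versions can be inspected with Python's standard `ipaddress` module. The IPv4 address is the example from the text; the IPv6 address below is an arbitrary value from the documentation prefix, chosen purely for illustration.

```python
import ipaddress

# Parsing the two generations of logical address discussed above.
# 72.14.207.99 is the IPv4 example from the text; 2001:db8::1 is an
# illustrative address from the IPv6 documentation prefix.

v4 = ipaddress.ip_address("72.14.207.99")
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.version)        # 4
print(v6.version)        # 6
print(v4.packed.hex())   # 480ecf63: the 32-bit form routers actually use
print(len(v6.packed))    # 16 bytes: IPv6 addresses are 128 bits wide
```

The move from 32-bit to 128-bit addresses, visible in the `packed` lengths, is the core reason for the transition to version six.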
At the transport layer, most communication adopts either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). TCP is used when it is essential that every message sent is received by the other computer, whereas UDP is used when it is merely desirable. With TCP, packets are retransmitted if they are lost and placed in order before they are presented to higher layers. With UDP, packets are not ordered nor retransmitted if lost. Both TCP and UDP packets carry port numbers with them to specify what application or process the packet should be handled by.[97] Because certain application-level protocols use certain ports, network administrators can manipulate traffic to suit particular requirements. Examples are to restrict Internet access by blocking the traffic destined for a particular port or to affect the performance of certain applications by assigning priority.
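UDP's connectionless, port-addressed delivery can be sketched with two loopback sockets from Python's standard library. The payload and loopback setup are illustrative; asking the OS for port 0 avoids assuming any particular port number is free.

```python
import socket

# Sketch of UDP delivery between two processes on one machine: the port
# number is the demultiplexing key that tells the OS which socket (and
# hence which application) should receive the datagram.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))               # port 0: OS picks a free port
receiver.settimeout(5)
port = receiver.getsockname()[1]              # the port the sender must target

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))  # no handshake, no ordering,
                                              # no retransmission on loss

data, addr = receiver.recvfrom(1024)
print(data)                                   # b'hello'

sender.close()
receiver.close()
```

Note what is absent compared with TCP: no connection setup, no acknowledgement, and no guarantee the datagram arrives at all, which is exactly the trade-off the paragraph above describes.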
Above the transport layer, there are certain protocols that are sometimes used and loosely fit in the session and presentation layers, most notably the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. These protocols ensure that data transferred between two parties remains completely confidential.[98] Finally, at the application layer, are many of the protocols Internet users would be familiar with such as HTTP (web browsing), POP3 (e-mail), FTP (file transfer), IRC (Internet chat), BitTorrent (file sharing) and XMPP (instant messaging).
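As a minimal sketch of how TLS is put to use, Python's standard `ssl` module builds a client context whose defaults enforce the certificate checks that the confidentiality and authentication guarantees rest on. No network connection is made here; the sketch only shows the security posture such a context carries.

```python
import ssl

# A default TLS client context: before any application data flows, the
# peer must present a certificate that verifies against trusted CAs and
# matches the server's hostname. Encryption of the session follows from
# the negotiated cipher suite.

ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: peer certificate required
print(ctx.check_hostname)                    # True: cert must match server name
```

A context like this is what a browser or mail client wraps its TCP connection in before speaking HTTP or POP3, which is why these protocols sit loosely above the transport layer.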
Voice over Internet Protocol (VoIP) allows data packets to be used for synchronous voice communications. The data packets are marked as voice-type packets and can be prioritized by the network administrators so that the real-time, synchronous conversation is less subject to contention with other types of data traffic which can be delayed (i.e., file transfer or email) or buffered in advance (i.e., audio and video) without detriment. That prioritization works when the network has sufficient capacity for all the VoIP calls taking place at the same time and is enabled for prioritization, as on a private corporate-style network. The public Internet is not generally managed in this way, so there can be a big difference in the quality of VoIP calls over a private network and over the public Internet.[99]
Local area networks and wide area networks
Despite the growth of the Internet, the characteristics of local area networks (LANs)—computer networks that do not extend beyond a few kilometres—remain distinct. This is because networks on this scale do not require all the features associated with larger networks and are often more cost-effective and efficient without them. When they are not connected with the Internet, they also have the advantages of privacy and security. However, purposefully lacking a direct connection to the Internet does not provide assured protection from hackers, military forces, or economic powers. These threats exist if there are any methods for connecting remotely to the LAN.
Wide area networks (WANs) are private computer networks that may extend for thousands of kilometres. Once again, some of their advantages include privacy and security. Prime users of private LANs and WANs include armed forces and intelligence agencies that must keep their information secure and secret.
In the mid-1980s, several sets of communication protocols emerged to fill the gaps between the data-link layer and the application layer of the OSI reference model. These included AppleTalk, IPX, and NetBIOS with the dominant protocol set during the early 1990s being IPX due to its popularity with MS-DOS users. TCP/IP existed at this point, but it was typically only used by large government and research facilities.[100]
As the Internet grew in popularity and its traffic was required to be routed into private networks, the TCP/IP protocols replaced existing local area network technologies. Additional technologies, such as DHCP, allowed TCP/IP-based computers to self-configure in the network. Such functions also existed in the AppleTalk/IPX/NetBIOS protocol sets.[101]
Whereas Asynchronous Transfer Mode (ATM) or Multiprotocol Label Switching (MPLS) are typical data-link protocols for larger networks such as WANs, Ethernet and Token Ring are typical data-link protocols for LANs. These protocols differ from the former in that they are simpler (e.g., they omit features such as quality-of-service guarantees) and offer medium access control. Both of these differences allow for more economical systems.[102]
Despite the modest popularity of Token Ring in the 1980s and 1990s, virtually all LANs now use either wired or wireless Ethernet facilities. At the physical layer, most wired Ethernet implementations use copper twisted-pair cables (including the common 10BASE-T networks). However, some early implementations used heavier coaxial cables and some recent implementations (especially high-speed ones) use optical fibres.[103] When optical fibres are used, the distinction must be made between multimode fibres and single-mode fibres. Multimode fibres can be thought of as thicker optical fibres that are cheaper to manufacture devices for, but that offer less usable bandwidth and suffer worse attenuation, implying poorer long-distance performance.[104]
See also
- Active networking
- Cell site
- Control communications
- Digital Revolution
- Information Age
- Institute of Telecommunications Professionals
- International Teletraffic Congress
- List of telecommunications encryption terms
- Military communication
- Nanonetwork
- New media
- Outline of telecommunication
- Telecommunications engineering
- Telecommunications Industry Association
- Telecoms resilience
- Telemetry
- Underwater acoustic communication
- Wavelength-division multiplexing
- Wired communication
References
Citations
- ^ "How does a Gigabit Passive Optical Network (GPON) work?". European Investment Bank. Archived from the original on 7 June 2021. Retrieved 7 June 2021.
- ^ Renewing U.S. Telecommunications Research. 2006. doi:10.17226/11711. ISBN 978-0-309-10265-0. Archived from the original on 23 June 2021. Retrieved 25 June 2021.
- ^ Cyphers, Bennett (16 October 2019). "The Case for Fiber to the Home, Today: Why Fiber is a Superior Medium for 21st Century Broadband". Electronic Frontier Foundation. Archived from the original on 3 June 2021. Retrieved 7 June 2021.
- ^ "International Telegraph Conference (Madrid, 1932)". ITU. Archived from the original on 8 January 2023. Retrieved 8 January 2023.
- ^ "Article 1.3" (PDF), ITU Radio Regulations, International Telecommunication Union, 2012, archived from the original (PDF) on 19 March 2015
- ^ Constitution and Convention of the International Telecommunication Union, Annex (Geneva, 1992)
- ^ Dilhac, Jean-Marie (2004). "From tele-communicare to Telecommunications" (PDF). Institute of Electrical and Electronics Engineers (IEEE). Archived from the original (PDF) on 2 December 2010.
- ^ "Online Etymology Dictionary". Archived from the original on 25 December 2016. Retrieved 19 August 2016.
- ^ "Telecommunication". Oxford Dictionaries. Oxford University Press. Archived from the original on 30 April 2013. Retrieved 28 February 2013.
- ^ Telecommunication, tele- and communication, New Oxford American Dictionary (2nd edition), 2005.
- ^ "communication". Online Etymology Dictionary. Archived from the original on 14 September 2016. Retrieved 19 August 2016.
- ^ Levi, Wendell (1977). The Pigeon. Sumter, SC: Levi Publishing Co, Inc. ISBN 978-0-85390-013-9.
- ^ Blechman, Andrew (2007). Pigeons-The fascinating saga of the world's most revered and reviled bird. St Lucia, Queensland: University of Queensland Press. ISBN 978-0-7022-3641-9. Archived from the original on 14 May 2008.
- ^ "Chronology: Reuters, from pigeons to multimedia merger" (Web article). Reuters. 19 February 2008. Archived from the original on 26 March 2008. Retrieved 21 February 2008.
- ^ Ross, David. "The Spanish Armada". Britain Express. Archived from the original on 4 January 2020. Retrieved 1 October 2007.
- ^ "Les Télégraphes Chappe". Cédrick Chatenet. l'Ecole Centrale de Lyon. 2003. Archived from the original on 9 April 2004.
- ^ "CCIT/ITU-T 50 Years of Excellence" (PDF). International Telecommunication Union. 2006. Archived from the original (PDF) on 12 February 2020.
- ^ Brockedone, William (11 March 2013). Cooke and Wheatstone and the Invention of the Electric Telegraph. Routledge. ISBN 9780415846783.
- ^ "Who made the first electric telegraph communications?". The Telegraph. Archived from the original on 8 August 2017. Retrieved 7 August 2017.
- ^ Calvert, J. B. (19 May 2004). "The Electromagnetic Telegraph". Archived from the original on 16 June 2001.
- ^ "The Atlantic Cable". Bern Dibner. Burndy Library Inc. 1959. Archived from the original on 1 July 2017.
- ^ "Who is credited with inventing the telephone?". Science Reference Section, Library of Congress. loc.gov. Published 22 February 2022; updated 12 September 2024.
- ^ "Elisha Gray". Oberlin College Archives. Electronic Oberlin Group. 2006. Archived from the original on 28 June 2017.
- ^ "Connected Earth: The telephone". BT. 2006. Archived from the original on 22 August 2006.
- ^ "History of AT&T". AT&T. Archived from the original on 14 January 2003.
- ^ Vujovic, Ljubo (1998). "Tesla Biography". Tesla Memorial Society of New York. Archived from the original on 14 January 2016.
- ^ "TR Center - Talking Across the Ocean". www.theodorerooseveltcenter.org. Archived from the original on 17 April 2021. Retrieved 12 March 2021.
- ^ Thompson, R.J. Jr. (2011). Crystal Clear: The Struggle for Reliable Communications Technology in World War II. Hoboken, NJ: Wiley. ISBN 9781118104644.
- ^ "Report 1946-04 – Frequency Modulation". BBC Research & Development. January 1946. Archived from the original on 3 January 2020. Retrieved 3 January 2020.
- ^ Théberge, P.; Devine, K.; Everrett, T (2015). Living Stereo: Histories and Cultures of Multichannel Sound. New York: Bloomsbury Publishing. ISBN 9781623566654.
- ^ "The Pioneers". MZTV Museum of Television. 2006. Archived from the original on 14 May 2013.
- ^ Hoddeson, L. "The Vacuum Tube". PBS. Archived from the original on 15 April 2012. Retrieved 6 May 2012.
- ^ Macksey, Kenneth; Woodhouse, William (1991). "Electronics". The Penguin Encyclopedia of Modern Warfare: 1850 to the present day. Viking. p. 110. ISBN 978-0-670-82698-8.
The electronics age may be said to have been ushered in with the invention of the vacuum diode valve in 1902 by the Briton John Fleming (himself coining the word 'electronics'), the immediate application being in the field of radio.
- ^ Postman, Neil (29 March 1999). "Philo Farnsworth". TIME Magazine. Archived from the original on 30 September 2009.
- ^ "George Stibitz (1904–1995)". www.kerryr.net. Kerry Redshaw. Archived from the original on 15 August 2017. Retrieved 6 June 2023.
- ^ Hafner, Katie (1998). Where Wizards Stay Up Late: The Origins Of The Internet. Simon & Schuster. ISBN 978-0-684-83267-8.
- ^ a b Hilbert, Martin; López, Priscila (2011). "The World's Technological Capacity to Store, Communicate, and Compute Information". Science. 332 (6025): 60–65. Bibcode:2011Sci...332...60H. doi:10.1126/science.1200970. PMID 21310967. S2CID 206531385.
- ^ "video animation". The Economist. Archived from the original on 18 January 2012.
- ^ a b c "Worldwide Telecommunications Industry Revenues". Plunkett's Telecommunications Industry Almanac 2010. 1 June 2010. Archived from the original on 28 March 2010.
- ^ "Introduction to the Telecommunications Industry". Plunkett Research. Archived from the original on 22 October 2012.
- ^ a b c Haykin, Simon (2001). Communication Systems (4th ed.). John Wiley & Sons. pp. 1–3. ISBN 978-0-471-17869-9.
- ^ Ambardar, Ashok (1999). Analog and Digital Signal Processing (2nd ed.). Brooks. pp. 1–2. ISBN 978-0-534-95409-3.
- ^ "Coax Cable FAQ Series: What is RG Cable? – Conwire". Conwire. 12 January 2016. Archived from the original on 8 August 2017. Retrieved 7 August 2017.
- ^ Haykin, pp. 344–403.
- ^ Bluetooth Specification Version 2.0 + EDR Archived 14 August 2014 at the Wayback Machine (p. 27), Bluetooth, 2004.
- ^ Haykin, pp. 88–126.
- ^ "ATIS Telecom Glossary 2000". ATIS Committee T1A1 Performance and Signal Processing (approved by the American National Standards Institute). 28 February 2001. Archived from the original on 2 March 2008.
- ^ Lenert, Edward (December 1998). "A Communication Theory Perspective on Telecommunications Policy". Journal of Communication. 48 (4): 3–23. doi:10.1111/j.1460-2466.1998.tb02767.x.
- ^ Mireille Samaan (April 2003). The Effect of Income Inequality on Mobile Phone Penetration (Honors thesis). Boston University. Archived from the original (PDF) on 14 February 2007. Retrieved 8 June 2007.
- ^ Röller, Lars-Hendrik; Leonard Waverman (2001). "Telecommunications Infrastructure and Economic Development: A Simultaneous Approach". American Economic Review. 91 (4): 909–23. CiteSeerX 10.1.1.202.9393. doi:10.1257/aer.91.4.909. ISSN 0002-8282.
- ^ Christine Zhen-Wei Qiang and Carlo M. Rossotto with Kaoru Kimura. "Economic Impacts of Broadband" (PDF). siteresources.worldbank.org. Archived from the original on 12 August 2020. Retrieved 31 March 2016.
- ^ Riaz, Ali (1997). "The role of telecommunications in economic growth: proposal for an alternative framework of analysis". Media, Culture & Society. 19 (4): 557–83. doi:10.1177/016344397019004004. S2CID 154398428.
- ^ "Digital Access Index (DAI)". itu.int. Archived from the original on 2 January 2019. Retrieved 6 March 2008.
- ^ "World Telecommunication Development Report 2003: Access Indicators for the Information Society: Executive Summary" (PDF). International Telecommunication Union (ITU). December 2003. p. 22. Archived (PDF) from the original on 6 June 2023. Retrieved 6 June 2023.
- ^ Fischer, Claude S. (January 1988). "Touch Someone: The Telephone Industry Discovers Sociability". Technology and Culture. 29 (1): 32–61. doi:10.2307/3105226. JSTOR 3105226. S2CID 146820965.
- ^ "How do you know your love is real? Check Facebook". CNN. 4 April 2008. Archived from the original on 6 November 2017. Retrieved 8 February 2009.
- ^ "I Just Text To Say I Love You". Ipsos MORI. September 2005. Archived from the original on 27 December 2016.
- ^ "Online News: For many home broadband users, the internet is a primary news source" (PDF). Pew Internet Project. 22 March 2006. Archived from the original (PDF) on 21 October 2013.
- ^ "100 Leading National Advertisers" (PDF). Advertising Age. 23 June 2008. Archived (PDF) from the original on 27 July 2011. Retrieved 21 June 2009.
- ^ "International Telecommunication Union : About ITU". ITU. Archived from the original on 15 July 2009. Retrieved 21 July 2009.
- ^ Codding, George A. (1955). "Jamming and the Protection of Frequency Assignments". American Journal of International Law. 49 (3): 384–388. doi:10.1017/S0002930000170046. JSTOR 2194872.
- ^ a b c d Wood, James (1992). History of international broadcasting. P. Peregrinus Limited. p. 2. ISBN 9780863413025.
- ^ Garfield, Andrew (Fall 2007). "The U.S. Counter-propaganda Failure in Iraq". Middle East Quarterly. 14 (4): 23–32. Archived from the original on 2 March 2009 – via Middle East Forum.
- ^ Wyatt, Edward (10 November 2014). "Obama Asks F.C.C. to Adopt Tough Net Neutrality Rules". New York Times. Archived from the original on 27 April 2019. Retrieved 15 November 2014.
- ^ "Why the F.C.C. Should Heed President Obama on Internet Regulation". New York Times. 14 November 2014. Archived from the original on 9 July 2018. Retrieved 15 November 2014.
- ^ McGill, Margaret Harding (26 September 2022). "FCC takes long-delayed step against spam text surge". Axios. Archived from the original on 8 February 2023. Retrieved 8 February 2023.
- ^ Hall, Madison. "Robocallers are preying on the elderly with fake Medicare calls. It's a no-brainer to stop it, but nobody has". Business Insider. Archived from the original on 7 February 2023. Retrieved 8 February 2023.
- ^ "Affordable Broadband: FCC Could Improve Performance Goals and Measures, Consumer Outreach, and Fraud Risk Management". www.gao.gov. February 2023. Archived from the original on 8 February 2023. Retrieved 8 February 2023.
- ^ Arthur, Charles (4 March 2009). "Why falling PC sales means Windows 7 is on the way". The Guardian. ISSN 0261-3077. Archived from the original on 19 May 2017. Retrieved 6 June 2023.
- ^ "Mobile Phone Sales To Exceed One Billion in 2009". Palm Infocenter. 21 July 2005. Archived from the original on 8 March 2018. Retrieved 6 June 2023.
- ^ Reimer, Jeremy (15 December 2005). "Total share: 30 years of personal computer market share figures". Ars Technica. Archived from the original on 12 May 2015. Retrieved 6 June 2023.
- ^ Hacker, Michael; Burghardt, David; Fletcher, Linnea; Gordon, Anthony; Peruzzi, William (3 April 2015). Engineering and Technology. Cengage Learning. p. 433. ISBN 978-1305855779.
- ^ "Gartner Says Top Six Vendors Drive Worldwide Mobile Phone Sales to 21% Growth in 2005" (Press release). Gartner. 28 February 2006. Archived from the original on 10 May 2012.
- ^ Mbarika, V.W.A.; Mbarika, I. (2006). "Africa calling [African wireless connection]". IEEE Spectrum. 43 (5): 56–60. doi:10.1109/MSPEC.2006.1628825. S2CID 30385268.
- ^ "Ten Years of GSM in Australia". Australia Telecommunications Association. 2003. Archived from the original on 20 July 2008.
- ^ "Milestones in AT&T History". AT&T Knowledge Ventures. 2006. Archived from the original on 6 September 2008.
- ^ Bhatti, Saleem (1995). "Optical fibre waveguide". Archived from the original on 24 May 2006.
- ^ "Fundamentals of DWDM Technology" (PDF). Cisco Systems. 2006. Archived from the original (PDF) on 9 August 2012.
- ^ Jander, Mary (15 April 2003). "Report: DWDM No Match for Sonet". Light Reading. Archived from the original on 24 July 2012.
- ^ Stallings, William (2004). Data and Computer Communications (7th intl ed.). Pearson Prentice Hall. pp. 337–66. ISBN 978-0-13-183311-1.
- ^ Dix, John (2002). "MPLS is the future, but ATM hangs on". Network World. Archived from the original on 6 July 2007.
- ^ Lazar, Irwin (22 February 2011). "The WAN Road Ahead: Ethernet or Bust?". Telecom Industry Updates. Archived from the original on 2 April 2015. Retrieved 22 February 2011.
- ^ "How Radio Works". HowStuffWorks. 7 December 2000. Archived from the original on 2 January 2016. Retrieved 12 February 2023.
- ^ "Digital Television in Australia". Digital Television News Australia. Archived from the original on 12 March 2018. Retrieved 6 June 2023.
- ^ Stallings, William (2004). Data and Computer Communications (7th intl ed.). Pearson Prentice Hall. ISBN 978-0-13-183311-1.
- ^ "HDV Technology Handbook" (PDF). Sony. 2004. Archived from the original (PDF) on 23 June 2006.
- ^ "Audio". Digital Video Broadcasting Project. 2003. Archived from the original on 27 September 2006.
- ^ "Status of DAB (US)". World DAB Forum. March 2005. Archived from the original on 21 July 2006.
- ^ Brian Stelter (13 June 2009). "Changeover to Digital TV Off to a Smooth Start". New York Times. Archived from the original on 14 December 2017. Retrieved 25 February 2017.
- ^ "DAB Products". World DAB Forum. 2006. Archived from the original on 21 June 2006.
- ^ Kahn, Robert; Cerf, Vinton G. (December 1999). "What Is The Internet (And What Makes It Work)". Corporation for National Research Initiatives (CNRI). Archived from the original on 15 July 2017. Retrieved 6 June 2023. Specifically see footnote xv.
- ^ Jeff Tyson (2007). "How Internet Infrastructure Works". Computer.HowStuffWorks.com. Archived from the original on 10 April 2010. Retrieved 22 May 2007.
- ^ "World Internet Users and Population Stats". Internet World Stats. 30 June 2008. Archived from the original on 2 February 2009.
- ^ "OECD Broadband Statistics, December 2005". OECD. Archived from the original on 6 January 2009.
- ^ Kozierok, Charles M. (2005). "The TCP/IP Guide - History of the OSI Reference Model". The TCP/IP Guide. Archived from the original on 4 September 2017. Retrieved 6 June 2023.
- ^ "Introduction to IPv6". Microsoft Corporation. February 2006. Archived from the original on 13 October 2008.
- ^ Stallings, pp. 683–702.
- ^ T. Dierks and C. Allen, The TLS Protocol Version 1.0, RFC 2246, 1999.
- ^ Multimedia, Crucible (7 May 2011). "VoIP, Voice over Internet Protocol and Internet telephone calls". Archived from the original on 24 January 2018. Retrieved 30 June 2011.
- ^ Martin, Michael (2000). "Understanding the Network". The Networker's Guide to AppleTalk, IPX, and NetBIOS (PDF). SAMS Publishing. ISBN 0-7357-0977-7. Archived from the original (PDF) on 29 March 2009.
- ^ Droms, Ralph (November 2003). "Resources for DHCP". Archived from the original on 4 July 2007.
- ^ Stallings, pp. 500–26.
- ^ Stallings, pp. 514–16.
- ^ "Fiber Optic Cable single-mode multi-mode Tutorial". ARC Electronics. Archived from the original on 23 October 2018. Retrieved 6 June 2023.
Bibliography
- Goggin, Gerard, Global Mobile Media (New York: Routledge, 2011), p. 176. ISBN 978-0-415-46918-0.
- OECD, Universal Service and Rate Restructuring in Telecommunications, Organisation for Economic Co-operation and Development (OECD) Publishing, 1991. ISBN 92-64-13497-2.
- Wheen, Andrew. Dot-Dash to Dot.Com: How Modern Telecommunications Evolved from the Telegraph to the Internet (Springer, 2011).
External links
- International Teletraffic Congress
- International Telecommunication Union (ITU)
- ATIS Telecom Glossary
- Federal Communications Commission
- IEEE Communications Society
- Ericsson's Understanding Telecommunications at the Wayback Machine (archived 13 April 2004) (Ericsson removed the book from their site in September 2005)
Fundamentals
Definition and Scope
Telecommunications refers to the transmission of information over distances using electromagnetic systems, including wire, radio, optical, or other means, enabling communication between specified points without altering the form or content of the transmitted data.[10] This process fundamentally involves encoding signals at a source, propagating them through a medium, and decoding them at a destination, often requiring modulation to suit the transmission channel and demodulation for recovery. The scope of telecommunications encompasses both point-to-point and point-to-multipoint systems for voice, data, video, and multimedia, spanning fixed-line networks (e.g., copper cables and fiber optics), wireless technologies (e.g., cellular radio and satellite links), and hybrid infrastructures.[11] It excludes non-electromagnetic methods like mechanical semaphores or pneumatic tubes, focusing instead on scalable, high-capacity systems governed by standards for interoperability, such as those developed by the International Telecommunication Union (ITU).[12] While overlapping with information technology in network deployment, telecommunications primarily addresses signal transmission and channel management rather than data processing or storage.[13] Global regulatory frameworks, such as those from the ITU and national bodies like the U.S. Federal Communications Commission (FCC), define its boundaries to include interstate and international services via radio, wire, satellite, and cable, ensuring spectrum allocation and service reliability.[11][14] As of 2023, the field supports over 8 billion mobile subscriptions and petabytes of daily data traffic, driven by demands for low-latency connectivity in applications from telephony to internet backhaul.[12]

Core Principles of Communication
Communication systems transmit information from a source to a destination through a channel, as formalized in Claude Shannon's 1948 model, which includes an information source generating messages, a transmitter encoding the message into a signal, the signal passing through a noisy channel, a receiver decoding the signal, and delivery to the destination.[15] This model emphasizes that noise introduces uncertainty, necessitating encoding to maximize reliable transmission rates.[15] The Shannon-Hartley theorem defines the channel capacity C as the maximum reliable transmission rate: C = B log2(1 + S/N), where B is the bandwidth in hertz, S is the signal power, and N is the noise power.[16] This formula, derived from information theory, reveals that capacity increases logarithmically with signal-to-noise ratio and linearly with bandwidth, guiding the design of systems to approach theoretical limits through efficient coding rather than brute-force power increases.[16] In digital telecommunications, the Nyquist-Shannon sampling theorem stipulates that a bandlimited signal with maximum frequency f_max must be sampled at a rate exceeding 2·f_max to enable perfect reconstruction, avoiding aliasing distortion where higher frequencies masquerade as lower ones.[17] This principle underpins analog-to-digital conversion, ensuring that sampled data captures the full information content of continuous signals, with practical implementations often using oversampling margins to account for non-ideal filters.[18] These principles extend to modulation, where signals are adapted to channel properties—such as amplitude, frequency, or phase variations—to optimize power efficiency and spectrum usage, and to error detection and correction codes that enable rates near capacity by redundantly encoding data to combat noise-induced errors.[19] Empirical validations, such as in early telephone lines achieving rates close to predicted capacities, confirm the causal role of bandwidth and noise in limiting
throughput.[20]

Historical Development
Pre-Electronic Methods
Pre-electronic telecommunications encompassed visual, acoustic, and mechanical signaling methods reliant on human observation, sound propagation, or animal carriers, predating electrical transmission. Smoke signals, one of the earliest long-distance visual techniques, involved controlled fires producing visible plumes to convey basic messages such as warnings or calls to assemble, with evidence of use among ancient North American tribes, Chinese societies, and African communities for distances up to several miles depending on visibility.[21] [22] Drums and horns provided acoustic alternatives, transmitting rhythmic patterns interpretable as coded information; African talking drums, for instance, mimicked tonal languages to relay news across villages, effective over 5-10 kilometers in forested terrain.[22] Carrier pigeons served as biological messengers, domesticated by 3000 BCE in Egypt and Mesopotamia for delivering written notes attached to their legs, leveraging innate homing instincts to cover hundreds of kilometers reliably.[23] Persians under Cyrus the Great employed them systematically around 500 BCE for military dispatches, while Romans and later Europeans adapted the method for wartime and commercial alerts, achieving success rates of about 90% under favorable conditions before being supplanted by faster alternatives.[23] Mechanical semaphore systems emerged in the 17th century for naval and military use, employing flags or arms positioned to represent letters or numbers, as proposed by Robert Hooke in 1684 but initially unadopted.[24] By the late 18th century, optical telegraph networks scaled these principles: Claude Chappe's semaphore, patented in France in 1792, used pivoting arms on towers to signal via telescope-visible codes, with the first operational line between Paris and Lille (193 km) completed in 1794, transmitting messages in minutes versus days by courier.[25] Under Napoleon, the network expanded to over 500 stations covering 3,000 km by 
1815, prioritizing military intelligence and commodity prices, though weather and line-of-sight limitations restricted reliability to clear days.[26] Similar systems appeared in Sweden (1794) and Britain (e.g., Liverpool-Holyhead line, 1820s), but electrical telegraphs rendered them obsolete by the 1840s due to superior speed, privacy, and all-weather operation.[27] Heliographs, reflecting sunlight via mirrors for Morse-like flashes, extended visual signaling into the 19th century, with British military use achieving 100+ km ranges in arid environments until radio dominance.[26]

Electrical Telegraph and Telephone Era (19th Century)
The electrical telegraph emerged from early experiments with electromagnetic signaling, with practical systems developed independently in Europe and the United States during the 1830s. In Britain, William Fothergill Cooke and Charles Wheatstone patented a five-needle telegraph in 1837, which used electric currents to deflect needles indicating letters on a board, initially deployed for railway signaling over short distances.[28] Concurrently in the United States, Samuel F. B. Morse, collaborating with Alfred Vail, refined a single-wire system using electromagnets to record messages on paper tape via dots and dashes, known as Morse code, patented in 1840. This code enabled efficient transmission without visual indicators, relying on battery-powered pulses over copper wires insulated with tarred cloth or gutta-percha.[29] The first public demonstration of Morse's telegraph occurred on May 24, 1844, when he transmitted the message "What hath God wrought" from the U.S. Capitol in Washington, D.C., to Baltimore, Maryland, over a 40-mile experimental line funded by Congress.[29][30] This event marked the viability of long-distance electrical communication, reducing transmission times from days by mail or horse to seconds, fundamentally altering news dissemination, commerce, and military coordination. By 1850, U.S. telegraph lines spanned over 12,000 miles, primarily along railroads, with companies like the Magnetic Telegraph Company consolidating networks.[31] Expansion accelerated post-1851 with the formation of Western Union, which by 1861 linked the U.S. 
coast-to-coast and by 1866 operated 100,000 miles of wire, handling millions of messages annually at rates dropping from $1 per word to fractions of a cent.[32] Internationally, submarine cables connected Britain to Ireland in 1853 and enabled the first transatlantic link in 1858, though initial attempts failed due to insulation breakdowns until a durable 1866 cable succeeded, halving New York-London communication time to minutes.[33] The telephone built upon telegraph principles but transmitted voice via varying electrical currents mimicking sound waves. Alexander Graham Bell filed a patent application on February 14, 1876, for a harmonic telegraph, but revisions incorporated liquid transmitters for speech, granted as U.S. Patent 174,465 on March 7, 1876, amid disputes with Elisha Gray, who filed a caveat hours later.[34] Bell's first successful transmission occurred on March 10, 1876, stating to assistant Thomas Watson, "Mr. Watson, come here—I want to see you," over a short indoor wire using a water-based variable resistance transmitter.[35] Early devices suffered from weak signals and distortion, limited to about 20 miles without amplification, but carbon microphones introduced by Thomas Edison in 1877 improved volume and range.[36] Telephone networks evolved through manual switchboards, first installed in Boston in 1877 by the Bell Telephone Company, where operators—predominantly young women hired for their perceived patience—physically plugged cords to connect callers, replacing direct wiring impractical for growing subscribers. By 1880, the U.S. had over 60,000 telephones, with exchanges in major cities handling hundreds of lines via multiple-switch boards; New Haven's 1878 exchange pioneered subscriber numbering.[37] Long-distance calls emerged in the 1880s using grounded circuits and repeaters, spanning 500 miles by decade's end, though attenuation required intermediate stations. 
Competition from independent exchanges spurred innovation, but Bell's patents dominated until they expired in 1894, after which rate regulation fostered universal service.[31] This era's systems prioritized reliability over speed, with telegraphy handling high-volume data and telephony enabling conversational immediacy, laying groundwork for integrated networks.[38]

Radio and Early Wireless (Late 19th to Mid-20th Century)
The experimental confirmation of electromagnetic waves, predicted by James Clerk Maxwell's equations in the 1860s, laid the groundwork for wireless communication. In 1887, Heinrich Hertz generated and detected radio waves in his laboratory using a spark-gap transmitter and a resonant receiver, demonstrating their propagation, reflection, and diffraction properties similar to light.[39][40] These experiments, conducted between 1886 and 1888, operated at wavelengths around 66 cm and frequencies in the microwave range, proving the unity of electromagnetic phenomena but initially viewed as a scientific curiosity rather than a communication tool.[41] Guglielmo Marconi adapted Hertz's principles for practical signaling, developing the first wireless telegraphy system in 1894–1895 using spark transmitters to send Morse code over distances initially limited to a few kilometers.[42] He filed his initial patent for transmitting electrical impulses wirelessly in 1896, enabling ship-to-shore communication and earning commercial viability through demonstrations, such as crossing the English Channel in 1899.[42] A milestone came on December 12, 1901, when Marconi received the Morse code letter "S" across the Atlantic Ocean from Poldhu, Cornwall, to Newfoundland, spanning 3,400 km despite atmospheric challenges; the enabling mechanism, ionospheric reflection, was only clarified later.[42] Early systems suffered from interference due to untuned spark signals occupying broad spectra, prompting the 1906 International Radiotelegraph Conference in Berlin, organized by what became the ITU, to establish basic distress frequencies like 500 kHz for maritime use.[43] Advancements in detection and amplification were crucial for extending range and enabling voice transmission.
John Ambrose Fleming invented the two-electrode vacuum tube diode in 1904, patented as an oscillation valve for rectifying radio signals in Marconi receivers.[44] Lee de Forest's 1906 Audion triode added a grid for amplification, patented in 1907, transforming weak signals into audible outputs and enabling the shift from damped spark waves to continuous-wave alternators for telephony.[44] By the 1910s, Edwin Howard Armstrong's 1913 regenerative circuit provided feedback amplification, boosting sensitivity but risking oscillation, while his 1918 superheterodyne receiver converted signals to a fixed intermediate frequency for stable tuning, becoming standard in receivers.[45] Commercial broadcasting emerged in the 1920s with amplitude modulation (AM) for voice and music. On November 2, 1920, Westinghouse's KDKA in Pittsburgh aired the first scheduled U.S. commercial broadcast, covering Harding's presidential election victory to an estimated audience of crystal set owners.[46] By 1922, over 500 stations operated worldwide, but spectrum congestion led to the 1927 Washington International Radiotelegraph Conference, which allocated bands like 550–1500 kHz for broadcasting and formalized ITU coordination to mitigate interference via wavelength assignments.[43] Armstrong's wideband frequency modulation (FM), patented in 1933, offered superior noise rejection by varying carrier frequency rather than amplitude, with experimental stations launching by 1939, though adoption lagged due to RCA's AM dominance until post-war VHF allocations.[45] During World War I and II, wireless evolved for military use, including directional antennas and shortwave propagation via skywaves for global reach, but civilian telecom focused on reliability. 
By the mid-20th century, AM dominated point-to-multipoint services, with FM gaining traction for local high-fidelity broadcasting after 1940s FCC rules reserving 88–108 MHz, enabling clearer signals over 50–100 km line-of-sight.[45] These developments shifted telecommunications from wired exclusivity to ubiquitous wireless, though early systems' low data rates—limited to Morse at 10–20 words per minute—prioritized reliability over bandwidth until tube-based amplifiers scaled power to kilowatts.[42]

Post-WWII Analog to Digital Transition
The invention of the transistor at Bell Laboratories on December 23, 1947, by John Bardeen, Walter Brattain, and William Shockley revolutionized telecommunications by enabling compact, low-power digital logic circuits that supplanted unreliable vacuum tubes, paving the way for scalable digital processing in transmission and switching systems.[47][48] Digitization of transmission began with the practical implementation of pulse-code modulation (PCM), originally devised by Alec Reeves in 1937 for secure signaling. Bell Laboratories' T1 carrier system, which sampled analog voice at 8 kHz, quantized to 8 bits per sample, and multiplexed 24 channels into a 1.544 Mbps bitstream over twisted-pair lines, entered commercial service in 1962, allowing regeneration of signals to combat cumulative noise in long-haul links.[49][50] This marked the first widespread adoption of digital telephony, initially for inter-office trunks, as analog amplification distorted signals over distance while digital encoding preserved fidelity through error detection and correction precursors. Switching transitioned from electromechanical relays to electronic stored-program control, with the No. 1 Electronic Switching System (1ESS) cut into service on January 30, 1965, in Succasunna, New Jersey, handling up to 65,000 lines via transistorized digital processors for call routing but retaining analog voice paths.[51] Full end-to-end digitalization advanced with time-division multiplexing switches like AT&T's No. 4 ESS, deployed on January 17, 1976, in Chicago, which processed both signaling and 53,760 trunks digitally, minimizing latency and enabling higher capacity through shared time slots.[52] These developments, fueled by semiconductor scaling, reduced costs by over an order of magnitude per channel and integrated voice with emerging data traffic, supplanting analog vulnerabilities to interference.[48]

Internet and Digital Networks (Late 20th Century)
The ARPANET, initiated by the U.S. Advanced Research Projects Agency (ARPA) in 1969, pioneered packet-switched networking, diverging from traditional circuit-switched telecommunications by dividing data into independently routed packets to improve efficiency and fault tolerance.[53] This network first linked UCLA and the Stanford Research Institute on October 29, 1969, with full connectivity among four initial nodes—UCLA, Stanford Research Institute, UC Santa Barbara, and the University of Utah—achieved by December 1969.[54] Packet switching concepts, formalized by Leonard Kleinrock in his 1961 paper and book, addressed bandwidth sharing and queueing theory, enabling robust data transmission across heterogeneous systems.[55] Vinton Cerf and Robert Kahn developed the TCP/IP protocol suite in the early 1970s to enable interoperability among diverse networks, publishing the seminal specification in May 1974.[56] ARPANET transitioned to TCP/IP as its standard on January 1, 1983, establishing the foundational architecture for the Internet by supporting end-to-end reliable data delivery over unreliable links.[55] This shift facilitated the connection of multiple independent networks, contrasting with the dedicated paths of analog telephony and laying groundwork for scalable digital infrastructure. 
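The packet-switching idea that distinguished ARPANET from circuit-switched telephony can be sketched in a few lines of Python. This is a toy illustration only; the function names and the 8-byte payload size are invented for the example, not drawn from any protocol. A message is split into sequence-numbered packets, the packets may arrive in any order, and the destination reassembles them.

```python
import random

def packetize(message: bytes, payload_size: int):
    """Split a message into (sequence_number, payload) packets."""
    return [
        (seq, message[pos:pos + payload_size])
        for seq, pos in enumerate(range(0, len(message), payload_size))
    ]

def reassemble(packets):
    """Sort packets by sequence number and rejoin their payloads."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets take independent routes and may arrive out of order."
packets = packetize(message, payload_size=8)
random.shuffle(packets)  # simulate independent routing: arbitrary arrival order
assert reassemble(packets) == message
```

Real protocols layer headers, checksums, retransmission, and flow control on top of this reordering core, but the fault tolerance described above follows from the same principle: no single path or time slot is essential to any one message.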
The National Science Foundation Network (NSFNET), deployed in 1986, extended high-speed TCP/IP connectivity to academic supercomputing centers, initially at 56 kbps and upgraded to T1 speeds (1.5 Mbps) by 1988, serving as a national backbone that bridged military and civilian research communities.[57] NSFNET's policies initially prohibited commercial use but evolved by 1991 to allow it, culminating in its decommissioning in 1995 as private providers assumed backbone roles, marking the commercialization of Internet infrastructure.[57] Tim Berners-Lee conceived the World Wide Web in March 1989 while at CERN, proposing a hypermedia system for information sharing using HTTP as the transfer protocol, HTML for markup, and URLs for addressing, with the first web client and server implemented in 1990 and publicly released in August 1991.[58] The Web layered atop TCP/IP networks, transforming digital telecommunications from specialized data exchange to a user-accessible global repository, with web traffic surging from negligible to dominant by the mid-1990s.[58]

Broadband and Mobile Expansion (21st Century to Present)
The 21st century marked a profound acceleration in broadband access, transitioning from narrowband dial-up connections predominant in the late 1990s to widespread high-speed services via digital subscriber line (DSL), cable modems, and eventually fiber-optic networks. DSL and cable broadband began commercial deployment in the early 2000s, with U.S. households seeing rapid uptake; by 2007, broadband subscriptions overtook dial-up in many developed markets, driven by demand for streaming and online applications.[59] Fiber-to-the-home (FTTH) deployments gained momentum in the mid-2000s, particularly in Asia, where countries like Japan and South Korea achieved early high penetration rates exceeding 20% by 2010, enabling gigabit speeds unattainable via copper infrastructure.[60] Global fixed broadband penetration reached 36.3 subscribers per 100 inhabitants in OECD countries by mid-2024, more than double the non-OECD average, reflecting disparities in infrastructure investment and regulatory environments.[61] Fiber optic adoption has surged since 2010, with the market value growing from approximately $7.72 billion in 2022 to $8.07 billion in 2023, fueled by demand for multi-gigabit services and data center interconnects.[62] By 2025, fixed and mobile broadband connections worldwide totaled 9.4 billion subscriptions, up from 3.4 billion in 2014, though urban-rural divides persist, with rural areas in OECD nations lagging in high-speed access.[63] Parallel to fixed broadband, mobile telecommunications evolved through successive generations, with third-generation (3G) networks rolling out from 2001, introducing packet-switched data services at speeds up to 2 Mbps, which supplanted 2G's circuit-switched voice focus and enabled basic mobile internet.[64] Fourth-generation (4G) Long-Term Evolution (LTE) standards were finalized in 2008, with commercial launches in 2009; by the mid-2010s, 4G dominated, offering download speeds averaging 20-100 Mbps and supporting video 
streaming and cloud services globally.[64] Fifth-generation (5G) networks, standardized by 3GPP in 2017, began commercial deployment in 2019, emphasizing ultra-reliable low-latency communication (URLLC) alongside enhanced mobile broadband (eMBB). By the end of 2024, over 340 5G networks had been launched worldwide, covering 55% of the global population, with standalone (SA) architectures enabling advanced features like network slicing.[65] As of April 2025, 5G connections exceeded 2.25 billion, representing a fourfold faster adoption rate than 4G, driven by spectrum auctions and carrier investments in sub-6 GHz and millimeter-wave bands.[66] By early 2025, 354 commercial 5G networks operated globally, with leading markets like China, the U.S., and South Korea achieving over 50% population coverage, though spectrum availability and infrastructure costs continue to hinder uniform expansion in developing regions.[67] The convergence of fixed and mobile broadband has intensified since the 2010s, with hybrid fixed-wireless access (FWA) solutions leveraging 5G for rural broadband, reducing reliance on costly fiber trenching.[61] Internet usage reached 68% of the world's population in 2024, equating to 5.5 billion users, predominantly via mobile devices in low-income areas where fixed infrastructure lags.[68] Challenges include digital divides exacerbated by regulatory hurdles and uneven investment, yet empirical evidence links 10% increases in mobile broadband penetration to 1.6% GDP per capita growth, underscoring causal economic benefits.[69]

Technical Foundations
Basic Elements of Telecommunication Systems
A basic telecommunication system consists of an information source, transmitter, transmission channel, receiver, and destination.[70] These elements form the core structure enabling the transfer of information from originator to recipient, as modeled in standard communication theory. The information source generates the original message, such as analog signals from speech (typically 300–3400 Hz bandwidth for voice telephony) or digital data packets.[71] The transmitter processes the source signal for efficient transmission, incorporating steps like signal encoding to reduce redundancy, modulation to adapt the signal to the channel (e.g., amplitude modulation for early radio systems transmitting at carrier frequencies around 500–1500 kHz), and amplification to boost power levels, often up to several kilowatts for long-distance broadcast. Source encoding compresses data using techniques like pulse-code modulation, which digitizes analog voice by sampling at 8 kHz per the Nyquist theorem (twice the highest frequency), yielding 64 kbps bit rates in early digital telephony standards.[71] The transmission channel serves as the physical or propagation medium conveying the modulated signal, categorized as guided (e.g., twisted-pair copper wires supporting up to 100 Mbps in Ethernet over distances of 100 meters) or unguided (e.g., free-space radio waves at 900 MHz for cellular, prone to attenuation over 1–10 km paths).[71] Channel characteristics, including bandwidth (e.g., 4 kHz for telephone lines) and susceptibility to noise, dictate system capacity via Shannon's theorem, where the maximum data rate is C = B log₂(1 + S/N) bits per second, with B as bandwidth and S/N as signal-to-noise ratio. At the receiving end, the receiver reconstructs the original message
by reversing transmitter operations: demodulation extracts the baseband signal (e.g., via envelope detection for AM), decoding restores data with error correction (e.g., forward error correction codes achieving very low bit error rates in modern systems), and output transduction converts electrical signals to human-perceptible forms like audio via speakers.[70] The destination interprets the recovered message, such as a user's ear receiving reconstructed voice or a computer processing digital bits.[71] In operation, these elements interact causally: the source drives transmitter modulation, channel propagation introduces distortions (quantifiable as path loss in dB, e.g., 20 log₁₀(d) for free space), and the receiver compensates via filtering and equalization to minimize mean squared error between input and output signals. Early systems, like Samuel Morse's 1837 telegraph using on-off keying at 10–40 words per minute, exemplified these basics with a manual key as transmitter and sounder as receiver over wire channels spanning 20 miles before repeaters.[71] Modern extensions include multiplexing to share channels among multiple sources, but the foundational chain remains invariant across wired and wireless implementations.[70]

Analog Versus Digital Communications
Analog communication systems transmit information using continuous signals where the amplitude, frequency, or phase varies proportionally with the message, such as in amplitude modulation (AM) or frequency modulation (FM) for radio broadcasting.[72] These signals represent real-world phenomena like voice or music directly through electrical voltages that mimic the original waveform, but they degrade cumulatively due to noise and attenuation during propagation, as each amplification introduces further distortion without inherent recovery mechanisms.[73] In telecommunications, analog systems dominated early telephone networks and analog television, where signal fidelity diminishes over distance, limiting reliable transmission range without repeaters that exacerbate errors.[74] Digital communication systems, by contrast, convert analog information into discrete binary sequences through sampling, quantization, and encoding, transmitting data as sequences of 0s and 1s via techniques like phase-shift keying (PSK) or quadrature amplitude modulation (QAM).[75] This discretization enables signal regeneration at intermediate points, where received pulses are reshaped to ideal square waves, effectively eliminating accumulated noise up to a threshold determined by the signal-to-noise ratio.[76] Error detection and correction codes, such as Reed-Solomon or convolutional codes, further enhance reliability by identifying and repairing bit errors, achieving bit error rates as low as 10^{-9} in practical systems like fiber-optic links.[77]

| Aspect | Analog Signals | Digital Signals |
|---|---|---|
| Signal Nature | Continuous waveform | Discrete binary pulses |
| Noise Handling | Additive degradation; no recovery | Regenerable; threshold-based restoration |
| Error Correction | None inherent | Built-in via coding (e.g., FEC) |
| Bandwidth Efficiency | Fixed per channel; prone to crosstalk | Supports multiplexing, compression |
| Implementation Cost | Lower initial hardware | Higher due to A/D conversion, processing |
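The noise-handling contrast in the table can be made concrete with a small simulation, offered as an illustrative sketch rather than a model of any real transmission system: a ±1 binary signal crosses 20 noisy segments, either amplified as-is (analog-style, so noise accumulates) or re-decided at each repeater (digital regeneration). All parameter values here are invented for the demonstration.

```python
import random

random.seed(7)

def send_over_hops(bits, hops, sigma, regenerate):
    """Carry a +/-1 signal across `hops` noisy segments.

    regenerate=False mimics analog amplification (noise accumulates);
    regenerate=True mimics a digital repeater (bit re-decided each hop).
    """
    levels = [1.0 if b else -1.0 for b in bits]
    for _ in range(hops):
        levels = [v + random.gauss(0.0, sigma) for v in levels]
        if regenerate:
            levels = [1.0 if v > 0 else -1.0 for v in levels]
    return [v > 0 for v in levels]

def error_rate(sent, received):
    return sum(a != b for a, b in zip(sent, received)) / len(sent)

bits = [random.random() < 0.5 for _ in range(10_000)]
analog = send_over_hops(bits, hops=20, sigma=0.3, regenerate=False)
digital = send_over_hops(bits, hops=20, sigma=0.3, regenerate=True)
print(f"analog-style error rate:  {error_rate(bits, analog):.4f}")
print(f"digital-regen error rate: {error_rate(bits, digital):.4f}")
```

Because each regeneration resets the noise, the digital path's error rate stays near the small per-hop error probability, while the analog path's accumulated noise grows with the square root of the hop count, which is the threshold-based restoration the table summarizes.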
Communication Channels and Transmission Media
Communication channels in telecommunications systems refer to the distinct paths or frequency allocations through which signals conveying information are transmitted between endpoints, while transmission media denote the physical or propagative substances—such as wires, cables, or air—that carry these signals. These media are fundamentally divided into guided types, which direct signals along a tangible conduit to minimize dispersion, and unguided types, which rely on electromagnetic wave propagation through space without physical guidance.[81] Guided media ensure more predictable signal integrity over defined paths, whereas unguided media offer flexibility but contend with environmental variables like interference and attenuation.[82] Guided transmission media encompass twisted-pair cables, coaxial cables, and optical fiber cables, each suited to varying capacities and distances based on their construction and signal propagation mechanisms. Twisted-pair cables, formed by pairing insulated copper conductors twisted to counteract electromagnetic interference, dominate short-range applications like local area networks and telephony. Category 6 twisted-pair supports bandwidths up to 250 MHz and data rates of 10 Gbps over 55 meters, though attenuation increases with frequency, necessitating amplification beyond 100 meters.[83][84] Their low cost and ease of installation make them prevalent, despite vulnerability to crosstalk and external noise.[85] Coaxial cables, with a central conductor encased in insulation, a metallic shield, and an outer jacket, provide superior shielding against interference compared to twisted-pair, enabling bandwidths into the GHz range for applications such as cable television and high-speed internet. 
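Bandwidth figures like these set hard rate ceilings through the Shannon-Hartley theorem discussed earlier. A minimal sketch follows; the 4 kHz / 30 dB inputs correspond to the classic voice-grade telephone channel mentioned in the preceding section, not to the cables above, and the function name is invented for illustration.

```python
from math import log2

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley limit C = B * log2(1 + S/N), with SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * log2(1 + snr_linear)

# Voice-grade telephone channel: ~4 kHz bandwidth at ~30 dB SNR
print(f"{shannon_capacity(4_000, 30) / 1_000:.1f} kbps")  # ≈ 39.9 kbps
```

Reaching multi-gigabit rates over copper therefore requires some combination of wider bandwidth, higher SNR, and parallel conductors, which is why high-speed Ethernet over twisted pair transmits across multiple pairs simultaneously.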
Coaxial links achieve data rates exceeding 1 Gbps in modern DOCSIS systems, with attenuation typically around 0.5 dB per 100 meters at 1 GHz frequencies.[86][87] However, their rigidity and higher installation complexity limit use to fixed infrastructures.[88] Optical fiber cables propagate signals as modulated light pulses through glass or plastic cores, yielding exceptionally low attenuation—approximately 0.2 dB/km at 1550 nm—and vast bandwidths, with laboratory demonstrations reaching 402 Tb/s over standard single-mode fibers.[89] Commercial deployments routinely support 100 Gbps over tens of kilometers without repeaters, immune to electromagnetic interference and enabling secure, high-capacity long-haul transmission critical for internet backbones.[90] Drawbacks include higher costs and susceptibility to physical damage.[87] Unguided transmission media utilize radio frequency, microwave, or infrared waves broadcast into the atmosphere or space, facilitating wireless connectivity but requiring spectrum management to mitigate interference.
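Link budgets for unguided media start from free-space path loss and propagation delay. A back-of-envelope sketch: the 35,786 km geostationary altitude is the standard figure, while the 12 GHz carrier and directly-overhead geometry are assumptions made for illustration.

```python
from math import log10, pi

C = 299_792_458.0  # speed of light in vacuum, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    return 20 * log10(4 * pi * distance_m * freq_hz / C)

def geo_round_trip_ms(altitude_m: float = 35_786_000.0) -> float:
    """Uplink-plus-downlink propagation delay for a satellite directly overhead."""
    return 2 * altitude_m / C * 1_000

print(f"path loss, 12 GHz over GEO distance: {fspl_db(35_786_000, 12e9):.0f} dB")
print(f"GEO bent-pipe round trip: {geo_round_trip_ms():.0f} ms")
```

The computed round trip of roughly 239 ms sits just below the 240-270 ms observed on real geostationary links, which traverse slanted paths and add processing delay; the ~200 dB path loss explains the high-gain dish antennas such links require.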
Radio waves (3 kHz to 300 GHz) offer omnidirectional propagation and obstacle penetration, underpinning cellular networks and broadcasting with data rates from Mbps to Gbps, though multipath fading and shared spectrum reduce reliability.[91] Terrestrial microwaves, operating above 1 GHz in line-of-sight configurations, deliver gigabit backhaul links over tens of kilometers, cheaper than cabling for remote terrains but vulnerable to atmospheric conditions.[92] Satellite systems, employing microwave bands via orbiting transponders, provide ubiquitous coverage for voice, data, and video in underserved regions, with geostationary orbits incurring 240-270 ms latency due to 36,000 km altitudes.[93] Infrared, limited to line-of-sight short ranges under 10 meters, suits indoor point-to-point links like remotes but fails outdoors due to sunlight absorption.[94] Overall, unguided media prioritize mobility and scalability at the expense of signal control and security compared to guided alternatives.[95]

Modulation, Multiplexing, and Signal Processing
Modulation refers to the process of encoding an information-bearing baseband signal onto a higher-frequency carrier wave to facilitate efficient transmission over a communication channel, typically by varying the carrier's amplitude, frequency, or phase.[96] This technique shifts the signal spectrum to a passband centered around the carrier frequency, enabling propagation through media like air or cables where low-frequency signals would attenuate excessively due to physical limitations such as skin effect in conductors or free-space path loss.[96] Analog modulation methods, such as amplitude modulation (AM), frequency modulation (FM), and phase modulation (PM), directly vary the carrier in proportion to the continuous modulating signal, with FM offering superior noise immunity by preserving signal power during frequency deviations.[97] Digital modulation, prevalent in modern systems, discretizes the process using schemes like amplitude-shift keying (ASK), frequency-shift keying (FSK), and phase-shift keying (PSK), where binary or higher-order symbols map to discrete carrier states; advanced variants like quadrature amplitude modulation (QAM) combine amplitude and phase shifts to achieve spectral efficiencies up to 10 bits per symbol in applications such as Wi-Fi and cable modems.[98][97] Multiplexing allows multiple independent signals to share a single communication channel, maximizing resource utilization by partitioning the medium's capacity—whether bandwidth, time, or code—among users without mutual interference.[99] Frequency-division multiplexing (FDM) allocates distinct frequency sub-bands to each signal within the channel's total bandwidth, using bandpass filters for separation, as historically applied in analog telephony to combine voice lines over coaxial cables.[99] Time-division multiplexing (TDM), suited to digital systems, synchronizes signals by interleaving fixed-duration slots, enabling efficient statistical multiplexing in packet networks where 
variable traffic loads are accommodated via dynamic allocation, as in T1/E1 carrier systems carrying 24 or 30 voice channels at 1.544 or 2.048 Mbps, respectively.[100] Wavelength-division multiplexing (WDM) extends this to optical fibers by superimposing signals on different laser wavelengths, achieving terabit-per-second capacities in dense WDM (DWDM) systems with up to 80 channels spaced 50 GHz apart, limited primarily by fiber dispersion and nonlinear effects.[100] Code-division multiplexing (CDM), using orthogonal codes like Walsh sequences, permits simultaneous transmission over the full bandwidth, as in CDMA cellular standards, where signal separation relies on despreading with the correct code to suppress interference from others.[99] Signal processing encompasses the mathematical and algorithmic manipulation of signals to mitigate impairments, extract information, and adapt to channel conditions in telecommunication systems.[101] Core operations include linear filtering via finite impulse response (FIR) or infinite impulse response (IIR) filters to suppress noise or intersymbol interference, as quantified by the signal-to-noise ratio (SNR) improvement of up to 10-20 dB in adaptive equalizers for dispersive channels.[102] Analog-to-digital conversion precedes digital signal processing (DSP), involving sampling at rates exceeding the Nyquist frequency (twice the signal bandwidth) to avoid aliasing, followed by quantization and encoding; in telecom, oversampling by factors of 4-8 reduces quantization noise in applications like voice codecs compressing 64 kbps PCM to 8 kbps via techniques such as linear predictive coding.[103] DSP enables advanced functions like echo cancellation in full-duplex telephony, where adaptive algorithms generate anti-phase replicas of delayed echoes to null them within 0.5-32 ms delays, and forward error correction (FEC) using convolutional or Reed-Solomon codes to achieve bit error rates below 10^{-9} in satellite links despite 
10-20 dB fading.[103][102] Modern implementations leverage field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) for real-time processing at gigasample rates, underpinning software-defined radios that dynamically reconfigure modulation and multiplexing parameters.[101]

Propagation, Noise, and Error Correction
Signal propagation in telecommunications refers to the physical mechanisms by which electromagnetic waves or electrical signals travel from transmitter to receiver through various media, governed by Maxwell's equations and influenced by factors such as frequency, distance, and environmental conditions. In free space, propagation loss follows the Friis transmission equation, which quantifies received power as P_r = P_t G_t G_r (λ/(4πd))², where P_t is transmitted power, G_t and G_r are transmitter and receiver antenna gains, λ is wavelength, and d is distance; this demonstrates path loss scaling with the square of distance and with the square of frequency due to the smaller effective aperture at higher frequencies.[104] Real-world scenarios introduce additional impairments like multipath fading, where signals reflect off surfaces causing constructive or destructive interference, and attenuation from absorption in atmosphere or obstacles, particularly pronounced at millimeter waves above 30 GHz where oxygen and water vapor absorption peaks.[105] Ground wave and sky wave modes enable beyond-line-of-sight propagation at lower frequencies via surface diffraction or ionospheric reflection, respectively, as utilized in AM radio broadcasting since the early 20th century.[106] Noise degrades signal integrity by adding unwanted random fluctuations, limiting the signal-to-noise ratio (SNR) and thus the achievable data rate per the Shannon-Hartley theorem, which states channel capacity C = B log₂(1 + S/N), with B as bandwidth and S/N as SNR; this establishes the theoretical maximum error-free bitrate over a noisy channel, derived from probabilistic limits on distinguishable signal states amid Gaussian noise.[107] Primary noise types include thermal noise, arising from random electron motion in conductors and quantified by N = kTB (k Boltzmann's constant, T temperature in Kelvin, B bandwidth), which sets a fundamental floor at room temperature of about -174 dBm/Hz; shot noise from discrete charge carrier flow in semiconductors; and interference from 
external sources like co-channel signals or electromagnetic emissions.[108] Impulse noise, such as lightning-induced spikes, and crosstalk between adjacent channels further corrupt signals, with effects that accumulate as distortion in analog systems but are mitigated in digital systems by thresholding.[109] Error correction techniques counteract noise-induced bit errors by introducing redundancy, enabling detection and repair without retransmission in forward error correction (FEC) or via feedback in automatic repeat request (ARQ). FEC employs block codes like Reed-Solomon, which correct up to t symbol errors in codewords of length n with dimension k (t = (n-k)/2), widely applied in DSL modems since the 1990s and satellite links for burst error resilience up to 25% overhead; convolutional codes with Viterbi decoding achieve near-Shannon efficiency in continuous streams, as in 3G cellular standards.[110] Hybrid ARQ combines FEC with ARQ, using cyclic redundancy checks (CRC) for error detection and retransmission requests, as implemented in LTE protocols, retransmitting when initial FEC fails, balancing latency and throughput—FEC suits high-delay links like deep space (e.g., Voyager probes using concatenated Reed-Solomon and convolutional codes since 1977), while ARQ dominates reliable wired networks.[110] Modern low-density parity-check (LDPC) codes, approaching capacity within 0.5 dB under iterative decoding, underpin 5G NR standards for enhanced spectral efficiency amid variable noise.[110] These methods causally link redundancy investment to error probability reduction, with coding gain measured in dB improvement over uncoded BER, empirically verified in standards like ITU-T G.709 for optical transport since 2003.[110]
Network Architectures and Protocols
Circuit-Switching and Packet-Switching Paradigms
Circuit switching establishes a dedicated end-to-end communications path, or circuit, between two nodes prior to data transmission, reserving that path exclusively for the duration of the session regardless of actual usage.[111] This technique allocates fixed bandwidth and resources upon connection setup, typically via signaling protocols that route the call through switches, ensuring constant connectivity once established.[112] In telecommunications, circuit switching underpins the Public Switched Telephone Network (PSTN), operational since the late 19th century with manual switchboards and evolving to automated electromechanical systems by 1891, where calls traverse dedicated 64 kbps DS0 channels aggregated into higher-rate trunks like T1 (1.544 Mbps) introduced in 1962.[112][113] The paradigm guarantees low, predictable latency—often under 150 ms end-to-end—and minimal jitter, making it suitable for constant bit rate (CBR) applications such as traditional analog and digital voice telephony, where interruptions could degrade quality.[114] Resource setup involves three phases: connection establishment (via signaling like SS7 in PSTN), data transfer, and teardown, with the circuit's capacity wasted during idle pauses, since speakers in typical phone conversations talk only 35-50% of the time.[115] This results in poor scalability for bursty or intermittent traffic, as unshared links lead to overprovisioning; for instance, early PSTN networks required separate lines per simultaneous call, limiting capacity in high-demand scenarios.[116] Packet switching, conversely, fragments messages into discrete packets—each containing header data for routing, sequence numbering, and payload—transmitted asynchronously across shared network links, with independent routing and reassembly at the receiver.[117] Originating from Paul Baran's 1964 RAND Corporation reports on distributed networks for nuclear survivability and independently from Donald Davies' 1965 work at the UK 
National Physical Laboratory, where he coined the term "packet," the method emphasized statistical multiplexing to exploit idle periods on links.[117][118] Its first large-scale deployment occurred in the ARPANET on October 29, 1969, using 1822 protocol interfaces at 50 kbps speeds, demonstrating resilience through alternate pathing amid failures.[117] This approach optimizes resource utilization via dynamic bandwidth allocation, achieving up to 80-90% link efficiency for variable bit rate (VBR) data traffic compared to circuit switching's 30-40%, as packets from multiple flows interleave without dedicated reservations.[116] Fault tolerance arises from distributed routing, where packets reroute around congestion or outages using protocols like those in TCP/IP, ratified in 1983 for ARPANET's evolution into the Internet.[118] Drawbacks include variable delays (queuing latency averaging 10-100 ms, potentially higher under load) and packet loss (1-5% without error correction), necessitating overhead for acknowledgments, retransmissions, and quality-of-service mechanisms like DiffServ or MPLS in telecom backbones.[115] Fundamentally, circuit switching prioritizes connection-oriented reliability for delay-sensitive, symmetric flows like circuit-based ISDN (deployed 1988 at 144 kbps) or early GSM voice (2G, 1991), while packet switching excels in store-and-forward efficiency for asymmetric, bursty data, powering IP networks that handle 99% of global internet traffic by 2023 volumes exceeding 4.5 zettabytes annually.[116] Hybrid models, such as NGNs with IMS (IP Multimedia Subsystem, standardized 2004), overlay packet cores on legacy circuits, enabling VoIP to emulate circuit guarantees via RTP/RTCP with jitter buffers, reducing PSTN reliance as global fixed-line subscriptions fell 20% from 2010-2020.[119] The shift reflects causal trade-offs: circuit's fixed allocation suits CBR but wastes capacity, whereas packet's opportunistic sharing scales economically but demands 
buffering for real-time needs.[115]
Wired Infrastructure: Copper, Coaxial, and Fiber Optics
Copper twisted pair cables form the basis of traditional telephone infrastructure, enabling voice and data transmission through electrical signals over insulated wire pairs twisted to reduce electromagnetic interference. Developed for telephony in the late 19th century, these cables support digital subscriber line (DSL) technologies, achieving downstream speeds of up to 300 Mbps under optimal conditions with very-high-bit-rate DSL (VDSL), though performance degrades significantly beyond 1-2 kilometers due to signal attenuation and noise.[120] The 100-meter limit for high-speed Ethernet over twisted pair, as standardized by ANSI/TIA-568, further constrains their use in local area networks without repeaters.[121] Coaxial cables, featuring a central conductor surrounded by a metallic shield, provide higher bandwidth than twisted pair and have been integral to cable television systems since the mid-20th century, later adapted for broadband internet via the Data Over Cable Service Interface Specification (DOCSIS). 
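The rapid falloff with loop length described above can be illustrated with a toy link budget. This is a minimal sketch, not a cable model from any standard: the 15 dB/km attenuation figure and 0 dBm launch power are assumed purely for illustration (real twisted-pair loss varies strongly with frequency, wire gauge, and cable construction).

```python
# Toy link-budget sketch for the distance sensitivity of copper loops.
# ASSUMED_LOSS_DB_PER_KM is a hypothetical, illustrative figure only.

def received_power_dbm(tx_power_dbm: float, atten_db_per_km: float, distance_km: float) -> float:
    """Received power after a cable run with uniform attenuation in dB/km."""
    return tx_power_dbm - atten_db_per_km * distance_km

ASSUMED_LOSS_DB_PER_KM = 15.0  # assumption for illustration, not a spec value

for d in (0.5, 1.0, 2.0, 5.0):
    p = received_power_dbm(0.0, ASSUMED_LOSS_DB_PER_KM, d)
    print(f"{d:.1f} km: {p:.1f} dBm")
```

Because loss in dB is linear in distance, the received power in watts decays exponentially with loop length, which is why DSL rates collapse beyond a couple of kilometers.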
Introduced by CableLabs in 1997, DOCSIS enables hybrid fiber-coaxial (HFC) networks to deliver downstream speeds exceeding 1 Gbps with DOCSIS 3.1 and up to 10 Gbps with DOCSIS 4.0, utilizing frequency division multiplexing over spectrum up to 1.2 GHz or more.[122][123] Despite these capabilities, coaxial signals require amplification every few kilometers to counter attenuation, and shared medium architecture can lead to contention during peak usage.[124] Fiber optic cables transmit data as pulses of light through glass or plastic cores, offering vastly superior performance with minimal attenuation—typically 0.2-0.3 dB/km at 1550 nm wavelength—allowing reliable transmission over tens of kilometers without repeaters.[125] Deployed extensively in backbone networks since the 1980s, fiber supports terabit-per-second aggregate capacities via wavelength-division multiplexing and enables symmetric gigabit speeds in fiber-to-the-home (FTTH) setups, far outpacing copper and coaxial in bandwidth and immunity to electromagnetic interference.[126][127] While initial deployment costs are higher due to specialized splicing and termination, fiber's longevity and scalability position it as the preferred medium for modern high-capacity telecommunications infrastructure.[128]
Wireless Systems: Cellular Generations, Wi-Fi, and Satellites
Wireless systems in telecommunications facilitate communication without wired connections, leveraging radio frequency spectrum to transmit signals over air or space, enabling mobility, scalability, and coverage in remote areas. These systems include cellular networks for wide-area mobile voice and data, Wi-Fi for short-range local connectivity, and satellite links for global reach, often integrating with terrestrial infrastructure to form hybrid networks. Key challenges involve spectrum allocation, interference mitigation, signal propagation losses, and achieving high data rates amid increasing demand from devices like smartphones and IoT sensors.[129]
Cellular Generations
Cellular networks evolved through generations defined by the International Telecommunication Union (ITU) under International Mobile Telecommunications (IMT) standards, transitioning from analog voice to digital broadband with enhanced capacity and efficiency. First-generation (1G) systems, deployed in the late 1970s to 1980s, used analog modulation for voice-only services; Japan's NTT launched the world's first cellular network in Tokyo on July 1, 1979, followed by AMPS in the US in 1983, offering limited capacity with frequencies around 800 MHz and handover capabilities but prone to eavesdropping due to unencrypted signals.[64] Second-generation (2G) networks, introduced in 1991 with GSM in Finland, shifted to digital time-division multiple access (TDMA) or code-division multiple access (CDMA), enabling encrypted voice, SMS, and basic data at speeds up to 9.6-14.4 kbps, using 900/1800 MHz bands for improved spectral efficiency and global roaming.[64][130] Enhancements like GPRS and EDGE (2.5G) boosted data to 384 kbps by the early 2000s. 
Third-generation (3G) systems, standardized as IMT-2000 and launched commercially in 2001 (e.g., UMTS in Japan), supported mobile internet and video calls with wideband CDMA (WCDMA) or CDMA2000, achieving peak speeds of 384 kbps to 2 Mbps in 1.8-2.1 GHz bands, though real-world performance often lagged due to early infrastructure limits.[64][131] Fourth-generation (4G) LTE, defined under IMT-Advanced and rolled out from 2009, employed orthogonal frequency-division multiplexing (OFDM) for all-IP packet-switched networks, delivering downlink speeds up to 100 Mbps (theoretical 1 Gbps) in sub-6 GHz bands, facilitating streaming and cloud services with lower latency around 50 ms.[129][64] Fifth-generation (5G) New Radio (NR), standardized as IMT-2020 and commercially deployed from 2019, uses flexible sub-6 GHz and mmWave (24-40 GHz) spectrum for peak theoretical speeds of 20 Gbps, ultra-reliable low-latency communication (<1 ms), and massive machine-type communications supporting up to 1 million devices per km², enabling applications like autonomous vehicles and AR/VR; by April 2025, global 5G connections exceeded 2.25 billion, with adoption accelerating fourfold faster than 4G.[129][66] Development of 6G, focusing on terahertz frequencies and AI-integrated networks for 100 Gbps+ speeds, began standardization in 3GPP Release 20 in 2025, with commercial trials expected by 2028 and services around 2030.[132][133]
| Generation | Key Introduction Year | Primary Technologies | Peak Theoretical Downlink Speed | Latency (Typical) |
|---|---|---|---|---|
| 1G | 1979-1983 | Analog FDMA (AMPS) | Voice (~2.4 kbps equiv.) | N/A |
| 2G | 1991 | Digital TDMA/CDMA (GSM) | 14.4-384 kbps (with EDGE) | 100-500 ms |
| 3G | 2001 | WCDMA/CDMA2000 | 2 Mbps | 100-500 ms |
| 4G | 2009 | LTE OFDM | 1 Gbps | ~50 ms |
| 5G | 2019 | NR (sub-6/mmWave) | 20 Gbps | <1 ms |
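The capacity jumps across the generations in the table above come largely from wider channels and better SNR exploitation, as the Shannon-Hartley limit C = B log₂(1 + S/N) makes explicit. A minimal sketch follows; the channel widths and SNR values are illustrative assumptions for comparison, not standardized figures.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed, illustrative carrier widths and SNRs (dB) for each era:
scenarios = [
    ("2G-style 200 kHz carrier", 200e3, 10.0),
    ("4G-style 20 MHz carrier", 20e6, 20.0),
    ("5G-style 400 MHz mmWave carrier", 400e6, 20.0),
]

for label, bw, snr_db in scenarios:
    c = shannon_capacity_bps(bw, 10 ** (snr_db / 10))
    print(f"{label}: {c / 1e6:.1f} Mbps ceiling")
```

Note that a 100-fold increase in bandwidth lifts the ceiling 100-fold at fixed SNR, whereas SNR only helps logarithmically, which is why later generations chase ever-wider spectrum.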
Wi-Fi
Wi-Fi, based on IEEE 802.11 standards, provides unlicensed spectrum-based wireless local area networking (WLAN) for indoor and short-range outdoor use, typically in 2.4 GHz, 5 GHz, and emerging 6 GHz bands, with backward compatibility across amendments. The initial 802.11 standard, ratified in 1997, supported raw data rates of 1-2 Mbps using direct-sequence spread spectrum (DSSS) in the 2.4 GHz ISM band, suitable for basic Ethernet replacement but limited by interference from devices like microwaves.[134][135] Subsequent amendments improved throughput and range: 802.11b (1999) boosted speeds to 11 Mbps via complementary code keying (CCK) in 2.4 GHz, enabling early consumer adoption; 802.11a (1999) introduced 54 Mbps OFDM in 5 GHz for less congested channels but shorter range; 802.11g (2003) combined 54 Mbps OFDM with 2.4 GHz compatibility. Later, 802.11n (2009) added MIMO and 40 MHz channels for up to 600 Mbps across dual bands; 802.11ac (Wi-Fi 5, 2013) focused on 5 GHz with wider 160 MHz channels and multi-user MIMO for gigabit speeds; 802.11ax (Wi-Fi 6, 2019) enhanced efficiency in dense environments via OFDMA and target wake time, achieving up to 9.6 Gbps. Wi-Fi 6E extends to 6 GHz for additional spectrum, reducing congestion in high-device scenarios.[135][134]
| Standard (Wi-Fi Name) | Ratification Year | Bands (GHz) | Max PHY Rate |
|---|---|---|---|
| 802.11 | 1997 | 2.4 | 2 Mbps |
| 802.11b | 1999 | 2.4 | 11 Mbps |
| 802.11a | 1999 | 5 | 54 Mbps |
| 802.11g | 2003 | 2.4 | 54 Mbps |
| 802.11n | 2009 | 2.4/5 | 600 Mbps |
| 802.11ac (Wi-Fi 5) | 2013 | 5 | 6.9 Gbps |
| 802.11ax (Wi-Fi 6) | 2019 | 2.4/5 | 9.6 Gbps |
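The 9.6 Gbps entry for Wi-Fi 6 can be reconstructed from OFDM parameters (data subcarriers × bits per subcarrier × coding rate × spatial streams ÷ symbol duration). The values below are the commonly cited 802.11ax best-case figures, taken here as assumptions: 1960 data subcarriers in a 160 MHz channel, 1024-QAM (10 bits per subcarrier), rate-5/6 coding, 8 spatial streams, and a 13.6 µs symbol (12.8 µs plus a 0.8 µs guard interval).

```python
def phy_rate_bps(data_subcarriers: int, bits_per_subcarrier: int,
                 coding_rate: float, spatial_streams: int,
                 symbol_time_s: float) -> float:
    """OFDM PHY rate: subcarriers * modulation bits * code rate * streams / symbol time."""
    return data_subcarriers * bits_per_subcarrier * coding_rate * spatial_streams / symbol_time_s

# Assumed 802.11ax best-case parameters (see lead-in above).
rate = phy_rate_bps(1960, 10, 5 / 6, 8, 13.6e-6)
print(f"{rate / 1e9:.2f} Gbps")  # lands within ~0.1% of the advertised 9.6 Gbps
```

The same arithmetic with 468 data subcarriers, 256-QAM, and a 3.6 µs symbol recovers 802.11ac-era rates, illustrating how each amendment's headline number decomposes into these few knobs.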
Satellites
Satellite communications use orbiting transponders to relay signals globally, classified by altitude: geostationary Earth orbit (GEO) at 35,786 km for fixed coverage with high latency (~250 ms round-trip due to signal distance), medium Earth orbit (MEO) at 8,000-20,000 km for balanced trade-offs, and low Earth orbit (LEO) at 500-2,000 km for low latency (20-50 ms) and dynamic beamforming. GEO systems, dominant since the 1960s, offer high per-satellite capacity (e.g., up to several Gbps per transponder in Ku/Ka bands) for broadcasting and backhaul but require large antennas and suffer rain fade; examples include Inmarsat for maritime/aero services.[136][137] LEO and MEO constellations address GEO limitations via mega-constellations: Iridium (LEO, operational since 1998) provides voice/data with <40 ms latency but modest throughput (~64 kbps historically, upgraded to broadband); Starlink (SpaceX, deploying ~6,000+ satellites by 2025) delivers consumer broadband at 100+ Mbps with low latency to underserved areas using phased-array user terminals; OneWeb (MEO/LEO hybrid) targets enterprise connectivity with similar Ka-band capacities. These non-geostationary orbits (NGSO) enhance global coverage and capacity through inter-satellite links but demand frequent handovers and regulatory spectrum coordination to mitigate interference with terrestrial systems.[137][136] Raisting Earth station exemplifies GEO satellite uplink facilities, handling high-power transmission for transatlantic links.[137]
Core Networks, Routing, and Interconnection
The core network in telecommunications serves as the central backbone that interconnects access networks, handles high-capacity data routing, switching, and service management functions, enabling efficient transport of voice, data, and multimedia traffic across vast distances.[138] Traditionally rooted in circuit-switched Public Switched Telephone Network (PSTN) architectures using time-division multiplexing (TDM), core networks have evolved toward packet-switched IP/Multi-Protocol Label Switching (MPLS) designs in Next Generation Networks (NGN), where all traffic is encapsulated as IP packets for convergence of services.[139] This shift, accelerated since the early 2000s, replaces disparate legacy elements like circuit switches with unified IP routers and gateways, reducing operational complexity and enabling scalability for broadband demands.[138] Routing within core networks relies on dynamic protocols to determine optimal paths for packet forwarding, distinguishing between interior gateway protocols (IGPs) for intra-domain efficiency and exterior gateway protocols (EGPs) for inter-domain connectivity. 
Open Shortest Path First (OSPF), a link-state IGP standardized by the Internet Engineering Task Force (IETF) in RFC 1131 in 1989 and refined in OSPFv2 (RFC 2328, 1998), computes shortest paths using Dijkstra's algorithm based on link costs, making it suitable for large, hierarchical core topologies where rapid convergence—typically under 10 seconds—is critical.[140] Border Gateway Protocol (BGP), the de facto EGP introduced in 1989 (RFC 1105) and matured as BGP-4 in RFC 1771 (1994), manages routing between autonomous systems (ASes) by exchanging policy-based path attributes like AS-path length, enabling the global Internet's scale with over 100,000 ASes advertised as of 2023.[141] These protocols operate at OSI Layer 3, with OSPF flooding link-state advertisements for topology awareness and BGP using path-vector mechanisms to prevent loops, though BGP's policy flexibility has led to vulnerabilities like route leaks, prompting enhancements such as Resource Public Key Infrastructure (RPKI) adoption since 2011.[142] Interconnection between core networks occurs through peering and transit arrangements at points of presence (PoPs) or Internet Exchange Points (IXPs), facilitating traffic exchange without universal reliance on third-party intermediaries. 
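OSPF's shortest-path computation described above reduces to running Dijkstra's algorithm over the link costs flooded in link-state advertisements. A minimal sketch on a hypothetical four-router topology (router names and costs are invented for illustration):

```python
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Least-cost distance from source to every reachable node,
    as an OSPF SPF run computes them from link-state data."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical four-router area; costs mimic OSPF's inverse-bandwidth metric.
topology = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 5},
    "R4": {"R2": 1, "R3": 5},
}
print(dijkstra(topology, "R1"))
```

With these costs, R1 reaches R4 through R3 at total cost 6, and even R2 is cheaper via R3 and R4 (cost 7) than over the direct cost-10 link, mirroring how OSPF prefers multi-hop paths over slow direct links.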
Settlement-free peering, where networks mutually exchange local traffic without payment, predominates for balanced ratios, reducing latency and costs compared to paid IP transit, where a customer pays an upstream provider for full Internet reachability—global transit prices fell from $0.50 per Mbps in 2010 to under $0.20 by 2023 due to fiber overbuilds and content delivery shifts.[143] Public peering at IXPs, such as those hosted by Equinix or DE-CIX, aggregates hundreds of participants for efficient multilateral exchange, handling exabytes of traffic monthly; for instance, AMS-IX processed over 10 Tbps peak in 2022.[144] These models, evolved from bilateral agreements in the 1990s NAP era, underpin Internet resilience but raise disputes over paid peering impositions, as seen in the 2014 Comcast-Netflix settlement, underscoring causal dependencies on traffic imbalances for negotiation leverage.[145]
Modern Technologies and Applications
Voice and Telephony Evolution
The telephone, enabling electrical transmission of voice over wires, was patented by Alexander Graham Bell on March 7, 1876, as U.S. Patent No. 174,465 for an "improvement in telegraphy."[34] Initial systems used analog signals, where voice waveforms were directly modulated onto electrical currents via carbon microphones and transmitted point-to-point over twisted copper pairs, forming the basis of plain old telephone service (POTS).[36] By 1878, the first commercial telephone exchange operated in New Haven, Connecticut, using manual switchboards operated by human operators to connect calls via electromagnetic relays.[146] Automation advanced with Almon Brown Strowger's 1891 electromechanical stepping switch, which eliminated operator intervention for local calls by using dialed impulses to route circuits.[147] Long-distance analog transmission expanded through loaded cables and repeaters in the early 1900s, with the first transcontinental U.S. call in 1915 relying on vacuum-tube amplifiers to counter signal attenuation.[148] Crossbar switches replaced step-by-step systems in the 1930s, improving reliability via matrix-based interconnections, while microwave radio relays enabled high-capacity links by the 1950s, such as AT&T's 1951 New York-to-Washington route carrying 600 voice channels.[149] Undersea coaxial cables, like TAT-1 in 1956, connected continents with analog frequency-division multiplexing (FDM), aggregating up to 36 voice circuits per cable.[147] The shift to digital telephony began with pulse-code modulation (PCM), invented by Alec Harley Reeves in 1937 to digitize analog voice into binary pulses, reducing noise susceptibility during transmission.[49] Bell Labs deployed the first commercial PCM system in 1962 via T1 carrier lines, sampling voice at 8 kHz and quantizing to 8 bits for 64 kbps channels, enabling error-resistant multiplexing over digital hierarchies like DS1.[150] Digital switching emerged in the 1970s, with Northern Telecom's 1976 Stored 
Program Control (SPC) exchanges using time-division multiplexing (TDM) to route 64 kbps PCM streams, outperforming analog in scalability and integrating signaling via Common Channel Interoffice Signaling (CCIS).[151] By the 1980s, integrated services digital network (ISDN) standards from ITU-T provided end-to-end digital connectivity, with basic rate interface (BRI) combining two 64 kbps bearer channels for voice and data.[152] Voice over Internet Protocol (VoIP) disrupted traditional telephony in the 1990s by packetizing voice into IP datagrams, with VocalTec's 1995 InternetPhone software enabling the first PC-to-PC calls using H.323 protocols over narrowband internet.[153] Session Initiation Protocol (SIP), standardized by IETF in 1999 (RFC 2543), facilitated scalable signaling for VoIP gateways interfacing PSTN trunks.[154] Adoption surged with broadband; by 2004, Skype's peer-to-peer model supported free global calls, eroding circuit-switched revenues as softswitches like those from Cisco handled media via RTP/RTCP.[155] Mobile voice telephony originated with first-generation (1G) analog systems, such as Nippon Telegraph's 1979 cellular network in Tokyo using FDMA for 2.4 kbps voice at 900 MHz.[64] Second-generation (2G) digital standards, including GSM in 1991 with TDMA and 13 kbps full-rate codec, introduced encrypted circuit-switched voice, enabling global roaming via SIM cards.[156] Third-generation (3G) UMTS in 2001 retained circuit-switched voice domains alongside packet data, using adaptive multi-rate (AMR) codecs for improved quality up to 12.2 kbps.[157] Fourth-generation (4G) LTE from 2009 shifted to all-IP architectures, implementing voice over LTE (VoLTE) via IMS core for IMS-based real-time transport, supporting HD voice at 23.85 kbps with wider bandwidths.[158] Fifth-generation (5G) networks, deployed from 2019, employ voice over new radio (VoNR) for low-latency native voice at up to 64 kbps using EVS codec, integrating with edge computing for 
ultra-reliable low-latency communication (URLLC).[130]
Data Services and the Internet Backbone
Data services in telecommunications encompass the delivery of digital information transmission beyond traditional voice, including internet access, file transfers, and streaming, primarily through broadband technologies that replaced early narrowband connections like dial-up modems operating at speeds under 56 kbps.[159] The evolution accelerated in the late 1990s with digital subscriber line (DSL) utilizing existing copper telephone lines to achieve asymmetric speeds up to several Mbps, followed by cable broadband leveraging coaxial infrastructure for downstream rates exceeding 100 Mbps by the 2010s.[160] Fiber-optic broadband, deploying dense wavelength-division multiplexing (DWDM), now dominates high-capacity services, offering symmetrical gigabit-per-second speeds and supporting the surge in data demand driven by video streaming and cloud computing.[161] The internet backbone forms the foundational high-capacity network interconnecting continental and global traffic, comprising peering arrangements among Tier 1 internet service providers (ISPs) that operate extensive fiber-optic meshes without purchasing transit from others.[162] Key Tier 1 providers, including AT&T, Verizon, NTT, and Deutsche Telekom, maintain global reach through owned infrastructure, facilitating settlement-free exchanges at internet exchange points (IXPs) where traffic volumes in the terabits per second are routed efficiently.[163] This core layer handles the majority of long-haul data, with undersea fiber-optic cables spanning over 1.5 million kilometers and carrying more than 95% of intercontinental traffic at capacities reaching hundreds of terabits per second per system via multiple fiber pairs.[164][165] Global internet traffic has expanded rapidly, reflecting the backbone's scaling; for instance, fixed and mobile data volumes grew at compound annual rates exceeding 20% from 2020 onward, propelled by increased device connectivity and content consumption, necessitating continual 
upgrades in backbone capacity through advanced modulation and spatial multiplexing.[68][166] By 2024, worldwide internet users reached 5.5 billion, underscoring the backbone's role in sustaining petabyte-scale daily exchanges while vulnerabilities like cable faults highlight the concentrated risks in this infrastructure.[68][167] Emerging technologies, such as coherent optics, continue to enhance spectral efficiency, ensuring the backbone's alignment with projected traffic trajectories into the 2030s.[168]
Broadcasting and Multimedia Delivery
Broadcasting in telecommunications encompasses the one-to-many dissemination of audio, video, and data content via dedicated spectrum or network infrastructure, enabling simultaneous reception by numerous users without individualized addressing.[169] This paradigm contrasts with point-to-point communication by leveraging efficient spectrum use for mass distribution, historically rooted in analog radio transmission using amplitude modulation (AM) and frequency modulation (FM) since the early 20th century, and analog television standards like NTSC in the United States adopted in 1953.[170] The shift to digital broadcasting, initiated in the 1990s, markedly enhanced spectral efficiency, allowing multiple channels within the same bandwidth previously occupied by a single analog signal, with digital systems achieving up to six times greater capacity through compression and error correction.[171] Digital terrestrial television (DTT) represents a core broadcasting method, utilizing ground-based transmitters to deliver signals over VHF and UHF bands. Standards vary regionally: the ATSC system, standardized by the Advanced Television Systems Committee in 1995 and mandated for U.S. full-power stations with a transition deadline of June 12, 2009, supports 8VSB modulation for high-definition content.[172] In Europe, the DVB-T standard, developed from 1991 onward, employs OFDM modulation and saw widespread adoption with analog switch-offs completing in many countries by 2016.[173] Japan's ISDB-T, introduced in 2003, integrates terrestrial integrated services digital broadcasting with mobile reception capabilities.[174] These transitions freed analog spectrum—such as the U.S. 
700 MHz band auctioned for $19.6 billion in 2008—for mobile broadband, underscoring causal links between broadcasting evolution and spectrum reallocation for higher-value uses.[170] Satellite broadcasting extends terrestrial reach via geostationary or low-Earth orbit platforms, employing standards like DVB-S2 for direct-to-home services, with Ku-band frequencies enabling high-throughput delivery to remote areas.[175] Cable systems, historically using coaxial infrastructure, now integrate hybrid fiber-coaxial (HFC) networks for digital delivery, supporting DOCSIS protocols that achieve gigabit speeds for video transport. Multimedia delivery broadens beyond traditional broadcasting to include Internet Protocol Television (IPTV) and over-the-top (OTT) streaming, where telecom networks multicast live content using IP multicast, with IGMP managing group membership, for efficiency in managed IP environments.[176] Key protocols include RTP over UDP for real-time transport in IPTV, ensuring low-latency packet sequencing, while adaptive streaming via HTTP Live Streaming (HLS) or Dynamic Adaptive Streaming over HTTP (DASH) adjusts bitrate dynamically to network conditions in unicast scenarios.[177] Contemporary multimedia systems emphasize quality of service (QoS) in telecom backhaul, with content delivery networks (CDNs) caching data at edge nodes to minimize latency—global CDN traffic reached 40% of internet video by 2020.[178] Hybrid approaches combine broadcast with broadband, as in DVB-I for IP-integrated TV, facilitating seamless transitions amid declining linear TV viewership, where U.S. 
broadcast radio reach hovered at 90% through 2023 before slight declines.[179] Forward error correction (FEC) and modulation schemes like QAM ensure robustness against noise, with ITU recommendations specifying parameters for service quality in diverse propagation environments.[180] These mechanisms underpin reliable delivery, though challenges persist in spectrum congestion and the need for ongoing standardization to accommodate ultra-high-definition (UHD) and immersive formats.
Specialized Applications: IoT, Edge Computing, and 5G/6G
The Internet of Things (IoT) refers to the interconnection of physical devices embedded with sensors, software, and connectivity capabilities to exchange data via telecommunications networks.[181] By the end of 2024, the global number of connected IoT devices stood at approximately 18.8 billion, reflecting a 13% year-over-year increase driven by enterprise adoption in sectors like manufacturing and logistics.[182] Forecasts for 2025 project 19 to 27 billion devices, fueled by expansions in consumer electronics, industrial automation, and smart infrastructure.[183] Cellular IoT connections, a subset reliant on mobile telecommunications, approached 4 billion by late 2024, with an expected compound annual growth rate of 11% through 2030 due to enhanced network slicing and low-power wide-area technologies.[184] Telecommunications infrastructure underpins IoT scalability by providing ubiquitous connectivity, but challenges persist in spectrum efficiency and security for massive device densities.[185] In industrial settings, IoT enables predictive maintenance and real-time monitoring, where telecom backhaul transports sensor data to central analytics without centralized cloud dependency.[186] Edge computing distributes data processing to locations proximate to the data source—such as base stations or on-premises servers—rather than relying solely on distant core networks, thereby optimizing telecommunications for latency-sensitive workloads.[187] This architecture reduces transmission delays to milliseconds, enhances bandwidth utilization, and improves data sovereignty in telecom environments by localizing computation.[188][189] For IoT applications, edge computing mitigates congestion in core networks by filtering and analyzing data at the periphery, supporting use cases like autonomous vehicles and remote diagnostics where round-trip latency below 10 milliseconds is critical.[190] Fifth-generation (5G) wireless networks integrate with IoT and edge computing by offering 
enhanced mobile broadband, ultra-reliable low-latency communication (URLLC), and support for massive machine-type communications (mMTC), enabling up to 1 million devices per square kilometer.[191][192] As of 2025, 5G covers about one-third of the global population, with 59% of North American smartphone subscriptions on 5G networks, facilitating edge deployments in fixed wireless access and private networks.[193][194] The synergy arises from 5G's sub-1-millisecond latency potential when paired with edge nodes, allowing real-time IoT processing in telecommunications for applications like augmented reality and industrial robotics.[195][196]

Sixth-generation (6G) technologies, researched since 2020, target terabit-per-second speeds, sub-millisecond end-to-end latency, and integrated sensing and communications to extend IoT and edge paradigms beyond 5G limitations.[197] Standardization efforts, led by bodies like 3GPP, commenced with a 21-month study phase in mid-2025, aiming for initial specifications by 2028 and commercial viability around 2030.[198][199] In telecommunications, 6G envisions AI-native networks for dynamic resource allocation in edge-IoT ecosystems, potentially supporting holographic communications and ubiquitous sensing, though propagation challenges at terahertz frequencies necessitate advances in materials and beamforming.[200] Early prototypes demonstrate feasibility for edge-integrated massive IoT, but deployment hinges on resolving energy efficiency and spectrum harmonization issues.[201]

Economic Aspects
Industry Structure, Competition, and Market Dynamics
The telecommunications industry exhibits an oligopolistic structure characterized by a small number of dominant firms controlling significant market shares, high barriers to entry including substantial capital requirements for infrastructure deployment and spectrum acquisition, and interdependent pricing strategies among competitors.[202][203] Globally, the sector's total service revenue reached $1.14 trillion in 2023, with growth driven primarily by data services rather than traditional voice, yet profitability remains pressured by rising capital expenditures for 5G and fiber networks.[204]

In many national markets, three to four major operators account for 80–90% of subscribers, as seen in the United States, where T-Mobile held approximately 40% of the mobile market in 2024, followed by Verizon and AT&T at roughly 30% each.[205] Leading global players include state-influenced giants like China Mobile, which serves over 1 billion subscribers, alongside private incumbents such as Verizon, AT&T, Deutsche Telekom, Vodafone, and Nippon Telegraph and Telephone (NTT), which together dominate revenue and infrastructure assets.[206] Market capitalization rankings as of 2024 place T-Mobile US at the top among telecom firms, surpassing China Mobile and reflecting investor emphasis on growth in advanced wireless services.[207]

These firms often maintain vertical integration, controlling both network infrastructure and retail services, which reinforces economies of scale but limits new entrants to mobile virtual network operators (MVNOs) that lease capacity without owning physical assets.[208] Competition primarily manifests in service differentiation, pricing wars for consumer plans, and investments in spectrum auctions and technology upgrades, though infrastructure-based rivalry remains constrained by the sunk costs of nationwide coverage, often exceeding tens of billions of dollars per operator for 5G rollouts.[209] Regulatory frameworks, including antitrust scrutiny and licensing, further
shape rivalry; for instance, mergers like T-Mobile's 2020 acquisition of Sprint reduced the number of U.S. national operators from four to three, enhancing scale for 5G but prompting concerns over reduced consumer choice.[210] Emerging challengers, such as fixed wireless access providers using 5G for broadband, intensify competition in underserved areas, yet incumbents' control of prime spectrum bands (e.g., sub-6 GHz and mmWave) sustains their advantages.[211]

Market dynamics are marked by ongoing consolidation through mergers and acquisitions, with quarterly global telecom M&A deal values nearly tripling from the $16 billion recorded in Q1 2025, amid pursuits of synergies in AI integration and edge computing.[212] Privatization waves in the 1990s and early 2000s transitioned many markets from state monopolies to oligopolies, fostering initial price declines but leading to stabilized pricing as operators recouped investments; average revenue per user (ARPU) has stagnated or declined in mature markets due to the commoditization of mobile data.[213] Technological convergence with IT sectors, including cloud and IoT, drives partnerships over direct competition, while geopolitical factors like U.S.–China tensions influence supply chains for equipment from vendors like Huawei, prompting diversification and raising costs.[214] Overall, the industry's trajectory hinges on balancing capex for next-generation networks against revenue pressures, with operators increasingly seeking adjacencies in enterprise services to offset saturation in the consumer segment.[215]

Investment, Revenue Growth, and Global Trade
Global telecommunications investment, primarily capital expenditure (capex) by operators, peaked during the initial 5G rollout phases and has since moderated. Worldwide telecom capex declined by 8% in 2024, reflecting the completion of core network upgrades and a shift toward maintenance and optimization rather than expansion.[216][217] Forecasts indicate a further contraction at a 2% compound annual growth rate (CAGR) through 2027, as operators prioritize returns on prior investments amid economic pressures like inflation.[216] In the United States, the largest market, operator capex reached $80.5 billion in 2024 before anticipated declines in 2025 due to softening demand and recession risks.[218] Broader estimates of network investment place the U.S. first globally at $107 billion annually, followed by China at $59.1 billion, underscoring the concentration of fiber and wireless infrastructure funding in advanced economies.[219]

Revenue growth in the sector has remained positive but subdued, driven by rising data consumption and 5G adoption rather than subscriber expansion. Global telecom service revenues increased 4.3% in 2023 to $1.14 trillion, and total industry revenues reached approximately $1.53 trillion in 2024, up 3% from the prior year.[220][215] Projections suggest a 3% CAGR through 2028, potentially lifting service revenues to about $1.3 trillion, with core mobile and fixed broadband services accounting for roughly 75% of the total amid maturing markets.[221][222] Telecom services overall are expected to expand at a 6.5% CAGR from 2025 to 2030, reaching $2.87 trillion by 2030, fueled by enterprise demand for connectivity and cloud integration.[223]

Global trade in telecommunications equipment and services reflects technological competition and geopolitical tensions, with equipment exports forming the bulk of merchandise flows.
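The compound-growth projections quoted above can be reproduced with the standard relation FV = PV × (1 + r)^n. The following is a minimal sketch using the cited figures; the final check, which infers that the $2.87 trillion estimate rests on a broader market base than the service-revenue series, is an arithmetic inference rather than a sourced fact:

```python
def project(value, rate, years):
    """Compound a starting value forward at a fixed annual growth rate."""
    return value * (1 + rate) ** years

# Global telecom service revenues: $1.14T in 2023 at ~3% CAGR
# through 2028 (five years of compounding) matches the cited ~$1.3T.
projected_2028 = project(1.14, 0.03, 5)
print(f"2028 service revenues: ${projected_2028:.2f}T")  # ≈ $1.32T

# Working backward: reaching $2.87T in 2030 at a 6.5% CAGR implies a
# 2025 base of roughly $2.09T, larger than the $1.53T industry figure
# for 2024, suggesting that estimate covers a broader market scope.
implied_2025_base = 2.87 / (1 + 0.065) ** 5
print(f"Implied 2025 base: ${implied_2025_base:.2f}T")  # ≈ $2.09T
```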
The telecom equipment market was valued at $636.86 billion in 2024 and is projected to reach $673.95 billion in 2025, a 5.8% annual growth rate, supported by demand for 5G and fiber-optic gear.[224] Alternative estimates place the 2025 market at $338.2 billion, expanding at a 7.5% CAGR to $697 billion by 2035; the variance reflects differing market scopes, but both point to a consistent upward trajectory driven by infrastructure needs.[225] China dominates equipment exports via firms like Huawei, but U.S. restrictions imposed since 2019 on national security grounds have diverted trade, boosting alternatives from Ericsson and Nokia while reducing bilateral flows.[226] Services trade, embedded in broader commercial services estimated at $7.6 trillion globally in 2023, sees telecom contributions growing modestly at 4–5% annually through 2026, constrained by regulatory barriers and digital taxes.[227][228]

| Key Metric | 2023 | 2024 | Outlook |
|---|---|---|---|
| Global Service Revenues | $1.14T | N/A | ~3% CAGR to 2028[221] |
| Total Industry Revenues | N/A | $1.53T | 3% YoY[215] |
| Worldwide Capex | N/A | -8% YoY | -2% CAGR to 2027[216] |
| Equipment Market | N/A | $636.86B | 5.8% YoY[224] |
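The market-size figures in the table and the preceding paragraph can be cross-checked the same way; a quick sketch with the cited values (growth rates are approximate, so small rounding differences from the sources are expected):

```python
def cagr(start, end, years):
    """Annualized growth rate implied by a start value, end value, and horizon."""
    return (end / start) ** (1 / years) - 1

# Equipment market: $636.86B (2024) -> $673.95B (2025) matches the
# cited ~5.8% annual growth.
print(f"{cagr(636.86, 673.95, 1):.1%}")  # ≈ 5.8%

# Alternative estimate: $338.2B (2025) compounded at 7.5% for 10 years
# lands on the cited $697B for 2035.
print(f"${338.2 * 1.075 ** 10:.0f}B")  # ≈ $697B
```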