Computer network
from Wikipedia

In computer science, computer engineering, and telecommunications, a network is a group of computers and peripherals, known as hosts, that exchange data with one another via communication protocols, as facilitated by networking hardware.

Within a computer network, hosts are identified by network addresses, which allow network software such as the Internet Protocol to locate and identify hosts. Hosts may also have hostnames, memorable labels for the host nodes, which are rarely changed after initial assignment. The physical medium that supports information exchange includes wired media like copper cables, optical fibers, and wireless radio-frequency media. The arrangement of hosts and hardware within a network architecture is known as the network topology.[1][2]

The first computer network was created in 1940 when George Stibitz connected a terminal at Dartmouth to his Complex Number Calculator at Bell Labs in New York. Today, almost all computers are connected to a computer network, such as the global Internet or embedded networks such as those found in many modern electronic devices. Many applications have only limited functionality unless they are connected to a network. Networks support applications and services, such as access to the World Wide Web, digital video and audio, application and storage servers, printers, and email and instant messaging applications.


History


Early origins (1940 – 1960s)


In 1940, George Stibitz of Bell Labs connected a teletype at Dartmouth to a Bell Labs computer running his Complex Number Calculator to demonstrate the use of computers at long distance.[3][4] This was the first real-time, remote use of a computing machine.[3]

In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system[5][6][7] using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone lines at a speed of 110 bits per second (bit/s). In 1959, Christopher Strachey filed a patent application for time-sharing in the United Kingdom and John McCarthy initiated the first project to implement time-sharing of user programs at MIT.[8][9][10][11] Strachey passed the concept on to J. C. R. Licklider at the inaugural UNESCO Information Processing Conference in Paris that year.[12] McCarthy was instrumental in the creation of three of the earliest time-sharing systems (the Compatible Time-Sharing System in 1961, the BBN Time-Sharing System in 1962, and the Dartmouth Time-Sharing System in 1963).

In 1959, Anatoly Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organization of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centers.[13] Kitov's proposal was rejected, as later was the 1962 OGAS economy management network project.[14]

During the 1960s,[15][16] Paul Baran and Donald Davies independently invented the concept of packet switching for data communication between computers over a network.[17][18][19][20] Baran's work addressed adaptive routing of message blocks across a distributed network, but did not include routers with software switches, nor the idea that users, rather than the network itself, would provide the reliability.[21][22][23][24] Davies' hierarchical network design included high-speed routers, communication protocols and the essence of the end-to-end principle.[25][26][27][28] The NPL network, a local area network at the National Physical Laboratory (United Kingdom), pioneered the implementation of the concept in 1968-69 using 768 kbit/s links.[29][27][30] Both Baran's and Davies' inventions were seminal contributions that influenced the development of computer networks.[31][32][33][34]

ARPANET (1969 – 1974)


In 1962 and 1963, J. C. R. Licklider sent a series of memos to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users. This ultimately became the basis for the ARPANET, which began in 1969.[35] That year, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California, Santa Barbara, and the University of Utah.[35][36] Designed principally by Bob Kahn, the network's routing, flow control, software design and network control were developed by the IMP team working for Bolt Beranek & Newman.[37][38][39] In the early 1970s, Leonard Kleinrock carried out mathematical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET.[40][41] His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.[42][43]

In 1973, Peter Kirstein put internetworking into practice at University College London (UCL), connecting the ARPANET to British academic networks, the first international heterogeneous computer network.[44][45] That same year, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet,[46] a local area networking system he created with David Boggs.[47] It was inspired by the packet radio ALOHAnet, started by Norman Abramson and Franklin Kuo at the University of Hawaii in the late 1960s.[48][49] Metcalfe and Boggs, with John Shoch and Edward Taft, also developed the PARC Universal Packet for internetworking.[50] Also in 1973, the French CYCLADES network, directed by Louis Pouzin, was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself.[51]

The internet (1974 – present)


In 1974, Vint Cerf and Bob Kahn published their seminal paper on internetworking, A Protocol for Packet Network Intercommunication.[52] Later that year, Cerf, Yogen Dalal, and Carl Sunshine wrote the first Transmission Control Protocol (TCP) specification, RFC 675, coining the term Internet as a shorthand for internetworking.[53] In July 1976, Metcalfe and Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks"[54] and in December 1977, together with Butler Lampson and Charles P. Thacker, they received U.S. patent 4063220A for their invention.[55][56]

In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1979, Robert Metcalfe pursued making Ethernet an open standard.[57] In 1980, Ethernet was upgraded from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was developed by Ron Crane, Bob Garner, Roy Ogus,[58] Hal Murray, Dave Redell and Yogen Dalal.[59] In 1986, the National Science Foundation (NSF) launched the National Science Foundation Network (NSFNET) as a general-purpose research network connecting various NSF-funded sites to each other and to regional research and education networks.[35]

In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 800 Gbit/s were added (as of 2025). The scaling of Ethernet has been a contributing factor to its continued use.[57] In the 1980s and 1990s, as embedded systems were becoming increasingly important in factories, cars, and airplanes, network protocols were developed to allow the embedded computers to communicate. In the late 1990s and 2000s, ubiquitous computing and an Internet of Things became popular.[60][61]

Commercial usage


In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control in the switching fabric. In 1972, commercial services were first deployed on experimental public data networks in Europe.[62][63] Public data networks in Europe, North America and Japan began using X.25 in the late 1970s and interconnected with X.75.[18] This underlying infrastructure was used for expanding TCP/IP networks in the 1980s.[64] In 1977, the first long-distance fiber network was deployed by GTE in Long Beach, California.

Hardware


The transmission media used to link devices to form a computer network include electrical cable, optical fiber, and free space. In the OSI model, the software to handle the media is defined at layers 1 and 2 — the physical layer and the data link layer. Common examples of networking technologies include:

Wired

Fiber-optic cables are used to transmit light from one computer/network node to another.

The following classes of wired technologies are used in computer networking.

  • Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.[citation needed]
  • ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed local area network.
  • Twisted pair cabling is used for wired Ethernet and other standards. It typically consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 Mbit/s to 10 Gbit/s. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted-pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
2007 map showing submarine optical fiber telecommunication cables around the world
  • An optical fiber is a glass fiber that carries pulses of light, generated by lasers and optical amplifiers, that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. Using dense wave division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate at which data can be sent, up to trillions of bits per second. Optical fibers can be used for long cable runs carrying very high data rates, and are used for undersea communications cables to interconnect continents. There are two basic types of fiber optics, single-mode optical fiber (SMF) and multi-mode optical fiber (MMF).[65]

Wireless

Computers are very often connected to networks using wireless links.

Network connections can be established wirelessly using radio or other electromagnetic means of communication.

  • Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 40 miles (64 km) apart.
  • Communications satellites – Satellites also communicate via microwave. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
  • Cellular networks use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area is served by a low-power transceiver.
  • Radio and spread spectrum technologies – Wireless LANs use a high-frequency radio technology similar to digital cellular. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
  • Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
  • The Interplanetary Internet extends the Internet to interplanetary distances via radio waves and optical means.[66]
  • IP over Avian Carriers was a humorous April Fools' Day Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.[67]

The last two cases have a large round-trip delay time, which gives slow two-way communication but does not prevent sending large amounts of information (they can have high throughput).
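The latency-versus-throughput trade-off above is captured by the bandwidth-delay product, the amount of data that can be "in flight" on a link at once. A minimal sketch (the link figures below are illustrative examples, not values from this article):

```python
def bandwidth_delay_product(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes in flight on a link: bandwidth times round-trip time."""
    return bandwidth_bps * rtt_s / 8  # divide by 8 to convert bits to bytes

# Illustrative: a 1 Mbit/s deep-space link with a 40-minute round trip.
# Interactive exchange is painfully slow, yet a great deal of data can
# be kept in flight, so bulk throughput remains high.
in_flight = bandwidth_delay_product(1_000_000, 40 * 60)
```

This is why a long round-trip delay hurts two-way conversation but not one-way bulk transfer.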

Network nodes


Apart from any physical transmission media, networks are built from additional basic system building blocks, such as network interface controllers, repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and so may perform multiple functions.

Network interfaces

An ATM network interface in the form of an accessory card. Many network interfaces are built-in.

A network interface controller (NIC) is computer hardware that connects the computer to the network media and has the ability to process low-level network information. For example, the NIC may have a connector for plugging in a cable, or an aerial for wireless transmission and reception, and the associated circuitry.

In Ethernet networks, each NIC has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
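The octet split described above can be illustrated with a short sketch (the address below is an arbitrary example, not a real assignment):

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Split a MAC address into its manufacturer prefix (OUI) and
    the device-specific part assigned by that manufacturer."""
    octets = mac.lower().split(":")
    assert len(octets) == 6, "an Ethernet MAC address has six octets"
    return ":".join(octets[:3]), ":".join(octets[3:])

# The first three octets identify the manufacturer (assigned by the IEEE);
# the last three are assigned uniquely by that manufacturer.
oui, nic_specific = split_mac("00:1A:2B:3C:4D:5E")
```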

Repeaters and hubs


A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.

Repeaters work on the physical layer of the OSI model but still require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters used in a network, e.g., the Ethernet 5-4-3 rule.

An Ethernet repeater with multiple ports is known as an Ethernet hub. In addition to reconditioning and distributing network signals, a repeater hub assists with collision detection and fault isolation for the network. Hubs and repeaters in LANs have been largely obsoleted by modern network switches.

Bridges and switches


Network bridges and network switches are distinct from a hub in that they only forward frames to the ports involved in the communication whereas a hub forwards to all ports. Bridges only have two ports but a switch can be thought of as a multi-port bridge. Switches normally have numerous ports, facilitating a star topology for devices, and for cascading additional switches.

Bridges and switches operate at the data link layer (layer 2) of the OSI model and bridge traffic between two or more network segments to form a single local network. Both are devices that forward frames of data between ports based on the destination MAC address in each frame.[68] They learn the association of physical ports to MAC addresses by examining the source addresses of received frames and only forward the frame when necessary. If an unknown destination MAC is targeted, the device broadcasts the request to all ports except the source, and discovers the location from the reply.

Bridges and switches divide the network's collision domain but maintain a single broadcast domain. Network segmentation through bridging and switching helps break down a large, congested network into an aggregation of smaller, more efficient networks.
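The learn-and-forward behaviour described above can be sketched as a toy learning switch (the port numbers and MAC strings are illustrative):

```python
class LearningSwitch:
    """Minimal sketch of layer-2 forwarding: learn source MACs, flood unknowns."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port           # learn from the source address
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]        # forward only where needed
        return [p for p in self.ports if p != in_port]  # flood unknown destination

sw = LearningSwitch(ports=[1, 2, 3, 4])
sw.handle_frame("aa:aa", "bb:bb", in_port=1)        # bb:bb unknown: flood to 2, 3, 4
out = sw.handle_frame("bb:bb", "aa:aa", in_port=2)  # aa:aa was learned: port 1 only
```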

Routers

A typical home or small office router showing the ADSL telephone line and Ethernet network cable connections

A router is an internetworking device that forwards packets between networks by processing the addressing or routing information included in the packet. The routing information is often processed in conjunction with the routing table. A router uses its routing table to determine where to forward packets and does not require broadcasting, which is inefficient for very large networks.
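Routing-table lookup is conventionally done by longest-prefix match: among all table entries whose prefix contains the destination, the most specific one wins. A minimal sketch using a hypothetical table:

```python
import ipaddress

def longest_prefix_match(routing_table, destination):
    """Pick the most specific route whose prefix contains the destination."""
    dest = ipaddress.ip_address(destination)
    candidates = [(net, hop) for net, hop in routing_table if dest in net]
    if not candidates:
        return None
    # The entry with the longest prefix is the most specific match.
    return max(candidates, key=lambda entry: entry[0].prefixlen)[1]

# Hypothetical routing table: (prefix, next hop)
table = [
    (ipaddress.ip_network("0.0.0.0/0"), "gateway"),     # default route
    (ipaddress.ip_network("10.0.0.0/8"), "router-a"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-b"),
]
next_hop = longest_prefix_match(table, "10.1.2.3")  # /16 beats /8 and /0
```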

Modems


Modems (modulator-demodulators) are used to connect network nodes via wiring not originally designed for digital network traffic, or over wireless links. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Early modems modulated audio signals sent over a standard voice telephone line. Modems are still commonly used for telephone lines, using digital subscriber line technology, and for cable television systems, using DOCSIS technology.

Firewalls

A firewall separating a private network from a public network

A firewall is a network device or software for controlling network security and access rules. Firewalls are inserted in connections between secure internal networks and potentially insecure external networks such as the Internet. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.

Communication


Protocols

The TCP/IP model and its relation to common protocols used at different layers of the model
Message flows between two devices (A-B) at the four layers of the TCP/IP model in the presence of a router (R). Red flows are effective communication paths, black paths are across the actual network links.

A communication protocol is a set of rules for exchanging information over a network. Communication protocols have various characteristics, such as being connection-oriented or connectionless, or using circuit switching or packet switching.

In a protocol stack, often constructed per the OSI model, communications functions are divided into protocol layers, where each layer leverages the services of the layer below it until the lowest layer controls the hardware that sends information across the media. The use of protocol layering is ubiquitous across the field of computer networking. An important example of a protocol stack is HTTP, the World Wide Web protocol. HTTP runs over TCP over IP, the Internet protocols, which in turn run over IEEE 802.11, the Wi-Fi protocol. This stack is used between a wireless router and a personal computer when accessing the web.
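Layering can be pictured as successive encapsulation: each layer prepends its own header to the data handed down from the layer above. A sketch of the HTTP-over-TCP-over-IP-over-802.11 stack just described, in which placeholder strings stand in for the real binary wire-format headers:

```python
def encapsulate(payload: bytes) -> bytes:
    """Sketch of protocol layering: each layer wraps the one above.
    The bracketed strings are placeholders, not real header formats."""
    http_message = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" + payload
    tcp_segment = b"[TCP hdr]" + http_message    # ports, sequence numbers, ...
    ip_packet = b"[IP hdr]" + tcp_segment        # source/destination addresses
    frame_80211 = b"[802.11 hdr]" + ip_packet    # radio framing and MACs
    return frame_80211
```

On the receiving side each layer strips its own header and hands the remainder upward, reversing the process.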

Packets


Most modern computer networks use protocols based on packet-mode transmission. A network packet is a formatted unit of data carried by a packet-switched network.

Packets consist of two types of data: control information and user data (payload). The control information provides data the network needs to deliver the user data, for example, source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.

With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link is not overused. Often the route a packet needs to take through a network is not immediately available. In that case, the packet is queued and waits until a link is free.

The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit (MTU). A longer message may be fragmented before it is transferred and once the packets arrive, they are reassembled to construct the original message.
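Fragmentation and reassembly can be sketched as follows (the 4-byte MTU is artificially small for illustration; real MTUs are on the order of 1500 bytes for Ethernet):

```python
def fragment(message: bytes, mtu: int):
    """Split a message into MTU-sized packets, tagged with sequence numbers."""
    return [(seq, message[i:i + mtu])
            for seq, i in enumerate(range(0, len(message), mtu))]

def reassemble(packets):
    """Rebuild the original message; packets may arrive out of order."""
    return b"".join(payload for _, payload in sorted(packets))

packets = fragment(b"hello, network!", mtu=4)
original = reassemble(reversed(packets))  # arrival order does not matter
```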

Common protocols


Internet protocol suite


The Internet protocol suite, also called TCP/IP, is the foundation of all modern networking. It offers connectionless and connection-oriented services over an inherently unreliable network traversed by datagram transmission using the Internet Protocol (IP). At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability. The Internet protocol suite is the defining set of protocols for the Internet.[69]
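The "much enlarged addressing capability" of IPv6 can be quantified directly: IPv4's 32-bit addresses allow 2^32 hosts, while IPv6's 128-bit addresses allow 2^128. A quick check with the standard library:

```python
import ipaddress

# The all-encompassing prefix of each protocol covers its entire address space.
ipv4_space = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 32-bit space
ipv6_space = ipaddress.ip_network("::/0").num_addresses       # 128-bit space
```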

IEEE 802


IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at layers 1 and 2 of the OSI model.

For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based network access control protocol, which forms the basis for the authentication mechanisms used in VLANs[70] (but it is also found in WLANs[71]); it is what a home user encounters when entering a "wireless access key".

Ethernet

Ethernet is a family of technologies used in wired LANs. It is described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers.

Wireless LAN

Wireless LAN based on the IEEE 802.11 standards, also widely known as WLAN or Wi-Fi, is probably the best-known member of the IEEE 802 protocol family for home users today. IEEE 802.11 shares many properties with wired Ethernet.

SONET/SDH


Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support circuit-switched digital telephony. However, due to its protocol neutrality and transport-oriented features, SONET/SDH was also the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.

Asynchronous Transfer Mode


Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet protocol suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.

ATM still plays a role in the last mile, which is the connection between an Internet service provider and the home user.[72][needs update]

Cellular standards


There are a number of different digital cellular standards, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN).[73]

Routing

Routing calculates good paths through a network for information to take. For example, from node 1 to node 6 the best routes are likely to be 1-8-7-6, 1-8-10-6 or 1-9-10-6, as these are the shortest routes.

Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks.

In packet-switched networks, routing protocols direct packet forwarding through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though because they lack specialized hardware, may offer limited performance. The routing process directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths.

Routing can be contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, the structured addressing used by routers outperforms unstructured addressing used by bridging. Structured IP addresses are used on the Internet. Unstructured MAC addresses are used for bridging on Ethernet and similar local area networks.
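The idea that one structured-address entry covers a whole group of devices can be demonstrated with the documentation prefix 192.0.2.0/24 (reserved for examples by RFC 5737):

```python
import ipaddress

# One routing-table entry, a /24 prefix, covers 256 addresses at once.
route = ipaddress.ip_network("192.0.2.0/24")
covered = ipaddress.ip_address("192.0.2.200") in route      # inside the prefix
not_covered = ipaddress.ip_address("192.0.3.1") in route    # outside the prefix
hosts_in_one_entry = route.num_addresses
```

With unstructured MAC addresses, by contrast, a bridge must learn each of those 256 devices individually.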

Architecture

Common network topologies

Topology


The physical or geographic locations of network nodes and links generally have relatively little effect on a network, but the topology of interconnections of a network can significantly affect its throughput and reliability. With many technologies, such as bus or star networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is, but the more expensive it is to install. Therefore, most network diagrams are arranged by their network topology, which is the map of logical interconnections of network hosts.

Common topologies are:

  • Bus network: all nodes are connected to a common medium, and data sent along this medium is received by every node. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2. This is still a common topology on the data link layer, although modern physical layer variants use point-to-point links instead, forming a star or a tree.
  • Star network: all nodes are connected to a special central node. This is the typical layout found in a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point.
  • Ring network: each node is connected to its left and right neighbor, forming a closed loop in which each node can reach every other node by traversing the ring in either direction. Token ring networks, and the Fiber Distributed Data Interface (FDDI), made use of such a topology.
  • Mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.
  • Fully connected network: each node is connected to every other node in the network.
  • Tree network: nodes are arranged hierarchically. This is the natural topology for a larger Ethernet network with multiple switches and without redundant meshing.
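The topologies above can be modelled as graphs, and the mesh requirement, that at least one traversal exists between any pair of nodes, is then a simple connectivity check (the node labels below are illustrative):

```python
def is_connected(adjacency):
    """Breadth-first search: every node must be reachable from the first."""
    nodes = list(adjacency)
    seen, frontier = {nodes[0]}, [nodes[0]]
    while frontier:
        for neighbor in adjacency[frontier.pop()]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return len(seen) == len(nodes)

# A four-node ring: each node links to its left and right neighbor.
ring = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
# A star: every leaf links only to the central node.
star = {"hub": [1, 2, 3], 1: ["hub"], 2: ["hub"], 3: ["hub"]}
```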

The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring, but the physical topology is often a star, because all neighboring connections can be routed via a central physical location. Physical layout is not completely irrelevant, however, as common ducting and equipment locations can represent single points of failure due to issues like fires, power failures and flooding.

Overlay network

A sample overlay network

An overlay network is a virtual network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.[74]

Overlay networks have been used since the early days of networking, back when computers were connected via telephone lines using modems, even before data networks were developed.

The most striking example of an overlay network is the Internet itself, which was initially built as an overlay on the telephone network.[74] Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.

Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
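The key-to-node mapping of a distributed hash table can be sketched with a toy hash function (node names are hypothetical; real DHTs use consistent hashing so that nodes can join and leave without remapping most keys):

```python
import hashlib

def node_for_key(key: str, nodes: list[str]) -> str:
    """Toy DHT lookup: hash the key, map it to one of the nodes.
    This is simple modular hashing, not the consistent hashing of real DHTs."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]
owner = node_for_key("some-file.txt", nodes)  # deterministic for a given key
```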

Overlay networks have also been proposed as a way to improve Internet routing, such as by using quality-of-service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance, largely because they require modification of all routers in the network.[citation needed] On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination.[citation needed]

For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast,[75] resilient routing and quality of service studies, among others.

Scale


Networks may be characterized by many properties or features, such as physical capacity, organizational purpose, user authorization, access rights, and others. Another distinct classification method is that of the physical extent or geographic scale.

Nanoscale network


A nanoscale network has key components implemented at the nanoscale, including message carriers, and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for other communication techniques.[76]

Personal area network

[edit]

A personal area network (PAN) is a computer network used for communication among computers and different information technological devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters.[77] A wired PAN is usually constructed with USB and FireWire connections while technologies such as Bluetooth and infrared communication typically form a wireless PAN.

Local area network

[edit]

A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Wired LANs are most commonly based on Ethernet technology. Other networking technologies such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines.[78]

A LAN can be connected to a wide area network (WAN) using a router. The defining characteristics of a LAN, in contrast to a WAN, include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity.[citation needed] Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to and in excess of 100 Gbit/s,[79] standardized by IEEE in 2010.

  • A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable Internet access or digital subscriber line (DSL) provider.
  • A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the storage appears as locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.[citation needed]

Campus area network

[edit]

A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner (an enterprise, university, government, etc.). For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls.

Backbone network

[edit]

A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it.

For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. Another example of a backbone network is the Internet backbone, which is a massive, global system of fiber-optic cable and optical networking that carries the bulk of data between wide area networks (WANs) and metro, regional, national, and transoceanic networks.

  • An enterprise private network or intranet is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.

Metropolitan area network

[edit]

A metropolitan area network (MAN) is a large computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area.

Wide area network

[edit]

A wide area network (WAN) is a computer network that covers a large geographic area, such as a city or country, or even spans intercontinental distances. A WAN uses a communications channel that combines many types of media, such as telephone lines, cables, and airwaves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI model: the physical layer, the data link layer, and the network layer.

Global area network

[edit]

A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.[80]

Scope

[edit]

An intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees).[81] Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).[81]

Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.

Intranet

[edit]

An intranet is a set of networks that are under the control of a single administrative entity. An intranet typically uses the Internet Protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits the use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information.

Extranet

[edit]

An extranet is a network that is under the administrative control of a single organization but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. The network connection to an extranet is often, but not always, implemented via WAN technology.

Internet

[edit]
Partial map of the Internet based on 2005 data.[82] Each line is drawn between two nodes, representing two IP addresses. The length of the lines indicates the delay between those two nodes.

An internetwork is the connection of multiple computer networks, possibly of different types, into a single logical network, built by joining the constituent networks with routers and unifying them through higher-layer network protocols.

The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet protocol suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet utilizes copper communications and an optical networking backbone to enable the World Wide Web (WWW), the Internet of things, video transfer, and a broad range of information services.

Participants on the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet protocol suite and the IP addressing system administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.

Darknet

[edit]

A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. It is an anonymizing network where connections are made only between trusted peers — sometimes called friends (F2F)[83] — using non-standard protocols and ports.

Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.[84]

Virtual private networks

[edit]

A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.

Services

[edit]

Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate.

The World Wide Web, e-mail,[85] printing, and network file sharing are examples of well-known network services. Network services such as the Domain Name System (DNS) map human-readable names to IP addresses (people remember names like nm.lan better than numbers like 210.121.67.18),[86] while the Dynamic Host Configuration Protocol (DHCP) ensures that the equipment on the network has a valid IP address.[87]
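As a rough illustration of DHCP's job, handing each host a valid address from a managed pool, the sketch below leases addresses from a subnet using Python's standard `ipaddress` module. It is a toy allocator, not an implementation of the DHCP protocol, and the client identifiers are hypothetical:

```python
import ipaddress

class ToyAddressPool:
    """Toy DHCP-style allocator: leases unused host addresses from a
    subnet, one per client identifier; renewals keep the same address."""
    def __init__(self, cidr: str):
        self.free = list(ipaddress.ip_network(cidr).hosts())
        self.leases = {}

    def lease(self, client_id: str) -> ipaddress.IPv4Address:
        if client_id not in self.leases:
            self.leases[client_id] = self.free.pop(0)
        return self.leases[client_id]

pool = ToyAddressPool("192.168.1.0/29")   # /29 leaves 6 usable host addresses
addr = pool.lease("aa:bb:cc:dd:ee:ff")    # first client gets 192.168.1.1
```

A real DHCP server adds lease lifetimes, broadcast discovery, and options such as the default gateway and DNS servers, but the core bookkeeping is the same: track which addresses in the subnet are free and which client holds which lease.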

Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.

Performance

[edit]

Bandwidth

[edit]

Bandwidth in bit/s may refer to consumed bandwidth, corresponding to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The throughput is affected by processes such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap and bandwidth allocation (using, for example, bandwidth allocation protocol and dynamic bandwidth allocation).

Network delay

[edit]

Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several components, the sum of which is the total delay:

A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from less than a microsecond to several hundred milliseconds.
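The per-hop components described above sum to the total delay. A minimal sketch of that arithmetic (link speed, distance, and queue values are illustrative; fiber propagation is taken as roughly two-thirds of the speed of light):

```python
def transmission_delay(packet_bits: int, link_bps: float) -> float:
    """Time to push all of a packet's bits onto the link."""
    return packet_bits / link_bps

def propagation_delay(distance_m: float, speed_mps: float = 2e8) -> float:
    """Signal travel time along the medium (~2/3 c in fiber)."""
    return distance_m / speed_mps

def total_delay(packet_bits, link_bps, distance_m,
                queuing_s=0.0, processing_s=0.0) -> float:
    """Per-hop delay = transmission + propagation + queuing + processing."""
    return (transmission_delay(packet_bits, link_bps)
            + propagation_delay(distance_m)
            + queuing_s + processing_s)

# A 1500-byte packet on a 100 Mbit/s link over 100 km of fiber:
# 0.12 ms to transmit plus 0.5 ms to propagate, ignoring queues.
d = total_delay(1500 * 8, 100e6, 100_000)
```

Note that only the queuing term varies with congestion; the transmission and propagation terms are fixed by the link, which is why congestion shows up as delay variation (jitter) on an otherwise stable path.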

Performance metrics

[edit]

The parameters that affect performance typically include throughput, jitter, bit error rate, and latency.

In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads.[88] Other types of performance measures can include the level of noise and echo.

In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem enhancements.[89][verification needed][full citation needed]

There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.[90]
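The rejected-call measure used for circuit-switched grade of service is classically modeled by the Erlang B formula; the sketch below uses its standard recurrence B(E, 0) = 1, B(E, m) = E·B(E, m−1) / (m + E·B(E, m−1)), where E is the offered traffic in erlangs and m the number of circuits (the traffic figures in the example are illustrative):

```python
def erlang_b(offered_erlangs: float, circuits: int) -> float:
    """Blocking probability for offered traffic on a circuit group,
    computed with the numerically stable Erlang B recurrence."""
    b = 1.0
    for m in range(1, circuits + 1):
        b = offered_erlangs * b / (m + offered_erlangs * b)
    return b

# 10 erlangs of offered traffic on 12 circuits blocks about 12% of calls,
# which a planner would compare against the target grade of service.
p_block = erlang_b(10.0, 12)
```

A network planner inverts this relationship in practice: given a traffic forecast and a target blocking probability, increase the circuit count until `erlang_b` drops below the target.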

Network congestion

[edit]

Network congestion occurs when a link or node is subjected to a greater data load than it is rated for, resulting in a deterioration of its quality of service. When networks are congested and queues become too full, packets have to be discarded, and participants must rely on retransmission to maintain reliable communications. Typical effects of congestion include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in the network throughput or to a potential reduction in network throughput.

Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.

Modern networks use congestion control, congestion avoidance and traffic control techniques where endpoints typically slow down or sometimes even stop transmission entirely when the network is congested to try to avoid congestive collapse. Specific techniques include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers.
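The exponential backoff mentioned above can be sketched in a few lines: after the n-th failed attempt, a station waits a random number of slots drawn from a window that doubles each time, up to a cap. This is an illustrative model of the idea, not an implementation of 802.11 or Ethernet timing:

```python
import random

def backoff_slots(attempt: int, cap: int = 10) -> int:
    """Binary exponential backoff: after the n-th failed attempt,
    pick a uniform random wait in [0, 2**min(n, cap) - 1] slots."""
    return random.randrange(2 ** min(attempt, cap))

# Each retry roughly doubles the average wait, spreading retransmissions
# out in time so a congested medium has a chance to drain.
waits = [backoff_slots(n) for n in range(1, 6)]
```

The randomness matters as much as the doubling: if colliding stations all waited the same deterministic interval, they would simply collide again.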

Another method to avoid the negative effects of network congestion is implementing quality of service priority schemes allowing selected traffic to bypass congestion. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for critical services. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn home networking standard.

For the Internet, RFC 2914 addresses the subject of congestion control in detail.

Network resilience

[edit]

Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."[91]

Security

[edit]

Computer networks are also used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack.

Network security

[edit]

Network Security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources.[92] Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies, and individuals.

Network surveillance

[edit]

Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.

Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.

Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent or investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.[93]

However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to lawsuits such as Hepting v. AT&T.[93][94] The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".[95][96]

End to end encryption

[edit]

End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet service providers or application service providers, from reading or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.

Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.

Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox.

The end-to-end encryption paradigm does not directly address risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent.

SSL/TLS

[edit]

The introduction and rapid growth of e-commerce on the World Wide Web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called Secure Sockets Layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (all web browsers come with an exhaustive list of root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session then takes place in an encrypted tunnel between the SSL server and the SSL client.[65]
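In Python's standard library, the certificate checking and cipher negotiation described above are bundled into `ssl.create_default_context()`. A minimal sketch (the host name passed to `fetch_peer_cert` is illustrative, and that call needs live network access):

```python
import socket
import ssl

# A default context loads the platform's trusted root certificates and
# enables both certificate verification and hostname checking.
context = ssl.create_default_context()

def fetch_peer_cert(host: str, port: int = 443) -> dict:
    """Open a TLS connection and return the server's verified certificate.
    (Illustrative: requires network access to a real TLS server.)"""
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

SSL itself is long deprecated; modern systems negotiate its successor, TLS, but the handshake shape (certificate check, then symmetric-key session) is the same, which is why library APIs like this one still carry the `ssl` name.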

See also

[edit]

References

[edit]

Further reading

[edit]
from Grokipedia
A computer network consists of two or more interconnected devices, such as computers, servers, and peripherals, linked via communication channels to exchange data, share resources like printers and storage, and enable electronic communications. These systems rely on hardware components including hosts, routers, switches, and transmission links, either wired or wireless, to facilitate connectivity. Computer networks are categorized by spatial scope, ranging from personal area networks (PANs) that connect devices within a short range, such as Bluetooth-enabled gadgets for an individual user, to local area networks (LANs) covering a single building or campus, metropolitan area networks (MANs) spanning cities, and wide area networks (WANs) that operate across national and intercontinental distances.

The Internet, the largest WAN, interconnects billions of devices worldwide using the TCP/IP protocol suite, which provides reliable data transmission through layered abstraction for addressing, routing, and error correction. This suite emerged from military-funded research, with precursors like ARPANET launching in 1969 as the first operational packet-switched network and transitioning to TCP/IP standardization in 1983 to unify disparate systems. Key defining characteristics include topologies such as bus, star, ring, or mesh configurations that determine data flow efficiency and fault tolerance, alongside protocols governing packet encapsulation, forwarding, and congestion control to ensure scalable, robust operation. While enabling transformative applications such as real-time global collaboration, networks inherently face challenges like latency, bandwidth limitations, and vulnerability to failures or attacks, necessitating ongoing innovations in switching, routing, and quality-of-service mechanisms.

Fundamentals

Definition and Core Principles

A computer network is a system comprising two or more interconnected devices, such as computers, servers, and peripherals, designed to transmit, receive, and share data and resources. These devices communicate over physical or wireless media using standardized rules, known as protocols, to ensure reliable exchange of information, enabling functionalities like resource pooling and centralized management. The primary purpose stems from the need to overcome the limitations of isolated systems by allowing efficient data and resource flow, as evidenced by the growth in networked devices, with over 15 billion connected globally by 2023.

At its core, computer networking operates on principles of modularity and standardization, particularly through layered architectures that divide communication processes into hierarchical levels. For instance, the TCP/IP model organizes functions into link, internet, transport, and application layers, where each handles specific tasks like routing packets or ensuring end-to-end delivery, facilitating interoperability across heterogeneous systems. This layering principle, rooted in a separation of concerns, allows independent evolution of components, such as upgrading transport protocols without altering physical media, while protocols like IP for addressing and TCP for reliable transmission enforce consistent data handling.

Data transmission in networks relies on packet switching, a foundational principle in which messages are segmented into discrete packets, each routed independently by algorithms that account for topology and congestion. This method optimizes bandwidth utilization compared to circuit switching, as packets share links dynamically, supporting variable traffic loads effectively, as in the Internet's handling of trillions of packets daily. Reliability principles incorporate error detection via checksums, acknowledgments for retransmission, and redundancy to mitigate failures, preserving data integrity despite physical-layer imperfections such as noise-induced bit errors. Scalability emerges from hierarchical addressing (e.g., IPv4's 32-bit scheme supporting about 4.3 billion addresses) and routing protocols that adapt to growing node counts without centralized bottlenecks.
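The hierarchical-addressing point can be made concrete with Python's `ipaddress` module: a 32-bit space yields about 4.3 billion addresses, and a prefix can be carved into subnets that core routers treat as single routing entries. A minimal sketch (the prefixes chosen are illustrative):

```python
import ipaddress

# IPv4's 32-bit address space: 2**32 = 4,294,967,296 addresses.
ADDRESS_SPACE = 2 ** 32

# Hierarchical addressing: a /8 prefix splits into 256 /16 subnets,
# each routable as one aggregate entry rather than millions of hosts.
net = ipaddress.ip_network("10.0.0.0/8")
subnets = list(net.subnets(new_prefix=16))

# Any host address falls inside exactly one subnet of the hierarchy.
host = ipaddress.ip_address("10.42.7.1")
```

Aggregation is what keeps core routing tables tractable: a backbone router forwards toward `10.42.0.0/16` without knowing anything about the individual hosts inside it.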

Basic Components and Data Flow

Computer networks comprise end systems, such as hosts including computers, servers, and mobile devices that generate or consume data, interconnected via intermediate systems like routers and switches that forward traffic. End systems operate at the network's periphery, while intermediate systems form the core infrastructure for data relay across multiple links. Communication links, including twisted-pair copper cables, fiber optics, or wireless channels, physically connect these systems and carry bit streams.

Data flow begins at a source host, where application-layer messages are segmented into smaller units called packets as they pass down a protocol stack, such as the TCP/IP model. Each packet consists of a header containing source and destination addresses, sequencing, and error-checking information, plus a payload of original data. Packets traverse links independently via packet switching, allowing dynamic routing without dedicated paths, which enhances efficiency in shared networks.

Upon reaching an intermediate system, such as a router, the packet's network-layer header is inspected and matched against routing tables populated by protocols like OSPF or BGP, determining the optimal outgoing link based on metrics such as hop count or bandwidth. The packet is then queued, processed up to the network layer for the forwarding decision, and sent back down to the link layer for transmission to the next hop. Switches operate similarly at the data-link layer within local segments, using MAC addresses for frame forwarding to reduce collisions in LANs.

At the destination host, arriving packets are buffered, reordered using sequence numbers if needed, and reassembled while ascending the protocol stack, with checksums verifying integrity before delivery to the application. This layered encapsulation and decapsulation ensures reliable end-to-end delivery despite potential loss or reordering en route, as intermediate systems do not inspect higher-layer payloads. Delays in this flow arise from transmission (the time to serialize bits onto a link), propagation (signal travel time), queuing at congested nodes, and processing overhead.
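The segmentation, checksumming, and in-order reassembly described above can be sketched with a toy packet format. This is an illustrative model (a tiny made-up MTU and a CRC-32 checksum), not any real protocol's wire format:

```python
import zlib

MTU = 4  # toy maximum payload per packet, in bytes

def segment(message: bytes):
    """Split a message into (sequence, checksum, payload) packets."""
    packets = []
    for seq, i in enumerate(range(0, len(message), MTU)):
        payload = message[i:i + MTU]
        packets.append((seq, zlib.crc32(payload), payload))
    return packets

def reassemble(packets):
    """Reorder by sequence number, verify each checksum, rebuild the message."""
    out = []
    for seq, crc, payload in sorted(packets):
        if zlib.crc32(payload) != crc:
            raise ValueError(f"corrupt packet {seq}")
        out.append(payload)
    return b"".join(out)

pkts = segment(b"hello, network")
pkts.reverse()   # packets may arrive out of order; sequence numbers fix this
restored = reassemble(pkts)
```

Real stacks add acknowledgments and retransmission on top of this: a receiver that detects a gap in sequence numbers or a checksum failure asks the sender (implicitly or explicitly) for the missing packet rather than failing outright.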

Historical Development

Early Concepts and Precursors (Pre-1960s)

The electrical telegraph, invented by Samuel Morse and demonstrated publicly on May 24, 1844, when he transmitted the message "What hath God wrought" from Washington, D.C., to Baltimore, established the first extensive wired communication networks, enabling rapid long-distance signaling via coded electrical impulses over copper wires. These systems, which expanded globally by the mid-19th century with submarine cables like the 1858 transatlantic link, demonstrated scalable point-to-point connectivity and signal-extension techniques, such as relays and repeaters used to extend signal range, laying infrastructural groundwork for later data transmission despite their analog, human-operated nature.

The telephone, patented by Alexander Graham Bell on March 7, 1876, advanced circuit-switched voice networks, with the first commercial exchange opening in New Haven, Connecticut, on January 28, 1878, supporting up to 21 subscribers via manual switchboards. By the early 20th century, automated exchanges using Strowger switches (introduced in 1892) and crossbar systems (1920s) enabled larger-scale interconnections, handling thousands of simultaneous calls through electromechanical routing, which influenced concepts of dynamic path selection in future networks. These infrastructures provided reliable, real-time connectivity over twisted-pair wiring, proving the feasibility of switched networks for distributed communication, though limited to analog audio and requiring dedicated circuits per connection.

Early digital computing experiments highlighted the potential of remote access. On September 11, 1940, Bell Labs researcher George Stibitz demonstrated the first remote computer operation at an American Mathematical Society meeting at Dartmouth College by connecting a teletype terminal via standard telephone lines to his Complex Number Calculator (CNC), an electromechanical relay-based machine operational since January 8, 1940, located in New York City, approximately 250 miles away. Attendees submitted mathematical problems (e.g., solving complex equations), which were encoded, transmitted, computed, and returned in real time, marking the initial instance of networked computing despite rudimentary bandwidth (around 50 bits per second) and error-prone analog phone channels. This proof of concept underscored the viability of leveraging existing telecom infrastructure for computational sharing, though pre-1950s computers remained isolated due to their size, cost, and lack of standardized interfaces.

Conceptual visions emerged amid post-World War II information overload. In his July 1945 Atlantic Monthly essay "As We May Think," Vannevar Bush proposed the Memex, a hypothetical desk-sized electromechanical device for storing vast microfilm records, enabling rapid associative retrieval via nonlinear "trails" linking documents, akin to human memory paths. While not a multi-machine network, the Memex anticipated hyperlinked information systems by emphasizing indexed, user-navigable data repositories over linear filing, influencing later distributed knowledge architectures; Bush, drawing from his differential analyzer work (1927 onward), envisioned mechanized selection but relied on vacuum-tube selectors rather than digital links. These ideas, rooted in analog and electromechanical paradigms, prefigured digital networking by prioritizing efficient information association, though practical implementation awaited transistorized computing.

By the late 1950s, military applications tested integrated systems. The U.S. Air Force's Semi-Automatic Ground Environment (SAGE) project, initiated in 1951 and with initial sites operational by 1958, linked over 20 large AN/FSQ-7 computers across 23 centers via dedicated microwave and landline networks, processing radar data from hundreds of stations for real-time air defense against potential Soviet threats. Each 250-ton computer handled 400 lines and modems for data exchange, demonstrating hierarchical, fault-tolerant distributed computing with human operators, but its scale (costing roughly $8 billion adjusted) and centralization highlighted pre-packet challenges like single points of failure and inefficient bandwidth use. These efforts, driven by Cold War imperatives, validated computer interconnectivity for command and control, bridging telegraph and telephone legacies to the digital era without adopting modern protocols.

Packet Switching and ARPANET (1960s-1970s)

Packet switching emerged as a foundational concept for computer networks in the mid-1960s, driven by the need for resilient, efficient data transmission amid concerns over nuclear survivability. laid early theoretical groundwork through his 1961 PhD thesis and a 1962 publication, applying to demonstrate the viability of store-and-forward networks where messages are broken into smaller units routed independently. , working at , advanced practical designs in his August 1964 report "On Distributed Communications Networks," proposing to divide messages into fixed-size "blocks" transmitted via a distributed mesh of nodes to ensure redundancy and against failures. Independently, at the UK's National Physical Laboratory (NPL) formalized the approach in a November 1965 internal memo, coining the term "" for segmenting data into discrete packets with headers for routing, emphasizing statistical multiplexing for better resource utilization over . These ideas converged in the development of ARPANET, funded by the U.S. Department of Defense's Advanced Research Projects Agency (ARPA) to connect research institutions. Influenced by Baran's and Kleinrock's work— with Kleinrock consulting on ARPANET—ARPA issued a request for proposals in 1967, awarding Bolt, Beranek and Newman (BBN) the contract in 1968 to build Interface Message Processors (IMPs), specialized packet switches handling 50 kbps links. The first IMP was installed at the University of California, Los Angeles (UCLA) on August 30, 1969, followed by the second at Stanford Research Institute (SRI) on October 1. The inaugural transmission occurred on October 29, 1969, at 10:30 p.m. PDT, when UCLA student Charley Kline, under Kleinrock's supervision, attempted to send "" to SRI; the system crashed after transmitting "LO," marking the first successful packet exchange despite the partial failure. By December 5, 1969, the network linked four nodes: UCLA, SRI, (UCSB), and . 
Expansion continued rapidly; by 1970, ARPANET supported 13 hosts across additional sites including BBN, MIT, and Harvard, demonstrating host-to-IMP communication via the 1822 protocol. In the 1970s, the network evolved through protocol refinements, including early experiments in resource sharing, and Ray Tomlinson sent the first network email in 1971 using the "@" symbol. The network grew to 15 nodes (23 hosts) by 1971 and gained visibility through the 1972 public demonstration at the International Computer Communication Conference, by which time it connected roughly 40 sites. Despite challenges such as congestion, ARPANET validated packet switching's superiority for bursty data traffic, influencing global standards and paving the way for broader internetworking.

TCP/IP Standardization and Internet Expansion (1980s-1990s)

In March 1982, the U.S. Department of Defense declared TCP/IP the standard protocol suite for all military computer networking, mandating its adoption across defense-related systems. This decision formalized the protocols developed by Vint Cerf and Bob Kahn, enabling interoperable communication over diverse networks. On January 1, 1983, the ARPANET, the primary experimental network, completed its transition from the Network Control Program to TCP/IP, marking a pivotal moment that unified disparate packet-switched networks under a common framework and is widely regarded as the operational birth of the Internet. The National Science Foundation (NSF) further propelled expansion by establishing NSFNET in 1985 as a high-speed backbone connecting supercomputing centers and research institutions, initially operating at 56 kbps and upgrading to T1 speeds by 1988. This network facilitated academic collaboration, growing from 217 connected networks in July 1988 to over 50,000 by April 1995, while enforcing an acceptable use policy that prohibited commercial traffic until its later phases. Concurrently, the Domain Name System (DNS), proposed by Paul Mockapetris in RFC 882 and RFC 883, published in November 1983, replaced numeric IP addresses with human-readable hierarchical names, with root name servers deployed by 1987 to support scalable addressing amid rising host counts. The 1990s accelerated global reach through technological and policy shifts. Tim Berners-Lee's World Wide Web, proposed in 1989 and released into the public domain on April 30, 1993, introduced hypertext-linked information sharing via HTTP, HTML, and URLs, transforming the Internet from a text-based research tool into an accessible multimedia platform that accounted for 1% of traffic by late 1993. NSFNET's decommissioning in April 1995 privatized the backbone, allowing commercial Internet service providers (ISPs) to dominate, with user numbers surging from approximately 45 million in 1996 to 150 million worldwide by 1999, driven by browser innovations such as Mosaic and Netscape Navigator. This commercialization dismantled barriers to public adoption, fostering e-commerce and widespread connectivity.

Broadband Proliferation and Commercialization (2000s-2010s)

The 2000s marked the rapid transition from dial-up to broadband internet access, driven by advances in digital subscriber line (DSL) and cable modem technologies that leveraged existing telephone and coaxial cable infrastructures. In the United States, broadband adoption surged as DSL providers expanded deployments, with DSL overtaking cable modems in subscriber growth by late 2000 and enabling download speeds of several megabits per second over standard phone lines without interrupting voice service. Globally, internet users grew from 361 million in 2000 to 1.9 billion by 2010, with broadband proliferation as a primary catalyst, shifting connections from narrowband's 56 kbps limits to always-on, higher-capacity links. Commercialization intensified through competition among incumbent telephone companies and cable operators, who invested in upgrading networks to offer residential high-speed services. Cable modem subscriptions exceeded 10 million by Q3 2002, supported by DOCSIS standards that facilitated asymmetric speeds favoring downloads, aligning with emerging consumer demand for media streaming. Incumbent providers and regional telcos bundled broadband with other services, fostering market consolidation and infrastructure investment in deregulated environments that encouraged private capital over public funding. By 2009, approximately 65% of U.S. adults used high-speed internet at home, reflecting a matured market in which ISPs competed on speed tiers and pricing, though rural areas lagged due to deployment costs. Into the 2010s, fiber-to-the-home (FTTH) deployments emerged as a premium alternative, with Verizon launching FiOS in 2005 offering symmetrical gigabit potential, though initial rollout focused on urban markets. Over this period, broadband speeds evolved from sub-megabit averages in the early 2000s to multi-megabit standards by decade's end, enabling bandwidth-intensive applications such as video-on-demand, which in turn pressured ISPs to upgrade backhaul and last-mile connections.
Competition dynamics shifted toward bundled offerings, with cable providers gaining market share through hybrid fiber-coax upgrades, while DSL waned in high-density areas because of its distance-limited speeds. Overall, broadband proliferation was propelled by technological feasibility and consumer demand rather than regulatory mandates, resulting in uneven global coverage but substantial network densification in developed economies.

Recent Milestones (2020s Onward)

The rollout of fifth-generation (5G) mobile networks marked a significant advancement in cellular connectivity, with commercial deployments expanding rapidly after initial launches in 2019. By April 2025, global 5G connections exceeded 2.25 billion, achieving adoption four times faster than prior generations and covering approximately one-third of the world's population through enhanced infrastructure investment. In the United States, carriers reached coverage for 100 million people by mid-decade, enabling applications in smart cities, remote healthcare, and industrial automation via higher bandwidth and lower latency than 4G. Satellite-based broadband networks emerged as a milestone in global coverage, particularly through SpaceX's Starlink constellation. Public beta service began in July 2020, following test satellite launches, with non-disclosure agreements initially limiting access. By 2025, Starlink had deployed over 10,000 satellites via frequent launch missions, serving more than 6 million active customers with speed and latency improvements supporting remote areas previously underserved by terrestrial infrastructure. This low-Earth-orbit approach reduced propagation delays to under 50 milliseconds, contrasting with traditional geostationary satellites and facilitating connectivity for maritime, aviation, and rural applications. Wireless local area network standards advanced with Wi-Fi 6 (IEEE 802.11ax) achieving widespread enterprise and consumer adoption after 2020, delivering up to 9.6 Gbps theoretical throughput via orthogonal frequency-division multiple access (OFDMA) and multi-user MIMO. Wi-Fi 6E extended operation to the 6 GHz band for reduced interference. The Wi-Fi Alliance certified Wi-Fi 7 (802.11be) in early 2024, introducing multi-link operation across the 2.4, 5, and 6 GHz bands for aggregated speeds exceeding 40 Gbps, with preliminary deployments reaching a $1 billion market size ahead of full commercialization.
Wired Ethernet progressed to support cloud and AI workloads, with 400 Gbps standards ratified and deployed by 2020, followed by 800 Gbps in production by mid-decade. The Ethernet Alliance's 2025 roadmap outlined paths to 1.6 Tbps and 3.2 Tbps, driven by hyperscale demand for energy-efficient, high-density interconnects in cloud environments. These speeds enabled terabit-scale backhaul for data centers and reduced latency in compute clusters, with RDMA over Converged Ethernet (RoCEv2) optimizing AI training traffic.

Physical and Logical Structures

Network Topologies

Network topology describes the arrangement of nodes, links, and their interconnections in a computer network, influencing performance, reliability, and scalability. Topologies are categorized as physical or logical: physical topology represents the actual geometric layout of cabling and devices, while logical topology illustrates the data flow pathways irrespective of physical connections. Physical topologies determine signal characteristics and fault propagation, whereas logical topologies govern protocol behaviors such as addressing and routing. Common physical topologies include bus, star, ring, mesh, and tree. In a bus topology, all devices connect to a single shared cable terminated at both ends to prevent signal reflection; this was prevalent in early Ethernet networks such as 10BASE5, introduced in 1980. Advantages include low cost and simplicity for small networks with minimal cabling, but disadvantages include vulnerability of the entire network to a single cable failure and difficulty in troubleshooting, with signal attenuation limiting segment length to about 500 meters. Star topology connects each device to a central hub or switch via dedicated links and has dominated modern local area networks using twisted-pair cabling since the 1990s with 10BASE-T. It offers easy addition or removal of nodes without network disruption, fault isolation to individual links, and scalability to hundreds of nodes depending on switch capacity. However, failure of the central device halts all communication, and cabling volume increases with node count. Ring topology arranges nodes in a closed loop where data circulates unidirectionally, often using token-passing protocols such as Token Ring, standardized as IEEE 802.5 in 1989. Benefits include predictable performance without collisions and equal access opportunities, suitable for medium-sized networks. Drawbacks involve a single break propagating failures around the ring and challenges in adding nodes without interrupting operation, though dual-ring variants enhance fault tolerance at higher cost.
Mesh topology provides multiple interconnections, either full (every node to every other) or partial; a full mesh ensures high redundancy with n(n-1)/2 links for n nodes and is used in backbone networks for reliability. Advantages include fault tolerance, as multiple paths prevent single-point failures, and low latency via direct routes. Disadvantages include high installation and maintenance costs (full mesh scales poorly beyond small node counts of roughly 10-20) and increased configuration complexity. Tree topology extends the star by hierarchical connections, combining the scalability of star clusters with bus-like backbones, common in enterprise networks for organized expansion. Hybrid topologies integrate multiple types, such as star-bus or star-ring, to leverage strengths like modularity and redundancy while mitigating weaknesses; these predominate in large-scale deployments for flexibility. Selection depends on factors including node count, required throughput (e.g., up to 10 Gbps in star-wired Ethernet), and fault-tolerance needs, with simulations showing mesh outperforming other topologies at availability above 99.999% for critical applications. Logical topologies, often bus-like in Ethernet despite star physical wiring due to shared-medium emulation, enable abstractions such as virtual LANs segmenting traffic flows.
Topology | Key Advantages              | Key Disadvantages
Bus      | Low cost, easy setup        | Single failure point, limited length
Star     | Fault isolation, scalable   | Central dependency, more cabling
Ring     | No collisions, fair access  | Break propagates, hard to expand
Mesh     | High redundancy, reliable   | Expensive, complex wiring
Tree     | Hierarchical scalability    | Backbone vulnerability
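The quadratic link growth cited for full mesh topologies (n(n-1)/2 links for n nodes) can be checked with a short illustrative sketch:

```python
def full_mesh_links(n: int) -> int:
    """Number of point-to-point links in a full mesh of n nodes: n(n-1)/2."""
    return n * (n - 1) // 2

# Quadratic growth is why full mesh rarely scales beyond 10-20 nodes.
for n in (4, 10, 20):
    print(n, full_mesh_links(n))  # 4 -> 6, 10 -> 45, 20 -> 190
```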
Transmission Media

Guided transmission media use physical pathways to confine and direct electromagnetic signals, providing reliable, high-bandwidth connections with reduced susceptibility to external interference compared with wireless alternatives. These media include twisted-pair cables, coaxial cables, and optical fiber cables, each optimized for specific distance, speed, and cost trade-offs in network deployments. Twisted-pair cables consist of two or more insulated wires twisted together to mitigate crosstalk and noise; they dominate Ethernet LANs due to low cost and ease of installation. Unshielded twisted-pair (UTP) Category 5e supports 1 Gbps transmission over 100 meters at 100 MHz bandwidth, while Category 6 achieves 10 Gbps up to 55 meters at 250 MHz, with shielding options such as foil or braided variants. Higher categories, such as Category 6A at 500 MHz, extend 10 Gbps to 100 meters, addressing growing demand for faster intra-building links. Coaxial cables feature a central conductor encased in insulation, a metallic shield, and an outer jacket, enabling higher bandwidth than twisted pair with better resistance to interference. They support data rates from 10 Mbps to 1 Gbps over distances up to several kilometers, with bandwidth capacities reaching 1 GHz in the hybrid fiber-coax (HFC) systems used for cable internet; however, signal attenuation increases with distance, limiting unamplified runs to about 500 meters at higher speeds. Optical fiber cables propagate data as light pulses through a core of glass or plastic surrounded by cladding, achieving superior performance with attenuation as low as 0.2 dB/km at 1550 nm wavelengths. Single-mode fiber, with an 8-10 micron core, enables distances up to 140 km without repeaters at rates exceeding 100 Gbps, ideal for long-haul backbone networks; multimode fiber, with a 50-62.5 micron core, handles shorter spans up to 550 meters at 100 Gbps but suffers from modal dispersion that limits effective bandwidth over distance.
Deployment costs remain higher due to precise splicing and transceivers, yet fiber dominates inter-city and submarine links for its immunity to electrical noise and its capacity for terabit-scale aggregation. Unguided transmission media, or wireless media, disseminate signals through free space using electromagnetic waves, prioritizing flexibility and scalability over wired security but introducing vulnerabilities to obstacles, weather, and multipath fading. Radio waves (3 kHz to 1 GHz) underpin Wi-Fi, cellular (e.g., 4G/5G bands from roughly 600 MHz to 6 GHz), and broadcast applications, offering omnidirectional coverage up to kilometers with data rates scaling to 10 Gbps in mmWave extensions. Microwaves (1-300 GHz) require line-of-sight for point-to-point links, supporting gigabit rates over tens of kilometers via directional antennas, as in backhaul towers; infrared waves (300 GHz-400 THz) confine short-range, indoor transmissions to avoid interference, achieving up to 1 Gbps over 10 meters in device-to-device setups. Satellite links, using frequencies in the Ku (12-18 GHz) and Ka (26-40 GHz) bands, extend global coverage but incur latency of 250-500 ms for geostationary orbits at 36,000 km. Network links represent the endpoint connections facilitated by these media, classified by topology as point-to-point (dedicated sender-receiver pairs for low-latency, high-throughput paths) or multipoint/broadcast (one sender to multiple receivers, as in Ethernet hubs or wireless LANs). Transmission modes dictate flow direction: simplex permits unidirectional data (e.g., sensor telemetry), half-duplex allows bidirectional alternation (e.g., legacy walkie-talkies), and full-duplex enables simultaneous send and receive via separate channels or frequency division, standard in modern switched networks, doubling effective throughput without collision risks.
Link performance hinges on media choice, with guided options favoring deterministic latency and unguided options enabling ad hoc mobility, though all require modulation schemes such as QAM to encode bits onto carriers efficiently.
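The dB/km attenuation figures quoted for fiber translate into power loss via the standard decibel relation; a small illustrative calculation (not a link-budget tool, and ignoring connector and splice losses):

```python
def received_power_fraction(attenuation_db_per_km: float, distance_km: float) -> float:
    """Fraction of launched optical power remaining after a fiber span.

    Decibel loss accumulates linearly with distance; power falls as 10^(-dB/10).
    """
    loss_db = attenuation_db_per_km * distance_km
    return 10 ** (-loss_db / 10)

# At 0.2 dB/km (typical single-mode fiber at 1550 nm), a 100 km span
# accumulates 20 dB of loss, leaving 1% of the input power.
print(received_power_fraction(0.2, 100))  # -> 0.01
```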

Node Types and Functions

In computer networks, nodes are devices that connect to the network and participate in communication by sending, receiving, or forwarding packets. Nodes are primarily classified into end systems, which generate or consume data, and intermediate systems, which relay data between end systems without originating application-level content. End systems run protocols across all layers of models such as the TCP/IP stack, whereas intermediate systems focus on lower layers for efficient forwarding. End systems, also termed hosts, include general-purpose computers, servers, smartphones, printers, and IoT devices that serve as sources or destinations for data flows. Their core functions include executing applications that produce or process data (such as web browsers initiating HTTP requests or servers responding with content) and encapsulating data into packets for transmission via the transport and network layers, or decapsulating incoming packets for upper-layer delivery. These nodes handle end-to-end reliability, error correction, and flow control through protocols such as TCP, ensuring delivery from source to destination. Intermediate systems consist of specialized hardware operating at the network, data link, and physical layers to interconnect devices and direct traffic. Key types include:
  • Switches: Layer-2 devices that connect endpoints within a single network segment or LAN, forwarding Ethernet frames based on MAC addresses learned into self-maintained tables to minimize collisions and enable efficient, non-broadcast multi-access communication. Unlike legacy hubs, switches support full-duplex operation and features such as VLANs for logical segmentation, and have been predominant in Ethernet networks since the 1990s.
  • Routers: Layer-3 devices linking disparate networks, such as LANs to WANs, by examining IP headers to determine optimal paths via routing tables populated by protocols such as OSPF or BGP, and performing packet forwarding and network address translation (NAT) to enable internet-scale connectivity. Routers compute routes dynamically, balancing load and adapting to failures, and are essential to the Internet's hierarchical architecture.
  • Bridges: Early layer-2 interconnects that join network segments, filtering traffic by MAC address to reduce collision-domain size and prevent loops, functioning similarly to switches but with fewer ports and, in basic forms, without advanced features such as VLAN integration. Largely superseded by switches in contemporary deployments.
  • Gateways: Multifunctional devices or software that interface heterogeneous networks by translating protocols between incompatible architectures, such as converting between TCP/IP and legacy systems, often incorporating firewall capabilities for security enforcement through packet inspection and policy application.
These intermediate nodes enhance scalability and performance by offloading forwarding logic from end systems, allowing hosts to focus on application processing while ensuring reliable data propagation across diverse topologies.
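The MAC-learning behavior described for switches can be sketched as a toy simulation (a hypothetical `LearningSwitch` class for illustration, not any real switch API):

```python
class LearningSwitch:
    """Toy model of layer-2 MAC learning and forwarding."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def receive(self, in_port: int, src_mac: str, dst_mac: str) -> list[int]:
        # Learn: associate the frame's source MAC with its ingress port.
        self.mac_table[src_mac] = in_port
        # Known destination: forward out exactly one port.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood out all ports except the ingress port.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # unknown dst -> flood: [1, 2, 3]
print(sw.receive(2, "bb:bb", "aa:aa"))  # aa:aa was learned on port 0 -> [0]
```

After a few frames, the table converges and flooding stops, which is why switched LANs avoid the collision behavior of hubs.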

Protocols and Communication Standards

Layered Reference Models

Layered reference models divide the complex functions of network communication into distinct, hierarchical abstractions to promote modularity, interoperability, and standardization. Each layer handles specific responsibilities, such as data transmission or error correction, while providing services to the layer above and relying on the layer below, enabling independent development and troubleshooting. These models emerged in the 1970s and 1980s amid efforts to interconnect diverse systems, with empirical success favoring practical implementations over purely theoretical ones. The Open Systems Interconnection (OSI) model, developed by the International Organization for Standardization (ISO), conceptualizes seven layers: physical, data link, network, transport, session, presentation, and application. Published initially in 1984 as ISO 7498, with the current version ISO/IEC 7498-1:1994, it aimed to create a universal framework for protocol development to facilitate open interconnectivity across vendor systems. The physical layer transmits raw bits over media; the data link layer ensures error-free transfer between adjacent nodes; the network layer handles routing and addressing; the transport layer provides end-to-end reliability; the session layer manages connections; the presentation layer formats data; and the application layer interfaces with user software. Despite its influence on teaching and diagnostics, the OSI model saw limited real-world protocol adoption due to its late development and rigidity, with implementations such as the OSI protocol suite failing to gain traction against established alternatives. In contrast, the TCP/IP model, originating from DARPA research and designed for the ARPANET, structures communication into four layers: link, internet, transport, and application, as formalized in RFC 1122, published in 1989. Evolving from protocols proposed in the mid-1970s, including the initial TCP specification in 1974, it separated connection-oriented transport (TCP) from datagram routing (IP) by 1978, enabling scalable internetworking. The link layer manages hardware access; the internet layer (IP) routes packets across networks; the transport layer (TCP/UDP) ensures delivery; and the application layer encompasses higher-level protocols such as HTTP.
Mandated for ARPANET hosts on January 1, 1983, this model underpins the global Internet, demonstrating causal efficacy through iterative, implementation-driven refinement rather than top-down specification. While the OSI model offers more granular separation (its physical and data link layers map to TCP/IP's link layer, its network layer to the internet layer, its transport layer to the transport layer, and its upper three layers to the application layer), the TCP/IP approach consolidates functions for efficiency, reflecting practical necessities over theoretical purity. OSI's session and presentation layers, for instance, are often handled within TCP/IP applications, reducing overhead in deployed systems. This divergence highlights TCP/IP's empirical dominance, as its protocols scaled to interconnect millions of networks by the 1990s, whereas OSI remained a reference model. Some variants, such as the five-layer Department of Defense (DoD) model, insert a network access layer below the internet layer for clarity, but TCP/IP's four-layer scheme prevails in standards documentation.
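The layering common to both models is often taught via encapsulation: each layer wraps the payload from the layer above with its own header. The bracketed "headers" below are schematic placeholders, not real frame formats:

```python
def encapsulate(app_data: bytes) -> bytes:
    """Schematic top-down encapsulation through the TCP/IP layers."""
    tcp_segment = b"[TCP]" + app_data       # transport layer adds its header
    ip_packet = b"[IP]" + tcp_segment       # internet layer wraps the segment
    frame = b"[ETH]" + ip_packet            # link layer wraps the packet
    return frame

print(encapsulate(b"GET /"))  # -> b'[ETH][IP][TCP]GET /'
```

A receiver performs the same steps in reverse (decapsulation), with each layer stripping its own header before handing the payload upward.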

Core Protocol Suites and Mechanisms

The TCP/IP protocol suite, also known as the Internet protocol suite, forms the foundational set of communication protocols enabling interconnected networks worldwide. Developed in the 1970s by Vinton Cerf and Robert Kahn, it was first detailed in their 1974 paper and adopted as the standard for the ARPANET on January 1, 1983. The suite's core protocols include the Internet Protocol (IP) for best-effort datagram delivery and routing, the Transmission Control Protocol (TCP) for reliable, ordered byte-stream transport, and the User Datagram Protocol (UDP) for lightweight, connectionless datagram exchange. IP underwent formal standardization via RFC 791 in September 1981, while TCP was specified in RFC 793 that same month, establishing mechanisms for packet fragmentation, reassembly, and time-to-live to prevent routing loops. TCP implements reliability through sequence numbering, acknowledgments, and retransmissions, coupled with error detection via header and payload checksums that verify octet integrity in transit. Flow control employs a sliding window, whereby the receiver advertises its buffer capacity to regulate sender throughput and avoid overflow. Connection establishment uses a three-way handshake: the client sends a SYN segment, the server responds with SYN-ACK, and the client replies with ACK, negotiating initial sequence numbers and window sizes. For teardown, a four-way exchange of FIN and ACK segments ensures graceful closure, though half-open connections can persist if one side fails to respond. Congestion control in TCP dynamically adjusts transmission rates to prevent network overload, using a congestion window (cwnd) that limits unacknowledged segments in flight. Core algorithms include slow start, which increases cwnd exponentially from one segment until a threshold, followed by congestion avoidance via additive increase and multiplicative decrease (AIMD) upon detecting loss through duplicate ACKs or timeouts.
IP supports fragmentation with 16-bit identification and offset fields, allowing reassembly at destinations, though path MTU discovery mitigates excessive fragmentation by probing maximum transmission units. UDP omits these reliability features, relying on IP's minimal error handling, making it suitable for applications such as DNS queries or streaming where speed trumps delivery guarantees. Auxiliary protocols enhance the suite's functionality: ICMP provides error reporting and diagnostics, such as echo requests for ping, while ARP maps IP addresses to link-layer addresses in local networks. Though alternatives such as the OSI protocol suite were proposed for layered standardization, TCP/IP's pragmatic, end-to-end design and widespread adoption by the mid-1980s made it the de facto standard, underpinning the global Internet's scalability and resilience.

Addressing, Routing, and Management Protocols

Addressing in computer networks assigns unique identifiers to devices for data packet delivery. In Internet Protocol version 4 (IPv4), addresses are 32-bit numbers expressed in dotted decimal notation, such as 192.168.1.1, divided into four octets. This format provides approximately 4.3 billion unique addresses, structured with a network portion identifying the network and a host portion specifying the device. Subnetting extends the network prefix by borrowing bits from the host portion using a subnet mask, enabling division of a large network into smaller subnetworks for improved management and security. The IPv4 specification, defined in RFC 791, published in September 1981, forms the basis for this addressing scheme in packet-switched networks. IPv6 addresses the limitations of IPv4's finite space with 128-bit addresses, offering about 3.4 × 10^38 unique identifiers and supporting features such as stateless address autoconfiguration and simplified header processing. Specified in RFC 8200, updated in July 2017, IPv6 deployment has accelerated due to IPv4 exhaustion, with adoption exceeding 43% of traffic to major services by early 2025 and projected to surpass 50% later that year. Regional variations persist: some countries had reached 85% adoption by May 2025, while others lag below the global average. Routing protocols determine paths for packets across networks by exchanging topology information among routers. Interior Gateway Protocols (IGPs) operate within a single autonomous system (AS) and include distance-vector protocols such as the Routing Information Protocol (RIP), which uses hop count as a metric limited to 15 hops to prevent infinite loops. Link-state protocols such as Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS) flood link-state advertisements to compute shortest paths via Dijkstra's algorithm, supporting hierarchical areas for scalability in large networks.
Exterior Gateway Protocols (EGPs) manage inter-AS routing; Border Gateway Protocol version 4 (BGP-4), standardized in RFC 4271 in January 2006, employs path-vector mechanisms to select routes based on policy attributes such as AS-path length, forming the global Internet's routing fabric. Multiprotocol Label Switching (MPLS) provides label-based forwarding to support traffic engineering (MPLS-TE), Layer 2 and Layer 3 virtual private networks (L2/L3 MPLS VPN), and Segment Routing, a source-based routing approach that simplifies path control and reduces per-flow state in MPLS networks. Network management protocols facilitate monitoring, configuration, and fault detection. The Internet Control Message Protocol (ICMP), integral to the IP suite, handles error reporting and diagnostics, with tools such as ping using ICMP Echo Request/Reply messages to test reachability. The Simple Network Management Protocol (SNMP), developed by the IETF, allows managers to query agents on devices for operational data via Management Information Bases (MIBs). SNMPv1, introduced in 1988, relies on community strings for basic access; SNMPv2c adds bulk retrieval but retains weak security; SNMPv3, specified in RFCs from 1998 onward, incorporates user-based authentication and encryption for enhanced protection. These protocols typically run over UDP, achieving reliability through request-response acknowledgments at the application level.
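The subnetting described above (borrowing host bits to extend the network prefix) can be explored with Python's standard ipaddress module; for example, splitting a /24 into four /26 subnets:

```python
import ipaddress

# A /24 has 8 host bits; borrowing 2 of them yields four /26 subnets,
# each with 64 addresses (62 usable after network and broadcast).
net = ipaddress.ip_network("192.168.1.0/24")
subnets = list(net.subnets(new_prefix=26))

print([str(s) for s in subnets])
# -> ['192.168.1.0/26', '192.168.1.64/26', '192.168.1.128/26', '192.168.1.192/26']
print(subnets[0].num_addresses - 2)  # usable hosts per /26 -> 62
```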

Classification by Scope and Scale

Geographic and Size-Based Categories

Computer networks are classified by geographic scope, which correlates with physical coverage area, typical data transmission distances, and the number of interconnected nodes. These categories, ranging from personal-scale setups to global infrastructures, influence hardware choices, latency expectations, and management requirements. Standard delineations include Personal Area Networks (PANs) for individual use, Local Area Networks (LANs) for localized environments, Metropolitan Area Networks (MANs) for urban extents, and Wide Area Networks (WANs) for inter-regional connectivity. Personal Area Networks (PANs) operate over very short ranges, typically 1 to 10 meters, connecting a handful of personal devices such as smartphones, wearables, and peripherals owned by one individual. Technologies such as Bluetooth (IEEE 802.15.1), operating around 2.4 GHz with data rates up to 3 Mbps in classic mode or 2 Mbps in low-energy variants, enable data sharing without extensive cabling. PANs emerged in the late 1990s with Bluetooth's commercialization in 1999, prioritizing low power consumption over high throughput, with node counts rarely exceeding 8 in piconet configurations. Local Area Networks (LANs) extend coverage to buildings, homes, or campuses, spanning up to 2 kilometers with wired Ethernet (IEEE 802.3) or Wi-Fi (IEEE 802.11) wireless links. Ethernet LANs, standardized in 1983, now support speeds from 100 Mbps (Fast Ethernet, 1995) to 400 Gbps in data centers as of 2017, accommodating 10 to thousands of nodes via switches and hubs. Wi-Fi LANs, introduced in 1997, provide similar connectivity with ranges up to 100 meters indoors, though signal attenuation limits effective node density to hundreds per access point. LANs emphasize high bandwidth and low latency, often using private IP addressing for internal traffic. Metropolitan Area Networks (MANs) bridge multiple LANs across a city or metropolitan region, covering 5 to 50 kilometers, and connect thousands of nodes through fiber-optic or microwave links.
Defined in IEEE 802.6 standards from the 1980s, MANs serve network operators or municipal services, with bandwidths historically at 10-100 Mbps but now exceeding 10 Gbps via dense wavelength-division multiplexing (DWDM). They facilitate city-wide resource sharing, such as in educational consortia or public-safety systems, balancing cost with broader reach than LANs. Wide Area Networks (WANs) span countries or continents, interconnecting LANs and MANs over distances exceeding 50 kilometers using public carriers such as leased lines, MPLS, or satellite links, and supporting millions of nodes globally. The Internet, operational since ARPANET's evolution in the 1980s and opened to public expansion in 1991, exemplifies a WAN, with backbone speeds reaching 400 Gbps on undersea fiber cables totaling over 1.4 million kilometers as of 2023. WANs prioritize reliability over raw speed, with protocols such as TCP/IP managing variable latency from 10 ms to hundreds of milliseconds, and have employed technologies such as SD-WAN for optimization since the 2010s. Size-based distinctions within these geographic categories often align with node counts: small networks (under 10 devices) suit PANs or home LANs; medium-scale networks (10-100 nodes) fit office LANs; large-scale networks (over 100 nodes) characterize enterprise LANs, MANs, or distributed WAN segments. Coverage area inversely affects achievable throughput due to signal attenuation and congestion, with smaller networks enabling gigabit speeds and larger ones relying on hierarchical routing to manage complexity.
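The latency figures above follow partly from simple physics: propagation delay grows with geographic scope. A rough illustrative calculation, assuming signals in fiber travel at about two-thirds of the speed of light:

```python
def one_way_propagation_ms(distance_km: float, velocity_factor: float = 0.67) -> float:
    """Approximate one-way propagation delay over a fiber span.

    velocity_factor models light in glass traveling at ~2/3 of c;
    queuing and processing delays are ignored.
    """
    c_km_per_ms = 299_792.458 / 1000  # ~300 km per millisecond in vacuum
    return distance_km / (c_km_per_ms * velocity_factor)

# A 2 km LAN span vs. a ~6,000 km transatlantic WAN span:
print(round(one_way_propagation_ms(2), 3))     # ~0.01 ms
print(round(one_way_propagation_ms(6000), 1))  # ~30 ms one way (~60 ms RTT)
```

This is why WAN protocols must tolerate latencies of tens to hundreds of milliseconds while LAN designs can assume sub-millisecond delays.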

Organizational and Access Types

Client-server and peer-to-peer architectures represent the primary organizational models for computer networks, determining how resources are shared and managed among devices. In client-server models, specialized server nodes host centralized resources such as files, applications, or databases, while client devices initiate requests for access, enabling efficient administration, backup, and access control in environments with 10 or more users. This structure supports scalability through server upgrades and enhances security via dedicated controls, though it risks downtime from server failures affecting multiple clients. Examples include web hosting, where browsers query HTTP servers, and enterprise domain controllers managing user logins. Peer-to-peer (P2P) architectures decentralize operations, with each node capable of acting as both client and server to directly exchange data or resources without intermediary servers, ideal for small-scale setups under 10 devices or for resilient applications such as distributed file systems. Advantages include fault tolerance, as resource availability persists despite individual node outages, and lower infrastructure costs, but drawbacks include inconsistent performance, heightened vulnerability to malware propagation, and difficulty in enforcing uniform policies. P2P underpins systems such as BitTorrent for file distribution, where peers upload and download segments collaboratively, reducing reliance on central bandwidth. Hybrid architectures merge client-server centralization with P2P elements for optimized resource use, as in content delivery networks (CDNs) where edge servers handle client requests while peers cache data locally. This approach balances manageability with distribution, common in modern cloud-hybrid setups, though it complicates configuration compared with pure models. Network access types classify the technologies enabling end-user connectivity to core infrastructure, varying by medium, speed, and contention mechanisms.
Wired Ethernet access, standardized under IEEE 802.3, delivers deterministic, full-duplex links up to 100 Gbps over twisted-pair or fiber, minimizing latency in controlled environments like offices. Wireless access via WLAN (Wi-Fi, based on IEEE 802.11) employs RF signals for untethered connections reaching 10 Gbps theoretically, prioritizing mobility but susceptible to interference and shared-medium contention managed via CSMA/CA protocols. Broadband wireline access includes asymmetric DSL (ADSL), which modulates data over telephone copper lines for downstream speeds up to 24 Mbps, serving residential users since the 1990s but limited by distance and line quality. Cable modem access shares cable television infrastructure over hybrid fiber-coax (HFC) networks, achieving 1 Gbps downstream via DOCSIS standards, though upstream is constrained and prone to neighborhood congestion. Fiber-optic access, such as FTTH using passive optical network (PON) protocols, provides symmetric gigabit-to-terabit capacities with low attenuation, deployed widely by 2025 for low-latency applications like 8K streaming. Legacy dial-up access, using V.92 modems over POTS at 56 kbps, persists in remote areas but yields to broadband due to inefficiency. Mobile access types, including 4G LTE and 5G, offer cellular connectivity up to 20 Gbps peak via base stations, emphasizing ubiquitous coverage over fixed high-speed alternatives.

Performance Characteristics

Key Metrics and Measurement

Key performance metrics in computer networks quantify capacity, delay, throughput, variability, and reliability, enabling assessment of operational effectiveness under varying loads and conditions. Bandwidth represents the maximum theoretical data transmission rate, typically measured in bits per second (bps), megabits per second (Mbps), or gigabits per second (Gbps), and is determined by the physical and link-layer properties of the medium. Latency, or propagation delay, measures the time required for a packet to traverse from source to destination, often expressed as round-trip time (RTT) in milliseconds (ms), influenced by factors such as distance, hops, and queuing. Throughput denotes the actual sustained data transfer rate achieved, usually lower than bandwidth due to protocol overhead, contention, and errors, and is evaluated in effective bps under real workloads.
| Metric | Definition | Typical Unit | Common Measurement Methods |
|---|---|---|---|
| Bandwidth | Maximum capacity for data transfer without congestion. | bps, Mbps, Gbps | Link speed queries (e.g., via SNMP) or speed tests. |
| Latency | Time delay for packet propagation and processing. | ms | Ping or traceroute utilities for RTT. |
| Throughput | Realized data rate after accounting for losses and overhead. | bps, Mbps | Tools like iperf for TCP/UDP stream testing. |
| Jitter | Variation in packet arrival times, affecting time-sensitive applications. | ms | Monitoring probes or packet capture analysis (e.g., Wireshark). |
| Packet Loss | Percentage of transmitted packets not received, often due to errors or drops. | % | Sequence number tracking in protocols like ICMP or application-layer stats. |
Jitter, quantified as the standard deviation of latency samples, disrupts applications like VoIP or video streaming where consistent timing is critical, with acceptable levels typically below 30 ms for such uses. Packet loss rates above 1% can degrade TCP performance via retransmissions, while UDP-based services suffer direct data gaps; measurement involves comparing sent and acknowledged packet counts over test intervals. Additional metrics include error rates (e.g., bit error rate, BER, for integrity) and utilization (percentage of bandwidth in use), monitored via protocols like SNMP for device polling or NetFlow for flow-level insights. These metrics are interrelated (high latency or jitter often correlates with packet loss in congested networks) and are benchmarked using standardized tools to establish baselines for diagnostics and capacity planning.
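To illustrate how these metrics fall out of raw probe data, the sketch below computes mean latency, jitter (as the standard deviation of latency samples, per the definition above), and packet loss from a hypothetical list of RTT measurements; the sample values and the use of `None` to mark a lost probe are assumptions of the example.

```python
import statistics

def analyze_samples(rtts_ms):
    """Summarize latency, jitter, and loss from per-probe RTTs.
    A sample of None marks a probe that never returned (lost)."""
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    return {
        "latency_ms": statistics.mean(received),  # average delay
        "jitter_ms": statistics.stdev(received),  # variation in delay
        "loss_pct": loss_pct,                     # unanswered probes
    }

# Ten probes: one lost, the rest clustered around 20 ms.
samples = [20.1, 19.8, 20.4, None, 20.0, 19.9, 20.3, 20.2, 19.7, 20.1]
report = analyze_samples(samples)
print(report)  # loss_pct -> 10.0
```

Real tools such as ping or iperf gather the samples over the wire; the arithmetic that turns samples into a report is essentially the above.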

Congestion, Reliability, and Optimization

Network congestion arises when traffic demand surpasses the capacity of links, routers, or switches, resulting in performance degradation. Primary causes include limited bandwidth relative to usage, excessive connected hosts generating broadcast storms, and sudden traffic bursts from applications. These factors lead to effects such as queueing delays, packet loss due to buffer overflows, and reduced overall throughput, exacerbating issues on shared media. Congestion control mechanisms operate at multiple layers to prevent collapse. Transport protocols like TCP detect congestion via packet loss or explicit signals, responding by reducing the congestion window size to slow transmission rates and probing for available capacity through gradual increases. Network-level approaches include traffic shaping to smooth bursts and policing to discard excess packets, while explicit congestion notification (ECN) allows routers to mark packets instead of dropping them, enabling endpoints to adjust proactively.

Network reliability refers to the probability of successful data delivery without errors or failures over time, measured by metrics such as bit error rate (BER), packet loss rate, and mean time between failures (MTBF). Transmission errors from noise or interference are mitigated through error detection codes like cyclic redundancy checks (CRC), which append checksums to frames for verification. For correction, forward error correction (FEC) techniques embed redundant data, allowing receivers to reconstruct lost bits without retransmission, particularly useful on wireless or high-latency links. Higher-layer reliability in protocols such as TCP incorporates sequence numbers, acknowledgments, and timeouts for retransmitting lost packets, achieving near-perfect delivery over unreliable underlying networks.

Optimization enhances efficiency by balancing load and prioritizing flows. Quality of service (QoS) frameworks classify and queue traffic based on policies, reserving bandwidth or limiting latency for voice/video over bulk data transfers.
Load balancing algorithms distribute sessions across paths or servers using metrics like round-trip time or utilization, preventing single points of overload. Additional methods encompass compression to reduce payload sizes and caching to minimize repeated fetches, collectively improving throughput and reducing congestion susceptibility.
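TCP's window-based congestion response described above can be sketched as a slow-start plus additive-increase/multiplicative-decrease loop; the per-event doubling, the initial threshold, and the event list below are illustrative simplifications of Reno-style behavior, not a full implementation.

```python
def aimd_window(events, cwnd=1.0, ssthresh=16.0):
    """Trace a TCP-style congestion window over 'ack'/'loss' events:
    exponential growth below the threshold, additive growth above it,
    and a multiplicative cut on each detected loss."""
    trace = []
    for event in events:
        if event == "loss":                # congestion detected
            ssthresh = max(cwnd / 2, 1.0)  # halve the threshold
            cwnd = ssthresh                # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential probing
        else:
            cwnd += 1                      # congestion avoidance: additive
        trace.append(cwnd)
    return trace

trace = aimd_window(["ack"] * 6 + ["loss"] + ["ack"] * 3)
print(trace)  # -> [2.0, 4.0, 8.0, 16.0, 17.0, 18.0, 9.0, 10.0, 11.0, 12.0]
```

The sawtooth visible in the trace (growth, halving on loss, renewed growth) is the characteristic shape by which TCP probes for the capacity the earlier paragraphs describe.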

Security Considerations

Common Threats and Vulnerabilities

Distributed denial-of-service (DDoS) attacks represent a primary threat to computer networks, where attackers flood targeted systems with excessive traffic from multiple sources to exhaust bandwidth and resources, rendering services unavailable. In 2023, DDoS incidents rose 31% year-over-year, with an average of 44,000 attacks launched daily worldwide. These attacks exploit network capacity limits and often leverage botnets of compromised devices for amplification. Malware propagation, including worms and trojans, exploits network interconnectivity to spread autonomously or via user interaction, compromising hosts and enabling data theft or lateral movement. Worms like those targeting unpatched vulnerabilities in protocols such as SMB have historically caused widespread infections, as seen in outbreaks disrupting enterprise networks. Ransomware variants encrypt data and demand payment, with 65% of financial organizations reporting such incidents in 2024, up from prior years due to improved evasion techniques. Eavesdropping and man-in-the-middle (MITM) attacks intercept unencrypted traffic on wired or wireless networks, capturing sensitive data like credentials or session tokens. These vulnerabilities arise from protocols lacking inherent encryption, such as early HTTP implementations, allowing passive sniffing on shared media like Ethernet hubs or active interception via ARP spoofing. Phishing serves as a common vector, tricking users into revealing access details that enable unauthorized network entry, accounting for a significant portion of initial breaches. Insider threats and misconfigurations amplify vulnerabilities, where authorized users or flawed setups like open ports expose networks to exploitation. Default credentials on routers and switches, unchanged from factory settings, have facilitated breaches, while unpatched firmware in network devices leaves known exploits open, as cataloged in federal advisories.
Spoofing attacks, including IP address and ARP forgery, bypass access controls and trust safeguards, enabling traffic redirection or amplification in reflection-based DDoS.
  • DDoS: Overwhelms capacity; mitigated by traffic filtering but persistent due to distributed sources.
  • Malware Spread: Leverages protocol flaws; requires endpoint protection and patching.
  • MITM/Eavesdropping: Targets data in transmission; countered by TLS enforcement.
  • Phishing/Insider Access: Human-factor entry; demands user awareness training.
  • Spoofing/Misconfigs: Exploits trust models; addressed via validation and auditing.
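One concrete countermeasure for the spoofing entry above is BCP 38-style ingress filtering, which drops packets whose source address could not legitimately originate on the receiving interface. The prefix and addresses below come from the RFC 5737 documentation ranges and are purely illustrative.

```python
import ipaddress

def ingress_filter(packets, allowed_prefix):
    """BCP 38-style ingress filtering sketch: forward only packets whose
    source address lies inside the prefix expected on this interface."""
    network = ipaddress.ip_network(allowed_prefix)
    accepted = []
    for src in packets:
        if ipaddress.ip_address(src) in network:
            accepted.append(src)  # plausible source: forward
        # else: spoofed or misrouted source address, drop silently
    return accepted

# A customer-facing interface expected to originate only 203.0.113.0/24.
arrivals = ["203.0.113.5", "198.51.100.9", "203.0.113.77"]
print(ingress_filter(arrivals, "203.0.113.0/24"))
# -> ['203.0.113.5', '203.0.113.77']
```

Deployed at network edges, this check prevents hosts from injecting packets with forged source addresses, which is what reflection-based DDoS relies on.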

Protective Technologies and Best Practices

Firewalls serve as a primary protective technology in computer networks by monitoring and controlling incoming and outgoing traffic based on security rules, thereby preventing unauthorized access. Traditional firewalls operate at the network layer using stateful packet inspection to track connection states, while next-generation firewalls incorporate application-layer awareness and threat intelligence for deeper inspection. Intrusion Detection Systems (IDS) passively monitor network traffic for suspicious patterns matching known attack signatures or anomalies, generating alerts for administrators without blocking traffic. In contrast, Intrusion Prevention Systems (IPS) actively block detected threats in real time by dropping malicious packets, functioning as an extension of firewalls in inline mode. Deployment of IDS/IPS reduces breach risks by identifying exploits before endpoint compromise, with studies showing IPS blocking up to 99% of known threats in tested environments.

Network segmentation divides networks into isolated zones using technologies like Virtual Local Area Networks (VLANs), limiting lateral movement of attackers and containing breaches to smaller areas. Segments enhance security by enforcing traffic controls via access control lists (ACLs) between zones, reducing packet sniffing and overall exposure. Proper implementation, such as classifying assets and applying microsegmentation, aligns with NIST guidelines to minimize damage from incidents like ransomware propagation.

Best practices include adopting a defense-in-depth strategy, layering multiple controls rather than relying on a single mechanism, as recommended in NIST SP 800-14 for securing systems. Organizations should regularly patch vulnerabilities, with studies indicating that 60% of breaches involve unpatched software exploited within 30 days of disclosure. Implementing least-privilege access, continuous monitoring, and incident response planning further mitigates risks, per the NIST Cybersecurity Framework functions of protect, detect, and respond.
Employee training on phishing recognition and secure configurations complements technical measures, reducing human-error-induced incidents, which account for 74% of breaches according to Verizon's 2023 Data Breach Investigations Report.
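The rule-based filtering that firewalls and ACLs perform can be sketched as a first-match evaluation with an implicit final deny; the `Rule` type and the sample policy below are hypothetical simplifications (real firewalls also match on addresses, direction, and connection state).

```python
from dataclasses import dataclass

@dataclass
class Rule:
    action: str     # "permit" or "deny"
    proto: str      # "tcp", "udp", or "any"
    dst_port: int   # destination port; 0 matches any port

def evaluate(rules, proto, dst_port):
    """First-match packet filter sketch: rules are checked in order,
    and an implicit deny applies when nothing matches."""
    for rule in rules:
        if rule.proto in (proto, "any") and rule.dst_port in (dst_port, 0):
            return rule.action
    return "deny"  # implicit deny at the end of the rule list

policy = [
    Rule("permit", "tcp", 443),  # allow HTTPS in
    Rule("permit", "tcp", 22),   # allow SSH in
    Rule("deny", "any", 0),      # explicit catch-all deny
]
print(evaluate(policy, "tcp", 443), evaluate(policy, "udp", 53))
# -> permit deny
```

Rule order matters under first-match semantics: placing the catch-all deny before the permits would silently block all traffic, a common misconfiguration.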

Encryption, Authentication, and Access Controls

Encryption protects data transmitted over computer networks by rendering it unreadable to unauthorized parties through cryptographic algorithms. Symmetric encryption, such as the Advanced Encryption Standard (AES) approved by NIST in 2001, uses a single shared key for both encryption and decryption, enabling efficient bulk data protection in protocols like IPsec, which secures IP communications at the network layer. Asymmetric encryption, employing public-private key pairs like RSA developed in 1977, supports key exchange and digital signatures for initial session setup in protocols such as Transport Layer Security (TLS), which evolved from SSL and secures application-layer traffic, including HTTPS connections handling over 95% of web traffic as of 2023.

Authentication mechanisms verify the identity of communicating entities to prevent impersonation attacks in networks. Port-based Network Access Control (PNAC) under IEEE 802.1X, standardized in 2001, authenticates devices before granting LAN or WLAN access, often using the Extensible Authentication Protocol (EAP) framework to support methods like passwords, certificates, or one-time credentials. Remote Authentication Dial-In User Service (RADIUS), defined in RFC 2865 published in 2000, centralizes authentication for remote users via UDP-based servers, commonly integrated with EAP for enterprise security under the WPA2/WPA3 standards ratified in 2004 and 2018, respectively.

Access controls enforce policies to restrict network resource usage based on predefined rules, mitigating unauthorized entry. Access Control Lists (ACLs), implemented on routers and switches since the 1980s, consist of sequential permit or deny statements evaluated against packet headers like source/destination IP addresses and ports, processing millions of packets per second in high-traffic environments. Firewalls extend ACLs with stateful inspection, tracking connection states to allow return traffic while blocking unsolicited inbound packets, as in next-generation firewalls that inspect payloads for threats beyond simple header matching.
Role-based access control (RBAC), formalized in NIST standards like SP 800-53 revision 5 from 2020, assigns permissions to user roles rather than individuals, reducing administrative overhead in large networks by limiting privileges to the least necessary levels. These mechanisms collectively address causal risks like man-in-the-middle attacks, where unencrypted or unauthenticated sessions enable data interception, as evidenced by breaches like the 2017 Equifax incident exposing 147 million records due to unpatched network vulnerabilities.
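The key-exchange role of asymmetric cryptography described above can be illustrated with a toy Diffie-Hellman exchange: both parties derive the same secret while sending only public values over the network. The 127-bit Mersenne prime below is far too small for real use, where standardized 2048-bit groups or elliptic curves are negotiated during the TLS handshake.

```python
import secrets

# Toy parameters for illustration only.
P = (1 << 127) - 1  # a Mersenne prime (2**127 - 1)
G = 3               # small public generator

a = secrets.randbelow(P - 2) + 1  # Alice's private exponent, never sent
b = secrets.randbelow(P - 2) + 1  # Bob's private exponent, never sent
A = pow(G, a, P)                  # Alice's public value, sent in the clear
B = pow(G, b, P)                  # Bob's public value, sent in the clear

# Each side combines its private exponent with the other's public value;
# an eavesdropper who sees only A and B cannot feasibly recover the secret.
alice_secret = pow(B, a, P)
bob_secret = pow(A, b, P)
print(alice_secret == bob_secret)  # -> True
```

The shared secret then seeds a symmetric cipher such as AES for the bulk of the session, which is exactly the hybrid pattern TLS uses.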

Applications and Services

Traditional and Enterprise Uses

Computer networks have traditionally enabled resource sharing among connected devices, such as printers, storage disks, and files, within local area networks (LANs) to reduce hardware duplication and improve efficiency. This capability emerged prominently with the development of Ethernet at Xerox PARC in 1973, which facilitated high-speed data exchange for shared peripherals in office environments. Early LANs also supported client-server models for applications like remote file access and basic electronic mail, allowing users to retrieve data from centralized servers without physical media transport. In enterprise settings, networks scale these functions to support organization-wide operations, including intranet-based communication, collaborative document management, and access to shared databases for business processes. Enterprise networks integrate Voice over IP (VoIP) systems for internal communications, enabling cost-effective voice, video, and messaging over IP infrastructure rather than separate PSTN lines, with features like call routing and integration with collaboration tools. They also underpin enterprise resource planning (ERP) and customer relationship management (CRM) applications, where distributed servers handle transactions across branches, as seen in systems connecting employee devices to central data centers for inventory tracking and sales automation. Security protocols within these networks enforce access controls for sensitive file transfers, mitigating risks in high-volume enterprise data flows.

Emerging Paradigms (IoT, Cloud, Edge)

The Internet of Things (IoT) represents a paradigm shift in computer networking by interconnecting billions of heterogeneous devices, enabling sensing and automation across domains ranging from consumer devices to smart cities. As of 2025, the number of connected IoT devices is projected to exceed 18 billion globally, with estimates reaching up to 20.1 billion, driven by advancements in sensor technology and wireless integration. This proliferation demands networks optimized for low-power, wide-area communication, contrasting traditional client-server models with mesh and star topologies that prioritize scalability over centralized control. Key protocols include MQTT for lightweight, publish-subscribe messaging suited to unreliable connections, CoAP for constrained devices emulating HTTP over UDP, and Zigbee for short-range, low-energy mesh networks. However, IoT networks face causal challenges in reliability, as intermittent connectivity and resource constraints amplify vulnerability to failures, necessitating protocols with built-in redundancy like MQTT's quality-of-service levels.

Cloud computing has reshaped network architecture by centralizing resources in remote data centers, facilitating on-demand scalability and virtualization that decouple services from physical hardware. This paradigm multiplies bandwidth demands on access networks, as applications offload processing to the cloud, requiring enhanced quality of service for latency-sensitive traffic. Complementary technologies such as Software-Defined Networking (SDN), which separates control planes from data planes for programmable routing, and Network Functions Virtualization (NFV), which runs network services like firewalls on virtual machines, enable dynamic resource allocation in cloud environments. Empirically, SDN and NFV reduce operational costs by 25-50% through efficient hardware utilization, though they introduce dependencies on high-speed interconnects, exposing networks to single points of failure if not redundantly engineered.
Edge computing emerges as a distributed complement to cloud-centric models, processing data proximate to its sources to minimize transit delays inherent in centralized architectures. In network terms, edge paradigms position computation at gateways or base stations, yielding latency reductions: 58% of users experience under 10 ms to edge servers, versus only 29% to cloud datacenters. This benefits real-time applications like autonomous vehicles, where edge processing cuts response times from hundreds of milliseconds in cloud setups to tens, and conserves bandwidth by filtering data locally before aggregation. Challenges include heightened security risks from dispersed nodes, as distributed processing complicates uniform threat monitoring, and scalability limits due to heterogeneous hardware, demanding hybrid edge-cloud protocols for orchestration. Overall, these paradigms (IoT for endpoint density, cloud for elastic scaling, and edge for proximity) interoperate in combined architectures, fostering resilient networks but requiring empirical validation of trade-offs in power, cost, and performance.
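MQTT's publish-subscribe routing mentioned above delivers messages by matching topic names against subscriber filters. The sketch below implements the standard wildcard semantics (`+` matches exactly one level, `#` as the final level matches all remaining levels); the factory topic names are illustrative.

```python
def topic_matches(filter_str, topic):
    """MQTT-style topic filter matching: '+' matches one level,
    '#' (only valid as the last level) matches all remaining levels."""
    f_levels = filter_str.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":                        # multi-level wildcard
            return True
        if i >= len(t_levels):              # topic ran out of levels
            return False
        if f != "+" and f != t_levels[i]:   # literal level must match
            return False
    return len(f_levels) == len(t_levels)

print(topic_matches("factory/+/temperature", "factory/line1/temperature"))  # True
print(topic_matches("factory/#", "factory/line1/pressure/raw"))             # True
print(topic_matches("factory/+/temperature", "factory/line1/humidity"))     # False
```

A broker evaluates each published topic against every subscriber's filters this way, which is what lets constrained IoT devices publish once and reach many consumers.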

Economic and Regulatory Dimensions

Market Dynamics and Economic Impacts

The global enterprise networking market reached an estimated USD 124.59 billion in 2025, projected to expand at a compound annual growth rate (CAGR) of 9.2% to USD 193.77 billion by 2030, propelled by surging demand for AI-driven automation, cloud migration, and advanced wireless standards like Wi-Fi 7. This trajectory underscores the sector's responsiveness to enterprise needs for scalable, secure infrastructure amid digital transformation, with data center networking alone valued at USD 43.54 billion in 2025 and forecast to grow at a 17.2% CAGR through 2033 due to hyperscale deployments. Key growth drivers include the proliferation of IoT devices and edge computing, which necessitate robust, low-latency interconnections, though supply chain constraints, such as semiconductor shortages exacerbated by geopolitical frictions, have intermittently slowed hardware deployments since 2020.

Competition in the market is oligopolistic, dominated by Cisco Systems, which commands 30-77% share in core segments like switches and routers through its integrated hardware-software ecosystem and entrenched customer relationships. Rivals such as Juniper Networks (now under HPE influence), Broadcom, Huawei, and Arista Networks challenge this hegemony via specialized offerings in high-performance Ethernet and software-defined networking, spurring innovation in automation and programmability to capture margins in AI-optimized fabrics. U.S. export controls on Chinese firms like Huawei have intensified this dynamic, redirecting market flows toward diversified suppliers and prompting Western incumbents to onshore critical components, albeit at higher costs that could temper short-term profitability.

Economically, computer networks catalyze productivity growth by facilitating information flows and remote operations; empirical analysis across 116 countries from 2014-2019 links faster broadband speeds to measurable labor productivity uplifts, as enhanced connectivity reduces coordination frictions in supply chains and remote work.
In the U.S., internet-based networking contributes an estimated USD 175 billion directly to the economy via platforms, ecosystems, and connectivity enablers, amplifying broader ICT sector impacts that bolster GDP through capital deepening and spillovers. Yet network outages and deliberate disruptions, such as government-imposed shutdowns, inflict quantifiable losses, eroding confidence and output by hampering transactions and investment, with partial blackouts alone causing multimillion-dollar daily hits in affected regions. Despite these gains, productivity growth has lagged expectations post-internet commercialization, suggesting networks enhance efficiency within sectors but struggle to drive economy-wide accelerations without complementary investments or breakthroughs.

Policy Debates and Controversies

One prominent policy debate surrounding computer networks centers on net neutrality, which mandates that internet service providers (ISPs) treat all data traffic equally without blocking, throttling, or prioritizing content based on source or type. Proponents argue this prevents ISPs from discriminating against competitors or extracting fees from content providers, thereby fostering innovation and consumer choice, as evidenced by the U.S. Federal Communications Commission's (FCC) 2015 Open Internet Order that classified broadband as a Title II service to enforce such rules. Opponents contend that strict neutrality regulations deter infrastructure investment by limiting ISPs' ability to recoup costs through differentiated services, with empirical analysis of U.S. rule changes in 2010, 2015, and 2017 showing varied but generally modest impacts on telecommunication investment levels rather than catastrophic declines. The 2017 repeal under the FCC's Restoring Internet Freedom Order shifted oversight to lighter-touch antitrust enforcement, and subsequent data indicated no widespread throttling or blocking incidents, challenging claims of imminent degradation. A 2024 study on mobile markets found net neutrality rules potentially inefficient, yielding negative welfare effects due to reduced incentives for quality improvements in competitive environments.

Network privacy and government surveillance policies have sparked controversies over the tension between national security and individual rights, particularly in how data traverses computer networks. Revelations from Edward Snowden in 2013 exposed U.S. National Security Agency (NSA) programs like PRISM and Upstream, which intercepted traffic for bulk metadata collection, prompting debates on whether such practices violate Fourth Amendment protections without sufficient oversight. Critics argue that warrantless surveillance chills free expression and erodes trust in networked communications, as upstream collection under Section 702 of the Foreign Intelligence Surveillance Act (FISA) has incidentally captured domestic data without individualized warrants.
Empirical surveys reveal widespread public concern, with 81% of Americans in 2019 believing it is not possible to go through daily life without leaving data traces, amplifying calls for reforms like the USA FREEDOM Act of 2015, which ended bulk collection. Proponents of expanded surveillance cite counterterrorism successes, such as thwarting plots via metadata analysis, but the lack of declassified evidence fuels skepticism about efficacy versus overreach, with policies like the FCC's 2024 net neutrality reinstatement granting new authority over privacy and cybersecurity to mitigate such risks.

Regulatory efforts to address monopolistic tendencies in network infrastructure have involved antitrust actions and spectrum policies, given the high fixed costs and economies of scale in deploying broadband and cellular networks. The U.S. Department of Justice's 1984 breakup of AT&T exemplified early interventions to curb monopoly abuse in telecommunications, fostering competition that accelerated innovations like fiber-optic deployment, though critics note persistent local franchise monopolies for cable and internet services. Modern debates focus on whether dominant ISPs or equipment providers, such as Huawei, facing U.S. bans since 2019 over security risks, warrant stricter antitrust scrutiny, with network effects amplifying concerns about market concentration and lock-in. Empirical reviews suggest that while regulation can prevent abuses, overregulation in converging telecom-computer markets may hinder mergers beneficial for scaling next-generation networks, as seen in post-1996 Act analyses showing mixed outcomes on competition versus consolidation. Internationally, policies like the European Union's data localization requirements highlight sovereignty debates, where mandates to route traffic domestically aim to protect against foreign surveillance but risk fragmenting global networks and increasing latency costs.

Future Directions and Innovations

Advanced Technologies (AI, 5G/6G, Quantum)

Artificial intelligence (AI) enhances computer network operations through algorithms that enable predictive maintenance, automated troubleshooting, and real-time optimization of traffic routing and resource allocation. In self-organizing networks, AI systems self-configure, self-heal, and self-optimize, minimizing manual interventions and improving reliability across software-defined networks. For security, AI facilitates continuous monitoring of user behaviors and traffic flows, enabling early detection of anomalies such as cyberattacks or data exfiltration via pattern recognition that surpasses traditional rule-based methods. These applications, including tokenized identity management and automated threat remediation, have been deployed in enterprise environments to reduce response times to incidents from hours to minutes.

Fifth-generation (5G) wireless networks, with widespread commercial deployment by 2025, provide peak data rates exceeding 10 Gbps, sub-millisecond latency, and support for up to one million devices per square kilometer, facilitating applications in industrial automation and machine-type communication. As of August 2025, 173 operators across 70 countries are investing in 5G Standalone (SA) architectures, which decouple control and user planes for enhanced network slicing and edge integration, outperforming non-standalone modes in performance metrics like throughput and reliability. In the United States, 5G SA deployments drove median download speeds to over 200 Mbps in Q4 2024, with coverage reaching urban and suburban areas via mid-band spectrum. Private 5G networks for enterprises are projected to grow at a 41% CAGR through 2028, driven by needs for localized, low-latency connectivity in manufacturing and logistics.

Sixth-generation (6G) wireless technology remains in the research and early standardization phase as of 2025, targeting terahertz frequencies for data rates up to 1 Tbps and latencies below 0.1 milliseconds, with native integration of AI for dynamic resource management and sensing capabilities.
In June 2025, 3GPP initiated scoping for 6G technical specifications, focusing on non-terrestrial networks and joint communication-sensing systems to enable holographic communication and digital twins. Regulatory efforts, including the U.S. NTIA's January 2025 request for comments on policy support and the FCC's August 2025 report, emphasize spectrum allocation above 100 GHz and address technical challenges to accelerate development toward commercial viability by the early 2030s.

Quantum networking leverages entanglement and superposition to enable secure, tamper-evident communication protocols like quantum key distribution (QKD), which detects eavesdropping through measurement disturbance, avoiding the vulnerability of classical cryptography to computational attacks. Developments include IonQ's September 2025 demonstration of quantum state transfer over fiber optic infrastructure using quantum repeaters, extending entanglement distribution beyond laboratory distances. The quantum internet roadmap progresses through physical-layer quantum channels, link-layer repeaters, and network-layer routing, with experimental testbeds achieving multi-node entanglement in 2025, though scalability remains limited by decoherence and photon loss. Commercial quantum networks, such as those integrating with existing fiber for hybrid classical-quantum links, are emerging for financial and governmental applications requiring unconditional security, but full-scale quantum internet deployment awaits advances in error-corrected quantum memories.
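The eavesdropping-detection idea behind QKD rests on discarding measurements taken in mismatched bases. The classical simulation below sketches BB84 sifting with the quantum measurement step idealized and no eavesdropper present; basis labels `+` and `x` and the sample sizes are illustrative assumptions.

```python
import random

def bb84_sift(n, rng):
    """BB84 sifting sketch (no eavesdropper): Alice encodes random bits
    in random bases, Bob measures in random bases, and a mismatched
    basis yields a random bit that is discarded during sifting."""
    alice_bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]
    bob_bases = [rng.choice("+x") for _ in range(n)]
    bob_results = [
        alice_bits[i] if alice_bases[i] == bob_bases[i] else rng.randint(0, 1)
        for i in range(n)
    ]
    # Publicly compare bases (never bits) and keep only agreeing positions.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_results[i] for i in keep]

alice_key, bob_key = bb84_sift(64, random.Random(7))
print(alice_key == bob_key)  # -> True: keys agree when nobody eavesdropped
```

An eavesdropper measuring in flight would disturb roughly a quarter of the sifted bits, so comparing a random subset of the keys reveals the intrusion, which is the detection property the paragraph above describes.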

Challenges and Open Problems

Scalability in computer networks faces significant hurdles due to the explosive growth of connected devices, particularly in massive IoT deployments, where data volumes can exceed petabytes daily and lead to congestion, with projections indicating over 75 billion IoT devices by 2025 straining existing infrastructures. Network architectures often encounter bottlenecks from architectural constraints and resource limits, such as insufficient bandwidth allocation during business expansion, necessitating approaches like WAN optimization to mitigate degradation. End-to-end visibility remains elusive in hybrid cloud-edge environments, complicating policy enforcement and contributing to operational inefficiencies reported by 45% of surveyed organizations.

Security challenges intensify with the rise of AI-augmented threats, including ransomware variants that evaded detection in 60% of incidents by mid-2025, and state-sponsored campaigns incorporating automation, with talent shortages implicated in over half of cyber events. Edge security lags in distributed systems, exposing vulnerabilities in data flows, while traditional perimeter defenses fail against sophisticated social engineering, as evidenced by a 30% uptick in such attacks per industry analyses. These issues underscore the need for adaptive protocols, yet implementation gaps persist due to staffing shortages projected to affect 70% of enterprises by year-end.

Reliability and resiliency pose ongoing problems, with networks prone to single points of failure amid rapid scaling, where 45% of businesses cite data privacy concerns as barriers to resilient upgrades. Performance inconsistencies arise from hardware heterogeneity and legacy integrations, leading to latency spikes exceeding 100 ms in overloaded segments, as observed in enterprise growth studies.
Open problems center on quantum networking integration for beyond-5G paradigms, where error correction in noisy intermediate-scale quantum (NISQ) systems remains unresolved, limiting entanglement distribution over distances beyond 100 km without quantum repeaters. Scalability of quantum-secure channels over existing infrastructure faces causal barriers from decoherence rates, with current prototypes achieving only 1% efficiency in multi-node setups, demanding breakthroughs in topological qubits. AI-quantum hybrids for network optimization encounter integration paradoxes, such as non-deterministic quantum outputs conflicting with AI's data-driven predictability, hindering real-time applications like antenna tilting in 6G networks, where simulations show 20-30% efficiency gains but require fault-tolerant hybrids. Secure embedding of such systems in industrial controls lacks standardized protocols, exposing gaps in zero-trust models for legacy OT systems. Energy-efficient routing under variable loads persists as an unsolved optimization, with heuristic algorithms falling short of NP-hard bounds in dynamic topologies.

References
