Bandwidth (computing)
In computing, bandwidth is the maximum rate of data transfer across a given path. Bandwidth may be characterized as network bandwidth,[1] data bandwidth,[2] or digital bandwidth.[3][4]
This definition of bandwidth contrasts with its use in the fields of signal processing, wireless communications, modem data transmission, digital communications, and electronics, in which bandwidth refers to signal bandwidth measured in hertz: the frequency range between the lowest and highest attainable frequencies while meeting a well-defined impairment level in signal power. The actual bit rate that can be achieved depends not only on the signal bandwidth but also on the noise on the channel.
Network capacity
The term bandwidth sometimes refers to the net bit rate (peak bit rate, information rate, or physical-layer useful bit rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network. The maximum rate that can be sustained on a link is limited by the Shannon–Hartley channel capacity for these communication systems, which is dependent on the bandwidth in hertz and the noise on the channel.
Network consumption
The consumed bandwidth in bit/s corresponds to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The consumed bandwidth can be affected by technologies such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth caps, and bandwidth allocation (for example the bandwidth allocation protocol and dynamic bandwidth allocation). A bit stream's bandwidth is proportional to the average consumed signal bandwidth in hertz (the average spectral bandwidth of the analog signal representing the bit stream) during a studied time interval.
Channel bandwidth may be confused with useful data throughput (or goodput). For example, a channel with x bit/s may not necessarily transmit data at x rate, since protocols, encryption, and other factors can add appreciable overhead. For instance, much internet traffic uses the transmission control protocol (TCP), which requires a three-way handshake for each transaction. Although in many modern implementations the protocol is efficient, it does add significant overhead compared to simpler protocols. Also, data packets may be lost, which further reduces the useful data throughput. In general, any effective digital communication needs a framing protocol; overhead and effective throughput depend on the implementation. Useful throughput is less than or equal to the actual channel capacity minus implementation overhead.
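To make the overhead arithmetic concrete, here is a minimal sketch (in Python) of framing efficiency for a full-size, option-free TCP/IP packet over Ethernet; the header sizes are the standard fixed values, and real traffic with smaller packets or TCP options would fare worse:

```python
# Framing efficiency for a 1500-byte IP packet over Ethernet (a sketch).
# Header sizes are the standard option-free values.

MTU = 1500                 # bytes of Ethernet payload (the IP packet)
IP_HDR = 20                # IPv4 header without options
TCP_HDR = 20               # TCP header without options
ETH_FRAMING = 18 + 20      # frame header + FCS, plus preamble and inter-frame gap

payload = MTU - IP_HDR - TCP_HDR      # application bytes carried per packet
on_wire = MTU + ETH_FRAMING           # bytes the link is actually occupied

efficiency = payload / on_wire
print(f"{efficiency:.3f}")                     # ~0.949, i.e. roughly 5% overhead
print(f"{1e9 * efficiency / 1e6:.0f} Mbit/s")  # goodput ceiling on a 1 Gbit/s link
```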
Maximum throughput
The asymptotic bandwidth (formally asymptotic throughput) for a network is the measure of maximum throughput for a greedy source, for example when the message size or the number of packets per second from the source approaches its maximum.[5]
Asymptotic bandwidths are usually estimated by sending a number of very large messages through the network and measuring the end-to-end throughput. As with other bandwidths, the asymptotic bandwidth is measured in multiples of bits per second. Since bandwidth spikes can skew the measurement, carriers often use the 95th percentile method. This method continuously measures bandwidth usage and then removes the top 5 percent.[6]
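A minimal sketch of the 95th-percentile method in Python, assuming periodic usage samples (e.g., five-minute readings in Mbit/s); the sample values are invented for illustration:

```python
# 95th-percentile bandwidth measurement (a sketch): sort the periodic usage
# samples, discard the top 5 percent, and report the highest remaining value.

def percentile_95(samples_mbps):
    ordered = sorted(samples_mbps)
    keep = max(int(len(ordered) * 0.95), 1)  # number of samples retained
    return ordered[keep - 1]

# Ten hypothetical five-minute readings with one short spike:
readings = [40, 42, 38, 45, 41, 950, 39, 44, 43, 37]
print(percentile_95(readings))  # 45 -- the 950 Mbit/s spike is discarded
```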
Multimedia
Digital bandwidth may also refer to the multimedia bit rate or average bitrate after multimedia data compression (source coding), defined as the total amount of data divided by the playback time.
Due to the impractically high bandwidth requirements of uncompressed digital media, the required multimedia bandwidth can be significantly reduced with data compression.[7] The most widely used data compression technique for media bandwidth reduction is the discrete cosine transform (DCT), which was first proposed by Nasir Ahmed in the early 1970s.[8] DCT compression significantly reduces the amount of memory and bandwidth required for digital signals, capable of achieving a data compression ratio of up to 100:1 compared to uncompressed media.[9]
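As a numerical illustration of the definition above (total data divided by playback time) and of the savings a high compression ratio yields, with invented figures:

```python
# Average multimedia bitrate = total data / playback time (a sketch with
# invented figures), plus the effect of a 100:1 compression ratio.

def avg_bitrate_mbps(total_bytes, seconds):
    return total_bytes * 8 / seconds / 1e6

raw_bytes = 1_000_000_000   # ~1 GB of uncompressed media (assumption)
duration = 600              # ten minutes of playback

print(avg_bitrate_mbps(raw_bytes, duration))        # ~13.3 Mbit/s uncompressed
print(avg_bitrate_mbps(raw_bytes / 100, duration))  # ~0.13 Mbit/s at 100:1
```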
Web hosting
In web hosting, the term bandwidth is often used to describe the amount of data transferred to or from a website or server within a prescribed period of time, for example bandwidth consumption accumulated over a month, measured in gigabytes.[10] A more accurate phrase for this meaning, a maximum amount of data transferred per month or other given period, is monthly data transfer.
A similar situation can occur for end-user Internet service providers as well, especially where network capacity is limited (for example in areas with underdeveloped internet connectivity and on wireless networks).
Internet connections
| Bit rate | Connection type |
|---|---|
| 56 kbit/s | Dial-up |
| 1.5 Mbit/s | ADSL Lite |
| 1.544 Mbit/s | T1/DS1 |
| 2.048 Mbit/s | E1 / E-carrier |
| 4 Mbit/s | ADSL1 |
| 10 Mbit/s | Ethernet |
| 11 Mbit/s | Wireless 802.11b |
| 24 Mbit/s | ADSL2+ |
| 44.736 Mbit/s | T3/DS3 |
| 54 Mbit/s | Wireless 802.11g |
| 100 Mbit/s | Fast Ethernet |
| 155 Mbit/s | OC3 |
| 600 Mbit/s | Wireless 802.11n |
| 622 Mbit/s | OC12 |
| 1 Gbit/s | Gigabit Ethernet |
| 1.3 Gbit/s | Wireless 802.11ac |
| 2.5 Gbit/s | OC48 |
| 5 Gbit/s | SuperSpeed USB |
| 7 Gbit/s | Wireless 802.11ad |
| 9.6 Gbit/s | OC192 |
| 10 Gbit/s | 10 Gigabit Ethernet, SuperSpeed USB 10 Gbit/s |
| 20 Gbit/s | SuperSpeed USB 20 Gbit/s |
| 40 Gbit/s | Thunderbolt 3 |
| 100 Gbit/s | 100 Gigabit Ethernet |
Edholm's law
Edholm's law, proposed by and named after Phil Edholm in 2004,[11] holds that the bandwidth of telecommunication networks doubles every 18 months, which has proven to be true since the 1970s.[11][12] The trend is evident in the cases of the Internet,[11] cellular (mobile), wireless LAN, and wireless personal area networks.[12]
The MOSFET (metal–oxide–semiconductor field-effect transistor) is the most important factor enabling the rapid increase in bandwidth.[13] The MOSFET (MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959,[14][15][16] and went on to become the basic building block of modern telecommunications technology.[17][18] Continuous MOSFET scaling, along with various advances in MOS technology, has enabled both Moore's law (transistor counts in integrated circuit chips doubling every two years) and Edholm's law (communication bandwidth doubling every 18 months).[13]
References
- ^ Douglas Comer, Computer Networks and Internets, page 99 ff, Prentice Hall, 2008.
- ^ Fred Halsall, Introduction to Data Communications and Computer Networks, page 108, Addison-Wesley, 1985.
- ^ Cisco Networking Academy Program: CCNA 1 and 2 Companion Guide, Volume 1–2, Cisco Academy, 2003.
- ^ Behrouz A. Forouzan, Data communications and networking, McGraw-Hill, 2007
- ^ Chou, C. Y.; et al. (2006). "Modeling Message Passing Overhead". In Chung, Yeh-Ching; Moreira, José E. (eds.). Advances in Grid and Pervasive Computing: First International Conference, GPC 2006. Springer. pp. 299–307. ISBN 3540338098.
- ^ "What is Bandwidth? - Definition and Details". www.paessler.com. Retrieved 2019-04-18.
- ^ Lee, Jack (2005). Scalable Continuous Media Streaming Systems: Architecture, Design, Analysis and Implementation. John Wiley & Sons. p. 25. ISBN 9780470857649.
- ^ Stanković, Radomir S.; Astola, Jaakko T. (2012). "Reminiscences of the Early Work in DCT: Interview with K.R. Rao" (PDF). Reprints from the Early Days of Information Sciences. 60. Retrieved 13 October 2019.
- ^ Lea, William (1994). Video on demand: Research Paper 94/68. House of Commons Library. Archived from the original on 20 September 2019. Retrieved 20 September 2019.
- ^ Low, Jerry (27 March 2022). "How Much Hosting Bandwidth Do I Need For My Website?". WHSR.
- ^ Cherry, Steven (2004). "Edholm's law of bandwidth". IEEE Spectrum. 41 (7): 58–60. doi:10.1109/MSPEC.2004.1309810. S2CID 27580722.
- ^ Deng, Wei; Mahmoudi, Reza; van Roermund, Arthur (2012). Time Multiplexed Beam-Forming with Space-Frequency Transformation. New York: Springer. p. 1. ISBN 9781461450450.
- ^ Jindal, Renuka P. (2009). "From millibits to terabits per second and beyond - over 60 years of innovation". 2009 2nd International Workshop on Electron Devices and Semiconductor Technology. pp. 1–6. doi:10.1109/EDST.2009.5166093. ISBN 978-1-4244-3831-0. S2CID 25112828.
- ^ "1960 - Metal Oxide Semiconductor (MOS) Transistor Demonstrated". The Silicon Engine. Computer History Museum.
- ^ Lojek, Bo (2007). History of Semiconductor Engineering. Springer Science & Business Media. pp. 321–3. ISBN 9783540342588.
- ^ "Who Invented the Transistor?". Computer History Museum. 4 December 2013. Retrieved 20 July 2019.
- ^ "Triumph of the MOS Transistor". YouTube. Computer History Museum. 6 August 2010. Archived from the original on 2021-11-07. Retrieved 21 July 2019.
- ^ Raymer, Michael G. (2009). The Silicon Web: Physics for the Internet Age. CRC Press. p. 365. ISBN 9781439803127.
Fundamentals
Definition and Scope
In computing, bandwidth refers to the maximum rate at which data can be transferred across a given path or communication channel, typically expressed in bits per second (bps). This metric quantifies the potential capacity of a system to handle data flow rather than the actual amount transferred in practice, serving as a theoretical upper limit influenced by hardware and protocol constraints.[1][8]

Bandwidth is distinct from throughput, which measures the real-world rate of successful data transmission after accounting for factors like congestion, errors, or retransmissions, and from latency, which represents the delay in data propagation from sender to receiver. For instance, high bandwidth allows a greater volume of data to move but does not mitigate delays caused by propagation time or processing overheads, highlighting how these concepts interplay in system performance without overlapping in definition.[1][9]

The concept applies across various computing contexts: networks, where it defines link capacities such as Ethernet standards rated at 10 Mbps for early implementations or 1 Gbps for modern Gigabit Ethernet; storage interfaces like USB (up to 5 Gbps for USB 3.0) and SATA (up to 6 Gbps for SATA III); and processors, where memory bandwidth indicates the data rate between CPU and RAM, often reaching hundreds of GB/s in high-end systems. These applications underscore bandwidth's role in enabling efficient data movement in digital architectures.[10][11][12]

Originally borrowed from signal processing in the early 20th century, where it described the range of frequencies in a band (the term was coined around 1930 in electronics), bandwidth was adapted to digital contexts in the late 1940s through Claude Shannon's information theory, which formalized channel capacity as a function of bandwidth and signal-to-noise ratio, shifting focus to bit-rate capacities in computational and communication environments.[13][12][14]
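A back-of-the-envelope model makes the bandwidth/latency distinction concrete; the link figures below are assumptions for illustration:

```python
# Total transfer time = propagation latency + size / bandwidth (a simplified
# model that ignores protocol overhead and congestion).

def transfer_time_s(size_bytes, bandwidth_bps, latency_s):
    return latency_s + size_bytes * 8 / bandwidth_bps

request = 10_000  # a 10 kB response
print(transfer_time_s(request, 100e6, 0.100))  # 100 Mbit/s link: ~0.1008 s
print(transfer_time_s(request, 10e9, 0.100))   # 10 Gbit/s link:  ~0.1000 s
# A 100x bandwidth increase barely helps here: latency dominates small transfers.
```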
Units and Measurement

In computing, bandwidth is primarily expressed in bits per second (bps), a unit that quantifies the maximum rate of data transfer over a communication channel. This base unit scales using multiples such as kilobits per second (kbps or kb/s, equal to 10^3 bps), megabits per second (Mbps or Mb/s, 10^6 bps), gigabits per second (Gbps or Gb/s, 10^9 bps), and terabits per second (Tbps or Tb/s, 10^12 bps), following decimal prefixes as defined by international standards organizations. In contrast, binary prefixes like kibibits per second (Kibps, 2^10 bps) are sometimes used in contexts involving binary data storage or processing to distinguish from decimal notation, though decimal multiples predominate in network specifications for consistency.

Bandwidth measurements often encounter confusion with byte-based units, particularly in storage and file-transfer contexts where megabytes per second (MB/s) or gigabytes per second (GB/s) are common; a byte consists of 8 bits, so 1 MB/s equates to 8 Mbps. This distinction is critical, as hardware manufacturers and software tools may report figures in either format, leading to potential misinterpretation; for instance, a 100 MB/s disk transfer rate corresponds to 800 Mbps.

To measure bandwidth, various techniques and tools assess both theoretical capacity and real-world performance, differentiating between peak bandwidth (the maximum possible under ideal conditions) and sustained bandwidth (achievable over time with typical loads). Network testing tools like iPerf generate traffic to simulate data flows and measure throughput across LANs or WANs, providing metrics in bps for bidirectional or unidirectional tests. Internet speed tests, such as those from Ookla's Speedtest, evaluate consumer broadband by downloading or uploading data packets to remote servers, reporting results in Mbps while accounting for latency and packet loss. For hardware interfaces, datasheet specifications detail peak bandwidth, such as Ethernet ports rated at 1 Gbps or 10 Gbps under controlled conditions.

Standards from organizations like the IEEE and ITU formalize bandwidth units and specifications to ensure interoperability. The IEEE 802.3 standard for Ethernet defines channel bandwidths in bps, such as 1000BASE-T at 1 Gbps over twisted-pair cabling. Similarly, ITU recommendations outline bandwidth in telecommunications protocols, emphasizing bps for digital signals. In wireless contexts, IEEE 802.11 standards for Wi-Fi specify maximum bandwidths, like 802.11ac achieving up to 6.93 Gbps aggregate across multiple spatial streams on 160 MHz channels, though actual rates vary with modulation and interference. These definitions prioritize decimal bps to align with global regulatory and equipment compatibility requirements.
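The bit/byte conversions described above reduce to a factor of eight; a short sketch:

```python
# Converting between byte-based transfer rates and bit-based bandwidth figures.

def mb_per_s_to_mbps(mb_per_s):
    return mb_per_s * 8      # 1 byte = 8 bits

def mbps_to_mb_per_s(mbps):
    return mbps / 8

print(mb_per_s_to_mbps(100))   # a 100 MB/s disk transfer rate = 800 Mbit/s
print(mbps_to_mb_per_s(940))   # 940 Mbit/s of throughput = 117.5 MB/s
```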
Network Concepts

Capacity and Allocation
In networking, capacity represents the maximum potential data rate that a communication link or channel can theoretically support under ideal conditions, determined by the inherent properties of the transmission medium and associated technologies. For instance, fiber-optic cables, leveraging light signals through glass or plastic fibers, can achieve capacities exceeding 100 Gbps per channel in practical deployments, far surpassing traditional copper-based systems due to minimal signal attenuation over distance.[15]

Bandwidth allocation in networks occurs through dedicated or shared mechanisms to provision this capacity to users or devices. Dedicated allocation, as seen in leased lines, reserves a fixed portion of the link exclusively for a single connection, ensuring consistent availability but potentially underutilizing resources during idle periods. In contrast, shared allocation, common in Ethernet networks, employs contention-based access in which multiple users compete for the medium, leading to variable performance based on traffic load. Fundamental paradigms include circuit switching, which pre-establishes a dedicated end-to-end path for the duration of a session, thereby statically allocating bandwidth, versus packet switching, which fragments data into packets and dynamically allocates bandwidth on demand, enabling more efficient multiplexing across multiple flows.[16]

Several factors influence the effective capacity of a network link. The physical medium plays a primary role: copper twisted-pair wires, limited by electromagnetic interference and signal degradation, typically support lower capacities (e.g., up to several Gbps over short distances), while fiber optics enable terabit-scale potentials through high-frequency light modulation, and wireless media contend with spectrum scarcity and propagation losses, capping capacities in the Mbps to Gbps range depending on frequency bands. Protocol overhead, such as headers and error checking in frames or packets, consumes a portion of the raw capacity, reducing the usable bandwidth for payload data. Additionally, duplex modes affect capacity: half-duplex operation alternates transmission and reception on the same channel, halving effective throughput compared to full-duplex, which permits simultaneous bidirectional communication and thus doubles the overall capacity.[16][17]

A practical example of capacity differences arises in broadband access technologies like DSL and cable modems, where upstream and downstream directions often exhibit asymmetry to match typical usage patterns. Modern DSL technologies such as VDSL, utilizing existing copper telephone lines, commonly provide higher downstream capacity (e.g., up to 100 Mbps) than upstream (e.g., 20 Mbps) due to frequency division allocating more spectrum to downloads.[18] Cable modems, deployed over coaxial infrastructure, similarly feature asymmetry but with potentially higher overall capacities (e.g., up to 27 Mbps downstream and 2 Mbps upstream in early deployments), as they share neighborhood nodes where downstream channels are broader to support broadcast-like distribution.[19][20]

Throughput and Limitations
In computer networking, throughput represents the effective data transfer rate achieved after accounting for overhead, errors, and contention among multiple users or flows, distinguishing it from the theoretical maximum bandwidth of the link.[21] Maximum throughput refers to the highest sustainable rate under ideal conditions, often termed goodput when emphasizing reliable delivery without retransmissions.[21]

Several factors limit actual throughput relative to theoretical bandwidth. Protocol overhead, such as the headers in TCP/IP stacks, can reduce effective bandwidth by 5-10% depending on packet size; for instance, in Ethernet with a 1500-byte MTU, the combined Ethernet (18 bytes), IP (20 bytes), and TCP (20 bytes) headers introduce approximately 5% overhead.[22] Error-correction mechanisms like forward error correction (FEC) in wireless networks add redundant parity bits to detect and repair errors without retransmission, but this overhead can reduce throughput by 10-50% or more in noisy environments, trading capacity for reliability.[23] Queuing delays further constrain throughput by causing packets to wait in buffers during congestion, leading to increased latency and potential packet drops that degrade overall performance.[24]

A fundamental theoretical limit on throughput is provided by the Shannon-Hartley theorem, which defines the maximum channel capacity as $C = B \log_2(1 + S/N)$, where $B$ is the bandwidth in hertz and $S/N$ is the signal-to-noise ratio; this equation, originally for analog channels, establishes the upper bound on error-free data rates in noisy environments and informs digital network designs by highlighting noise as an irreducible constraint on achievable throughput.[14] For example, in Gigabit Ethernet with a theoretical bandwidth of 1 Gbps, real-world throughput typically reaches about 940 Mbps due to protocol overhead and other inefficiencies, demonstrating the gap between nominal capacity and practical performance.[25]
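A direct evaluation of the Shannon-Hartley bound, with an assumed 20 MHz channel at 30 dB signal-to-noise ratio:

```python
# Shannon-Hartley capacity bound: C = B * log2(1 + S/N). Channel figures
# below are assumptions chosen for illustration.

import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)          # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

print(shannon_capacity_bps(20e6, 30) / 1e6)   # ~199.3 Mbit/s for 20 MHz at 30 dB
```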
Consumption and Efficiency

Bandwidth consumption refers to the actual amount of network capacity utilized by data transmission activities over a given period, often measured as a percentage of the total available bandwidth to assess efficiency and potential bottlenecks. For instance, file downloads typically consume high bursts of bandwidth: transferring a 1 GB file over a 100 Mbps connection can utilize up to 100% of the link capacity for approximately 80 seconds, depending on protocol overhead.[26] In contrast, video calls exhibit more consistent but lower average usage; a single high-definition video-conference session generally requires 1-2 Mbps of download bandwidth and 0.75-1.4 Mbps of upload bandwidth, representing about 1-2% utilization on a standard 100 Mbps residential connection.[27] These differences highlight how application-specific demands influence overall network utilization, with bursty traffic like downloads potentially saturating links briefly while steady streams like calls maintain moderate, ongoing consumption.

To enhance bandwidth efficiency, networks employ techniques such as data compression, which reduces the volume of transmitted data without loss of information. The gzip algorithm, widely used for web content, can achieve compression ratios that shrink text-based files by 70-80%, effectively lowering bandwidth needs for HTTP transfers by transmitting smaller payloads.[28] Quality of Service (QoS) mechanisms further optimize usage by prioritizing critical traffic, ensuring that voice or video packets receive preferential treatment during congestion to maintain performance, thereby improving overall bandwidth utilization in mixed-traffic environments.[29] Traffic shaping complements these by regulating outbound data rates to smooth bursts and prevent downstream congestion, matching transmission speeds to interface capabilities and avoiding packet drops in slower segments.[30]

Monitoring bandwidth consumption is essential for maintaining efficiency, with protocols like NetFlow and SNMP enabling detailed tracking of traffic flows and interface statistics. NetFlow collects per-flow data on source, destination, and volume, allowing administrators to identify high-usage applications and utilization trends across routers.[31] SNMP, meanwhile, polls device metrics such as octet counters to compute real-time utilization percentages, facilitating proactive adjustments; a sketch of that calculation follows below. Internet service providers (ISPs) often implement bandwidth throttling (intentionally reducing speeds for specific users or traffic types) to manage shared resources, a practice defined as any non-network-management degradation of access to lawful content.[32]

A key challenge in bandwidth management arises from oversubscription in shared networks, where total subscribed capacity exceeds physical infrastructure limits, leading to contention among users. Broadband providers commonly operate under contention ratios of 20:1 to 50:1, meaning multiple customers share the same upstream bandwidth, which can degrade performance when simultaneous demands peak. This results in noticeable slowdowns during high-usage periods, such as evenings, where measured speeds may drop by 20-50% due to ISP bandwidth-management policies enforcing service tiers.[33][34]
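The SNMP-style utilization calculation referenced above amounts to polling an interface's octet counter twice and converting the delta to a rate; counter values and the 1 Gbit/s capacity below are invented, and counter wrap-around is ignored for brevity:

```python
# Link utilization from two octet-counter polls, as an SNMP monitor might
# compute it. Ignores counter wrap-around; all values are invented examples.

def utilization_pct(octets_start, octets_end, interval_s, capacity_bps):
    rate_bps = (octets_end - octets_start) * 8 / interval_s  # octets are bytes
    return 100 * rate_bps / capacity_bps

# Two ifInOctets-style readings taken 300 s apart on a 1 Gbit/s interface:
print(utilization_pct(1_200_000_000, 4_950_000_000, 300, 1e9))  # 10.0 (%)
```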
Applications

Multimedia and Data Transfer
Multimedia applications, such as video streaming and audio playback, demand specific bandwidth allocations to ensure smooth delivery without interruptions. For video streaming, high-definition (HD) content typically requires about 5 Mbps, while 4K ultra-high-definition streams necessitate at least 25 Mbps to maintain quality and prevent degradation.[35][36] Audio services like Spotify use lower bandwidths; high-quality streaming operates at approximately 160 kbps, allowing efficient delivery over modest connections.[37] These requirements scale with content resolution and bitrate, influencing the overall network load for end-users consuming media.

To optimize performance across varying network conditions, adaptive bitrate streaming dynamically adjusts video quality by switching between multiple encoded versions based on available bandwidth. This technique encodes content at several bitrates (e.g., from 480p at lower rates to 4K at higher ones) and selects the appropriate stream in real time, reducing the risk of playback issues on fluctuating connections.[38] Platforms like Netflix and YouTube employ this method to match delivery to the user's bandwidth, ensuring seamless viewing without manual intervention; a selection sketch follows below.[39]

In data-transfer scenarios, such as bulk file sharing via torrents, bandwidth directly determines transfer duration. For instance, downloading a 1 GB file at 100 Mbps takes roughly 80 seconds, calculated as the file size in bits (8 gigabits) divided by the speed in bits per second.[40] Higher bandwidth accelerates these peer-to-peer exchanges, but shared network resources among participants can introduce variability, emphasizing the need for sufficient capacity in large-scale distributions.

Insufficient bandwidth leads to buffering in multimedia playback, where the player pauses to load more data ahead of the current position. This occurs when the incoming data rate falls below the playback speed, causing delays that degrade the user experience.[41] Additionally, the choice between lossy and lossless formats affects bandwidth needs; lossy compression (e.g., MP3 for audio or H.264 for video) discards non-essential data to shrink file sizes and reduce transmission requirements, whereas lossless formats preserve all original information at the cost of higher bandwidth usage.[42][43]

The evolution of multimedia delivery has transformed from dial-up era constraints of 56 kbps, which limited content to basic audio or low-resolution clips, to modern high-definition streaming enabled by broadband advancements. Content delivery networks (CDNs) play a pivotal role in this progression by caching media files on distributed edge servers, thereby balancing bandwidth load across global users and minimizing latency for HD and 4K delivery.[44][45][46]
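A toy sketch of the adaptive-bitrate selection logic described above; the rendition ladder and the 20% headroom policy are assumptions, not any particular platform's algorithm:

```python
# Adaptive-bitrate selection (a toy sketch): choose the highest rendition whose
# bitrate fits under the measured throughput, with headroom against dips.

LADDER_KBPS = {"480p": 1_500, "720p": 3_000, "1080p": 5_000, "2160p": 25_000}

def pick_rendition(measured_kbps, headroom=0.8):
    budget = measured_kbps * headroom   # keep ~20% margin to avoid rebuffering
    fitting = [(bitrate, name) for name, bitrate in LADDER_KBPS.items()
               if bitrate <= budget]
    return max(fitting)[1] if fitting else "480p"  # highest bitrate that fits

print(pick_rendition(8_000))    # '1080p' on an ~8 Mbit/s connection
print(pick_rendition(40_000))   # '2160p' once bandwidth allows 4K
```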
Web Hosting and Servers

In web hosting, server bandwidth refers to the volume of data that can be transferred to and from a server over a given period, typically measured in gigabytes (GB) or terabytes (TB) per month, directly impacting website performance by determining how quickly content loads for users. Insufficient bandwidth can lead to throttling, slow page speeds, or downtime during high demand, while adequate allocation ensures smooth scalability for growing traffic. Hosting providers enforce these limits to manage resources efficiently across their infrastructure.[47]

Allocated transfer limits vary by hosting plan. Shared hosting often caps bandwidth at 100 GB to unlimited (branded as "unmetered") per month to accommodate small to medium sites, whereas dedicated servers typically offer unmetered or high-volume options like 10 TB/month without strict caps, allowing for consistent performance under load. In cloud-based hosting like Amazon Web Services (AWS) EC2, bandwidth is not capped but charged on a pay-as-you-go basis for outbound data transfer, starting at $0.09 per GB for the first 10 TB per month after a 100 GB free tier, which incentivizes optimization for cost control (a cost sketch follows below). These limits affect site performance by preventing overload; exceeding them in metered plans may trigger overage fees or temporary suspensions, whereas unmetered plans rely on fair-usage policies to avoid abuse.[48][49][50]

Several factors influence bandwidth consumption on web servers, including unpredictable traffic spikes from viral content, which can multiply usage exponentially; for example, a sudden surge in visitors due to social-media sharing may increase data transfer to several times the average, straining server capacity and potentially causing latency. Static content, such as images and CSS files, primarily consumes bandwidth through direct file delivery, where large media assets like high-resolution photos can account for the bulk of transfer volume per page view. In contrast, dynamic content generated by database queries or server-side scripts, such as personalized user pages or e-commerce carts, often requires more bandwidth overall due to additional data processing and larger response sizes, though this varies with query complexity and output.[51][52][53]

Optimization techniques are essential for managing bandwidth in web hosting environments. Caching mechanisms such as content delivery networks (CDNs) store static assets at edge locations closer to users, reducing the load on the origin server and cutting data-transfer needs by up to 60% in high-traffic scenarios. Load balancing distributes incoming requests across multiple servers, preventing any single server from becoming a bottleneck during peaks and ensuring efficient bandwidth utilization without over-provisioning resources. Integrating these with compression protocols further minimizes transfer sizes, allowing hosting setups to handle increased demands while maintaining performance.[54][55]

Different hosting types handle bandwidth allocation distinctly: shared hosting divides server resources, including bandwidth, among multiple sites, which can lead to contention during concurrent high usage and variable performance for individual sites. Dedicated servers, by contrast, provide guaranteed capacity with exclusive access to the full bandwidth allotment, ideal for resource-intensive applications requiring predictable throughput. Providers like AWS exemplify this through scalable instances where bandwidth pricing aligns with usage, enabling web hosts to provision dynamically without fixed monthly limits.[56][57][50]
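A sketch of metered-egress cost under the pricing cited above (a 100 GB free tier, then $0.09/GB within the first 10 TB); actual cloud pricing varies by region and tier:

```python
# Monthly outbound-transfer cost under a simple metered model (a sketch using
# the figures cited in the text; real pricing tiers and regions vary).

def monthly_egress_cost_usd(transfer_gb, free_gb=100, rate_per_gb=0.09):
    billable_gb = max(transfer_gb - free_gb, 0)
    return billable_gb * rate_per_gb

print(monthly_egress_cost_usd(50))      # 0.0   -> inside the free tier
print(monthly_egress_cost_usd(2_000))   # 171.0 -> ~2 TB of outbound data
```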
Internet Connections and Broadband

Internet connections deliver bandwidth to end-users through various access technologies, enabling consumer and enterprise access to networks. These connections determine the practical bandwidth available for activities like streaming, browsing, and cloud services, often shaped by infrastructure limitations and service-provider policies. Bandwidth in this context refers to the maximum data transfer rate allocated to a user's link, influenced by factors such as signal quality and shared-medium contention.[58]

Common types of internet connections include digital subscriber line (DSL), cable, fiber-optic, and satellite. DSL uses existing telephone lines to provide bandwidth up to 100 Mbps, though most plans cap at around 30 Mbps due to distance from the central office.[59] Cable internet leverages coaxial cables for shared bandwidth, offering download speeds up to 1 Gbps and upload speeds up to 50 Mbps in modern deployments, but performance can degrade during peak hours from neighborhood contention.[58] Fiber-optic connections transmit data via light signals through glass fibers, achieving symmetrical speeds up to 10 Gbps or more with minimal latency, making them ideal for high-bandwidth demands.[59] Satellite internet, such as Starlink, provides 100–300 Mbps download speeds to remote areas but incurs added latency (often 25–60 ms) from signal travel to orbit, limiting real-time applications.[60][61]

Broadband is defined by regulatory bodies as a minimum threshold for reliable high-speed internet. In the United States, the Federal Communications Commission (FCC) updated its broadband benchmark in 2024 to 100 Mbps download and 20 Mbps upload speeds, up from the prior 25/3 Mbps standard established in 2015, to reflect evolving consumer needs like 4K streaming and remote work.[62] This definition emphasizes fixed connections and often features asymmetry, with download speeds significantly higher than uploads to prioritize content consumption over production.[63]

Service providers offer tiered plans based on bandwidth levels to match user needs and revenue models. Basic tiers provide 25-50 Mbps for light use, while premium options exceed 500 Mbps for households with multiple devices.[64] Upgrades like DOCSIS 3.1 for cable networks enable multi-gigabit speeds up to 10 Gbps by using orthogonal frequency-division multiplexing, allowing providers to deliver higher bandwidth without full infrastructure overhauls.[65] Net neutrality regulations, reinstated in the U.S. in 2024, prevent internet service providers from throttling or prioritizing certain traffic, ensuring equitable bandwidth allocation across applications and sites.[66]

Global variations in internet bandwidth highlight disparities between urban and rural areas. Urban regions benefit from dense fiber and cable deployments, achieving averages like South Korea's 234 Mbps fixed-broadband download speed, driven by nationwide fiber investment and 97% 5G coverage.[67] In contrast, rural areas often rely on DSL or satellite, resulting in speeds below the global fixed-broadband average of 112 Mbps as of October 2025, exacerbating the digital divide.[67][68]

Trends and Developments
Edholm's Law
Edholm's law, proposed by Phil Edholm, chief technology officer at Nortel Networks, in a 2004 IEEE Spectrum article, observes that the bandwidth and data rates in telecommunications networks double approximately every 18 months across three primary domains: wireless (mobile cellular), nomadic (fixed broadband access like DSL and Wi-Fi), and wireline (enterprise LANs and WANs such as Ethernet). This exponential growth rate mirrors the pace of Moore's law for computing power but applies specifically to communication capacities, with slower-growing domains lagging behind faster ones by a consistent interval of about five years. The law is an empirical observation derived from historical trends dating back to the 1970s, highlighting how advancements in modulation, spectrum efficiency, and transmission media have sustained this trajectory.

In the wireless domain, bandwidth has evolved from 2G networks offering typical speeds of around 0.384 Mbps in the early 2000s to 3G at up to 2 Mbps, 4G LTE reaching 100 Mbps averages by the 2010s, and 5G delivering peak rates exceeding 10 Gbps in the 2020s, driven by wider spectrum bands and massive MIMO technologies. Fixed broadband has progressed from DSL modems providing 1–8 Mbps in the late 1990s to cable and fiber optics enabling 100 Mbps to 1 Gbps by the 2010s, with average U.S. residential speeds rising from 127 kbps in 2000 to over 200 Mbps in 2025. Enterprise wireline networks, exemplified by Ethernet, started at 10 Mbps in 1983 and advanced to 100 Mbps in 1995, 1 Gbps in 1999, 10 Gbps in 2002, 100 Gbps in 2010, and 400 Gbps standards by 2017, supporting high-capacity data centers and backhaul. These domains exhibit parallel logarithmic growth curves when plotted over time, with wireless consistently trailing wireline by the predicted lag, as evidenced by longitudinal data analyses.[69][70][71]

The implications of Edholm's law extend to technological convergence, where surging bandwidth has enabled the migration of applications from wireline to wireless environments, such as Voice over IP (VoIP) telephony and IP-based video streaming, which became feasible as mobile rates approached fixed-line capabilities in the 2010s. This growth has blurred distinctions between network types, fostering integrated services like unified communications and cloud computing, while projecting potential convergence around 2030 if trends persist. However, the law also underscores challenges in sustaining exponential increases, as physical limits, such as Shannon's capacity theorem for channel efficiency and material constraints in silicon photonics, may impose upper bounds, potentially capping growth at human perceptual thresholds like visual pixel-processing rates.

To illustrate the observed growth, the following table summarizes representative peak bandwidth milestones across the domains from the 1980s to the 2020s (a doubling-time projection is sketched after the table):

| Decade | Wireless (Peak Mbps) | Fixed Broadband (Typical Mbps) | Wireline Ethernet (Standard Mbps) |
|---|---|---|---|
| 1980s | N/A (1G analog ~0.01) | Dial-up ~0.056 | 10 (1983) |
| 1990s | 2G ~0.384 | DSL ~1–8 | 100 (1995), 1,000 (1999) |
| 2000s | 3G ~2, 4G early ~10 | Cable/DSL ~25–100 | 10,000 (2002) |
| 2010s | 4G ~100, 5G early ~1,000 | Fiber ~100–1,000 | 100,000 (2010), 400,000 (2017) |
| 2020s | 5G ~10,000+ | Fiber >1,000 | 800,000+ (emerging) |
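A toy projection of the 18-month doubling referenced above; the starting rate and horizon are arbitrary assumptions:

```python
# Edholm's-law style projection: bandwidth doubling every 18 months (a toy
# model; starting point and horizon are arbitrary assumptions).

def projected_mbps(start_mbps, years, doubling_months=18):
    return start_mbps * 2 ** (years * 12 / doubling_months)

for years in (3, 6, 9):
    print(f"{years} years: {projected_mbps(100, years):.0f} Mbit/s")
# 3 years: 400, 6 years: 1600, 9 years: 6400
```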
