Internet access
from Wikipedia

Internet access is a facility or service that provides connectivity for a computer, a computer network, or other network device to the Internet, and for individuals or organizations to access or use applications such as email and the World Wide Web. Internet access is offered for sale by an international hierarchy of Internet service providers (ISPs) using various networking technologies. At the retail level, many organizations, including municipal entities, also provide cost-free access to the general public. Types of connections range from fixed-line (such as DSL, cable, and fiber optic) to mobile (via cellular) and satellite.[1]

The availability of Internet access to the general public began with the commercialization of the early Internet in the early 1990s, and has grown with the availability of useful applications, such as the World Wide Web. In 1995, only 0.04 percent of the world's population had access, with well over half of those living in the United States,[2] and consumer use was through dial-up. By the first decade of the 21st century, many consumers in developed nations used faster broadband technology. By 2014, 41 percent of the world's population had access,[3] broadband was almost ubiquitous worldwide, and global average connection speeds exceeded one megabit per second.[4]

History

The Internet developed from the ARPANET, which was funded by the US government to support projects within the government, at universities and research laboratories in the US, but grew over time to include most of the world's large universities and the research arms of many technology companies.[5][6][7] Use by a wider audience only came in 1995 when restrictions on the use of the Internet to carry commercial traffic were lifted.[8]

In the early to mid-1980s, most Internet access was from personal computers and workstations directly connected to local area networks (LANs) or from dial-up connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s, while modem data rates grew from 1200 bit/s in the early 1980s to 56 kbit/s by the late 1990s. Initially, dial-up connections were made from terminals or computers running terminal-emulation software to terminal servers on LANs. These dial-up connections did not support end-to-end use of the Internet protocols and only provided terminal-to-host connections. The introduction of network access servers supporting the Serial Line Internet Protocol (SLIP) and later the Point-to-Point Protocol (PPP) extended the Internet protocols and made the full range of Internet services available to dial-up users, although at the lower data rates dial-up could support.

An important factor in the rapid rise of Internet access speed has been advances in MOSFET (MOS transistor) technology.[9] The MOSFET, invented at Bell Labs between 1955 and 1960 following the discoveries of Frosch and Derick,[10][11][12][13][14][15] is the building block of the Internet's telecommunications networks.[16][17] The laser, originally demonstrated by Charles H. Townes and Arthur Leonard Schawlow in 1960, was adopted for MOS light-wave systems around 1980, which led to exponential growth of Internet bandwidth. Continuous MOSFET scaling has since led to online bandwidth doubling every 18 months (Edholm's law, which is related to Moore's law), with the bandwidths of telecommunications networks rising from bits per second to terabits per second.[9]

Broadband Internet access, often shortened to just broadband, is simply defined as "Internet access that is always on, and faster than the traditional dial-up access"[18][19] and so covers a wide range of technologies. At the core of these broadband technologies are complementary MOS (CMOS) digital circuits,[20][21] the speed capabilities of which were extended with innovative design techniques.[21] Broadband connections are typically made using a computer's built-in Ethernet networking capabilities or a NIC expansion card.

Most broadband services provide a continuous "always on" connection; there is no dial-in process required, and it does not interfere with voice use of phone lines.[22] Broadband provides improved access to Internet services such as streaming media, voice over IP, and online gaming.

In the 1990s, the National Information Infrastructure initiative in the U.S. made broadband Internet access a public policy issue.[23] In 2000, most Internet access to homes was provided using dial-up, while many businesses and schools were using broadband connections. In 2000 there were just under 150 million dial-up subscriptions in the 34 OECD countries[24] and fewer than 20 million broadband subscriptions. By 2004, broadband had grown and dial-up had declined so that the numbers of subscriptions were roughly equal at 130 million each. In 2010, in the OECD countries, over 90% of the Internet access subscriptions used broadband, broadband had grown to more than 300 million subscriptions, and dial-up subscriptions had declined to fewer than 30 million.[25]

The broadband technologies in widest use are digital subscriber line (DSL), its variant ADSL, and cable Internet access. Newer technologies include VDSL and optical fiber extended closer to the subscriber in both telephone and cable plants. Fiber-optic communication, while only recently being used in fiber-to-the-premises and fiber-to-the-curb schemes, has played a crucial role in enabling broadband Internet access by making transmission of information at very high data rates over longer distances much more cost-effective than copper wire technology.

In areas not served by ADSL or cable, some community organizations and local governments are installing Wi-Fi networks. Wireless, satellite, and microwave Internet are often used in rural, undeveloped, or other hard-to-serve areas where wired Internet is not readily available.

Newer technologies being deployed for fixed (stationary) and mobile broadband access include WiMAX, LTE, and fixed wireless.

Starting in roughly 2006, mobile broadband access has been increasingly available at the consumer level using "3G" and "4G" technologies such as HSPA, EV-DO, HSPA+, and LTE.

Availability

In addition to access from home, school, and the workplace, Internet access may be available from public places such as libraries and Internet cafés, where computers with Internet connections are available. Some libraries provide stations for physically connecting users' laptops to LANs.

Wireless Internet access points are available in public places such as airport halls, in some cases just for brief use while standing. Some access points may also provide coin-operated computers. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels also have public terminals, usually fee based.

Coffee shops, shopping malls, and other venues increasingly offer wireless access to computer networks, referred to as hotspots, for users who bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A Wi-Fi hotspot need not be limited to a confined location, since multiple hotspots combined can cover a whole campus or park, or even an entire city.

Additionally, mobile broadband access allows smartphones and other digital devices to connect to the Internet from any location from which a mobile phone call can be made, subject to the capabilities of that mobile network.

Speed

Bit rates for dial-up modems range from as little as 110 bit/s in the late 1950s to a maximum of 33 to 64 kbit/s (V.90 and V.92) in the late 1990s. Dial-up connections generally require the dedicated use of a telephone line. Data compression can boost the effective bit rate for a dial-up modem connection to between 220 (V.42bis) and 320 (V.44) kbit/s.[26] However, the effectiveness of data compression is quite variable, depending on the type of data being sent, the condition of the telephone line, and a number of other factors. In reality, the overall data rate rarely exceeds 150 kbit/s.[27]

Broadband technologies supply considerably higher bit rates than dial-up, generally without disrupting regular telephone use. Various minimum data rates and maximum latencies have been used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s.[28] In 1988 the CCITT standards body defined "broadband service" as requiring transmission channels capable of supporting bit rates greater than the primary rate, which ranged from about 1.5 to 2 Mbit/s.[29] A 2006 Organisation for Economic Co-operation and Development (OECD) report defined broadband as having download data transfer rates equal to or faster than 256 kbit/s.[30] In 2015 the U.S. Federal Communications Commission (FCC) defined "Basic Broadband" as data transmission speeds of at least 25 Mbit/s downstream (from the Internet to the user's computer) and 3 Mbit/s upstream (from the user's computer to the Internet).[31] The trend is to raise the threshold of the broadband definition as higher data rate services become available.[32]
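
To make these shifting thresholds concrete, the following sketch (in Python; the function names are ours, purely for illustration) classifies a connection against the 2006 OECD and 2015 FCC definitions cited above:

```python
# A minimal sketch (not an official tool) classifying a connection
# against two of the broadband definitions cited above.

def meets_oecd_2006(down_kbits: float) -> bool:
    """OECD 2006: download rate of at least 256 kbit/s."""
    return down_kbits >= 256

def meets_fcc_2015(down_mbits: float, up_mbits: float) -> bool:
    """FCC 2015 "Basic Broadband": 25 Mbit/s down and 3 Mbit/s up."""
    return down_mbits >= 25 and up_mbits >= 3

# A 10 Mbit/s / 1 Mbit/s ADSL line is broadband by the 2006 OECD
# definition but falls short of the 2015 FCC definition.
print(meets_oecd_2006(10_000))  # True
print(meets_fcc_2015(10, 1))    # False
```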

The higher data rate dial-up modems and many broadband services are "asymmetric"—supporting much higher data rates for download (toward the user) than for upload (toward the Internet).

Data rates, including those given in this article, are usually defined and advertised in terms of the maximum or peak download rate. In practice, these maximum data rates are not always reliably available to the customer.[33] Actual end-to-end data rates can be lower due to a number of factors.[34] In late June 2016, internet connection speeds averaged about 6 Mbit/s globally.[35] Physical link quality can vary with distance and for wireless access with terrain, weather, building construction, antenna placement, and interference from other radio sources. Network bottlenecks may exist at points anywhere on the path from the end-user to the remote server or service being used and not just on the first or last link providing Internet access to the end-user.

Network congestion

Users may share access over a common network infrastructure. Since most users do not use their full connection capacity all of the time, this aggregation strategy (known as contended service) usually works well, and users can burst to their full data rate at least for brief periods. However, peer-to-peer (P2P) file sharing and high-quality streaming video can require high data rates for extended periods, which violates these assumptions and can cause a service to become oversubscribed, resulting in congestion and poor performance. The TCP protocol includes flow-control mechanisms that automatically throttle back the bandwidth being used during periods of network congestion. This is fair in the sense that all users who experience congestion receive less bandwidth, but it can be frustrating for customers and a major problem for ISPs. In some cases, the amount of bandwidth actually available may fall below the threshold required to support a particular service such as video conferencing or streaming live video, effectively making the service unavailable.
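
A rough sense of when contention breaks down can be had from a small simulation. The sketch below models subscribers sharing a backhaul link and estimates how often instantaneous demand exceeds capacity; all parameters are illustrative assumptions, not figures from this article:

```python
# A minimal sketch of contended ("oversubscribed") service. All numbers
# are illustrative assumptions: 50 subscribers, each sold 100 Mbit/s,
# share a 1000 Mbit/s backhaul (5:1 oversubscription).
import random

def congestion_probability(n_users=50, peak_mbits=100.0,
                           backhaul_mbits=1000.0, p_active=0.1,
                           trials=100_000):
    """Estimate how often instantaneous demand exceeds backhaul capacity."""
    congested = 0
    for _ in range(trials):
        active = sum(random.random() < p_active for _ in range(n_users))
        if active * peak_mbits > backhaul_mbits:
            congested += 1
    return congested / trials

# Bursty use: each user active 10% of the time - congestion is rare.
print(congestion_probability(p_active=0.1))   # ~0.01
# Sustained use (P2P, streaming): active 50% of the time - congestion
# becomes the norm, and TCP flow control shares out the shortfall.
print(congestion_probability(p_active=0.5))   # ~1.0
```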

When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to classes of users or for particular services. This is known as traffic shaping, and careful use can ensure a better quality of service for time-critical services even on extremely busy networks. However, overuse can lead to concerns about fairness and network neutrality, or even charges of censorship when some types of traffic are severely or completely blocked.

Outages

An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia.[36] Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93%[37] of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.[38]

On April 25, 1997, due to a combination of human error and a software bug, an incorrect routing table at MAI Network Service (a Virginia Internet service provider) propagated across backbone routers and caused major disruption to Internet traffic for a few hours.[39]

Technologies

When the Internet is accessed using a modem, digital data is converted to analog for transmission over analog networks such as the telephone and cable networks.[22] A computer or other device accessing the Internet would either be connected directly to a modem that communicates with an Internet service provider (ISP) or the modem's Internet connection would be shared via a LAN which provides access in a limited area such as a home, school, computer laboratory, or office building.

Although a connection to a LAN may provide very high data-rates within the LAN, actual Internet access speed is limited by the upstream link to the ISP. LANs may be wired or wireless. Ethernet over twisted pair cabling and Wi-Fi are the two most common technologies used to build LANs today, but ARCNET, Token Ring, LocalTalk, FDDI, and other technologies were used in the past.

Ethernet is the name of the IEEE 802.3 standard for physical LAN communication[40] and Wi-Fi is a trade name for a wireless local area network (WLAN) that uses one of the IEEE 802.11 standards.[41] Ethernet cables are interconnected via switches and routers. Wi-Fi networks are built using one or more wireless access points.

Many "modems" (cable modems, DSL gateways or Optical Network Terminals (ONTs)) provide the additional functionality to host a LAN so most Internet access today is through a LAN such as that created by a WiFi router connected to a modem or a combo modem router,[citation needed] often a very small LAN with just one or two devices attached. And while LANs are an important form of Internet access, this raises the question of how and at what data rate the LAN itself is connected to the rest of the global Internet. The technologies described below are used to make these connections, or in other words, how customers' modems (Customer-premises equipment) are most often connected to internet service providers (ISPs).

Dial-up technologies

Dial-up access

Dial-up Internet access uses a modem and a phone call placed over the public switched telephone network (PSTN) to connect to a pool of modems operated by an ISP. The modem converts a computer's digital signal into an analog signal that travels over a phone line's local loop until it reaches a telephone company's switching facilities or central office (CO) where it is switched to another phone line that connects to another modem at the remote end of the connection.[42]

Operating on a single channel, a dial-up connection monopolizes the phone line and is one of the slowest methods of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas, as it requires no new infrastructure beyond the already existing telephone network. Typically, dial-up connections do not exceed a speed of 56 kbit/s, as they are primarily made using modems that operate at a maximum data rate of 56 kbit/s downstream (toward the end user) and 34 or 48 kbit/s upstream (toward the global Internet).[22]

Multilink dial-up

Multilink dial-up provides increased bandwidth by channel bonding multiple dial-up connections and accessing them as a single data channel.[43] It requires two or more modems, phone lines, and dial-up accounts, as well as an ISP that supports multilinking – and of course any line and data charges are also doubled. This inverse multiplexing option was briefly popular with some high-end users before ISDN, DSL and other technologies became available. Diamond and other vendors created special modems to support multilinking.[44]

Hardwired broadband access

The term broadband includes a broad range of technologies, all of which provide higher data rate access to the Internet. The following technologies use wires or cables in contrast to wireless broadband described later.

Integrated Services Digital Network

Integrated Services Digital Network (ISDN) is a switched telephone service capable of transporting voice and digital data, and is one of the oldest Internet access methods. ISDN has been used for voice, video conferencing, and broadband data applications. ISDN was very popular in Europe, but less common in North America. Its use peaked in the late 1990s before the availability of DSL and cable modem technologies.[45]

Basic rate ISDN, known as ISDN-BRI, has two 64 kbit/s "bearer" or "B" channels. These channels can be used separately for voice or data calls or bonded together to provide a 128 kbit/s service. Multiple ISDN-BRI lines can be bonded together to provide data rates above 128 kbit/s. Primary rate ISDN, known as ISDN-PRI, has 23 bearer channels (64 kbit/s each) for a combined data rate of 1.5 Mbit/s (US standard). An ISDN E1 (European standard) line has 30 bearer channels and a combined data rate of 1.9 Mbit/s. ISDN required special telephone switches at the service provider[47] and has since been replaced by DSL technology.[46]
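
The bonding arithmetic is straightforward. A minimal sketch using the 64 kbit/s bearer-channel rate from the text:

```python
# A minimal sketch of ISDN channel bonding, using the 64 kbit/s
# bearer-channel rate given above.
B_CHANNEL_KBITS = 64

def bonded_rate_kbits(b_channels: int) -> int:
    """Aggregate rate when this many B channels are bonded together."""
    return b_channels * B_CHANNEL_KBITS

print(bonded_rate_kbits(2))   # ISDN-BRI, both B channels: 128 kbit/s
print(bonded_rate_kbits(23))  # ISDN-PRI (US): 1472 kbit/s, ~1.5 Mbit/s
print(bonded_rate_kbits(30))  # E1 (Europe): 1920 kbit/s, ~1.9 Mbit/s
```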

Leased lines

Leased lines are dedicated lines used primarily by ISPs, business, and other large enterprises to connect LANs and campus networks to the Internet using the existing infrastructure of the public telephone network or other providers. Delivered using wire, optical fiber, and radio, leased lines are used to provide Internet access directly as well as the building blocks from which several other forms of Internet access are created.[48]

T-carrier technology[49] dates to 1957 and provides data rates that range from 56 or 64 kbit/s (DS0) to 1.5 Mbit/s (DS1 or T1) to 45 Mbit/s (DS3 or T3).[50] A T1 line carries 24 voice or data channels (24 DS0s), so customers may use some channels for data and others for voice traffic, or use all 24 channels for clear-channel data. A DS3 (T3) line carries 28 DS1 (T1) channels. Fractional T1 lines are also available in multiples of a DS0 to provide data rates between 56 and 1500 kbit/s. T-carrier lines require special termination equipment, such as data service units,[51][52][53] that may be separate from or integrated into a router or switch and which may be purchased or leased from an ISP.[54] In Japan the equivalent standard is J1/J3. In Europe, a slightly different standard, E-carrier, provides 32 user channels (64 kbit/s each) on an E1 (2.0 Mbit/s) and 512 user channels or 16 E1s on an E3 (34.4 Mbit/s).

Synchronous Optical Networking (SONET, in the U.S. and Canada) and Synchronous Digital Hierarchy (SDH, in the rest of the world)[49] are the standard multiplexing protocols used to carry high-data-rate digital bit-streams over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At lower transmission rates data can also be transferred via an electrical interface. The basic unit of framing is an OC-3c (optical) or STS-3c (electrical) which carries 155.520 Mbit/s. Thus an OC-3c will carry three OC-1 (51.84 Mbit/s) payloads each of which has enough capacity to include a full DS3. Higher data rates are delivered in OC-3c multiples of four providing OC-12c (622.080 Mbit/s), OC-48c (2.488 Gbit/s), OC-192c (9.953 Gbit/s), and OC-768c (39.813 Gbit/s). The "c" at the end of the OC labels stands for "concatenated" and indicates a single data stream rather than several multiplexed data streams.[48] Optical transport network (OTN) may be used instead of SONET[55] for higher data transmission speeds of up to 400 Gbit/s per OTN channel.
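
A minimal sketch of the two digital hierarchies just described, using the channel counts and rates given above; framing overhead is ignored, so the payload figures come out slightly below the marketed line rates:

```python
# A minimal sketch of the digital hierarchies described above:
# DS0 = 64 kbit/s, T1 = 24 DS0s, T3 = 28 T1s, OC-1 = 51.84 Mbit/s, and
# OC-n = n * OC-1. Framing overhead is ignored, so payload figures come
# out slightly below the marketed line rates (1.5 and 45 Mbit/s).
DS0_KBITS = 64
OC1_MBITS = 51.84

t1_kbits = 24 * DS0_KBITS   # 1536 kbit/s of channel payload
t3_kbits = 28 * t1_kbits    # 43008 kbit/s of channel payload

def oc_rate_mbits(n: int) -> float:
    """Line rate of an OC-n / STS-n signal."""
    return n * OC1_MBITS

for n in (3, 12, 48, 192, 768):
    print(f"OC-{n}c: {oc_rate_mbits(n):.3f} Mbit/s")

# Sanity check from the text: one OC-1 (51.84 Mbit/s) has enough
# capacity to carry a full DS3 payload.
assert t3_kbits / 1000 < oc_rate_mbits(1)
```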

The 1, 10, 40, and 100 Gigabit Ethernet IEEE standards (802.3) allow digital data to be delivered over copper wiring at distances of up to 100 m and over optical fiber at distances of up to 40 km.[56]

Cable Internet access

Cable Internet provides access using a cable modem on hybrid fiber coaxial (HFC) wiring originally developed to carry television signals. Either fiber-optic or coaxial copper cable may connect a node to a customer's location at a connection known as a cable drop. Using a cable modem termination system, all nodes for cable subscribers in a neighborhood connect to a cable company's central office, known as the "head end." The cable company then connects to the Internet using a variety of means – usually fiber optic cable or digital satellite and microwave transmissions.[57] Like DSL, broadband cable provides a continuous connection with an ISP.

Downstream, the direction toward the user, bit rates can be as much as 1000 Mbit/s in some countries, with the use of DOCSIS 3.1. Upstream traffic, originating at the user, ranges from 384 kbit/s to more than 50 Mbit/s. DOCSIS 4.0 promises up to 10 Gbit/s downstream and 6 Gbit/s upstream, though this technology has yet to see real-world deployment. Broadband cable access tends to serve fewer business customers because existing television cable networks tend to serve residential buildings, and commercial buildings do not always include wiring for coaxial cable networks.[58] In addition, because broadband cable subscribers share the same local line, communications may be intercepted by neighboring subscribers. Cable networks regularly provide encryption schemes for data traveling to and from customers, but these schemes may be thwarted.[57]

Digital subscriber line (DSL, ADSL, SDSL, and VDSL)

Digital subscriber line (DSL) service provides a connection to the Internet through the telephone network. Unlike dial-up, DSL can operate using a single phone line without preventing normal use of the telephone line for voice phone calls. DSL uses the high frequencies, while the low (audible) frequencies of the line are left free for regular telephone communication.[22] These frequency bands are subsequently separated by filters installed at the customer's premises.

DSL originally stood for "digital subscriber loop". In telecommunications marketing, the term digital subscriber line is widely understood to mean asymmetric digital subscriber line (ADSL), the most commonly installed variety of DSL. The data throughput of consumer DSL services typically ranges from 256 kbit/s to 20 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions, and service-level implementation. In ADSL, the data throughput in the upstream direction (i.e., toward the service provider) is lower than that in the downstream direction (i.e., toward the customer), hence the designation asymmetric.[59] With a symmetric digital subscriber line (SDSL), the downstream and upstream data rates are equal.[60]

Very-high-bit-rate digital subscriber line (VDSL or VHDSL, ITU G.993.1)[61] is a digital subscriber line (DSL) standard approved in 2001 that provides data rates up to 52 Mbit/s downstream and 16 Mbit/s upstream over copper wires[62] and up to 85 Mbit/s down- and upstream on coaxial cable.[63] VDSL is capable of supporting applications such as high-definition television, as well as telephone services (voice over IP) and general Internet access, over a single physical connection.

VDSL2 (ITU-T G.993.2) is a second-generation version and an enhancement of VDSL.[64] Approved in February 2006, it is able to provide data rates exceeding 100 Mbit/s simultaneously in both the upstream and downstream directions. However, the maximum data rate is achieved at a range of about 300 meters and performance degrades as distance and loop attenuation increases.

DSL Rings

DSL Rings (DSLR) or Bonded DSL Rings is a ring topology that uses DSL technology over existing copper telephone wires to provide data rates of up to 400 Mbit/s.[65]

Fiber to the home

Fiber-to-the-home (FTTH) is one member of the Fiber-to-the-x (FTTx) family that includes Fiber-to-the-building or basement (FTTB), Fiber-to-the-premises (FTTP), Fiber-to-the-desk (FTTD), Fiber-to-the-curb (FTTC), and Fiber-to-the-node (FTTN).[66] These methods all bring data closer to the end user on optical fibers. The differences between the methods have mostly to do with just how close to the end user the delivery on fiber comes. All of these delivery methods are similar in function and architecture to hybrid fiber-coaxial (HFC) systems used to provide cable Internet access. Fiber internet connections to customers are either AON (Active optical network) or more commonly PON (Passive optical network). Examples of fiber optic internet access standards are G.984 (GPON, G-PON) and 10G-PON (XG-PON). ISPs may instead use Metro Ethernet as a replacement for T1 and Frame Relay lines[67] for corporate and institutional customers,[68] or offer carrier-grade Ethernet.[69] Dedicated internet access (DIA) in which the bandwidth is not shared among customers, can be offered over PON fiber optic networks.[70]

The use of optical fiber offers much higher data rates over relatively longer distances. Most high-capacity Internet and cable television backbones already use fiber optic technology, with data switched to other technologies (DSL, cable, LTE) for final delivery to customers.[71] Fiber optic is immune to electromagnetic interference.[72]

In 2010, Australia began rolling out its National Broadband Network across the country using fiber-optic cables to 93 percent of Australian homes, schools, and businesses.[73] The project was abandoned by the subsequent LNP government, in favor of a hybrid FTTN design, which turned out to be more expensive and introduced delays. Similar efforts are underway in Italy, Canada, India, and many other countries (see Fiber to the premises by country).[74][75][76][77]

Power-line Internet

Power-line Internet, also known as Broadband over power lines (BPL), carries Internet data on a conductor that is also used for electric power transmission.[78] Because of the extensive power line infrastructure already in place, this technology can provide people in rural and low population areas access to the Internet with little cost in terms of new transmission equipment, cables, or wires. Data rates are asymmetric and generally range from 256 kbit/s to 2.7 Mbit/s.[79]

Because these systems use parts of the radio spectrum allocated to other over-the-air communication services, interference between the services is a limiting factor in the introduction of power-line Internet systems. The IEEE P1901 standard specifies that all power-line protocols must detect existing usage and avoid interfering with it.[79]

Power-line Internet has developed faster in Europe than in the U.S. due to a historical difference in power system design philosophies. Data signals cannot pass through the step-down transformers used, so a repeater must be installed on each transformer.[79] In the U.S. a transformer serves a small cluster of one to a few houses. In Europe, it is more common for a somewhat larger transformer to serve larger clusters of 10 to 100 houses. Thus a typical U.S. city requires an order of magnitude more repeaters than a comparable European city.[80]
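
The order-of-magnitude claim can be checked with a back-of-the-envelope calculation; in this sketch the 10,000-home city is an illustrative assumption and the houses-per-transformer figures are the rough ranges from the text:

```python
# A back-of-the-envelope check of the repeater argument above. The
# 10,000-home city is an illustrative assumption; the houses-per-
# transformer figures are the rough ranges given in the text.
import math

def repeaters_needed(homes: int, homes_per_transformer: int) -> int:
    """One repeater per step-down transformer serving these homes."""
    return math.ceil(homes / homes_per_transformer)

print(repeaters_needed(10_000, 3))   # US, a few homes each: ~3,334
print(repeaters_needed(10_000, 50))  # Europe, 10-100 homes each: 200
```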

ATM and Frame Relay

Asynchronous Transfer Mode (ATM) and Frame Relay are wide-area networking standards that can be used to provide Internet access directly[50] or as building blocks of other access technologies. For example, many DSL implementations use an ATM layer over the low-level bitstream layer to enable a number of different technologies over the same link. Customer LANs are typically connected to an ATM switch or a Frame Relay node using leased lines at a wide range of data rates.[81][82]

While still widely used, with the advent of Ethernet over optical fiber, MPLS, VPNs and broadband services such as cable modem and DSL, ATM and Frame Relay no longer play the prominent role they once did.

Wireless broadband access

Wireless broadband is used to provide both fixed and mobile Internet access with the following technologies.

Satellite broadband

Satellite Internet access via VSAT in Ghana

Satellite Internet access provides fixed, portable, and mobile Internet access.[83] Data rates range from 2 kbit/s to 1 Gbit/s downstream and from 2 kbit/s to 10 Mbit/s upstream. In the northern hemisphere, satellite antenna dishes require a clear line of sight to the southern sky, due to the equatorial position of all geostationary satellites. In the southern hemisphere, this situation is reversed, and dishes are pointed north.[84][85] Service can be adversely affected by moisture, rain, and snow (known as rain fade).[84][85][86] The system requires a carefully aimed directional antenna.[85]

Satellites in geostationary Earth orbit (GEO) operate in a fixed position 35,786 km (22,236 mi) above the Earth's equator. At the speed of light (about 300,000 km/s or 186,000 miles per second), it takes a quarter of a second for a radio signal to travel from the Earth to the satellite and back. When other switching and routing delays are added and the delays are doubled to allow for a full round-trip transmission, the total delay can be 0.75 to 1.25 seconds. This latency is large when compared to other forms of Internet access with typical latencies that range from 0.015 to 0.2 seconds. Long latencies negatively affect some applications that require real-time response, particularly online games, voice over IP, and remote control devices.[87][88] TCP tuning and TCP acceleration techniques can mitigate some of these problems. GEO satellites do not cover the Earth's polar regions.[84] HughesNet, Exede, AT&T and Dish Network have GEO systems.[89][90][91][92]
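
The latency figures follow directly from the geometry. The sketch below reproduces the arithmetic using the altitude and speed-of-light figures above; the 550 km LEO altitude is an illustrative assumption for comparison with the low-orbit constellations discussed next:

```python
# A minimal sketch of the propagation-delay arithmetic above. The GEO
# altitude and speed of light are from the text; the 550 km LEO
# altitude is an illustrative assumption for a modern constellation.
C_KM_S = 300_000   # speed of light, ~300,000 km/s
GEO_KM = 35_786    # geostationary altitude above the equator

def round_trip_ms(altitude_km: float) -> float:
    """Four hops (request up and down, reply up and down), ignoring
    switching and routing delays."""
    return 4 * (altitude_km / C_KM_S) * 1000

print(f"GEO: {round_trip_ms(GEO_KM):.0f} ms")  # ~477 ms before other delays
print(f"LEO: {round_trip_ms(550):.1f} ms")     # ~7 ms at 550 km
```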

Satellite Internet constellations in low Earth orbit (LEO, below 2,000 km or 1,243 miles) and medium Earth orbit (MEO, between 2,000 and 35,786 km or 1,243 and 22,236 miles) operate at lower altitudes, and their satellites are not fixed in their position above the Earth. Because they operate at a lower altitude, more satellites and launch vehicles are needed for worldwide coverage. This makes the required initial investment very large, which initially drove OneWeb and Iridium into bankruptcy. However, the lower altitudes allow lower latencies and higher speeds, which make real-time interactive Internet applications more feasible. LEO systems include Globalstar, Starlink, OneWeb, and Iridium. The O3b constellation is a medium-Earth-orbit system with a latency of 125 ms. COMMStellation™ is a LEO system, scheduled for launch in 2015, that is expected to have a latency of just 7 ms.

Mobile broadband

Mobile broadband is the marketing term for wireless Internet access delivered through mobile phone towers (cellular networks) to computers, mobile phones (called "cell phones" in North America and South Africa, and "hand phones" in Asia), and other digital devices using portable modems. Some mobile services allow more than one device to be connected to the Internet using a single cellular connection using a process called tethering. The modem may be built into laptop computers, tablets, mobile phones, and other devices, added to some devices using PC cards, USB modems, and USB sticks or dongles, or separate wireless modems can be used.[93]

New mobile phone technology and infrastructure is introduced periodically and generally involves a change in the fundamental nature of the service: non-backwards-compatible transmission technology, higher peak data rates, new frequency bands, and wider channel bandwidths. These transitions are referred to as generations. The first mobile data services became available during the second generation (2G).

Second generation (2G), from 1991 (speeds down / up):
 · GSM CSD: 9.6 kbit/s
 · CDPD: up to 19.2 kbit/s
 · GSM GPRS (2.5G): 56 to 115 kbit/s
 · GSM EDGE (2.75G): up to 237 kbit/s
Third generation (3G), from 2001 (speeds in Mbit/s, down / up):
 · UMTS W-CDMA: 0.4
 · UMTS HSPA: 14.4 / 5.8
 · UMTS TDD: 16
 · CDMA2000 1xRTT: 0.3 / 0.15
 · CDMA2000 EV-DO: 2.5–4.9 / 0.15–1.8
 · GSM EDGE-Evolution: 1.6 / 0.5
Fourth generation (4G), from 2006 (speeds in Mbit/s, down / up):
 · HSPA+: 21–672 / 5.8–168
 · Mobile WiMAX (802.16): 37–365 / 17–376
 · LTE: 100–300 / 50–75
 · LTE-Advanced: up to 1000 when stationary or moving at lower speeds; 100 when moving at higher speeds
 · MBWA (802.20): 80

The download (to the user) and upload (to the Internet) data rates given above are peak or maximum rates and end users will typically experience lower data rates.

WiMAX was originally developed to deliver fixed wireless service with wireless mobility added in 2005. CDPD, CDMA2000 EV-DO, and MBWA are no longer being actively developed.

In 2011, 90% of the world's population lived in areas with 2G coverage, while 45% lived in areas with 2G and 3G coverage.[94]

5G was designed to be faster and have lower latency than its predecessor, 4G. It can be used for mobile broadband in smartphones or in separate modems that provide Wi-Fi or connect through USB to a computer, as well as for fixed wireless access.

Fixed wireless

Fixed wireless Internet connections do not use a satellite, nor are they designed to support moving equipment such as smartphones; they rely on customer-premises equipment, such as antennas, that cannot be moved over a significant geographical area without losing the signal from the ISP. Microwave wireless broadband or 5G may be used for fixed wireless access.

WiMAX

Worldwide Interoperability for Microwave Access (WiMAX) is a set of interoperable implementations of the IEEE 802.16 family of wireless-network standards certified by the WiMAX Forum. It enables "the delivery of last mile wireless broadband access as an alternative to cable and DSL".[95] The original IEEE 802.16 standard, now called "Fixed WiMAX", was published in 2001 and provided 30 to 40 megabit-per-second data rates.[96] Mobility support was added in 2005. A 2011 update provides data rates up to 1 Gbit/s for fixed stations. WiMAX offers a metropolitan area network with a signal radius of about 50 km (30 miles), far surpassing the 30-metre (100-foot) wireless range of a conventional Wi-Fi LAN. WiMAX signals also penetrate building walls much more effectively than Wi-Fi. WiMAX is most often used as a fixed wireless standard.

Wireless ISP

Wireless Internet service providers (WISPs) operate independently of mobile phone operators. WISPs typically employ low-cost IEEE 802.11 Wi-Fi radio systems to link up remote locations over great distances (Long-range Wi-Fi), but may use other higher-power radio communications systems as well, such as microwave and WiMAX.

Traditional 802.11a/b/g/n/ac is an unlicensed omnidirectional service designed to span between 100 and 150 m (300 to 500 ft). By focusing the radio signal using a directional antenna (where allowed by regulations), 802.11 can operate reliably over a distance of many kilometers, although the technology's line-of-sight requirements hamper connectivity in areas with hilly or heavily foliated terrain. In addition, compared to hard-wired connectivity, there are security risks (unless robust security protocols are enabled); data rates are usually slower (2 to 50 times slower); and the network can be less stable, due to interference from other wireless devices and networks, weather, and line-of-sight problems.[97]
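
The range extension from directional antennas can be illustrated with the standard free-space path loss formula; the link distance and antenna gains below are illustrative assumptions, not values from this article:

```python
# A minimal sketch of why long 802.11 links are workable: free-space
# path loss grows with distance and frequency, and directional antenna
# gain buys the loss back. The 5 km distance and 20 dBi gains are
# illustrative assumptions.
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

print(f"{fspl_db(5, 2400):.1f} dB at 2.4 GHz over 5 km")  # ~114 dB
print(f"{fspl_db(5, 5800):.1f} dB at 5.8 GHz over 5 km")  # ~122 dB

# An assumed 20 dBi dish at each end adds 40 dB of link budget, which
# is what stretches a nominally 100-150 m technology to many kilometers.
```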

With the increasing popularity of unrelated consumer devices operating on the same 2.4 GHz band, many providers have migrated to the 5 GHz ISM band. If the service provider holds the necessary spectrum license, it could also reconfigure various brands of off-the-shelf Wi-Fi hardware to operate on its own band instead of the crowded unlicensed ones. Using higher frequencies carries various advantages:

  • regulatory bodies usually allow for more power and better directional antennas,
  • there is much more bandwidth to share, allowing both better throughput and improved coexistence,
  • fewer consumer devices operate over 5 GHz than over 2.4 GHz, hence fewer interferers are present,
  • the shorter wavelengths don't propagate as well through walls and other structures, so much less interference leaks outside of consumers' homes.

Proprietary technologies like Motorola Canopy and Expedience can be used by a WISP to offer wireless access to rural and other markets that are hard to reach using Wi-Fi or WiMAX. There are a number of companies that provide this service.[98]

Local Multipoint Distribution Service

Local Multipoint Distribution Service (LMDS) is a broadband wireless access technology that uses microwave signals operating between 26 GHz and 29 GHz.[99] Originally designed for digital television transmission (DTV), it was conceived as a fixed wireless, point-to-multipoint technology for use in the last mile. Data rates range from 64 kbit/s to 155 Mbit/s.[100] Distance is typically limited to about 1.5 miles (2.4 km), but links of up to 5 miles (8 km) from the base station are possible in some circumstances.[101]

LMDS has been surpassed in both technological and commercial potential by the LTE and WiMAX standards.

Hybrid Access Networks

In some regions, notably in rural areas, the length of the copper lines makes it difficult for network operators to provide high-bandwidth services. One alternative is to combine a fixed-access network, typically XDSL, with a wireless network, typically LTE. The Broadband Forum has standardized an architecture for such Hybrid Access Networks.

Non-commercial alternatives for using Internet services

Grassroots wireless networking movements

Deploying multiple adjacent Wi-Fi access points can be used to create city-wide wireless networks.[102] Such deployments are usually commissioned by the local municipality from commercial WISPs.

Grassroots efforts have also led to wireless community networks widely deployed in numerous countries, both developing and developed ones. Rural wireless-ISP installations are typically not commercial in nature and are instead a patchwork of systems built up by hobbyists mounting antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall objects are available.

Where radio spectrum regulation is not community-friendly, where channels are crowded, or where equipment cannot be afforded by local residents, free-space optical communication can also be deployed in a similar manner for point-to-point transmission in air (rather than in fiber optic cable).

Packet radio

Packet radio connects computers or whole networks operated by radio amateurs with the option to access the Internet. Note that per the regulatory rules of the amateur radio license, Internet access and email should be strictly related to the activities of radio amateurs.

Sneakernet

The term, a tongue-in-cheek play on net(work) as in Internet or Ethernet, refers to the wearing of sneakers as the transport mechanism for the data.

For those who do not have access to or can not afford broadband at home, downloading large files and disseminating information is done by transmission through workplace or library networks, taken home and shared with neighbors by sneakernet. The Cuban El Paquete Semanal is an organized example of this.

There are various decentralized, delay tolerant peer to peer applications which aim to fully automate this using any available interface, including both wireless (Bluetooth, Wi-Fi mesh, P2P or hotspots) and physically connected ones (USB storage, Ethernet, etc.).

Sneakernets may also be used in tandem with computer network data transfer to increase data security or overall throughput for big data use cases. Innovation continues in the area to this day; for example, AWS has recently announced Snowball, and bulk data processing is also done in a similar fashion by many research institutes and government agencies.

Pricing and spending

Broadband affordability in 2011: the relationship between average yearly income per capita and the cost of a broadband subscription. Source: Information Geographies at the Oxford Internet Institute.[103]

Internet access is limited by the relation between pricing and available resources to spend. Regarding the latter, it is estimated that 40% of the world's population has less than US$20 per year available to spend on information and communications technology (ICT).[104] In Mexico, the poorest 30% of the society spend an estimated US$35 per year (US$3 per month), and in Brazil the poorest 22% of the population have merely US$9 per year to spend on ICT (US$0.75 per month). From Latin America it is known that the borderline between ICT as a necessity good and ICT as a luxury good is roughly around the "magical number" of US$10 per person per month, or US$120 per year.[104] This is the amount of ICT spending people deem a basic necessity. Current Internet access prices greatly exceed the available resources in many countries.

Dial-up users pay the costs for making local or long-distance phone calls, usually pay a monthly subscription fee, and may be subject to additional per minute or traffic based charges, and connect time limits by their ISP. Though less common today than in the past, some dial-up access is offered for "free" in return for watching banner ads as part of the dial-up service. NetZero, BlueLight, Juno, Freenet (NZ), and Free-nets are examples of services providing free access. Some Wireless community networks continue the tradition of providing free Internet access.

Fixed broadband Internet access is often sold under an "unlimited" or flat rate pricing model, with price determined by the maximum data rate chosen by the customer, rather than a per minute or traffic based charge. Per minute and traffic based charges and traffic caps are common for mobile broadband Internet access.

Internet services like Facebook, Wikipedia and Google have built special programs to partner with mobile network operators (MNOs) to introduce zero-rating of the cost for their data volumes as a means to provide their services more broadly in developing markets.[105]

With increased consumer demand for streaming content such as video on demand and peer-to-peer file sharing, demand for bandwidth has increased rapidly and for some ISPs the flat rate pricing model may become unsustainable. However, with fixed costs estimated to represent 80–90% of the cost of providing broadband service, the marginal cost to carry additional traffic is low. Most ISPs do not disclose their costs, but the cost to transmit a gigabyte of data in 2011 was estimated to be about $0.03.[106]

Some ISPs estimate that a small number of their users consume a disproportionate portion of the total bandwidth. In response some ISPs are considering, are experimenting with, or have implemented combinations of traffic-based pricing, time-of-day or "peak" and "off-peak" pricing, and bandwidth or traffic caps. Others claim that because the marginal cost of extra bandwidth is very small, with 80 to 90 percent of costs fixed regardless of usage level, such steps are unnecessary or motivated by concerns other than the cost of delivering bandwidth to the end user.[107][108][109]

In Canada, Rogers Hi-Speed Internet and Bell Canada have imposed bandwidth caps.[107] In 2008 Time Warner began experimenting with usage-based pricing in Beaumont, Texas.[110] In 2009 an effort by Time Warner to expand usage-based pricing into the Rochester, New York area met with public resistance, however, and was abandoned.[111] On August 1, 2012, in Nashville, Tennessee, and on October 1, 2012, in Tucson, Arizona, Comcast began tests imposing data caps on area residents. In Nashville, exceeding the 300 Gbyte cap requires a temporary purchase of 50 Gbytes of additional data.[112]

Digital divide

Fixed broadband Internet subscriptions in 2012 as a percentage of a country's population. Source: International Telecommunication Union.[113]
Mobile broadband Internet subscriptions in 2012 as a percentage of a country's population. Source: International Telecommunication Union.[114]
The digital divide measured in terms of bandwidth is not closing, but fluctuating up and down: Gini coefficients for telecommunication capacity (in kbit/s) among individuals worldwide. Source: International Telecommunication Union.[115][116]

Despite its tremendous growth, Internet access is not distributed equally within or between countries.[117][118] The digital divide refers to "the gap between people with effective access to information and communications technology (ICT), and those with very limited or no access". The gap between people with Internet access and those without is one of many aspects of the digital divide.[119] Whether someone has access to the Internet can depend greatly on financial status and geographical location, as well as government policies. "Low-income, rural, and minority populations have received special scrutiny as the technological 'have-nots'."[120]

Government policies play a tremendous role in bringing Internet access to or limiting access for underserved groups, regions, and countries. For example, in Pakistan, which is pursuing an aggressive IT policy aimed at boosting its drive for economic modernization, the number of Internet users grew from 133,900 (0.1% of the population) in 2000 to 31 million (17.6% of the population) in 2011.[121] In North Korea there is relatively little access to the Internet due to the government's fear of political instability that might accompany the benefits of access to the global Internet.[122] The U.S. trade embargo is a barrier limiting Internet access in Cuba.[123]

Access to computers is a dominant factor in determining the level of Internet access. In 2011, in developing countries, 25% of households had a computer and 20% had Internet access, while in developed countries 74% of households had a computer and 71% had Internet access.[94] The majority of people in developing countries do not have Internet access.[124] About 4 billion people do not have Internet access.[125] When buying computers was legalized in Cuba in 2007, the private ownership of computers soared (there were 630,000 computers available on the island in 2008, a 23% increase over 2007).[126][127]

Internet access has changed the way in which many people think and has become an integral part of people's economic, political, and social lives. The United Nations has recognized that providing Internet access to more people in the world will allow them to take advantage of the "political, social, economic, educational, and career opportunities" available over the Internet.[118] Several of the 67 principles adopted at the World Summit on the Information Society convened by the United Nations in Geneva in 2003, directly address the digital divide.[128] To promote economic development and a reduction of the digital divide, national broadband plans have been and are being developed to increase the availability of affordable high-speed Internet access throughout the world. The Global Gateway, the EU's initiative to assist infrastructure development throughout the world, plans to raise €300 billion for connectivity projects, including those in the digital sector, between 2021 and 2027.[129][130]

Growth in number of users

Worldwide Internet users[131]
                                     2005   2010   2017   2023
World population (billions)[132]     6.5    6.9    7.4    8.0
Worldwide                            16%    30%    48%    67%
In developing world                  8%     21%    41.3%  60%
In developed world                   51%    67%    81%    93%

Internet users by region[131]
Region                               2005   2010   2017   2023
Africa                               2%     10%    21.8%  37%
Americas                             36%    49%    65.9%  87%
Arab States                          8%     26%    43.7%  69%
Asia and Pacific                     9%     23%    43.9%  66%
Commonwealth of Independent States   10%    34%    67.7%  89%
Europe                               46%    67%    79.6%  91%

Access to the Internet grew from an estimated 10 million people in 1993, to almost 40 million in 1995, to 670 million in 2002, and to 2.7 billion in 2013.[133] With market saturation, growth in the number of Internet users is slowing in industrialized countries, but continues in Asia,[134] Africa, Latin America, the Caribbean, and the Middle East. Across Africa, an estimated 900 million people are still not connected to the internet; for those who are, connectivity fees remain generally expensive, and bandwidth is severely constrained in many locations.[135][136] The number of mobile customers in Africa, however, is expanding faster than everywhere else. Mobile financial services also allow for immediate payment of products and services.[137][138][139]

There were roughly 0.6 billion fixed broadband subscribers and almost 1.2 billion mobile broadband subscribers in 2011.[140] In developed countries people frequently use both fixed and mobile broadband networks. In developing countries mobile broadband is often the only access method available.[94]

Bandwidth divide

Traditionally the divide has been measured in terms of the existing numbers of subscriptions and digital devices ("have and have-not of subscriptions"). Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita).[116][141] As shown in the figure above, the digital divide in kbit/s is not monotonically decreasing, but re-opens with each new innovation. For example, "the massive diffusion of narrow-band Internet and mobile phones during the late 1990s" increased digital inequality, as did "the initial introduction of broadband DSL and cable modems during 2003–2004".[141] This is because a new kind of connectivity is never introduced instantaneously and uniformly to society as a whole, but diffuses slowly through social networks. As the figure shows, during the mid-2000s communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (i.e. fixed and mobile broadband infrastructures, e.g. 3G and fiber optics FTTH).[142] Measured in terms of bandwidth, Internet access was more unequally distributed in 2014 than it was in the mid-1990s.
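
For readers who want to reproduce this kind of measurement, a minimal sketch of a Gini coefficient computed over per-capita bandwidth follows; the sample values are invented purely to show how a partial broadband rollout re-opens the divide:

```python
# A minimal sketch of the measurement described above: a Gini
# coefficient over per-capita bandwidth (kbit/s) rather than over
# subscription counts. The sample values are invented for illustration.

def gini(values):
    """Gini coefficient in [0, 1]; 0 means perfectly equal shares."""
    xs = sorted(values)
    n = len(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * sum(xs)) - (n + 1) / n

# Everyone on equal dial-up: equal, if slow.
print(gini([56, 56, 56, 56]))        # 0.0
# A broadband rollout reaching one user in four re-opens the divide.
print(gini([56, 56, 56, 100_000]))   # ~0.75
```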

For example, only 0.4% of the African population has a fixed-broadband subscription; the majority of Internet users there connect through mobile broadband.[135][136][143][144]

Rural access

One of the great challenges for Internet access in general and for broadband access in particular is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project.[145] Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service.[33]

Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas.[146] The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option.[147]

The Broadband for Rural Nova Scotia initiative, a Canadian public-private partnership, is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, under 1,000 households had reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas harder to serve by Canopy.[148]

In New Zealand, a fund has been formed by the government to improve rural broadband[149] and mobile phone coverage. Current proposals include: (a) extending fiber coverage and upgrading copper to support VDSL, (b) focusing on improving the coverage of cellphone technology, or (c) regional wireless.[150]

Several countries have started Hybrid Access Networks to provide faster Internet services in rural areas by enabling network operators to efficiently combine their XDSL and LTE networks.

Access as a civil or human right

The actions, statements, opinions, and recommendations outlined below have led to the suggestion that Internet access itself is or should become a civil or perhaps a human right.[151][152]

Several countries have adopted laws requiring the state to work to ensure that Internet access is broadly available or preventing the state from unreasonably restricting an individual's access to information and the Internet:

  • Costa Rica: A 30 July 2010 ruling by the Supreme Court of Costa Rica stated: "Without fear of equivocation, it can be said that these technologies [information technology and communication] have impacted the way humans communicate, facilitating the connection between people and institutions worldwide and eliminating barriers of space and time. At this time, access to these technologies becomes a basic tool to facilitate the exercise of fundamental rights and democratic participation (e-democracy) and citizen control, education, freedom of thought and expression, access to information and public services online, the right to communicate with the government electronically and administrative transparency, among others. This includes the fundamental right of access to these technologies, in particular, the right of access to the Internet or World Wide Web."[153]
  • Estonia: In 2000, the parliament launched a massive program to expand access to the countryside. The Internet, the government argues, is essential for life in the twenty-first century.[154]
  • Finland: By July 2010, every person in Finland was to have access to a one-megabit-per-second broadband connection, according to the Ministry of Transport and Communications, and by 2015, access to a 100 Mbit/s connection.[155]
  • France: In June 2009, the Constitutional Council, France's highest court, declared access to the Internet to be a basic human right in a strongly-worded decision that struck down portions of the HADOPI law, a law that would have tracked abusers and, without judicial review, automatically cut off network access to those who continued to download illicit material after two warnings.[156]
  • Greece: Article 5A of the Constitution of Greece states that all persons have a right to participate in the Information Society and that the state has an obligation to facilitate the production, exchange, diffusion of, and access to electronically transmitted information.[157]
  • Spain: Starting in 2011, Telefónica, the former state monopoly that holds the country's "universal service" contract, has to guarantee to offer "reasonably" priced broadband of at least one megabit per second throughout Spain.[158]

In December 2003, the World Summit on the Information Society (WSIS) was convened under the auspices of the United Nations. After lengthy negotiations between governments, businesses, and civil society representatives, the WSIS Declaration of Principles was adopted, reaffirming the importance of the Information Society to maintaining and strengthening human rights:[128][159]

1. We, the representatives of the peoples of the world, assembled in Geneva from 10–12 December 2003 for the first phase of the World Summit on the Information Society, declare our common desire and commitment to build a people-centered, inclusive and development-oriented Information Society, where everyone can create, access, utilize and share information and knowledge, enabling individuals, communities and peoples to achieve their full potential in promoting their sustainable development and improving their quality of life, premised on the purposes and principles of the Charter of the United Nations and respecting fully and upholding the Universal Declaration of Human Rights.
3. We reaffirm the universality, indivisibility, interdependence and interrelation of all human rights and fundamental freedoms, including the right to development, as enshrined in the Vienna Declaration. We also reaffirm that democracy, sustainable development, and respect for human rights and fundamental freedoms as well as good governance at all levels are interdependent and mutually reinforcing. We further resolve to strengthen the rule of law in international as in national affairs.

The WSIS Declaration of Principles makes specific reference to the importance of the right to freedom of expression in the "Information Society" in stating:

4. We reaffirm, as an essential foundation of the Information Society, and as outlined in Article 19 of the Universal Declaration of Human Rights, that everyone has the right to freedom of opinion and expression; that this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Communication is a fundamental social process, a basic human need and the foundation of all social organization. It is central to the Information Society. Everyone, everywhere should have the opportunity to participate and no one should be excluded from the benefits the Information Society offers.[159]

A poll of 27,973 adults in 26 countries, including 14,306 Internet users,[160] conducted for the BBC World Service between 30 November 2009 and 7 February 2010 found that almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right.[161] 50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion.[162]

The 88 recommendations made by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression in a May 2011 report to the Human Rights Council of the United Nations General Assembly include several that bear on the question of the right to Internet access:[163]

67. Unlike any other medium, the Internet enables individuals to seek, receive and impart information and ideas of all kinds instantaneously and inexpensively across national borders. By vastly expanding the capacity of individuals to enjoy their right to freedom of opinion and expression, which is an "enabler" of other human rights, the Internet boosts economic, social and political development, and contributes to the progress of humankind as a whole. In this regard, the Special Rapporteur encourages other Special Procedures mandate holders to engage on the issue of the Internet with respect to their particular mandates.
78. While blocking and filtering measures deny users access to specific content on the Internet, States have also taken measures to cut off access to the Internet entirely. The Special Rapporteur considers cutting off users from Internet access, regardless of the justification provided, including on the grounds of violating intellectual property rights law, to be disproportionate and thus a violation of article 19, paragraph 3, of the International Covenant on Civil and Political Rights.
79. The Special Rapporteur calls upon all States to ensure that Internet access is maintained at all times, including during times of political unrest.
85. Given that the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States. Each State should thus develop a concrete and effective policy, in consultation with individuals from all sections of society, including the private sector and relevant Government ministries, to make the Internet widely available, accessible and affordable to all segments of population.

Network neutrality


Network neutrality (also net neutrality, Internet neutrality, or net equality) is the principle that Internet service providers and governments should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication.[164][165][166][167] Advocates of net neutrality have raised concerns about the ability of broadband providers to use their last mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors.[168] Opponents claim net neutrality regulations would deter investment into improving broadband infrastructure and try to fix something that is not broken.[169][170] In April 2017, the newly appointed FCC chairman, Ajit Pai, proposed rolling back the United States' net neutrality rules.[171] On December 14, 2017, the FCC voted 3–2 to repeal them.

Natural disasters and access


Natural disasters disrupt internet access in profound ways. This is important not only for telecommunication companies that own the networks and the businesses that use them, but also for emergency crews and displaced citizens. The situation is worsened when hospitals or other buildings necessary for disaster response lose their connection. Knowledge gained from studying past internet disruptions by natural disasters can be put to use in planning or recovery. Additionally, because of both natural and man-made disasters, studies in network resiliency are now being conducted to prevent large-scale outages.[172]

One way natural disasters impact internet connection is by damaging end sub-networks (subnets), making them unreachable. A study on local networks after Hurricane Katrina found that 26% of subnets within the storm coverage were unreachable.[173] At Hurricane Katrina's peak intensity, almost 35% of networks in Mississippi were without power, while around 14% of Louisiana's networks were disrupted.[174] Of those unreachable subnets, 73% were disrupted for four weeks or longer and 57% were at "network edges where important emergency organizations such as hospitals and government agencies are mostly located".[173] Extensive infrastructure damage and inaccessible areas were two explanations for the long delay in returning service.[173] Cisco has unveiled a Network Emergency Response Vehicle (NERV), a truck that makes portable communications possible for emergency responders despite traditional networks being disrupted.[175]

A second way natural disasters disrupt internet connectivity is by severing submarine cables—fiber-optic cables laid on the ocean floor that provide international internet connections. A sequence of undersea earthquakes cut six out of seven international cables connected to Taiwan and caused a tsunami that wiped out one of its cable landing stations.[176][177] The impact slowed or disabled internet connection for five days within the Asia-Pacific region as well as between the region and the United States and Europe.[178]

With the rise in popularity of cloud computing, concern has grown over access to cloud-hosted data in the event of a natural disaster. Amazon Web Services (AWS) has been in the news for major network outages in April 2011 and June 2012.[179][180] AWS, like other major cloud hosting companies, prepares for typical outages and large-scale natural disasters with backup power as well as backup data centers in other locations. AWS divides the globe into five regions and then splits each region into availability zones. A data center in one availability zone should be backed up by a data center in a different availability zone. Theoretically, a natural disaster would not affect more than one availability zone.[181] This redundancy holds as long as human error is not added to the mix. A major storm in June 2012 disabled only the primary data center, but human error disabled the secondary and tertiary backups, affecting companies such as Netflix, Pinterest, Reddit, and Instagram.[182][183]
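
The availability-zone design described here can be made concrete with a small probability sketch. Below is a minimal model in Python assuming a hypothetical 1% chance that any single zone is down and fully independent zone failures (the very assumption the 2012 human-error incident violated); neither number is an AWS figure.

```python
# Illustrative model: probability that every replica of a service is down,
# comparing replicas packed into one availability zone (AZ) versus spread
# across several. The failure probability is hypothetical, not AWS data.

def all_replicas_down(p_zone_outage: float, n_zones: int) -> float:
    """Probability that n_zones independent AZs all fail at once."""
    return p_zone_outage ** n_zones

P_OUTAGE = 0.01  # assumed chance a single AZ is down in a given window

for zones in (1, 2, 3):
    print(f"{zones} AZ(s): P(total outage) = {all_replicas_down(P_OUTAGE, zones):.6f}")
# 1 AZ -> 0.010000, 2 AZs -> 0.000100, 3 AZs -> 0.000001 under independence --
# correlated failures such as operator error erase most of this benefit.
```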

from Grokipedia
Internet access is the capability to connect end-user devices, such as computers and smartphones, to the global Internet—a decentralized network of interconnected systems that enables data transmission via packet-switching and protocols like TCP/IP for communication, information retrieval, and service utilization. This connectivity is delivered through diverse technologies, including digital subscriber line (DSL), cable modems, fiber-to-the-premises (FTTP), fixed wireless, satellite, and mobile broadband networks like 4G and 5G. As of 2024, an estimated 5.5 billion individuals—68 percent of the global population—utilize the Internet, reflecting rapid expansion driven by mobile adoption, particularly in low- and middle-income countries where over 90 percent of new users connect via cellular data. Fixed broadband dominates in developed regions for higher speeds and reliability, while satellite and fixed wireless address remote areas, though with higher latency and costs. Penetration rates starkly diverge, achieving 93 percent in high-income nations versus 27 percent in low-income ones, underscoring persistent infrastructure, affordability, and literacy barriers that widen economic and informational gaps known as the digital divide. Notable advancements include the shift to fiber and 5G for multi-gigabit speeds, supporting bandwidth-intensive applications, yet challenges encompass unequal distribution—with 2.6 billion people offline, largely in rural or impoverished areas—and debates over regulatory frameworks like net neutrality, which influence content prioritization and innovation incentives. These factors causally link access levels to productivity disparities, as empirical data show correlated gains in GDP and education outcomes where connectivity improves.

History

Origins and Early Development (1960s-1980s)

The ARPANET, initiated by the U.S. Department of Defense's Advanced Research Projects Agency (ARPA) in fiscal year 1969, represented the first operational packet-switched network designed to enable resilient resource sharing among geographically dispersed computers during potential disruptions. Packet switching, which fragmented data into discrete packets routed independently across the network, addressed vulnerabilities in circuit-switched systems by allowing alternative paths for transmission. The network's inaugural connection occurred on October 29, 1969, linking a host computer at the University of California, Los Angeles (UCLA) to the Stanford Research Institute (SRI), with initial expansion to four nodes including the University of California, Santa Barbara (UCSB) and the University of Utah by December. Access was initially restricted to connected research institutions via dedicated leased lines and Interface Message Processors (IMPs), with users interacting through teletype terminals or early time-sharing systems. By the early 1970s, the node count expanded beyond the four initial sites, incorporating additional IMPs and hosts to support collaborative experiments, though exact growth figures varied as hosts outnumbered IMPs. In 1971, the introduction of the Terminal Interface Processor (TIP) enabled remote dial-up access via modems, allowing individual terminals to connect directly to the network without host affiliation, thus broadening participation for researchers. Dial-up speeds remained low, typically at 300 baud or less, reflecting the era's acoustic coupler technology and reliance on telephone lines for intermittent connectivity. Usage was confined to authorized academic and defense entities, emphasizing engineering research over public dissemination. The late 1970s saw incremental extensions through protocols like the Network Control Program (NCP), but limitations in scalability prompted development of more robust standards. In 1981, the Computer Science Network (CSNET), funded by the National Science Foundation (NSF), emerged as a complementary system to connect non-ARPANET computer science departments, initially linking three sites (University of Delaware, Princeton, and Purdue) and incorporating dial-up "Phonenet" for email relay among over 80 sites by 1984. On January 1, 1983—designated "flag day"—ARPANET fully transitioned from NCP to the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, mandated by the Department of Defense in 1982, which standardized end-to-end data transmission and facilitated interoperability with emerging networks. This shift marked a pivotal engineering milestone, enabling the foundational architecture for subsequent internetworking while access remained elite, serving primarily U.S.-based military, governmental, and academic users.

Commercialization and Dial-Up Era (1990s)

The privatization of the NSFNET backbone in 1995 transitioned the internet from a government-funded research network to a commercial infrastructure, enabling widespread public access through independent ISPs. This decommissioning, completed on April 30, 1995, replaced NSFNET's restrictions—such as prohibitions on commercial traffic—with a decentralized system of network access points (NAPs) that interconnected private providers. Providers like America Online (AOL), which had begun offering services in the mid-1980s, rapidly scaled operations to serve consumers, capitalizing on the lifted barriers to commercial use. The 1993 release of NCSA Mosaic, the first widely available graphical web browser, catalyzed consumer demand by simplifying navigation of hypertext and multimedia content previously limited to text-based interfaces. Developed at the University of Illinois, Mosaic's intuitive design attracted non-technical users, spurring exponential growth in web traffic and hastening the shift toward public adoption. By making the World Wide Web visually accessible, it directly contributed to the proliferation of dial-up connections as households sought to explore emerging online services. Dial-up access, using modulated analog signals over standard telephone lines at speeds up to 56 kbit/s, became the primary access method throughout the 1990s. In the United States, adoption surged from negligible levels in the early 1990s to approximately 45 million users by the end of the decade, driven by ISP competition and falling hardware costs for modems. Globally, similar phone-line-based services spread to other regions with established telephony infrastructure, though penetration remained uneven due to varying regulatory environments and line quality. Initial per-minute billing models, often exceeding $0.10 per minute plus phone charges, deterred heavy usage until mid-decade disruptions introduced unlimited flat-rate plans around $20 monthly. AT&T WorldNet's flat-fee offering pressured competitors like AOL to follow by 1996, reducing barriers and accelerating sign-ups despite persistent issues like line occupation and connection unreliability. Deregulatory measures, including the relaxation of the NSFNET acceptable use policy and the Telecommunications Act of 1996, fostered ISP entry by easing infrastructure access and reducing monopolistic controls over local loops. This competition lowered prices and expanded coverage, with U.S. dial-up subscribers peaking above 50 million in the early 2000s as demand outpaced technological constraints. Globally, analogous privatizations enabled parallel growth, though adoption lagged in developing markets reliant on imported modems and international gateways.

Broadband Expansion (2000s)

In the United States, the 2000s marked a shift from dial-up to broadband via digital subscriber line (DSL) and cable modem technologies, with DSL subscribers growing from approximately 760,000 in 1999 to over 12 million by 2003, driven by incumbent telephone companies upgrading copper infrastructure. Cable modem adoption complemented this, as multiple system operators leveraged existing coaxial networks, resulting in broadband access reaching about 3% of households in mid-2000 and expanding to roughly 47% of adults by early 2007. This buildout was primarily propelled by private sector investments, as deregulation under the Telecommunications Act of 1996 enabled competition without substantial federal subsidies, though some states offered targeted tax credits that incentivized deployment over direct grants. Fiber-optic pilots emerged mid-decade, with Verizon launching FiOS in Keller, Texas, in 2005, offering fiber-to-the-home (FTTH) speeds up to 30 Mbps initially, as an upgrade path for high-density areas. Globally, variations were stark; South Korea achieved broadband penetration exceeding 14 subscribers per 100 inhabitants by mid-2001, supported by private infrastructure investments in very-high-bit-rate DSL (VDSL) and early FTTH, yielding average download speeds that surpassed U.S. levels by the late 2000s, often reaching 50 Mbps or more in urban deployments. Regulatory hurdles, such as unbundling mandates in Europe and the U.S., slowed rollout in some regions by deterring investment, contrasting with lighter-touch policies elsewhere that favored private incentives like tax relief over heavy subsidies. Empirically, expanded bandwidth enabled new applications, including video streaming; by 2002, broadband households were far more likely to engage in downloading and streaming media than dial-up users, paving the way for services like Netflix's 2007 streaming launch, which required consistent high-speed connections. However, adoption remained uneven, with rural areas lagging due to high deployment costs and sparse demand, underscoring that private incentives outperformed subsidy models in urban-centric, rapid expansions observed in the U.S. and South Korea.

Mobile and Global Proliferation (2010s-2020s)

The introduction of 4G LTE networks, beginning with commercial launches in Norway in December 2009 and expanding globally in 2010, marked a pivotal shift toward widespread mobile broadband access. This technology, building on the smartphone revolution ignited by the iPhone in 2007, enabled higher data speeds and lower latency, facilitating the transition from voice-centric mobile use to data-intensive internet activities. By providing download speeds up to 100 Mbps under ideal conditions, 4G LTE spurred the development of mobile applications and streaming services, driving demand for constant connectivity. Global mobile internet adoption surged during the 2010s, with unique mobile internet users reaching approximately 4.7 billion by the end of 2023, representing 57% of the world's population. This growth was particularly pronounced in developing regions, where low-cost smartphones and affordable data plans from manufacturers transitioning from feature phones to entry-level Android devices accelerated penetration. In Sub-Saharan Africa, mobile internet usage rose from negligible levels in 2010 to 27% by 2023, fueled by market-driven price reductions in devices and service costs that outpaced government aid initiatives. Other developing regions similarly saw smartphone penetration exceed 50% in many countries by the mid-2010s, supported by innovations such as subsidized handsets and competitive mobile virtual network operators. By 2025, mobile devices accounted for over 60% of global web traffic, underscoring the dominance of wireless access in everyday internet use. The rollout of 5G networks, commencing in 2019 with initial commercial deployments in South Korea and the United States, promised further enhancements in speed and capacity, with peak rates exceeding 10 Gbps. However, deployment faced significant hurdles, including delays in spectrum allocation due to protracted regulatory processes and interagency disputes over band usage, which slowed infrastructure buildout in several markets. These challenges, compounded by local government resistance to tower installations, limited the technology's immediate global proliferation despite its potential to support emerging applications. In spectrum-constrained environments, particularly in developing regions, efficient allocation remains critical to sustaining growth without stifling innovation.

Core Technologies

Fixed Wired Access

Fixed wired access delivers internet connectivity through stationary physical cables connected to end-user premises, such as homes or offices, providing stable and high-capacity links without reliance on radio frequencies. This method contrasts with wireless access by utilizing infrastructure like copper twisted-pair lines, coaxial cables, or optical fibers to transmit signals over dedicated paths. Primary technologies include digital subscriber line (DSL) over existing copper lines, cable modems via hybrid fiber-coaxial (HFC) networks, and fiber-to-the-premises (FTTP) using optical fibers for direct light-based transmission. These technologies enable symmetric or asymmetric speeds, with fiber optic offering the highest potential bandwidth—up to multi-gigabit per second—due to low signal attenuation and immunity to electromagnetic interference, while DSL and cable typically max out at hundreds of Mbps depending on distance and network upgrades. Fixed wired access generally provides lower latency and greater reliability than wireless alternatives, as it avoids spectrum congestion and environmental disruptions, making it preferable for applications requiring consistent performance like video conferencing or large file transfers. However, deployment is constrained by the need for physical infrastructure, limiting rapid expansion in rural or underserved areas compared to wireless options. As of 2024, global fixed broadband subscriptions reached approximately 1.3 billion, with the Asia-Pacific region holding over half, reflecting widespread adoption in urbanized economies where wired infrastructure supports high penetration rates exceeding 30 subscribers per 100 inhabitants in OECD countries. Average fixed broadband speeds worldwide stood at 97.3 Mbps in 2025, though fiber deployments are driving gigabit capabilities in advanced markets. Ongoing upgrades, such as DOCSIS 4.0 for cable and GPON for fiber, continue to enhance capacity to meet rising data demands from streaming and cloud services.

Dial-Up and ISDN

Dial-up internet access utilized the analog public switched telephone network (PSTN) to connect users to an internet service provider (ISP) via modems that modulated digital data into audio signals for transmission over voice-grade lines. Early modems operated at speeds as low as 300 bits per second in the 1980s, evolving to 14.4 kbps by the early 1990s and reaching a theoretical maximum of 56 kbps with V.90 and V.92 standards in the late 1990s. This upper limit stemmed from the analog-to-digital conversion constraints at the user end, where phone lines carried signals prone to noise and attenuation, combined with FCC regulations capping transmit power at approximately −12 dBm to avoid interference with carrier systems, effectively limiting reliable throughput to 53 kbps downstream. Connections required dialing the ISP, resulting in a characteristic handshake tone and occupation of the telephone line, preventing simultaneous voice calls and introducing variable connection times of 10-60 seconds. Integrated Services Digital Network (ISDN), standardized by the International Telecommunication Union (ITU) in the 1980s, offered a digital alternative over existing twisted-pair lines, providing circuit-switched end-to-end digital transmission without analog modulation. The Basic Rate Interface (BRI), the most common form for internet access, delivered two 64 kbps bearer (B) channels for data or voice plus a 16 kbps delta (D) channel for signaling, yielding up to 128 kbps aggregate throughput when bonding both B channels. Primary Rate Interface (PRI) supported higher speeds, such as 1.544 Mbps in North America (23 B + 1 D channels), but was primarily for business use. ISDN enabled faster, more reliable connections than dial-up with lower latency due to its digital nature and allowed simultaneous voice and data usage by allocating channels separately, though it still required dialing and incurred per-minute charges in many regions. Adoption of dial-up surged in the mid-1990s as personal computers and ISPs like America Online proliferated, serving tens of millions of households before broadband supplanted it by the early 2000s due to speed limitations and the inconvenience of line occupation. ISDN saw limited residential uptake despite early deployments in Europe and Japan in the late 1980s and 1990s, constrained by high installation costs—often $50-100 monthly plus setup fees—and the rapid emergence of DSL, which leveraged the same lines for asymmetric speeds exceeding 1 Mbps at lower cost. By the 2010s, both technologies had largely been decommissioned in favor of always-on broadband, though remnants persist in remote areas lacking alternatives.
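
The analog speed ceiling described above can be approximated from first principles with the Shannon-Hartley theorem. The sketch below assumes a ~3.1 kHz voice passband and a 35 dB signal-to-noise ratio—typical textbook values, not measured line parameters—and lands near the practical limit of analog modems; 56k service worked around it by keeping one end of the path fully digital.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley limit C = B * log2(1 + SNR) for an analog channel."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Voice-grade phone line: ~3.1 kHz usable bandwidth (300-3400 Hz) and an
# assumed 35 dB signal-to-noise ratio.
c = shannon_capacity_bps(3100, 35)
print(f"Theoretical limit: {c / 1000:.1f} kbit/s")  # ~36 kbit/s
```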

DSL and Cable Modems

Digital Subscriber Line (DSL) technology provides broadband internet over twisted-pair copper telephone wires by separating high-frequency data signals from low-frequency voice traffic using splitters or filters at the customer premises. Developed initially for symmetric high-speed business applications, High-bit-rate DSL (HDSL) emerged in the early 1990s as a cost-effective alternative to T1 lines, supporting 1.544 Mbps bidirectional over two wire pairs. Asymmetric DSL (ADSL), introduced commercially in the late 1990s, prioritized downstream speeds for consumer internet, with early deployments offering up to 8 Mbps download and 1 Mbps upload over distances up to several kilometers from the provider's central office. Later variants like ADSL2+ extended reach and speeds to 24 Mbps downstream, while Very-high-bit-rate DSL (VDSL2), standardized in 2006, achieves up to 100 Mbps or more over shorter loops under 1 km by leveraging higher frequencies, though signal attenuation limits performance with distance due to copper's resistive losses. DSL maintains a dedicated circuit per user from the local exchange, yielding stable latency and throughput independent of neighboring demand, but maximum speeds rarely exceed 100 Mbps in practice, positioning it as a legacy technology amid fiber deployment. Cable modems transmit internet data over coaxial cable networks via the Data Over Cable Service Interface Specification (DOCSIS), an open standard from CableLabs enabling bidirectional IP traffic on hybrid fiber-coaxial (HFC) infrastructure shared with television signals. DOCSIS 1.0, ratified in 1997, supported initial downstream speeds up to 30-40 Mbps and upstream to 10 Mbps across a neighborhood node serving hundreds of homes, with always-on connectivity supplanting dial-up. DOCSIS 2.0 (2002) boosted upstream to 30 Mbps, while DOCSIS 3.0 (2006) introduced channel bonding of downstream carriers—initially 8, up to 32 in later modems—pushing theoretical peaks toward 1 Gbps, though real-world plans averaged 100-400 Mbps by the 2010s. DOCSIS 3.1 (2013) added OFDM modulation for gigabit services over existing coax, and DOCSIS 4.0 (2022 onward) targets 10 Gbps downstream with full-duplex operation, allowing simultaneous high-speed upstream and downstream without spectrum splitting, though upgrades require provider investment in node segmentation to mitigate shared-bandwidth contention. Cable's shared architecture risks slowdowns during peak usage as node loads increase, contrasting with DSL's isolation, but offers superior peak throughput via wider channel widths (6-8 MHz per carrier). DSL suits rural or underserved areas with extensive copper telephony but caps at lower speeds, and symmetric variants like SDSL remain niche for businesses; cable dominates suburban markets with advertised rates up to 10 times DSL's but introduces variability from oversubscription. Globally, DSL and cable subscriptions declined by 150 million connections between 2020 and 2023 as fixed broadband totaled 2 billion, with fiber absorbing growth due to its superior physics-based capacity over copper and coax.
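
A rough sense of how channel bonding scales DOCSIS capacity comes from multiplying bonded channels by per-channel payload. The ~38 Mbit/s of usable throughput per 6 MHz, 256-QAM downstream channel used below is a commonly cited approximation, not a spec-exact figure.

```python
# Back-of-envelope DOCSIS downstream capacity from channel bonding.
# Assumes ~38 Mbit/s usable payload per 6 MHz, 256-QAM channel.

USABLE_MBPS_PER_CHANNEL = 38

for bonded_channels in (1, 4, 8, 32):
    total = bonded_channels * USABLE_MBPS_PER_CHANNEL
    print(f"{bonded_channels:2d} bonded channels -> ~{total} Mbit/s aggregate")
# 8 channels yields ~304 Mbit/s; approaching 1 Gbit/s and beyond requires
# 32-channel bonding (~1.2 Gbit/s), shared across the neighborhood node.
```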

Fiber Optic and Leased Lines

Fiber optic connections transmit data as pulses of light through thin strands of glass or plastic fibers, enabling high-bandwidth internet access with minimal signal degradation over distance. Unlike copper-based technologies, fiber supports symmetric upload and download speeds, often exceeding 1 Gbps in practice, with potential up to 10 Gbps in advanced deployments. This architecture, commonly implemented via fiber-to-the-home (FTTH) or fiber-to-the-premises (FTTP), uses passive optical networks (PON) where a single fiber from the provider splits to multiple endpoints via optical splitters, reducing infrastructure costs while maintaining low latency typically under 10 milliseconds. Key advantages include resistance to electromagnetic interference, scalability for future bandwidth demands, and high reliability with uptime often above 99.99%, as fiber does not suffer from the distance-related degradation plaguing DSL or cable over long runs. Globally, FTTH deployments accelerated in 2024, passing a record 10.3 million additional homes in the United States alone, driven by demand for data-intensive applications like 4K streaming. The technology's deployment has been uneven, with early leaders achieving over 80% household coverage by the 2010s through government-backed infrastructure, contrasting slower rural rollouts elsewhere due to high initial trenching costs. Leased lines, often implemented over fiber optics, provide dedicated point-to-point connections between customer premises and provider networks, ensuring uncontended bandwidth without sharing infrastructure with other users. These symmetric circuits, historically rooted in early digital telephony and mainframe links, now deliver guaranteed speeds from 100 Mbps to 10 Gbps or more, with service-level agreements (SLAs) enforcing 99.9%+ availability and rapid fault resolution. Primarily targeted at enterprises, leased lines offer predictable low-latency performance critical for applications like bulk data transfer and VoIP, outperforming shared broadband in consistency due to the absence of contention ratios. While more expensive—installation can exceed $10,000 with monthly fees scaling to thousands—they provide enhanced security through private routing and are increasingly fiber-based for multi-gigabit capacities, supplanting older T1/E1 lines.
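
The shared-capacity arithmetic behind PON design is straightforward: standard GPON line rates divided by the split ratio give each home's guaranteed floor. The ratios below are typical deployment choices rather than requirements, and statistical multiplexing means real throughput is usually far above the floor.

```python
# Worst-case per-subscriber share on a GPON tree, where one feeder fiber is
# passively split among N homes. Uses standard GPON line rates.

GPON_DOWNSTREAM_MBPS = 2488  # 2.488 Gbit/s shared downstream
GPON_UPSTREAM_MBPS = 1244    # 1.244 Gbit/s shared upstream

for split in (32, 64):
    down = GPON_DOWNSTREAM_MBPS / split
    up = GPON_UPSTREAM_MBPS / split
    print(f"1:{split} split -> {down:.0f} Mbit/s down / {up:.0f} Mbit/s up per home")
# Idle capacity flows to active users, so typical speeds far exceed this floor.
```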

Powerline and Other Alternatives

Powerline communication (PLC), particularly broadband over power lines (BPL), enables internet access by transmitting data signals over existing electrical wiring, serving as an alternative to dedicated telephone, coaxial, or fiber infrastructure for fixed broadband delivery. Access BPL injects high-frequency signals into medium- or high-voltage power lines for wide-area distribution, while in-home PLC uses low-voltage outlets to extend connections within buildings. Deployment began with pilots in the early 2000s, such as those by utility companies in the United States around 2003–2005, leveraging the electrical grid's ubiquity to avoid trenching costs associated with fiber or cable. Speeds for access BPL typically range from 1–45 Mbps downstream in early implementations, limited by signal attenuation over distance and electrical noise from appliances, though modern variants can approach 100 Mbps under optimal conditions. Advantages include rapid rollout using pervasive power infrastructure, with potential for integrated applications like remote metering, and lower initial capital outlay in underserved rural areas compared to fiber-to-the-home. However, disadvantages encompass variable performance due to line impedance variations, interference with amateur radio and shortwave bands that prompted FCC mitigation rules in 2004, and regulatory hurdles from spectrum allocation conflicts. Adoption of access BPL peaked modestly in the mid-2000s with trials by providers like Current Communications and Ambient Corporation, but waned by the 2010s as DSL upgrades, cable evolutions, and fiber expansions offered superior reliability and speeds up to gigabits. By 2025, BPL remains niche, primarily in select European and Asian utilities for last-mile access in low-density regions or in hybrid deployments with other backhaul, with global deployments serving fewer than 1 million subscribers amid competition from faster alternatives. In-home PLC standards, such as HomePlug AV2 (up to 2000 Mbps theoretical throughput) and G.hn (supporting powerline, coaxial, and phoneline media with peaks over 2400 Mbps), facilitate Ethernet extension for local distribution without new cabling, though real-world speeds often fall to 100–500 Mbps due to wiring quality. Other fixed wired alternatives include Multimedia over Coax Alliance (MoCA) technology, which repurposes existing coaxial TV cabling for in-building network extension, delivering consistent 1 Gbps speeds with low latency superior to powerline in noisy environments. MoCA 2.5, ratified in 2016, supports up to 2.5 Gbps and integrates with cable gateways, finding use in multi-dwelling units where coaxial infrastructure persists. These methods remain supplementary rather than primary access solutions, overshadowed by scalable fiber deployments, with powerline and MoCA best suited for bridging gaps in legacy-wired settings rather than competing directly with high-capacity last-mile technologies.

Wireless and Mobile Access

Wireless and mobile internet access utilizes radio waves to transmit data, enabling connectivity without wired infrastructure directly to the user device or premises equipment. This approach contrasts with fixed wired methods by supporting mobility and deployment in areas lacking cable feasibility, such as rural or remote locations. Key technologies encompass cellular networks for on-the-go usage, satellite systems for global coverage, and fixed wireless solutions for stationary premises. By early 2025, mobile devices accounted for over 96% of internet connections among the digital population, underscoring the dominance of wireless methods in global access. Cellular networks form the backbone of mobile internet, evolving from voice-centric systems to data-focused architectures. Third-generation (3G) networks, commercially launched by NTT DoCoMo in Japan on October 1, 2001, introduced packet-switched data services, achieving theoretical downlink speeds up to 2 Mbps and enabling basic mobile browsing and email. Fourth-generation (4G) Long-Term Evolution (LTE) standards, standardized by the 3GPP in 2008 and first deployed in Stockholm and Oslo in December 2009, delivered peak speeds exceeding 100 Mbps, facilitating video streaming and cloud access. Fifth-generation (5G) networks, with initial commercial rollouts in 2019, promise peak speeds of 20 Gbps, latency under 1 ms, and massive device connectivity via millimeter-wave and sub-6 GHz bands, supporting applications like industrial automation. As of 2024, mobile broadband subscriptions reached billions, contributing to 5.5 billion total internet users or 68% global penetration. Satellite broadband extends access to underserved regions using geostationary (GEO) or low-Earth orbit (LEO) constellations. Consumer services began with Hughes Network Systems' DirecPC in 1996, offering one-way downloads up to 400 kbps via Ku-band frequencies, later evolving to two-way GEO systems like HughesNet and Viasat with speeds of 25-100 Mbps but latencies of 500-600 ms due to 36,000 km orbital distances. LEO advancements, exemplified by SpaceX's Starlink constellation (first user terminals shipped in 2020), deploy thousands of satellites at 550 km altitude, yielding latencies of 20-40 ms and download speeds of 100-500 Mbps as of 2025, though susceptible to weather interference and higher costs. Fixed wireless access (FWA) delivers broadband to fixed locations via point-to-multipoint radio links from base stations, often leveraging unlicensed spectrum or mmWave for ranges up to several kilometers. Deployments surged post-2010 with LTE FWA, achieving 50-200 Mbps in suburban settings; 5G FWA, standardized in 3GPP Release 15 (2018), targets gigabit speeds with quick installation, serving as a wired-broadband alternative where trenching is uneconomical. Wireless mesh networks complement these by interconnecting nodes in a self-healing topology, typically using mesh protocols (IEEE 802.11s) for last-mile distribution in urban or campus environments, reducing single-point failures but introducing potential latency from multi-hop forwarding. Adoption has grown for cost-effective coverage, though throughput diminishes with node distance.

Cellular Networks (3G to 5G)

Cellular networks from 3G onward have transformed mobile devices into primary conduits for internet access, shifting from circuit-switched voice dominance to packet-switched data-centric architectures that support web browsing, streaming, and cloud services. The International Telecommunication Union (ITU) defined 3G under IMT-2000 standards, emphasizing higher data throughput over 2G's limited rates and basic WAP capabilities. Subsequent generations—4G and 5G—built on this by prioritizing all-IP networks, lower latency, and massive connectivity to accommodate surging global data demand, with mobile internet users reaching 4.6 billion (57% of the global population) by end-2023. Third-generation (3G) networks, commercially launched first by Japan's NTT DoCoMo in October 2001 using W-CDMA technology, marked the onset of viable mobile internet access by delivering peak data rates of 384 kbps to 2 Mbps for mobile users and up to 14.4 Mbps in stationary scenarios. These speeds enabled rudimentary applications like mobile browsing and low-resolution video, but real-world performance often fell short due to signal interference and limited spectrum, constraining adoption primarily to urban areas in early-adopter markets such as Japan and parts of Europe by the mid-2000s. Global rollout accelerated post-2003 ITU spectrum allocations, yet 3G's circuit-packet hybrid design inherited inefficiencies from prior generations, yielding latencies around 100-500 ms unsuitable for real-time services. Fourth-generation (4G) networks, epitomized by Long-Term Evolution (LTE), emerged as an all-IP evolution around 2009-2010, with initial deployments in Scandinavia and the United States achieving peak downloads of 100 Mbps and uploads of 50 Mbps in 20 MHz channels, alongside sub-10 ms control-plane latency. This represented a 10-fold speed increase over 3G, facilitated by orthogonal frequency-division multiplexing (OFDM) and advanced antenna techniques, enabling high-definition streaming and video calls on smartphones. By the mid-2010s, 4G drove subscriptions to billions, with LTE's backward compatibility easing transitions while its higher spectral efficiency—up to 5-10 bit/s/Hz—optimized scarce mid-band spectrum for wider coverage than 3G's denser base stations. Adoption surged due to device ecosystem growth, though rural penetration lagged owing to infrastructure costs and propagation limits. Fifth-generation (5G) networks, standardized under the ITU's IMT-2020 framework and first commercially deployed in 2019, extend 4G's IP foundation with millimeter-wave (mmWave) bands for ultra-high throughput (up to 20 Gbps theoretically) and sub-1 ms end-to-end latency, alongside massive machine-type connectivity handling up to 1 million devices per square kilometer. These enhancements stem from hybrid sub-6 GHz and mmWave use, yielding 10-100 times 4G capacity via beamforming and network slicing for tailored quality-of-service in applications like industrial IoT. By 2024, 5G covers 51% of the global population, concentrated in high-income regions, with fixed wireless access variants providing gigabit home internet alternatives where fixed-line broadband lags. Challenges persist in mmWave's short range (100-300 m per cell) versus 4G's kilometer-scale, necessitating dense deployments, while sub-6 GHz bands balance speed and coverage for broader rural viability. Ongoing 5G-Advanced upgrades promise further latency reductions below 5 ms for vehicular and remote surgery use cases.
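
The generation-over-generation speed gains quoted above follow from channel bandwidth multiplied by spectral efficiency. In this sketch the LTE figures come from the text (100 Mbit/s in a 20 MHz carrier implies roughly 5 bit/s/Hz); the 5G efficiency values are assumed round numbers chosen for illustration.

```python
# Peak throughput = channel bandwidth x spectral efficiency.
# MHz * bit/s/Hz = Mbit/s, so units cancel neatly.

def peak_mbps(bandwidth_mhz: float, bits_per_hz: float) -> float:
    return bandwidth_mhz * bits_per_hz

print(peak_mbps(20, 5))    # LTE: 20 MHz at ~5 bit/s/Hz      -> 100 Mbit/s
print(peak_mbps(100, 10))  # 5G sub-6: 100 MHz at ~10 bit/s/Hz -> 1000 Mbit/s
print(peak_mbps(800, 25))  # 5G mmWave: wide 800 MHz aggregation with high
                           # aggregate efficiency approaches the 20 Gbit/s
                           # headline figure (20,000 Mbit/s).
```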

Satellite Broadband

Satellite broadband provides Internet access through communication satellites orbiting Earth, enabling connectivity in remote or underserved areas where terrestrial infrastructure is impractical. Users receive service via a satellite dish that transmits and receives signals to satellites, which relay data to ground stations connected to the broader Internet. This technology has evolved from geostationary orbit (GEO) systems, positioned at approximately 35,786 kilometers above the equator for fixed positioning relative to Earth, to low-Earth orbit (LEO) constellations orbiting at 500-2,000 kilometers for reduced signal travel distance. Early satellite Internet experiments date to the 1990s, with the first commercial service launched in 1996 via Hughes Network Systems' DirecPC, initially offering low-speed, one-way data downloads supplemented by dial-up uploads. Two-way capabilities emerged in 2003 with Eutelsat's e-BIRD satellite, enabling two-way high-speed access, though limited by GEO latency. The late 2010s and 2020s saw LEO advancements, culminating in SpaceX's Starlink constellation, which began deploying thousands of satellites from 2019 onward, achieving over 6,000 in orbit by 2025 to support global coverage. Major providers include Starlink (LEO, offering download speeds of 50-220 Mbps and upload of 10-30 Mbps), Viasat, and HughesNet (both GEO-dominant, with speeds typically 25-150 Mbps down but upload capped lower). LEO systems like Starlink deliver latencies of 20-50 milliseconds, suitable for video calls and gaming, compared to GEO's 500-600 milliseconds, which hinders real-time applications. As of 2025, Starlink serves millions of users worldwide, particularly in rural U.S. and developing regions, while GEO providers cover fixed U.S. areas but lag in performance metrics per Ookla tests. Despite improvements, challenges persist: GEO signals suffer from rain fade and atmospheric attenuation, reducing reliability during severe weather, while LEO requires frequent satellite handoffs and faces orbital congestion risks. Costs remain higher than fiber—Starlink residential plans at $120/month plus $599 hardware—limiting adoption, and capacity constraints can cause congestion in high-density user areas. Spectrum allocation and international regulations further complicate deployment, though LEO's scalability addresses some GEO limitations.
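
The GEO-versus-LEO latency gap is dominated by simple propagation physics, as the sketch below shows. It counts four altitude traversals per round trip (user up to satellite, down to gateway, and back) and deliberately ignores slant paths, routing, and processing delay, so the results are physical floors rather than observed figures.

```python
# Minimum round-trip propagation delay imposed by orbit altitude alone.
# Path: user -> satellite -> gateway -> satellite -> user = 4 traversals.

C_KM_PER_S = 299_792  # speed of light in vacuum

def min_rtt_ms(altitude_km: float) -> float:
    return 4 * altitude_km / C_KM_PER_S * 1000

print(f"GEO (35,786 km): {min_rtt_ms(35_786):.0f} ms")  # ~477 ms floor
print(f"LEO (550 km):    {min_rtt_ms(550):.1f} ms")     # ~7 ms floor
# Observed figures (500-600 ms GEO, 20-50 ms LEO) add routing and
# processing overhead on top of these physical minimums.
```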

Fixed Wireless and Mesh Networks

Fixed wireless access (FWA) delivers broadband internet to stationary premises, such as homes or businesses, via radio signals between fixed transceivers, typically from a base station to a customer-premises receiver, bypassing wired infrastructure like fiber or cable. This technology has historically served rural and underserved areas lacking wired deployment, using licensed frequencies for point-to-point links or unlicensed spectrum for broader coverage. With the advent of 5G, FWA has expanded significantly, leveraging millimeter-wave and sub-6 GHz bands to achieve download speeds ranging from 100 Mbps to over 1 Gbps in optimal conditions, though real-world performance varies by distance, interference, and spectrum availability. In the United States, FWA subscriber growth absorbed all net broadband additions since mid-2022, reaching millions of users by 2024, driven by operators like T-Mobile and Verizon. Deployment costs for FWA are substantially lower than fiber-to-the-home, often 30-50% less due to minimal trenching and rapid installation—sometimes within hours versus weeks for wired alternatives—making it viable for low-density regions. Reliability has improved with technological advancements, offering mean repair times of 1-3 hours compared to 8-12 hours for fiber outages, though it remains susceptible to weather-related signal degradation in unlicensed bands. Compared to fiber, FWA provides competitive latency (under 20 ms in urban setups) and value at average monthly costs around $72, but fiber edges out in sustained ultra-high speeds (up to 10 Gbps) and capacity for dense traffic. Analysts project U.S. FWA users to hit 14-18 million by 2027, positioning it as a complement rather than full replacement for wired broadband in hybrid networks. Wireless mesh networks extend internet access by interconnecting multiple nodes—such as routers or access points—that relay data collaboratively, forming a self-healing topology for last-mile delivery or local distribution. Commonly deployed in community or municipal settings, meshes use Wi-Fi or proprietary protocols to blanket areas with coverage, as seen in projects like Guifi.net in Spain, which by 2023 connected over 35,000 nodes via user-contributed infrastructure for shared broadband. Advantages include scalability for adding nodes without central bottlenecks, resilience against single-point failures, and cost-effective expansion in urban or rural gaps where backhaul connects to fiber or FWA. However, meshes require robust upstream broadband (e.g., at least 100 Mbps) to avoid bandwidth dilution across hops, limiting efficacy in low-speed environments, and initial setup costs can exceed traditional Wi-Fi due to node density needs. In practice, mesh networks enhance FWA by distributing signals indoors or across neighborhoods, reducing dead zones and supporting seamless device handoffs, but they introduce latency per hop (typically 5-10 ms) and vulnerability to interference in unlicensed bands. Deployment examples include city-wide systems in Amsterdam's initiatives for public Wi-Fi, achieving near-ubiquitous coverage by 2020, though scalability challenges arise in high-traffic scenarios without licensed spectrum. Overall, meshes excel in dynamic environments but underperform versus point-to-multipoint FWA for raw throughput in fixed setups.

Performance Characteristics

Connection Speeds and Latency

Connection speeds refer to the throughput capacity of an internet connection, measured in megabits per second (Mbps) for download and upload rates, while latency denotes the round-trip time (RTT) for packets to travel from source to destination and back, expressed in milliseconds (ms). The U.S. Federal Communications Commission (FCC) benchmarks broadband as a minimum of 100 Mbps download and 20 Mbps upload, with higher tiers enabling advanced applications like 4K streaming (requiring 25 Mbps) or multiple simultaneous high-bandwidth uses. Median fixed broadband speeds in the United States reached approximately 204 Mbps as of early 2025, reflecting widespread adoption of cable and fiber technologies, though upload speeds lag at around 20-30 Mbps in many cases. Globally, fixed broadband medians vary significantly, with leading fiber-rich nations achieving over 380 Mbps download speeds via extensive fiber deployment, while the worldwide average hovers around 90-110 Mbps. Fiber-optic connections in advanced markets routinely deliver 1 Gbps (1000 Mbps) symmetrical speeds, enabling seamless handling of data-intensive tasks, whereas legacy DSL tops out at 100 Mbps with higher variability. Annual global fixed speed growth has averaged about 20% in recent years through 2023, driven by infrastructure upgrades and competition, outpacing mobile broadband gains. Latency benchmarks differ markedly by access technology: fiber-optic links achieve under 10 ms RTT for local connections due to light's near-speed-of-light propagation in glass (approximately 5 μs per km), minimizing delays for real-time applications like online gaming or video conferencing, where latencies below 50 ms are preferable to avoid perceptible lag. In contrast, low-Earth orbit services like Starlink report 25-60 ms latency, a vast improvement over geostationary satellites' 600+ ms but still introducing noticeable delays in interactive uses compared to terrestrial fiber.
Technology | Typical Download Speed (Mbps) | Typical Latency (ms)
Fiber Optic | 250–10,000 | 1–10
Cable | 100–1,000 | 10–30
DSL | 10–100 | 20–50
Satellite (LEO) | 50–500 | 25–60
These metrics illustrate technological progress, with fiber enabling sub-10 ms latencies and gigabit speeds in deployed areas, though real-world performance depends on network provisioning and load.
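
As a worked comparison of the tiers in the table above, the sketch below computes how long a 5 GB download would take at a representative speed for each technology; the chosen speeds are illustrative midpoints within the table's ranges, not measured medians.

```python
# Time to transfer a 5 GB file at representative download speeds.

FILE_GB = 5
FILE_MEGABITS = FILE_GB * 8000  # 1 GB = 8000 megabits (decimal units)

for tech, mbps in [("Fiber", 1000), ("Cable", 400), ("DSL", 50), ("LEO satellite", 150)]:
    seconds = FILE_MEGABITS / mbps
    print(f"{tech:14s} {mbps:5d} Mbit/s -> {seconds / 60:.1f} minutes")
# Fiber: ~0.7 min; Cable: ~1.7 min; DSL: ~13.3 min; LEO satellite: ~4.4 min.
```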

Network Congestion Dynamics

Network congestion in internet access arises when the volume of data exceeds the capacity of network links or routers, resulting in packet queuing delays, increased latency, and potential packet loss. This phenomenon is most pronounced during peak usage periods, such as evenings when residential users engage in bandwidth-intensive activities like video streaming. For instance, streaming services have historically accounted for a significant portion of downstream traffic; in 2023, Netflix alone represented approximately 15% of global fixed download traffic during peak hours. Such bottlenecks occur at interconnection points between ISPs and content providers, where uncoordinated surges in demand amplify queue buildup, degrading throughput for all users sharing the link. Engineering mitigations have proven effective in alleviating these overloads without relying on external mandates. Content delivery networks (CDNs) distribute cached copies of popular content closer to end-users, reducing the need to traverse long-haul backbone links; during the COVID-19 pandemic, when global internet traffic surged by 25-35% due to remote work and entertainment shifts, CDNs like Akamai absorbed much of the increase, preventing widespread collapse by localizing delivery and minimizing origin server loads. Similarly, quality of service (QoS) mechanisms enable ISPs to prioritize critical packets—such as those for real-time applications—over bulk transfers during congestion, using techniques like traffic shaping and queuing disciplines to maintain performance differentials. These market-driven tools allow providers to allocate resources dynamically based on observed demand patterns. Historical interconnection disputes underscore the role of voluntary agreements in resolving congestion. In the early 2010s, Netflix's rapid growth strained relationships with ISPs like Comcast, leading to slowdowns as unpaid traffic exchanges overwhelmed peering ports; these were settled through paid peering or direct interconnect deals, such as Netflix's 2014 multi-year agreement with Comcast for dedicated capacity, which improved delivery without regulatory intervention. By 2020, widespread adoption of such arrangements, combined with ISP capacity expansions, ensured that even the 40% year-over-year traffic growth from pandemic-induced streaming did not trigger systemic failures. Overall, these decentralized solutions—peering optimizations, edge caching, and QoS—demonstrate networks' resilience to demand spikes through adaptive rather than centralized controls.
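
One common building block behind the traffic-shaping and queuing disciplines mentioned above is the token bucket, which admits traffic at a configured rate while tolerating short bursts. The sketch below is a generic single-flow illustration with hypothetical rate and burst parameters, not any specific ISP's implementation.

```python
# A minimal token-bucket traffic shaper: a packet is forwarded only when
# enough tokens (bytes of credit) have accrued at the configured rate;
# bursts up to the bucket depth pass immediately.

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes  # start with a full bucket
        self.last_time = 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed interval, capped at the bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # conforming packet: forward
        return False      # non-conforming: queue, delay, or drop

bucket = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)  # ~1 Mbit/s
results = [bucket.allow(1500, now=0.0) for _ in range(8)]
print(results)  # six 1500-byte packets fit the 10 kB burst; the rest must wait
```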

Outages and Reliability Metrics

Internet service providers (ISPs) typically guarantee uptime levels of 99.9% or higher for enterprise customers, translating to no more than about 8.76 hours of annual downtime, though actual performance varies by provider and region. Dedicated internet access services from major carriers often include service-level agreements (SLAs) targeting 99.95% availability, with credits issued for failures exceeding thresholds. These metrics reflect investments in redundant infrastructure, but consumer-grade services may fall short during peak loads or localized faults. Primary causes of outages include physical infrastructure damage, such as fiber optic cable cuts from construction accidents or animal interference, which accounted for approximately 17% of network incidents in analyzed datasets. Other frequent triggers encompass equipment failures, power disruptions, and deliberate disruptions like distributed denial-of-service (DDoS) attacks, which have risen in publicly reported cases. Mean time to repair (MTTR) for such events can span hours to days without redundancy, though private sector deployments of diverse routing paths and backup links have shortened recovery to under an hour in optimized urban networks. Rural areas exhibit lower reliability than urban counterparts due to sparser infrastructure and reduced redundancy, resulting in prolonged outages from single points of failure like isolated cable damage or weather events. Advancements in Border Gateway Protocol (BGP) monitoring enable rapid rerouting around faults, while AI-driven analytics detect anomalies in traffic patterns to preempt failures, contributing to year-over-year declines in unplanned downtime through proactive maintenance. Private investments in these technologies, including multi-homed connections and automated failover systems, have enhanced overall ecosystem resilience by diversifying paths and minimizing propagation delays during incidents.
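
The uptime-to-downtime conversions quoted above follow directly from the definition of availability; the sketch below reproduces them for common SLA tiers.

```python
# Allowable downtime per year implied by an availability guarantee.

HOURS_PER_YEAR = 8760

for availability in (0.999, 0.9995, 0.9999):
    downtime_h = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.2%} uptime -> {downtime_h:.2f} h/year "
          f"({downtime_h * 60:.0f} minutes)")
# 99.90% -> 8.76 h; 99.95% -> 4.38 h; 99.99% -> 0.88 h (~53 minutes).
```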

Economic Aspects

Internet service providers (ISPs) commonly employ tiered pricing structures based on download speeds, with higher tiers commanding premium monthly fees. In the United States, entry-level broadband plans offering 100 Mbps typically range from $50 to $80 per month, while gigabit speeds can exceed $100, excluding taxes and equipment fees. These structures reflect varying connection types, such as cable or fiber, and local competition levels, with fiber often providing better value at around $67 per month on average. In developing markets, pricing frequently differentiates between unlimited flat-rate plans in urban areas and capped allotments sold via prepaid vouchers, encouraging usage-based consumption to manage capacity constraints. Mobile data, dominant in these regions, often features 1-10 GB packs priced affordably but with strict overage penalties, contrasting with the unlimited home broadband prevalent in developed economies. Globally, mobile data costs vary starkly: India offers rates as low as $0.09 per GB due to intense competition and scale, while some African nations charge over $27 per GB amid limited infrastructure and higher operational costs. Cost trends show marked declines driven by technological efficiency and market rivalry, with real broadband prices in the U.S. falling nearly 60% over the past decade alongside surging speeds. The price per megabit has dropped approximately 92% from 2008 to 2018, continuing an exponential pattern of 80-90% reductions per decade through capacity expansions like denser fiber deployment. Bundling with traditional TV services has waned as cord-cutting accelerates, with U.S. pay-TV subscribers declining to 68.7 million by 2025 from over 100 million in 2010, prompting ISPs to offer standalone broadband at competitive rates without legacy video add-ons.
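
The cited 92% per-megabit price decline from 2008 to 2018 implies a steady annual rate, recoverable by solving the compound-decline equation; the sketch below performs that arithmetic.

```python
# Annualized rate implied by a ~92% per-megabit price drop over ten years:
# solve (1 - r)^10 = 0.08 for r.

decade_remaining = 1 - 0.92            # 8% of the starting price remains
annual_factor = decade_remaining ** (1 / 10)
print(f"Implied annual decline: {1 - annual_factor:.1%}")  # ~22.3% per year
# An 80-90% drop per decade corresponds to roughly 15-21% annual declines.
```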

Infrastructure Investment Drivers

Global capital expenditures for broadband infrastructure surpass $100 billion annually, reflecting sustained private investment in expanding network capacity to meet rising data demands from streaming and other data-intensive applications. In the United States, providers invested $89.6 billion in 2024, contributing to a cumulative total exceeding $2.2 trillion since 1996, with a significant portion allocated to fiber-optic and 5G deployments. These investments are predominantly driven by return on investment (ROI) prospects, where high population density enables cost amortization over numerous subscribers; fiber-to-the-home (FTTH) projects in urban areas often achieve payback periods of 5-10 years through efficient scaling and demand for gigabit speeds. Key incentives include fiscal policies like accelerated tax depreciation rather than regulatory mandates, which empirical data suggest enhance capital deployment without distorting market signals. The 2017 U.S. Tax Cuts and Jobs Act's provisions for immediate expensing of equipment spurred telecom capex, while the concurrent FCC repeal of Title II classifications—effective December 14, 2017—correlated with accelerated broadband buildouts; industry reports indicate investment rose by over $2 billion in 2017 alone upon signaling of the repeal, with subsequent years showing sustained growth attributed to alleviated compliance costs and clearer ROI forecasting. Deregulated environments empirically outperform heavily regulated ones in attracting private funds, as evidenced by faster network expansions in jurisdictions prioritizing property rights and streamlined approvals over utility-style oversight. Risks from regulatory uncertainty, such as protracted permitting delays and policy reversals, disproportionately hinder rural investments where lower densities extend ROI horizons beyond a decade, reducing expected returns and prompting providers to prioritize urban overbuilds. Studies confirm that ambiguous rules on pole attachments, rights-of-way, and environmental reviews can increase project timelines by 20-50%, deterring capital amid high upfront costs for sparse coverage. This dynamic underscores how predictable legal frameworks causally enable scalable deployment, contrasting with interventions that impose burdens without commensurate subsidies.
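
A toy payback-period model illustrates why density drives the 5-10 year urban figures and the longer rural horizons described above; every input below is a hypothetical round number, not taken from the cited industry reports.

```python
# Simple payback model for an FTTH build: capex per subscriber divided by
# annual cash flow per subscriber. All inputs are illustrative assumptions.

capex_per_home_passed = 800    # $ construction cost per home passed
take_rate = 0.40               # fraction of passed homes that subscribe
arpu_monthly = 70              # $ average revenue per subscriber per month
operating_margin = 0.45        # fraction of revenue left after op-ex

capex_per_subscriber = capex_per_home_passed / take_rate
annual_cash_per_sub = arpu_monthly * 12 * operating_margin
payback_years = capex_per_subscriber / annual_cash_per_sub
print(f"Payback: {payback_years:.1f} years")  # ~5.3 years with these inputs
# Halving density (or take rate) doubles capex per subscriber, pushing rural
# paybacks past a decade -- the dynamic the paragraph above describes.
```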

Market Competition and Monopoly Concerns

In the United States, the residential broadband market often features duopoly structures, with cable operators like Comcast and Charter competing against incumbent telephone companies such as AT&T and Verizon in overlapping territories, while over one-third of Americans reside in areas served by a single provider or none at all. This limited competition stems from high infrastructure costs and regulatory barriers that deter new entrants, though recent developments including fiber overbuilders and fixed wireless access (FWA) providers like T-Mobile and Verizon have introduced alternatives in select markets, expanding options beyond traditional cable-telco pairings. Empirical studies indicate that heightened competition correlates with consumer benefits, including lower prices and improved service quality; for instance, markets with multiple providers exhibit prices approximately 15-25% below those in monopoly or duopoly settings, alongside faster deployment of higher speeds. On investment, research shows monopolistic ISPs invest less in network upgrades absent competitive pressure, with duopoly areas demonstrating slower adoption of technologies like gigabit fiber compared to regions with three or more providers. Merger activity, such as the blocked 2015 Comcast-Time Warner Cable deal and approved 2016 Charter-Time Warner Cable acquisition, has intensified consolidation, reducing national ISP counts from over 3,000 in 2000 to fewer than 1,500 by 2023, potentially exacerbating these dynamics by limiting rivalry. Counterarguments emphasize that scale from consolidation facilitates substantial capital expenditures necessary for nationwide upgrades, as evidenced by U.S. broadband providers' $89.6 billion investment in 2024, which proponents attribute to efficiencies gained from mergers enabling fiber and 5G expansions that smaller fragmented operators could not fund independently. Critics of aggressive antitrust interventions, including recent scrutiny of proposed Charter-Cox synergies, warn that overreach could stifle such investments by discouraging mergers that yield cost savings passed to consumers through enhanced capacity rather than price hikes. Overall, while duopolies persist, dynamic entry via alternative technologies suggests evolving competition, though empirical merger outcomes underscore trade-offs between competitive discipline and infrastructural scale.

Global Availability and Disparities

As of early 2025, approximately 5.56 billion people worldwide use the internet, representing 67.9% of the global population. This marks an increase from 5.35 billion users in early 2024, with growth driven primarily by expansions in mobile connectivity and adoption in densely populated regions like Asia. The International Telecommunication Union (ITU) reports that internet penetration reached 68% by late 2024, up from 65% the previous year, adding roughly 235 million new users amid falling device costs and network infrastructure improvements. While internet penetration has grown rapidly, radio broadcasting achieved near-global reach earlier, with estimates of up to 95% population coverage per mid-2010s UN sources, due to its low cost, portability, and ability to function without reliable electricity or extensive infrastructure; this contrasts with the internet's current approximately 68% user penetration, illustrating historical precedents for widespread media access alongside persistent barriers for connectivity-dependent technologies. In the United States, penetration among adults stands at 96% as of mid-2024, with household broadband subscriptions covering about 80% of homes, though overall household access exceeds 93% when including mobile and dial-up alternatives. Historical data from the World Bank indicate that U.S. user penetration grew from around 50% in 2000 to over 90% by 2019, reflecting a compound annual growth rate (CAGR) in population share exceeding 4%, fueled by innovations in fixed and mobile broadband technologies rather than public subsidies. This organic diffusion continued post-2020, leaving only about 6.3% of households offline as affordability improved and infrastructure matured. Mobile devices play a dominant role in global internet access, with over 60% of web traffic originating from smartphones and tablets as of mid-2025, and an estimated 64% of the world's population able to connect primarily via mobile networks. In developing regions, where fixed broadband lags, smartphones account for the majority of new connections, enabling rapid uptake through affordable plans and device proliferation; for instance, 59% of global website visits occur on mobile in 2025, underscoring the technology's portability and scalability as key drivers of penetration growth. This trend reflects expansion driven by advances in network efficiency and hardware affordability, outpacing traditional wired deployments.

Geographic and Demographic Divides

Urban areas worldwide exhibit significantly higher penetration rates than rural regions, with 81% of urban dwellers using the Internet compared to 50% in rural areas as of 2023. This geographic disparity arises primarily from the economics of infrastructure deployment, where low densities in rural zones increase per-user costs for providers, making network extension less viable without external incentives. Of the 2.6 billion people globally offline in 2024, the majority reside in rural areas of low- and middle-income countries, where sparse settlement patterns exacerbate deployment challenges, pointing to economics rather than discrimination or intent. In the United States, rural unserved rates for broadband remain elevated relative to urban counterparts, with Federal Communications Commission data indicating persistent gaps driven by similar density-related economics; for instance, rural locations often require disproportionate investment for coverage due to extended distances and fewer potential subscribers. Demographic divides compound these issues, as lower-income households face higher non-adoption rates (43% lack home broadband), though access has improved via affordable mobile devices. Elderly populations also lag, with only 61% of those 65 and older owning smartphones versus 96% of younger adults, yet this gap narrows through device price reductions rather than policy alone. Gender disparities in internet access persist regionally, particularly in the Middle East and North Africa, where women are about 12% less likely than men to use the Internet, though global gaps show signs of contraction, with 189 million more men online overall but decreasing differences since 2021. Claims of affordability as the primary barrier often overlook causal realities: in low-density areas, user-side costs are secondary to provider economics, where fixed costs spread thinly over few users deter investment absent the density premiums seen in urban cores. This underscores that divides reflect market-driven feasibility tied to geography and demographics, not inherent inequities in access pricing for end-users.
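The density argument reduces to simple arithmetic: a roughly fixed per-kilometer build cost divided by the subscribers along that kilometer. The figures in this sketch are hypothetical, chosen only to show the order-of-magnitude gap.

```python
# Illustrative only: per-subscriber deployment cost when a fixed per-km
# build cost is spread over however many subscribers live along the route.
def cost_per_subscriber(cost_per_km: float, subs_per_km: float) -> float:
    return cost_per_km / subs_per_km

# Hypothetical figures: the same $30,000/km build cost in both settings.
print(f"urban: ${cost_per_subscriber(30_000, 200):,.0f}")  # $150 per subscriber
print(f"rural: ${cost_per_subscriber(30_000, 3):,.0f}")    # $10,000 per subscriber
```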

Empirical Factors Limiting Access

Geographic challenges, including mountainous terrain and arid deserts, substantially elevate the costs and complexity of infrastructure deployment. Rugged landscapes necessitate more extensive engineering for trenching, cabling, and signal propagation, often requiring aerial lines susceptible to environmental damage or specialized equipment for rocky soils. In such areas, deployment expenses can exceed those in flat terrains by factors driven by access difficulties and material needs, deterring investment where densities are low. Economic poverty constrains demand for internet services, as households in low-income regions prioritize basic necessities over connectivity subscriptions. This reduced willingness to pay limits revenue potential, discouraging private expansion in underserved markets. As of 2024, penetration in sub-Saharan Africa remains at about 38%, far below the 97.7% rate in Northern Europe, underscoring how income disparities suppress adoption even where partial coverage exists. Device affordability poses a parallel barrier, particularly in developing countries, where the upfront cost of smartphones or computers often exceeds local monthly incomes despite available networks. In many low-income settings, lack of compatible hardware restricts access more than connectivity alone, with mobile devices serving as the primary entry point yet remaining out of reach for significant portions of the population. Private-sector innovations, exemplified by low-Earth orbit satellite constellations like Starlink, mitigate these empirical limits by enabling rapid deployment to remote and challenging terrains without reliance on ground-based cabling. By 2024, Starlink had delivered high-speed, low-latency service to isolated regions globally, circumventing geographic and cost hurdles through scalable satellite technology.

Policy Debates and Interventions

Network Neutrality: Arguments and Evidence

Network neutrality refers to the principle that internet service providers (ISPs) must treat all online traffic equally, prohibiting practices such as blocking lawful content, throttling speeds for specific sites or services, or offering paid prioritization (commonly termed "fast lanes") to certain users or applications. In the United States, the Federal Communications Commission (FCC) reinstated net neutrality rules in April 2024 via the Open Internet Order, classifying broadband as a Title II telecommunications service, but these were vacated by the U.S. Court of Appeals for the Sixth Circuit on January 2, 2025, following the Supreme Court's overruling of Chevron deference in Loper Bright Enterprises v. Raimondo (2024), which limited agency authority to interpret ambiguous statutes. Proponents argue that net neutrality safeguards an open internet by preventing ISPs from creating fast lanes that favor high-paying entities, potentially distorting competition and innovation at the network edge. They contend this protects smaller content providers from being edged out by ISP-affiliated services or large payers, citing historical concerns like Comcast's 2008 throttling of BitTorrent traffic, which prompted FCC action. However, empirical evidence of widespread ISP discrimination prior to the 2015 rules remains sparse; the 2017 FCC repeal analysis noted that formal complaints were low, with only isolated incidents like Madison River's 2005 VoIP blocking resolved via voluntary settlements, and no systemic pattern of blocking or throttling emerged in the deregulated period before 2015. Peering agreements between networks, which facilitate traffic exchange without regulation, have historically self-regulated through market-negotiated terms to avoid free-riding, demonstrating that competitive incentives often suffice without mandates. Opponents maintain that net neutrality regulations impose utility-like constraints on ISPs, fostering regulatory uncertainty that discourages infrastructure investment in network capacity. Empirical studies support this, finding that net neutrality rules exerted a significant negative effect on fiber-optic deployments; for instance, a 2022 cross-country analysis showed stricter regulations correlated with reduced high-speed investments, as ISPs face limits on recouping costs from heavy-traffic users via differentiated pricing. After the 2015 Title II classification, U.S. capital expenditures declined, with industry reports attributing over $50 billion in foregone investment to heightened compliance burdens and barred revenue models, contrasting with accelerated deployment after the 2018 repeal. Banning paid prioritization harms incentives for upgrading networks to handle surging data demands, as ISPs cannot directly charge edge providers like streaming services for disproportionate bandwidth use, potentially leading to congestion and slower overall speeds absent self-funding mechanisms. While proponents highlight risks from fast lanes, market evidence indicates that without regulation, ISPs have not broadly implemented them, suggesting competitive pressures and antitrust oversight mitigate abuses more effectively than blanket rules.
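For readers unfamiliar with the mechanics, paid prioritization amounts to a scheduling policy at congested links: packets from paying sources are dequeued ahead of others. The sketch below is a generic strict-priority queue, not any ISP's actual implementation; the class names and traffic labels are hypothetical.

```python
import heapq
from itertools import count

# Minimal sketch of a strict-priority packet scheduler, the mechanism
# behind a hypothetical "fast lane": lower priority number departs first.
class FastLaneScheduler:
    def __init__(self):
        self._queue = []
        self._seq = count()  # FIFO tie-break within a priority class

    def enqueue(self, packet: str, prioritized: bool) -> None:
        priority = 0 if prioritized else 1
        heapq.heappush(self._queue, (priority, next(self._seq), packet))

    def dequeue(self) -> str:
        return heapq.heappop(self._queue)[2]

sched = FastLaneScheduler()
sched.enqueue("small-site video frame", prioritized=False)
sched.enqueue("paying-CDN video frame", prioritized=True)
print(sched.dequeue())  # "paying-CDN video frame" departs first
```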

Government Subsidies: Outcomes and Critiques

The Broadband Equity, Access, and Deployment (BEAD) program, allocated $42.45 billion under the 2021 Infrastructure Investment and Jobs Act to expand high-speed broadband in unserved areas, had disbursed no funds for eligible projects as of August 2025, resulting in zero households connected despite years of planning and state proposals. Program delays stem from stringent requirements prioritizing fiber-optic deployments over alternatives like fixed wireless or satellites, bureaucratic reviews, and shifts in federal guidance, including a 2025 Commerce Department overhaul removing prior mandates on labor and climate criteria. Critics argue these rules foster inefficiency and favoritism toward established providers, with states reporting inconsistent outcomes and overemphasis on unproven technologies amid rising private-sector alternatives. The Connect America Fund (CAF), launched by the FCC in 2011 to subsidize rural broadband, distributed over $10 billion through 2021 but delivered subpar results, with 93% of funded households receiving only 10 Mbps download/1 Mbps upload speeds (below modern standards) and more than 40% of supported addresses remaining unserved per independent audits contradicting provider certifications. Post-funding, major recipients ceased service to up to half of pledged locations, highlighting issues of overbuilding existing networks and monopoly grants that disincentivize competition. Academic evaluations confirm CAF's model of subsidizing single-provider monopolies in high-cost areas yielded limited efficacy in closing the digital divide, often exacerbating duplication where private investment already existed. USDA's ReConnect program, providing loans and grants since 2018 for rural broadband, has awarded billions but lacks established performance goals and adequate fraud risk management, per Government Accountability Office assessments, leading to uneven deployment and vulnerability to waste. While some funded areas show localized productivity gains, such as 9.3% agricultural output increases in recipient areas after three years, broader critiques point to matching fund requirements and evaluation gaps that delay projects and favor inefficient builds over market-driven solutions. Globally, similar subsidies exhibit waste through overbuilding and regulatory hurdles; for instance, state aid for legacy broadband projects has violated updated rules, creating investor disincentives and redundant infrastructure in areas with viable private options. Government-owned networks often incur higher costs and lower efficiency than private competitors, diverting resources without proportional access gains. In contrast, unsubsidized private initiatives like SpaceX's Starlink have rapidly expanded rural coverage, achieving median download speeds exceeding 100 Mbps by mid-2025 (far surpassing CAF-era subsidized services) through low-Earth orbit satellites that bypass terrestrial bottlenecks. Such market approaches demonstrate faster, cheaper connectivity in remote regions without equivalent taxpayer outlays, underscoring critiques that subsidies entrench incumbents and outdated technology preferences over innovative, competitive deployment. Successes with targeted, auction-based grants occur when bureaucracy is minimized, but pervasive delays and irrelevance to evolving needs undermine most programs.

Framing Access as a Right or Utility

In 2011, the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, released a report emphasizing that internet access facilitates the exercise of freedom of expression and that arbitrary disconnections violate international human rights standards. The report did not explicitly declare internet access itself a standalone human right but framed restrictions on it as infringing existing rights like information access under Article 19 of the Universal Declaration of Human Rights. Critics contend this framing overlooks the resource-intensive nature of internet infrastructure, which requires ongoing capital for deployment and upkeep (unlike naturally abundant essentials such as air or water) and imposes obligations on providers or states without accounting for economic scarcity or feasibility in low-density areas. Classifying internet access as a public utility, akin to electricity or water, often leads to regulatory frameworks like the U.S. Federal Communications Commission's 2015 Title II reclassification of broadband, which subjected providers to utility-style common-carrier obligations. Post-reclassification data show a 17.8% decline in nominal broadband investment and a 19.8% drop in real terms, attributed to heightened regulatory uncertainty deterring capital expenditures. Cross-country empirical analyses reveal that markets with minimal ex ante regulation, such as those prioritizing facility-based competition over utility-style mandates, achieve superior outcomes; for instance, utility-style regulations in some nations have been linked to reduced fiber-optic investments, while lightly regulated environments like South Korea's sustain median download speeds exceeding 100 Mbps as of 2023. Market-oriented alternatives, such as demand-side vouchers, offer a less distortionary path to expanding access by targeting subsidies to underserved users without imposing supply-side mandates that could stifle innovation. The U.S. Affordable Connectivity Program, active from 2021 to 2024, provided up to $30 monthly vouchers for low-income households, enrolling over 23 million participants and boosting adoption without requiring utility reclassification. Such mechanisms preserve incentives for private investment (evident in post-deregulation expansions in competitive U.S. markets) while avoiding the fiscal burdens and efficiency losses of mandates, which often subsidize non-marginal users.

Disruptions and Resilience

Impacts of Natural Disasters

Hurricane Katrina in August 2005 caused extensive disruptions to internet access in the Gulf Coast region, with physical damage to fiber optic cables, power outages, and flooding leading to outages lasting days to weeks for wired networks. Over 60% of telecommunications networks remained inoperable three weeks after landfall, primarily due to severed undersea and terrestrial cables and reliance on vulnerable above-ground infrastructure. In contrast, satellite-based systems maintained functionality, enabling limited but critical connectivity for emergency response where terrestrial lines failed. More recent events, such as Hurricane Helene in September 2024, highlighted ongoing vulnerabilities in fiber-heavy networks, with widespread cable cuts from landslides and flooding resulting in blackouts persisting for weeks in western North Carolina and parts of the Southeast. Traditional providers reported slow recovery, with full restoration in some areas delayed until late October due to inaccessible terrain and damaged backhaul lines. Wireless cellular networks fared better in initial restoration, achieving up to 99% site recovery within days through mobile towers and backup power, though they remained dependent on undamaged backhaul links. Satellite alternatives demonstrated superior redundancy, as low-Earth orbit systems like Starlink bypassed terrestrial damage entirely, providing deployable terminals that restored high-speed access within hours for affected communities and responders. Empirical comparisons across disasters show satellite and cellular recovery times averaging 1-7 days versus 2-4 weeks for fiber optics prone to excavation and splicing repairs after flood or landslide events. Case studies underscore that privately driven diversification, such as competing constellations, yields faster, more adaptive resilience than uniform reliance on government-mandated wired standards, which often concentrate failure points in shared physical paths. Redundant designs incorporating multiple technologies mitigate single-point vulnerabilities, with evidence from Katrina and Helene indicating that operator-led backups outperform centralized mandates in enabling rapid, scalable recovery without awaiting regulatory approvals or public funding.

Cyber Threats and Infrastructure Vulnerabilities

Distributed denial-of-service (DDoS) attacks represent a primary cyber threat to internet infrastructure, overwhelming servers and networks with malicious traffic to disrupt access. In February 2020, Amazon Web Services (AWS) mitigated a record 2.3 terabits per second (Tbps) DDoS attack, the largest reported at the time, which targeted cloud-hosted services without causing widespread outages due to automated defenses. Such attacks exploit vulnerabilities in routing protocols like the Border Gateway Protocol (BGP), enabling hijacking or amplification. Physical vulnerabilities, including undersea cable cuts, compound cyber risks by severing transoceanic data links that carry over 99% of international data traffic. In September 2025, cuts to three major cables (EIG, Seacom, AAE-1) in the Red Sea disrupted connectivity across Asia, Africa, and the Middle East, reducing capacity by up to 25% and forcing rerouting that increased latency for affected services. These incidents, often attributed to accidental anchors or fishing but increasingly suspected of sabotage amid geopolitical tensions, highlight the fragility of concentrated routes. State actors have targeted infrastructure to impair national access during conflicts. On February 24, 2022, the day of Russia's invasion of Ukraine, a cyber operation disrupted Viasat's KA-SAT network, disabling modems for thousands of users, including military terminals and civilians, delaying communications without physical damage. BGP hijacks by state-linked groups have also rerouted traffic, as seen in repeated attempts to disrupt Ukrainian networks in 2022. DDoS attack volumes have escalated sharply, with reports indicating a 30% surge in the first half of 2024 compared to 2023, alongside average attack sizes growing 69% year-over-year to peaks exceeding 962 Gbps. BGP remains susceptible to prefix hijacking, though extensions like BGPsec (defined in RFC 8205) enable cryptographic path validation to prevent forged routes, albeit with limited adoption due to resource demands on autonomous systems. Market competition among internet service providers (ISPs) drives superior security investments compared to regulated monopolies, as firms differentiate on reliability to attract subscribers. Empirical analysis shows U.S. private-sector competition has spurred infrastructure upgrades, including resilience measures, yielding normal profit margins and sustained capital expenditures without subsidies. Regulations imposing uniform standards can raise compliance costs, potentially stifling innovation, whereas competitive pressures incentivize proactive defenses like redundant routing and DDoS scrubbing to minimize downtime and customer churn.
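Prefix hijacking works because packet forwarding follows the most specific matching route, so a forged, narrower announcement can attract traffic away from the legitimate origin. The sketch below illustrates that longest-prefix-match mechanic with hypothetical autonomous-system labels; real routers apply the same rule across full BGP tables.

```python
import ipaddress

# Sketch of why a forged, more-specific BGP announcement attracts traffic:
# forwarding uses longest-prefix match, so the /24 beats the legitimate /16.
routing_table = {
    ipaddress.ip_network("203.0.113.0/24"): "AS-hijacker",   # forged route
    ipaddress.ip_network("203.0.0.0/16"): "AS-legitimate",   # real origin
}

def next_hop(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routing_table if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(next_hop("203.0.113.7"))  # "AS-hijacker": traffic is rerouted
```

Origin and path validation (RPKI and BGPsec-style signing) counter this by letting routers reject announcements whose origin or path cannot be cryptographically verified.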

Emerging Developments

Low-Earth Orbit Satellite Systems

Low-Earth orbit (LEO) satellite constellations deploy hundreds to thousands of small satellites at altitudes between 500 and 2,000 kilometers to deliver broadband internet, enabling global coverage with reduced latency compared to geostationary systems. These networks use inter-satellite links and phased-array antennas on user terminals to achieve rates suitable for streaming and real-time applications, targeting underserved regions where fiber or cellular infrastructure is uneconomical. By October 2025, operational deployments had exceeded 9,000 satellites across major systems, marking a rapid commercialization of space-based connectivity. SpaceX's Starlink leads with over 8,700 satellites in orbit as of late October 2025, of which approximately 8,600 remain operational, providing median download speeds of 104.71 Mbps and upload speeds of 14.84 Mbps in tested U.S. regions during early 2025, with latencies averaging 38 ms. Starlink's phased rollout has expanded to over 40 countries, prioritizing high-latitude and rural areas initially before broader equatorial coverage via additional orbital shells. Eutelsat OneWeb, a competitor, operates over 650 satellites as of April 2025, with plans for a Gen2 expansion of around 300 more units starting that year, emphasizing enterprise backhaul and maritime applications over consumer residential service. These deployments disrupt traditional access divides by enabling 100+ Mbps service in terrain-challenged locales like mountains or islands, independent of ground-based repeaters or cables. LEO systems' proximity to Earth yields latencies of 20-50 ms, supporting applications like video conferencing that geostationary alternatives cannot, while their higher orbital velocity necessitates dense constellations for continuous coverage and minimal downtime. This architecture inherently bypasses topographic obstacles, reducing the rural-urban digital gap without subsidies for last-mile terrestrial builds, as evidenced by Starlink's 99%+ uptime in remote deployments. Challenges include spectrum-sharing conflicts in the Ku- and Ka-bands, with claims of interference with terrestrial receivers and astronomical observations; however, empirical analyses of constellation overlaps show low aggregate impact due to directional beamforming and regulatory coordination, limiting measurable disruptions to under 1% of affected signals in coordinated scenarios. Ongoing mitigations, such as adaptive beam steering, further contain these effects amid growing orbital density.
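The latency advantage follows directly from path length. The sketch below computes the physical round-trip floor from altitude alone; real-world figures, like Starlink's 38 ms median, sit above this floor because of slant paths, routing, queuing, and processing overhead.

```python
# One-way propagation delay from orbital altitude; the minimum round-trip
# latency is roughly 4x the one-way delay (up, down, and back again).
C_KM_PER_MS = 299_792.458 / 1000  # speed of light, km per millisecond

def min_round_trip_ms(altitude_km: float) -> float:
    one_way = altitude_km / C_KM_PER_MS  # straight-overhead best case
    return 4 * one_way                   # user -> sat -> gateway -> back

print(f"LEO 550 km:   {min_round_trip_ms(550):.1f} ms")     # ~7.3 ms floor
print(f"GEO 35786 km: {min_round_trip_ms(35_786):.1f} ms")  # ~477 ms floor
```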

Advanced Wireless (6G and Beyond)

Sixth-generation (6G) wireless networks represent the next evolution beyond 5G, with research and development emphasizing terahertz (THz) frequencies to enable peak data rates up to 1 terabit per second (Tbps), far surpassing 5G's capabilities. Standardization bodies like the ITU, through its IMT-2030 framework, guide these efforts, with specifications expected to be finalized by 2029, followed by lab testing and pilot trials starting around 2028 and pre-commercial deployments by 2030. These timelines depend on overcoming propagation losses and hardware limitations inherent to THz bands above 100 GHz, which require wider bandwidths of 10 GHz or more for such speeds. AI integration forms a core pillar of 6G architecture, embedding machine learning across protocol layers for dynamic spectrum management, resource allocation, and traffic prediction to handle heterogeneous traffic loads. This AI-native design supports emerging applications, including holographic communications via advanced antenna arrays for immersive three-dimensional data transmission and massive IoT ecosystems connecting billions of low-power devices with sub-millisecond latency. Such potentials arise from causal links between higher frequencies and available bandwidth, though empirical prototypes remain limited to controlled environments as of 2025. Regulatory hurdles, particularly spectrum allocation delays, threaten U.S. leadership in 6G, as the FCC's auction authority lapsed in 2023, stalling mid-band releases needed for viable coverage and capacity. Competitors in regions with proactive policies, such as China and South Korea, advance faster through coordinated public-private spectrum planning. Deployment will ultimately be propelled by private market incentives for bandwidth-intensive services like extended reality and autonomous systems, rather than subsidies, which U.S. strategies limit to targeted R&D to avoid distorting competition. Beyond 6G, visionary concepts like quantum-secure links and neuromorphic processing loom, but their feasibility awaits validation through iterative THz and AI advancements.
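The bandwidth requirement follows from the Shannon capacity C = B · log2(1 + SNR). The sketch below plugs in hypothetical signal-to-noise ratios to show why terabit-class peak rates presume tens of gigahertz of spectrum, typically combined with multi-antenna spatial multiplexing.

```python
import math

# Shannon capacity C = B * log2(1 + SNR) for a single link.
# SNR values here are hypothetical, chosen only to show the scaling.
def shannon_capacity_gbps(bandwidth_ghz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_ghz * math.log2(1 + snr_linear)  # Gbit/s

print(f"{shannon_capacity_gbps(0.4, 20):.1f} Gbps")  # 400 MHz 5G carrier: ~2.7
print(f"{shannon_capacity_gbps(10, 20):.1f} Gbps")   # 10 GHz THz channel: ~66.6
print(f"{shannon_capacity_gbps(50, 25):.1f} Gbps")   # 50 GHz channel: ~415
```

Even a 50 GHz channel at a generous SNR falls short of 1 Tbps on a single stream, which is why 6G targets pair very wide THz bands with massive MIMO spatial multiplexing to multiply per-link capacity.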

AI-Driven Network Optimization

Artificial intelligence enhances network optimization by enabling predictive maintenance and real-time adjustments that minimize disruptions and maximize throughput in internet infrastructure. In predictive maintenance, AI algorithms process vast datasets from sensors and logs to forecast equipment failures, allowing operators to perform preemptive repairs. For instance, AI-driven maintenance has reduced network outages by up to 30% in deployed systems, as demonstrated in autonomous network trials where early fault detection prevents cascading failures. This approach contrasts with reactive strategies, which often amplify downtime costs, and relies on models trained on historical fault patterns to prioritize interventions based on failure probabilities. Dynamic routing powered by AI addresses congestion by continuously evaluating traffic flows and rerouting packets through underutilized paths, improving latency and bandwidth allocation without manual oversight; a simplified sketch of this rerouting logic appears below. Examples include adaptive algorithms that integrate live telemetry for instantaneous path selection, as seen in implementations using in-band network telemetry to evade bottlenecks in high-demand scenarios. In broadband contexts, AI extends to cybersecurity by detecting anomalous patterns indicative of threats, such as distributed denial-of-service attacks, through behavioral analysis that outperforms traditional signature-based methods. Recent developments in Wi-Fi 7 orchestration leverage AI for automated resource management, including channel selection and multi-link operations, as in Huawei's AI Fabric 2.0, which optimizes end-to-end control for denser device environments. These optimizations yield measurable cost savings, with AI automation reducing operational expenditures by 15-20% through automated fault resolution and energy-efficient configurations, thereby encouraging private investment in expansion. Industry reports indicate that such efficiencies stem from AI's ability to handle routine tasks, freeing resources for innovation rather than maintenance, without reliance on regulatory mandates. This market-driven progress, evident in 2025 deployments, underscores AI's causal role in scaling internet access by lowering barriers to reliable service delivery.
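A minimal sketch of the rerouting idea: shortest-path selection over link weights that reflect current conditions, so the chosen path shifts when telemetry inflates a congested link's cost. The topology and latencies are hypothetical; production systems replace the static weights with learned or measured estimates.

```python
import heapq

# Congestion-aware dynamic routing sketch: link weights track current
# effective latency, so the best path moves away from hot links.
def shortest_path(graph, src, dst):
    """graph: {node: {neighbor: latency_ms}}; returns (cost, path)."""
    frontier = [(0.0, src, [src])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, latency in graph[node].items():
            if nxt not in visited:
                heapq.heappush(frontier, (cost + latency, nxt, path + [nxt]))
    return float("inf"), []

# Telemetry inflates the A-B link's effective latency under congestion...
idle   = {"A": {"B": 5, "C": 9}, "B": {"D": 5}, "C": {"D": 9}, "D": {}}
loaded = {"A": {"B": 40, "C": 9}, "B": {"D": 5}, "C": {"D": 9}, "D": {}}

print(shortest_path(idle, "A", "D"))    # (10.0, ['A', 'B', 'D'])
print(shortest_path(loaded, "A", "D"))  # (18.0, ['A', 'C', 'D'])
```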
