Network bridge
from Wikipedia

A high-level overview of network bridging, using the ISO/OSI layers and terminology

A network bridge is a computer networking device that creates a single, aggregate network from multiple communication networks or network segments. This function is called network bridging.[1] Bridging is distinct from routing. Routing allows multiple networks to communicate independently and yet remain separate, whereas bridging connects two separate networks as if they were a single network.[2] In the OSI model, bridging is performed in the data link layer (layer 2).[3] If one or more segments of the bridged network are wireless, the device is known as a wireless bridge.

The main types of network bridging technologies are simple bridging, multiport bridging, and learning or transparent bridging.[4][5]

Transparent bridging


Transparent bridging uses a table called the forwarding information base to control the forwarding of frames between network segments. The table starts empty and entries are added as the bridge receives frames. If a destination address entry is not found in the table, the frame is forwarded to all other ports of the bridge, flooding the frame to all segments except the one from which it was received. By means of these flooded frames, a host on the destination network will respond and a forwarding database entry will be created. Both source and destination addresses are used in this process: source addresses are recorded in entries in the table, while destination addresses are looked up in the table and matched to the proper segment to send the frame to.[6] Digital Equipment Corporation (DEC) originally developed the technology in 1983[7] and introduced the LANBridge 100 that implemented it in 1986.[8]

In the context of a two-port bridge, the forwarding information base can be seen as a filtering database. A bridge reads a frame's destination address and decides to either forward or filter. If the bridge determines that the destination host is on another segment on the network, it forwards the frame to that segment. If the destination address belongs to the same segment as the source address, the bridge filters the frame, preventing it from reaching the other network where it is not needed.

Transparent bridging can also operate over devices with more than two ports. As an example, consider a bridge connected to three hosts, A, B, and C. The bridge has three ports. A is connected to bridge port 1, B is connected to bridge port 2, C is connected to bridge port 3. A sends a frame addressed to B to the bridge. The bridge examines the source address of the frame and creates an address and port number entry for host A in its forwarding table. The bridge examines the destination address of the frame and does not find it in its forwarding table so it floods (broadcasts) it to all other ports: 2 and 3. The frame is received by hosts B and C. Host C examines the destination address and ignores the frame as it does not match with its address. Host B recognizes a destination address match and generates a response to A. On the return path, the bridge adds an address and port number entry for B to its forwarding table. The bridge already has A's address in its forwarding table so it forwards the response only to port 1. Host C or any other hosts on port 3 are not burdened with the response. Two-way communication is now possible between A and B without any further flooding to the network. Now, if A sends a frame addressed to C, the same procedure will be used, but this time the bridge will not create a new forwarding-table entry for A's address/port because it has already done so.
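The walkthrough above can be condensed into a short simulation. The following sketch is illustrative only (the host names, port numbers, and class name are invented for the example); it models the bridge's forwarding table as a dictionary and reproduces the learn-then-flood behavior described for hosts A, B, and C.

# Minimal learning-bridge simulation (illustrative; names are invented).
class LearningBridge:
    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2, 3]
        self.fdb = {}               # MAC address -> port number

    def receive(self, src, dst, ingress_port):
        # Learning: remember which port the source address was seen on.
        self.fdb[src] = ingress_port
        # Forwarding: a known destination goes to one port, an unknown one is flooded.
        if dst in self.fdb:
            egress = self.fdb[dst]
            return [] if egress == ingress_port else [egress]
        return [p for p in self.ports if p != ingress_port]

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive("A", "B", 1))  # unknown destination: flooded to [2, 3]
print(bridge.receive("B", "A", 2))  # A already learned: forwarded to [1] only
print(bridge.receive("A", "C", 1))  # C still unknown: flooded to [2, 3]

The dictionary plays the role of the forwarding information base: each frame both updates the table (from its source address) and is directed by it (from its destination address).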

Bridging is called transparent when the frame format and its addressing aren't changed substantially. Non-transparent bridging is required especially when the frame addressing schemes on both sides of a bridge are not compatible with each other, e.g. between ARCNET with local addressing and Ethernet using IEEE MAC addresses, requiring translation. However, most often such incompatible networks are routed in between, not bridged.

Simple bridging


A simple bridge connects two network segments, typically by operating transparently and deciding on a frame-by-frame basis whether or not to forward from one network to the other. A store and forward technique is typically used so, as part of forwarding, the frame integrity is verified on the source network and CSMA/CD delays are accommodated on the destination network. In contrast to repeaters which simply extend the maximum span of a segment, bridges only forward frames that are required to cross the bridge. Additionally, bridges reduce collisions by creating a separate collision domain on either side of the bridge.

Multiport bridging


A multiport bridge connects multiple networks and operates transparently to decide on a frame-by-frame basis whether to forward traffic. Additionally, a multiport bridge must decide where to forward traffic. Like the simple bridge, a multiport bridge typically uses store and forward operation. The multiport bridge function serves as the basis for network switches.

Implementation


The forwarding information base stored in content-addressable memory (CAM) is initially empty. For each received Ethernet frame the switch learns from the frame's source MAC address and adds this together with an interface identifier to the forwarding information base. The switch then forwards the frame to the interface found in the CAM based on the frame's destination MAC address. If the destination address is unknown the switch sends the frame out on all interfaces (except the ingress interface). This behavior is called unicast flooding.

Forwarding


Once a bridge learns the addresses of its connected nodes, it forwards data link layer frames using a layer-2 forwarding method. A bridge can use one of four forwarding methods; the second through fourth improve performance when used on switch products with the same input and output port bandwidths (an illustrative mode-selection sketch follows the list):

  1. Store and forward: the switch buffers and verifies each frame before forwarding it; a frame is received in its entirety before it is forwarded.
  2. Cut through: the switch starts forwarding after the frame's destination address is received. There is no error checking with this method. When the outgoing port is busy at the time, the switch falls back to store-and-forward operation. Also, when the egress port is running at a faster data rate than the ingress port, store-and-forward is usually used.
  3. Fragment free: a method that attempts to retain the benefits of both store and forward and cut through. Fragment free checks the first 64 bytes of the frame, where addressing information is stored. According to Ethernet specifications, collisions should be detected during the first 64 bytes of the frame, so frame transmissions that are aborted because of a collision will not be forwarded. Error checking of the actual data in the packet is left for the end device.
  4. Adaptive switching: a method of automatically selecting between the other three modes.[9][10]
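As a rough illustration of adaptive switching, the sketch below picks one of the other three modes from simple observations about the link. The thresholds and parameter names are invented for the example and are not taken from any vendor implementation.

# Illustrative mode selector for adaptive switching (thresholds are invented).
def choose_forwarding_mode(error_rate, egress_busy, egress_faster_than_ingress):
    """Pick a per-frame forwarding method as described in the list above."""
    if egress_busy or egress_faster_than_ingress:
        return "store-and-forward"      # the whole frame must be buffered anyway
    if error_rate > 0.01:               # many bad frames: verify each one fully
        return "store-and-forward"
    if error_rate > 0.001:              # mostly collision fragments: check first 64 bytes
        return "fragment-free"
    return "cut-through"                # clean link: lowest latency

print(choose_forwarding_mode(error_rate=0.0001, egress_busy=False,
                             egress_faster_than_ingress=False))  # cut-through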

Bridge loops


If network bridges are connected in a way that forms redundant paths or mesh loops, broadcast frames loop through the network indefinitely, bringing it to a halt. This situation must be mitigated using a spanning tree protocol or a more intelligent forwarding algorithm like Shortest Path Bridging or TRILL.

Spanning tree protocol


A spanning tree protocol is a distributed algorithm that organizes active ports in a way that forms a spanning tree, so that there is only one usable path between any two nodes.

Shortest Path Bridging


Shortest Path Bridging (SPB), specified in the IEEE 802.1aq standard and based on Dijkstra's algorithm, is a computer networking technology intended to simplify the creation and configuration of networks, while enabling multipath routing.[11][12][13] It is a proposed replacement for Spanning Tree Protocol which blocks any redundant paths that could result in a switching loop. SPB allows all paths to be active with multiple equal-cost paths. SPB also increases the number of VLANs allowed on a layer-2 network.[14]

TRILL (Transparent Interconnection of Lots of Links) is the successor to Spanning Tree Protocol, both having been created by the same person, Radia Perlman. The catalyst for TRILL was an event at Beth Israel Deaconess Medical Center which began on 13 November 2002.[15][16] The concept of Rbridges[17] [sic] was first proposed to the Institute of Electrical and Electronics Engineers in the year 2004,[18] who in 2005[19] rejected what came to be known as TRILL, and in the years 2006 through 2012[20] devised an incompatible variation known as Shortest Path Bridging.

See also

  • Audio Video Bridging – Specifications for synchronized, low-latency streaming
  • IEEE 802.1D – Standard which includes bridging, Spanning Tree Protocol and others
  • IEEE 802.1Q – IEEE networking standard supporting VLANs
  • IEEE 802.1ah-2008 – Standard for bridging over a provider's network
  • Promiscuous mode – Network interface controller mode that eavesdrops on messages intended for others

References

from Grokipedia
A network bridge is a Layer 2 device in computer networking that interconnects multiple local area networks (LANs), each supporting the MAC service, to form a single logical network by forwarding frames based on media access control (MAC) addresses while filtering traffic to reduce congestion and collisions. Bridges operate at the data link layer of the OSI model, enabling transparent communication between end stations on separate physical segments without requiring changes to higher-layer protocols. This functionality extends the effective size of a LAN beyond the limitations of a single segment, improving overall network performance.

Bridges function through a three-step process: learning, filtering, and forwarding. Upon receiving a frame, a bridge examines the source MAC address and records it in a dynamic forwarding table associated with the incoming port, building knowledge of device locations over time. It then filters the frame if the destination MAC is on the same port (to prevent unnecessary broadcasts) or forwards it only to the appropriate outgoing port based on the table, rather than flooding all ports as a basic hub would. If the destination is unknown, the frame is flooded to all ports except the source, ensuring delivery while minimizing bandwidth waste. To prevent loops in redundant topologies, bridges implement the Spanning Tree Protocol (STP), standardized in IEEE 802.1D, which dynamically selects a loop-free subset of the network by electing a root bridge and blocking redundant links via Bridge Protocol Data Units (BPDUs). Invented by Radia Perlman in 1985 at Digital Equipment Corporation, STP ensures reliable frame delivery by recomputing paths if failures occur, though convergence can take up to 30-50 seconds in traditional implementations. Enhanced variants like Rapid STP (IEEE 802.1w) reduce this to seconds for faster recovery.

Network bridges come in several types, including transparent bridges, which operate invisibly to endpoints by learning addresses and complying with IEEE 802.1D for Ethernet and similar media, and source-route bridges, used for Token Ring networks to route frames via routing information fields (RIFs). The IEEE 802.1Q standard extends bridging to support virtual LANs (VLANs), allowing logical segmentation of a physical network for improved security and management. In modern networks, multi-port bridges evolved into Ethernet switches, which provide dedicated bandwidth per port and integrate advanced features such as VLAN tagging.

Fundamentals

Definition and Purpose

A network bridge is a networking device that operates at the data link layer (Layer 2 of the OSI model), interconnecting multiple local area network (LAN) segments below the Media Access Control (MAC) service boundary to form a single logical network while filtering traffic based on MAC addresses. This architecture enables transparent communication between end stations on distinct LANs, as if they were connected to the same physical medium, ensuring compatibility with Logical Link Control (LLC) and higher-layer protocols. The primary purpose of a network bridge is to extend LANs by linking separate segments, such as Ethernet networks, to improve performance through selective frame forwarding and to reduce collisions by segmenting collision domains without requiring Layer 3 routing. In early Ethernet deployments, bridges connected multiple coaxial or twisted-pair segments to expand network coverage beyond single-segment limitations, allowing devices to share resources efficiently while maintaining a unified logical network. By filtering unnecessary broadcasts and unicasts, bridges enhance throughput in shared-medium environments like CSMA/CD networks.

Fundamentally, a network bridge features two or more network interfaces for segment attachment, a MAC address table (forwarding database) that dynamically maps addresses to ports, and filtering/forwarding logic to inspect and direct frames based on destination addresses. Key benefits include higher bandwidth utilization via reduced unnecessary traffic across segments, easier management than repeaters or hubs—which indiscriminately propagate all signals—and the division of networks into separate collision domains to minimize contention and retransmissions. Modern switches evolved from bridges as multi-port variants, offering scaled connectivity for denser LANs.

Historical Development

Network bridges emerged in the mid-1980s as a solution to the limitations of early Ethernet local area networks (LANs), particularly the constraints on network diameter and collision domains imposed by the carrier-sense multiple access with collision detection (CSMA/CD) protocol. Developed primarily by engineers at Digital Equipment Corporation (DEC), the technology addressed the need to interconnect multiple Ethernet segments without the performance penalties of repeaters or the complexity of routers. The first prototype bridge was created around 1980 by Mark Kempf at DEC's Advanced Development Group, using a Motorola 68000 processor and LANCE Ethernet chips to enable store-and-forward packet filtering based on 48-bit MAC addresses. Commercial deployment followed shortly, with DEC introducing the LANBridge 100 in 1986 as the world's first Ethernet bridge, capable of extending LANs beyond the 2.5 km limit while reducing collisions. Companies like 3Com, through its 1987 acquisition of Bridge Communications, also contributed to early Ethernet bridging innovations, focusing on hardware for interconnecting PC networks.

A pivotal milestone in 1985 was the invention of the Spanning Tree Protocol by Radia Perlman at DEC, which prevented loops in bridged networks by dynamically selecting a loop-free topology using a distributed algorithm. This algorithm, detailed in Perlman's seminal paper, allowed bridges to exchange bridge protocol data units (BPDUs) to elect a root bridge and block redundant paths, enabling reliable expansion of Ethernet LANs. STP was first implemented in DEC's two-port Ethernet bridge, transforming bridging from a simple interconnect into a robust protocol for larger networks. By the late 1980s, bridges evolved from basic two-port devices to multiport configurations, supporting greater scalability as LANs grew in enterprise environments.

Standardization efforts began in the late 1980s under the IEEE 802.1 working group, culminating in IEEE 802.1D-1990, which defined the MAC Bridge standard incorporating STP for interoperability across vendors. This standard formalized address learning, forwarding, and loop prevention, influencing bridge designs globally. In the 1990s, the distinction between bridges and switches blurred as multiport bridges with ASIC-based forwarding became prevalent, rebranded as "Ethernet switches" to emphasize higher port densities and performance; by the mid-1990s, switches had largely supplanted traditional bridges in commercial use. Subsequent updates enhanced STP's efficiency, with IEEE 802.1w-2001 introducing Rapid Spanning Tree Protocol (RSTP) to reduce convergence times from 30-50 seconds to under 10 seconds through faster BPDU handling and role-based port states. In the 2010s and 2020s, bridging concepts extended to virtual environments via software-defined networking (SDN) and network virtualization, where virtual bridges like Open vSwitch enable overlay networks in hypervisors and data centers, supporting scalable, programmable LANs in multi-tenant clouds. This evolution maintains bridges' core role in segmenting traffic and preventing loops amid the shift to virtualized infrastructures.

Types of Bridges

Transparent Bridges

Transparent bridges, also known as learning bridges, are network devices that interconnect local area network (LAN) segments by forwarding frames based on dynamically learned media access control (MAC) addresses, operating without requiring explicit configuration or awareness from end hosts or routers. This transparency ensures that the bridge appears invisible to the network, as defined in the IEEE 802.1D standard for MAC bridges. They function at the data link layer (Layer 2 of the OSI model), filtering traffic to reduce unnecessary broadcasts while maintaining a single broadcast domain across connected segments.

The primary mechanism of transparent bridges relies on self-learning, where the device examines the source MAC address of each incoming frame and records it in a forwarding table (also called a filtering database) along with the receiving port. If the destination matches an entry in the table, the frame is forwarded only to the associated port; otherwise, for unknown destinations or broadcasts, the frame is flooded to all other ports except the source to ensure delivery. To handle network changes such as device mobility, entries in the forwarding table age out and are removed after a period of inactivity, typically 300 seconds by default.

Transparent bridges come in simple and multiport variants to suit different scales. Simple bridges link exactly two network segments, using basic logic to forward or filter frames between them, which was common in early implementations to extend limited-distance Ethernet cabling. Multiport variants, supporting more than two ports, employ an internal switching fabric to manage traffic across multiple segments simultaneously, enabling efficient connectivity in larger topologies without altering the transparent operation.

A key advantage of transparent bridges is their plug-and-play simplicity, allowing seamless integration into existing networks to segment traffic, reduce collisions, and improve performance without reconfiguration. However, this ease comes with the disadvantage of vulnerability to loops in redundant topologies, potentially causing broadcast storms that propagate indefinitely and degrade network stability unless mitigated by protocols like the Spanning Tree Protocol. Developed by Digital Equipment Corporation in the early 1980s, transparent bridges were essential for expanding early Ethernet networks beyond single collision domains. They continue to find use in small-scale, low-complexity environments or legacy systems where advanced routing is unnecessary.

Source-Route Bridges

Source-route bridges are designed for Token Ring networks, as specified in IEEE 802.5, where the sending station determines and includes the route through the network in the frame's Routing Information Field (RIF). Unlike transparent bridges, which learn addresses dynamically without host involvement, source-route bridges rely on the source device to discover paths via test frames (e.g., explorer frames) to which bridges append route descriptors during propagation. The source then selects and embeds the route in subsequent data frames' RIF, guiding bridges to forward along the specified path across multiple interconnected rings. This mechanism supports up to 14 hops (rings) and handles loop prevention inherently through route specification, though it requires more overhead from the RIF (up to 18 bytes) and source computation. Developed by IBM in the 1980s for expanding Token Ring LANs, source-route bridging was widely used in enterprise environments until Ethernet's dominance in the 1990s. Variants like source-route transparent (SRT) bridges combine elements of source-routing for Token Ring with transparent learning for other media. With Token Ring's obsolescence, source-route bridges are now legacy technology.

Translation Bridges

Translation bridges are specialized network devices designed to interconnect dissimilar local area networks (LANs) that employ different protocols or media access methods, such as Ethernet and Token Ring or Fiber Distributed Data Interface (FDDI). Unlike standard bridges that operate within homogeneous environments, translation bridges perform protocol and frame translations to enable communication between incompatible network architectures. This allows devices on one network type to exchange data with those on another, effectively extending the reach of legacy or diverse systems.

The primary functions of translation bridges include frame format conversion, encapsulation and decapsulation of data packets, and handling discrepancies in addressing schemes. For instance, when bridging Ethernet to Token Ring, the device converts Ethernet frames (using IEEE 802.3 or Ethernet II formats) into Token Ring frames by reordering the 48-bit MAC addresses—Ethernet transmits bits in little-endian order (low-order bit first), while Token Ring uses big-endian order (high-order bit first)—and adjusting header fields like routing information fields (RIFs), which have no direct Ethernet equivalent and are thus stripped or cached for return traffic. Encapsulation involves wrapping non-routable protocol data (e.g., NetBEUI or LAT) into compatible formats, such as converting Ethernet Type II frames to Token Ring SNAP encapsulation, while decapsulation reverses the process on inbound traffic. These operations ensure seamless data flow but require careful management of maximum transmission unit (MTU) sizes, often limited to 1,500 bytes to match Ethernet constraints.

Translation bridges gained prominence in the 1990s amid heterogeneous enterprise environments where multiple LAN technologies coexisted, particularly in IBM-dominated networks. Vendors developed solutions such as Ethernet-to-Token Ring bridges and FDDI translational bridges to support migrations and integrations; for example, DEC's FDDI interface update enabled translational transparent bridging for VAX environments, allowing routable protocols to traverse while converting non-routable ones. These devices were essential for connecting Token Ring-based mainframes to emerging Ethernet segments, facilitating protocols like SNA over mixed media. However, their complexity arose from reconciling divergent media access controls—Ethernet's carrier-sense multiple access with collision detection (CSMA/CD) versus Token Ring's token-passing mechanism—often restricting support to non-routable protocols to avoid routing indicator conflicts.

A key limitation of translation bridges is the added latency from frame reformatting and address manipulations, which can degrade performance in high-throughput scenarios compared to native bridging. This processing overhead, combined with the rise of cost-effective Ethernet switching in the late 1990s and early 2000s, contributed to their obsolescence as Ethernet achieved dominance, rendering Token Ring and FDDI largely extinct by the mid-2000s. Translation bridges are now primarily of historical interest, though similar translation functions appear in modern media converters for legacy network integrations.

Wireless Bridges

Wireless bridges, particularly WiFi-to-Ethernet bridges, are network devices that connect wireless local area networks (WLANs) based on the IEEE 802.11 standards to wired Ethernet segments. They operate as a type of transparent bridge at the data link layer (Layer 2 of the OSI model), dynamically learning and forwarding frames between the wireless and wired media to extend LAN connectivity while appearing invisible to end hosts. These bridges do not require additional drivers on connected wired devices, enabling plug-and-play integration for providing wired access to wireless networks or vice versa. Point-to-point Wi-Fi bridges, a subtype of wireless bridges, provide distinct advantages over traditional Wi-Fi repeaters, particularly in terms of performance and reliability. They deliver more stable and faster connections by avoiding the speed degradation inherent in repeaters, which retransmit signals on the same channel, effectively halving throughput with each hop. In contrast, point-to-point bridges use directional antennas to focus the signal into a concentrated beam, enabling reliable links over distances of 50 meters or more without midway performance loss, even in the presence of light obstacles. This line-of-sight approach ensures high reliability, often exceeding 99.99%, and supports speeds over 300 Mbps at extended ranges.

Operational Principles

Address Learning and Forwarding

Network bridges employ a dynamic learning process to build their filtering database, also known as the content-addressable memory (CAM) table, by examining the source MAC address in each incoming frame. Upon receipt of a frame on an ingress port, the bridge checks if the source MAC address is an individual address and the port is in the learning or forwarding state; if so, it creates or updates a dynamic entry associating that MAC address with the ingress port, provided no conflicting static entry exists and the database has sufficient capacity. This process excludes group addresses and source-routed frames, as their paths may not align with the network topology. The filtering database size varies by implementation but typically supports 1,000 to 64,000 entries to accommodate medium-sized networks.

Forwarding decisions in bridges are based on the destination MAC address in the frame header, using the filtering database to determine the appropriate egress port. For a known destination, the frame is forwarded only to the specific port associated with that address in the database. If the destination is unknown (not present in the database), or if the frame is a broadcast or multicast, the bridge floods the frame to all other ports except the ingress port to ensure delivery. Additionally, if the destination maps to the ingress port—indicating the frame is destined for a host on the same segment—the bridge filters (drops) the frame to prevent unnecessary transmission and reduce congestion. The core decision logic for frame handling can be represented in the following pseudocode, derived from standard bridge operations:

Upon receiving a frame with source MAC S, destination MAC D, on ingress port P:

1. Learning:
   if S is an individual address and P is in learning/forwarding state:
       if no static entry for S and database not full:
           update dynamic entry: FDB[S] = P   (or overwrite an existing dynamic entry)

2. Forwarding and filtering:
   if frame is source-routed or invalid:
       drop
   else if D is known in FDB:
       Q = FDB[D]
       if Q != P:                 // not on the same segment
           forward frame to Q
       else:
           filter (drop) frame
   else if D is a broadcast or multicast (group address):
       for each port R != P in forwarding state:
           forward frame to R
   else:                          // unknown unicast
       for each port R != P in forwarding state:
           forward frame to R


This logic ensures efficient traffic management while preserving frame order within traffic classes. To maintain accuracy in dynamic environments, bridges implement aging and update mechanisms for filtering database entries. Dynamic entries are removed after an aging timer expires without renewal—typically 300 seconds by default, configurable from 10 seconds to over 1 million seconds—triggered by the absence of frames from that source MAC on the associated port. When a frame arrives with a source MAC already in the database but on a different port (indicating host mobility or a MAC move), the entry is updated to the new ingress port, overwriting the previous association. Topology changes, such as those from spanning tree reconfiguration, may prompt shorter aging timers to flush potentially mislearned entries quickly. Bridge performance is characterized by wire-speed throughput, meaning the device can forward frames at the full line rate of its ports without dropping frames under normal conditions, limited only by the physical interface speeds (e.g., 10/100/1000 Mbps). By segmenting the network, bridges reduce the size of collision domains per port, minimizing contention and improving overall efficiency in shared media environments like Ethernet. The maximum recommended transit delay through a bridge is 1 second to ensure timely delivery.
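The aging and MAC-move behavior described above can be sketched as follows. This is a minimal illustration, assuming a dictionary keyed by MAC address and the 300-second default aging time quoted in the text; the function names are invented for the example.

import time

# Sketch of forwarding-database aging; the data structure and helper names are illustrative.
AGEING_TIME = 300.0

fdb = {}  # MAC address -> (port, last-seen timestamp)

def learn(src_mac, port):
    # Refresh the timestamp on every frame; a MAC move simply overwrites
    # the old port association with the new ingress port.
    fdb[src_mac] = (port, time.monotonic())

def lookup(dst_mac):
    entry = fdb.get(dst_mac)
    if entry is None:
        return None                       # unknown destination: caller floods the frame
    port, last_seen = entry
    if time.monotonic() - last_seen > AGEING_TIME:
        del fdb[dst_mac]                  # stale entry aged out
        return None
    return port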

Loop Prevention Mechanisms

In bridged networks, redundant paths between segments can create loops, allowing broadcast and unknown-destination frames to circulate indefinitely among bridges. This results in broadcast storms, where frame duplication exponentially increases traffic, quickly saturating link bandwidth and rendering the network unusable. Loops also induce forwarding-table instability, as the same source MAC addresses are repeatedly learned from multiple ports, causing entries to overwrite each other and leading to inconsistent forwarding decisions.

Early loop prevention relied on manual intervention and simple heuristics rather than automated protocols. Network administrators manually configured bridges by disabling or blocking specific ports on redundant links to enforce a tree topology, avoiding cycles through careful design. Source address filtering, part of the basic learning process, helped mitigate some effects by building forwarding tables from observed source MACs, but it could not inherently detect or break loops. Additionally, pre-STP techniques limited address caching table sizes—typically to 8,000 entries initially—to prevent memory overflow during storms, with timeouts (e.g., after 5 minutes of inactivity) to refresh tables and handle mobility, though these measures only reduced symptoms without eliminating the root cause.

Basic automated mechanisms introduced bridge identification and port role assignment to systematically prevent loops while building on address learning for forwarding. Each bridge generates a unique Bridge ID, combining a configurable priority (default 32,768) with its base MAC address; the bridge with the lowest ID is elected root via distributed comparison of Bridge Protocol Data Units (BPDUs). Ports then receive roles: the root port provides the optimal path to the root bridge (selected by lowest path cost), designated ports forward traffic to non-root segments, and blocking ports on redundant paths discard data to break loops without isolating segments. This election process ensures a single active path per segment, referencing learned MAC locations for stable forwarding. However, these mechanisms suffer from slow convergence after topology changes, such as link failures, taking 30 to 50 seconds to recompute the tree—comprising listening (15 seconds), learning (15 seconds), and max age (20 seconds) timers—during which temporary loops or traffic blackholing can occur. In legacy setups, this delay has caused outages exceeding 45 seconds, disrupting real-time applications like VoIP or financial trading, with broadcast storms amplifying downtime until stabilization.

Implementations

Hardware-Based Bridges

Hardware-based network bridges utilize dedicated physical components to perform bridging functions at high speeds, distinguishing them from software implementations by leveraging specialized chips for efficient packet processing. These devices typically employ application-specific integrated circuits (ASICs) to handle MAC learning and forwarding, enabling rapid table lookups and decision-making without relying on general-purpose processors. Multiple Ethernet ports, ranging from 4 to 48 depending on the model, connect network segments, while buffer memory—often shared across ports in the ASIC—manages frame queuing to prevent congestion during bursts of traffic. This architecture supports wire-speed forwarding, where packets are processed at the full line rate of the interface, such as 1 Gbps per port, ensuring no performance degradation under load.

Performance characteristics of hardware bridges emphasize low latency and efficient resource use, critical for enterprise environments. Forwarding latency is typically under 10 μs, with some implementations achieving as low as 1-5 μs, allowing near-instantaneous frame traversal between ports. Power consumption varies by scale but generally ranges from 5-50 watts for compact devices with 8-24 ports, rising with port count and PoE support, yet optimized designs keep idle draw minimal at around 5 watts. Early examples include Digital Equipment Corporation's (DEC) LANBridge 100, introduced in 1986 as a standalone two-port device operating at 10 Mbps, using LANCE chips for Ethernet interfacing and an 8K-entry address table with binary search for filtering packets every 32 μs. In modern contexts, bridging functions are integrated into multilayer switches such as the Cisco Catalyst 9000 series, where UADP ASICs enable scalable L2/L3 operations across dozens of ports.

These bridges offer advantages in reliability and performance, providing consistent high-throughput operation suitable for enterprise networks handling heavy traffic, with hardware redundancy reducing failure points compared to software alternatives. However, they incur higher upfront costs due to custom silicon fabrication and lack flexibility for protocol updates, often requiring full device replacement for feature enhancements. By the 2020s, advancements in System-on-Chip (SoC) designs have extended hardware bridging to embedded IoT devices, with multi-protocol SoCs such as those from Espressif integrating Ethernet or Wi-Fi bridging for low-power edge connectivity in smart home gateways. Hardware-based wireless bridges, often configured in client bridge mode, enable WiFi-to-Ethernet connectivity, allowing legacy Ethernet-only devices to access modern WiFi networks without additional software drivers.

Software-Based Bridges

Software-based bridges are implemented primarily through kernel modules and user-space utilities within operating systems, enabling flexible bridging without dedicated hardware. In Linux, the bridge module, part of the kernel networking stack, acts as a Layer 2 switch by forwarding Ethernet frames between interfaces based on MAC addresses. This module can be configured using tools from the bridge-utils package, such as brctl, which allows creation, management, and monitoring of bridge devices. For filtering, user-space tools like ebtables provide Ethernet-level firewalling capabilities, inspecting and manipulating frames traversing the bridge in a protocol-independent manner.

Virtual bridging extends these concepts into hypervisor environments, where software bridges connect virtual machine (VM) networks to physical or overlay infrastructures. Open vSwitch (OVS), an open-source multilayer virtual switch, supports advanced features like flow-based forwarding and integration with software-defined networking (SDN) overlays, making it suitable for dynamic virtualized setups. Similarly, VMware's vSphere Distributed Switch (vDS) provides centralized management across ESXi hosts, aggregating VM traffic into logical switches for policy enforcement and monitoring. These implementations often leverage kernel datapaths for efficiency while allowing user-space control for customization.

Performance characteristics of software-based bridges include higher latency compared to hardware solutions, typically in the range of 35 to 100 microseconds or more for virtual switches like OVS, due to processing overhead in the host CPU. Throughput is CPU-bound, limited by core utilization and packet processing rates, though multi-threading and optimizations like DPDK can scale it to near line-rate for 10 Gbps links under moderate loads. In contrast to hardware bridges, which offer sub-microsecond latencies via ASICs, software variants prioritize programmability over raw speed.

Common use cases for software-based bridges encompass home networking, where third-party router firmware such as DD-WRT or OpenWrt enables bridging to extend LAN segments without additional hardware, supporting both wired and wireless clients in bridged configurations. In cloud virtual private clouds (VPCs), such as those using OVS, they facilitate isolated tenant networks with overlay encapsulation for scalability across distributed hosts. A key advantage is customization, allowing dynamic rule updates, VLAN tagging, and integration with higher-layer services without hardware reconfiguration. Specific examples include the Windows Network Bridge feature, which combines multiple network adapters into a single logical interface for transparent forwarding, useful for sharing connections in small setups. In FreeBSD, the if_bridge driver creates software Ethernet bridges, supporting the Spanning Tree Protocol and packet filtering to interconnect networks efficiently.

Advanced Protocols

Spanning Tree Protocol

The Spanning Tree Protocol (STP), standardized as IEEE 802.1D in 1990, is a foundational link-layer protocol designed to prevent loops in bridged Ethernet networks by constructing a loop-free logical topology. STP operates by exchanging Bridge Protocol Data Units (BPDUs), special multicast frames sent between bridges to discover the network topology, elect a root bridge, and determine the active paths. These BPDUs contain information such as bridge identifiers, path costs, and timer values, enabling bridges to collectively compute a spanning tree that activates only a subset of links while blocking redundant ones to eliminate cycles.

The STP algorithm proceeds in distinct steps to build and maintain the spanning tree. First, bridges elect a root bridge using the lowest Bridge ID, which combines a configurable priority (default 32768) and the bridge's MAC address as a tiebreaker. Each non-root bridge then selects its root port as the one with the lowest cumulative path cost to the root, where path cost is calculated based on link bandwidth—for example, a 100 Mbps link has a cost of 19. Designated ports are chosen for each LAN segment (lowest cost to root from the sending bridge), and remaining ports transition to a blocking state. Port states evolve through blocking (no traffic, but BPDUs received), listening (BPDU processing, no learning or forwarding), learning (MAC address learning, no forwarding), and forwarding (full operation) to ensure stable topology changes without temporary loops.

STP relies on three key timers to manage topology updates and stability: the Hello timer (default 2 seconds), which sets the BPDU transmission interval; the Max Age timer (20 seconds), which defines how long a bridge stores a BPDU before aging it out; and the Forward Delay timer (15 seconds), applied during listening and learning phases. These timers contribute to convergence time, calculated approximately as Max Age + 2 × Forward Delay + Hello, yielding about 52 seconds under defaults for a full topology recalculation after a failure.

To address STP's slow convergence (often 30-50 seconds or more), the Rapid Spanning Tree Protocol (RSTP) was introduced in IEEE 802.1w in 2001, reducing times to seconds or even hundreds of milliseconds through explicit handshaking in BPDUs and role-based port transitions (e.g., alternate ports for quick failover). RSTP maintains backward compatibility with STP while allowing immediate forwarding on point-to-point links and faster aging of stale information. Despite its reliability, STP has limitations, including support for only a single spanning-tree instance per network in basic implementations, which can lead to suboptimal load balancing across VLANs. Additionally, it is vulnerable to attacks such as BPDU storms, where malicious or misconfigured devices flood BPDUs, potentially causing topology instability or broadcast storms if loops form before blocking; features like BPDU Guard mitigate this by disabling ports upon unexpected BPDU receipt.
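The election and timer arithmetic above can be illustrated with a short sketch. It assumes example Bridge IDs invented for the purpose (priority plus MAC string) and simply applies the lowest-ID rule and the default timer values quoted in the text; it is not an implementation of BPDU exchange.

# Root-bridge election by lowest Bridge ID: priority first, MAC address as tiebreaker.
# The bridge entries are invented example values.
bridges = [
    {"priority": 32768, "mac": "00:1b:2c:3d:4e:5f"},
    {"priority": 32768, "mac": "00:0a:1b:2c:3d:4e"},   # lower MAC would win a priority tie
    {"priority": 4096,  "mac": "00:ff:ee:dd:cc:bb"},   # lower priority wins outright
]

root = min(bridges, key=lambda b: (b["priority"], b["mac"]))
print(root)  # {'priority': 4096, 'mac': '00:ff:ee:dd:cc:bb'}

# Worst-case reconvergence with the default timers quoted above:
max_age, forward_delay, hello = 20, 15, 2
print(max_age + 2 * forward_delay + hello)  # 52 seconds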

Shortest Path Bridging

Shortest Path Bridging (SPB) is defined in the IEEE 802.1aq standard, ratified in 2012, which amends the IEEE 802.1Q virtual bridged LAN standard to enable shortest path forwarding within bridged domains. This protocol introduces a link-state approach to Ethernet bridging, allowing bridges to compute and utilize optimal paths for unicast and multicast traffic across mesh topologies.

The core mechanism of SPB relies on the Intermediate System to Intermediate System (IS-IS) protocol, extended per RFC 6329, to advertise topology information among bridges. Each bridge maintains a synchronized link-state database and uses shortest-path algorithms to calculate forwarding tables, tagging frames with an equal-cost tree (ECT) identifier that selects a specific equal-cost tree for multipath load balancing. This enables traffic distribution across multiple paths without loops, supporting up to 16 distinct ECT algorithms per instance for fine-grained control.

Compared to the Spanning Tree Protocol, SPB offers faster convergence times under 1 second, often in the range of hundreds of milliseconds, due to its proactive link-state updates rather than STP's reactive flooding. It supports multiple equal-cost paths for load balancing, avoiding STP's single spanning tree that blocks redundant links and leads to suboptimal routing, thereby improving scalability in large environments like data centers.

SPB has been implemented in enterprise switches from vendors such as Extreme Networks (formerly Avaya), where it forms the basis of solutions like Fabric Connect for automated provisioning. It integrates conceptually with related standards like TRILL (Transparent Interconnection of Lots of Links), both leveraging IS-IS for shortest-path Ethernet but differing in encapsulation—SPB uses MAC-in-MAC or VLAN-based tagging. In practice, SPB is applied in provider backbone networks for carrier-grade Ethernet services and in campus LANs to enhance resilience and throughput beyond STP's limitations.
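To make the link-state computation concrete, the sketch below runs a generic Dijkstra shortest-path calculation over a small invented topology, as each SPB bridge does from its synchronized link-state database. It does not model 802.1aq's ECT tie-breaking rules or IS-IS itself; the node names and link costs are assumptions for the example.

import heapq

# Generic Dijkstra over an invented link-state database (bidirectional links, equal costs).
topology = {
    "A": {"B": 10, "C": 10},
    "B": {"A": 10, "D": 10},
    "C": {"A": 10, "D": 10},
    "D": {"B": 10, "C": 10},
}

def shortest_paths(source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for neighbour, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

print(shortest_paths("A"))  # {'A': 0, 'B': 10, 'C': 10, 'D': 20}

In this topology, A reaches D at equal cost via B or via C; SPB's ECT mechanism is what lets bridges spread traffic across such equal-cost alternatives instead of blocking one of them.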
