Control plane
In network routing, the control plane is the part of the router architecture that is concerned with establishing the network topology, or the information in a routing table that defines what to do with incoming packets. Control plane functions, such as participating in routing protocols, run in the architectural control element.[1] In most cases, the routing table contains a list of destination addresses and the outgoing interface or interfaces associated with each. Control plane logic can also mark certain packets for discarding, as well as give preferential treatment to packets for which a high quality of service is defined by mechanisms such as differentiated services.
Depending on the specific router implementation, there may be a separate forwarding information base that is populated by the control plane, but used by the high-speed forwarding plane to look up packets and decide how to handle them.
In computing, the control plane is the part of the software that configures and shuts down the data plane.[2] By contrast, the data plane is the part of the software that processes the data requests.[3] The data plane is also sometimes referred to as the forwarding plane.
The distinction has proven useful in the networking field where it originated, as it separates the concerns: the data plane is optimized for speed of processing, and for simplicity and regularity. The control plane is optimized for customizability, handling policies, handling exceptional situations, and in general facilitating and simplifying the data plane processing.[4][5]
The conceptual separation of the data plane from the control plane has been done for years.[6] An early example is Unix, where the basic file operations are open, close for the control plane and read, write for the data plane.[7]
Building the unicast routing table
A major function of the control plane is deciding which routes go into the main routing table. "Main" refers to the table that holds the active unicast routes. Multicast routing may require an additional routing table for multicast routes. Several routing protocols, e.g. IS-IS, OSPF and BGP, maintain internal databases of candidate routes, which are promoted when a route fails or when a routing policy is changed.
Several different information sources may provide information about a route to a given destination, but the router must select the "best" route to install into the routing table. In some cases, there may be multiple routes of equal "quality", and the router may install all of them and load-share across them.
Sources of routing information
There are three general sources of routing information:
- Information on the status of directly connected hardware and software-defined interfaces
- Manually configured static routes
- Information from (dynamic) routing protocols
Local interface information
Routers forward traffic that enters on an input interface and leaves on an output interface, subject to filtering and other local rules. While routers usually forward from one physical (e.g., Ethernet, serial) to another physical interface, it is also possible to define multiple logical interfaces on a physical interface. A physical Ethernet interface, for example, can have logical interfaces in several virtual LANs defined by IEEE 802.1Q VLAN headers.
When an interface has an address configured in a subnet, such as 192.0.2.1 in the 192.0.2.0/24 (i.e., subnet mask 255.255.255.0) subnet, and that interface is considered "up" by the router, the router thus has a directly connected route to 192.0.2.0/24. If a routing protocol offered another router's route to that same subnet, the routing table installation software will normally ignore the dynamic route and prefer the directly connected route.
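The derivation of a directly connected route from a configured interface address can be illustrated with Python's standard `ipaddress` module; this is a minimal sketch, not how any particular router implements it:

```python
import ipaddress

def connected_route(interface_address: str) -> ipaddress.IPv4Network:
    """Given an interface address in CIDR form, return the directly
    connected subnet that the router installs as a route."""
    iface = ipaddress.ip_interface(interface_address)
    return iface.network

# An interface configured as 192.0.2.1/24 yields a connected route
# to the whole 192.0.2.0/24 subnet (mask 255.255.255.0).
print(connected_route("192.0.2.1/24"))  # 192.0.2.0/24
```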
There also may be software-only interfaces on the router, which it treats as if they were locally connected. For example, most implementations have a "null" software-defined interface. Packets having this interface as a next hop will be discarded, which can be a very efficient way to filter traffic. Routers usually can route traffic faster than they can examine it and compare it to filters, so, if the criterion for discarding is the packet's destination address, "blackholing" the traffic will be more efficient than explicit filters.
Other software-defined interfaces treated as directly connected, as long as they are active, include interfaces associated with tunneling protocols such as Generic Routing Encapsulation (GRE) or Multiprotocol Label Switching (MPLS). Loopback interfaces are virtual interfaces that are likewise considered directly connected.
Static routes
Router configuration rules may contain static routes. A static route minimally has a destination address, a prefix length or subnet mask, and a definition of where to send packets for the route. That definition can refer to a local interface on the router, or to a next-hop address that could be on the far end of a subnet to which the router is connected. Before the router can determine whether the static route is usable, it must do a recursive lookup of the next-hop address in the local routing table. If the next-hop address is reachable, the static route is usable; if the next hop is unreachable, the route is ignored.
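The recursive next-hop lookup can be sketched as follows. This is a minimal model with a hypothetical routing table; real implementations also honor preferences, metrics, and multiple matches:

```python
import ipaddress

# Hypothetical routing table: prefix -> (next_hop, interface);
# a None next_hop marks a directly connected route.
TABLE = {
    ipaddress.ip_network("192.0.2.0/24"): (None, "eth0"),
    ipaddress.ip_network("203.0.113.0/24"): ("192.0.2.254", None),
}

def resolve_next_hop(next_hop: str, max_depth: int = 8):
    """Recursively resolve a static route's next hop until a directly
    connected route (and its outgoing interface) is found; return None
    if the next hop is unreachable, in which case the route is ignored."""
    addr = ipaddress.ip_address(next_hop)
    for _ in range(max_depth):
        covering = [p for p in TABLE if addr in p]
        if not covering:
            return None                       # unreachable next hop
        best = max(covering, key=lambda p: p.prefixlen)  # longest match
        hop, interface = TABLE[best]
        if hop is None:
            return interface                  # directly connected
        addr = ipaddress.ip_address(hop)      # recurse on the next hop
    return None
```

For example, resolving a next hop of 203.0.113.7 first matches 203.0.113.0/24, whose own next hop 192.0.2.254 then resolves to the directly connected interface `eth0`.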
Static routes also may have preference factors used to select the best static route to the same destination. One application is called a floating static route, where the static route is less preferred than a route from any routing protocol. The static route, which might use a dialup link or other slow medium, activates only when the dynamic routing protocol(s) cannot provide a route to the destination.
Static routes that are more preferred than any dynamic route also can be very useful, especially when using traffic engineering principles to make certain traffic go over a specific path with an engineered quality of service.
Dynamic routing protocols
See routing protocols. The routing table manager, according to implementation and configuration rules, may select a particular route or routes from those advertised by various routing protocols.
Installing unicast routes
Different implementations have different sets of preferences for routing information, and these are not standardized among IP routers. It is fair to say that subnets on directly connected active interfaces are always preferred. Beyond that, however, there will be differences.
Implementers generally have a numerical preference, which Cisco calls an "administrative distance", for route selection. The lower the preference, the more desirable the route. Cisco's IOS[8] implementation makes exterior BGP the most preferred source of dynamic routing information, while Nortel RS[9] makes intra-area OSPF most preferred.
The general order of selecting routes to install is:
- If the route is not in the routing table, install it.
- If the route is "more specific" than an existing route, install it in addition to the existing routes. "More specific" means that it has a longer prefix. A /28 route, with a subnet mask of 255.255.255.240, is more specific than a /24 route, with a subnet mask of 255.255.255.0.
- If the route is of equal specificity to a route already in the routing table, but comes from a more preferred source of routing information, replace the route in the table.
- If the route is of equal specificity to a route in the routing table, yet comes from a source of the same preference:
- Discard it if the route has a higher metric than the existing route
- Replace the existing route if the new route has a lower metric
- If the routes are of equal metric and the router supports load-sharing, add the new route and designate it as part of a load-sharing group. Typically, implementations will support a maximum number of routes that load-share to the same destination. If that maximum is already in the table, the new route is usually dropped.
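The selection steps above can be sketched as a simplified model. All names are illustrative: `preference` plays the role of Cisco's administrative distance, and the ECMP cap of 8 is a typical value, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix_len: int   # specificity: longer is more specific
    preference: int   # administrative distance: lower is better
    metric: int       # protocol metric: lower is better
    next_hop: str

MAX_ECMP = 8  # typical cap on load-shared routes to one destination

def install(table: list, new: Route) -> list:
    """Apply the selection rules to one candidate route and return
    the updated list of installed routes for a destination."""
    same = [r for r in table if r.prefix_len == new.prefix_len]
    if not same:
        return table + [new]                 # new or more-specific route
    best = same[0]
    if new.preference < best.preference:     # more preferred source
        return [r for r in table if r not in same] + [new]
    if new.preference > best.preference:
        return table                         # less preferred: ignore
    if new.metric < best.metric:             # same source, lower metric
        return [r for r in table if r not in same] + [new]
    if new.metric > best.metric:
        return table                         # higher metric: discard
    if len(same) < MAX_ECMP:                 # equal metric: load-share
        return table + [new]
    return table                             # ECMP group full: drop
```

For instance, installing a route with metric 5 replaces an equal-preference route with metric 10, while a second route with metric 5 joins it as a load-sharing group.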
Routing table vs. forwarding information base
See forwarding plane for more detail; each implementation has its own means of updating the forwarding information base (FIB) with new routes installed in the routing table, also called the routing information base (RIB). If the FIB is in one-to-one correspondence with the RIB, the new route is installed in the FIB after it is in the RIB. If the FIB is smaller than the RIB, and the FIB uses a hash table or another data structure that does not update easily, the existing FIB might be invalidated and replaced with a new one computed from the updated RIB.
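The wholesale-replacement case can be sketched as follows. This is a simplified model with illustrative field names; it only shows that the rebuilt FIB keeps the forwarding essentials and drops control-plane attributes:

```python
def rebuild_fib(rib: dict) -> dict:
    """Recompute a hash-table FIB from the RIB, keeping only the
    forwarding essentials (next hop and outgoing interface)."""
    fib = {}
    for prefix, route in rib.items():
        # Control-plane attributes (preference, metric, source) are
        # dropped; the FIB stores only what a lookup needs.
        fib[prefix] = (route["next_hop"], route["interface"])
    return fib

rib = {
    "192.0.2.0/24": {"next_hop": None, "interface": "eth0",
                     "preference": 0, "metric": 0, "source": "connected"},
    "203.0.113.0/24": {"next_hop": "192.0.2.254", "interface": "eth0",
                       "preference": 110, "metric": 20, "source": "ospf"},
}
fib = rebuild_fib(rib)
# The old FIB would then be swapped out atomically for this new one.
```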
Multicast routing tables
Multicast routing builds on unicast routing. Each multicast group to which the local router can route has a multicast routing table entry with a next hop for the group, rather than for a specific destination as in unicast routing.
There can be multicast static routes as well as dynamic multicast routes learned from a protocol such as Protocol Independent Multicast (PIM).
References
- ^ Forwarding and Control Element Separation (ForCES) Framework, RFC 3746, Network Working Group, April 2004
- ^ Do, Truong-Xuan; Kim, Younghan (2017-06-01). "Control and data plane separation architecture for supporting multicast listeners over distributed mobility management". ICT Express. Special Issue on Patents, Standardization and Open Problems in ICT Practices. 3 (2): 90–95. doi:10.1016/j.icte.2017.06.001. ISSN 2405-9595.
- ^ Conran, Matt (2019-02-25). "Named data networking: Stateful forwarding plane for datagram delivery". Network World. Retrieved 2019-10-14.
- ^ Xia, Wenfeng; Wen, Yonggang; Heng Foh, Chuan; Niyato, Dusit; Xie, Haiyong (2015). "A Survey on Software-Defined Networking". IEEE Communications Surveys & Tutorials. 17 (1): 27–46. doi:10.1109/COMST.2014.2330903. S2CID 4269723.
- ^ Ahmad, Ijaz; Namal, Suneth; Ylianttila, Mika; Gurtov, Andrei (2015). "Security in Software-Defined Networks: A Survey" (PDF). IEEE Communications Surveys & Tutorials. 17 (4): 2317–2342. doi:10.1109/COMST.2015.2474118. S2CID 2138863.
- ^ Do, Truong-Xuan; Kim, Younghan (2017-06-01). "Control and data plane separation architecture for supporting multicast listeners over distributed mobility management". ICT Express. Special Issue on Patents, Standardization and Open Problems in ICT Practices. 3 (2): 90–95. doi:10.1016/j.icte.2017.06.001. ISSN 2405-9595.
- ^ Bach, Maurice J. (1986). The Design of the Unix Operating System. Prentice-Hall. Bibcode:1986duos.book.....B.
- ^ Configuring IP Routing Protocol-Independent Features, Cisco Systems, July 2006
- ^ Nortel Ethernet Routing Switch 8600 Configuring IP Routing Operations, Nortel Networks, January 2007
Core Concepts
Definition and Functions
The control plane refers to the collection of processes within a network device, such as a router, that make decisions on how data packets should be routed and processed across the network. These processes operate at a higher level to manage overall network behavior, including the determination of packet forwarding paths based on network topology and policies. Unlike the data plane, which executes the actual packet forwarding, the control plane provides the intelligence that guides these operations by maintaining state information and updating forwarding rules.[1][8]

Key functions of the control plane encompass topology discovery, where it identifies network structure through exchange of information between devices; policy enforcement, such as applying quality of service (QoS) rules to prioritize traffic; and resource allocation to optimize bandwidth and device capabilities. It populates routing tables with entries derived from learned network paths and handles protocol signaling, for example by sending periodic hello messages in protocols like OSPF to detect neighbors and maintain adjacency. Additionally, the control plane manages error handling for issues like protocol mismatches or unreachable destinations.[1][9]

Historically, early implementations of control plane functions appeared in Unix systems during the late 1970s and early 1980s, where routing decisions were managed by software daemons like the routed process introduced in 4.2BSD, which used variants of routing information protocols to update kernel routing tables dynamically. By the 1980s, as networks scaled with the growth of the ARPANET and early Internet, these functions evolved into dedicated processes on specialized router hardware, separating decision-making logic from basic packet handling to improve efficiency and reliability. This foundational separation laid the groundwork for modern network architectures, where control plane processes influence packet paths without directly participating in high-speed forwarding.[10][11]

Control Plane vs. Data Plane
In networking architecture, the control plane and data plane represent a fundamental separation of responsibilities designed to enhance efficiency and performance. The control plane manages deliberative, slow-path processes, such as computing routes, maintaining network topology, and configuring policies using protocols like BGP and OSPF.[1] In contrast, the data plane executes high-speed, fast-path forwarding operations, including packet lookup, encapsulation, and transmission based on pre-established rules.[12] This architectural division allows the control plane to focus on complex decision-making without impeding the data plane's real-time handling of traffic volumes that can reach terabits per second in modern routers.[13]

The separation yields significant benefits, including improved scalability by enabling independent optimization of each plane: control logic can evolve without altering forwarding hardware, while data plane components leverage specialized ASICs for low-latency processing.[12] Security is bolstered through isolation, as the control plane can be shielded from direct exposure to data traffic, mitigating risks like DDoS attacks that target routing protocols.[14] Additionally, this design supports seamless upgradability; control plane software updates or failures can occur without disrupting ongoing data flows, ensuring high availability in carrier-grade networks.[15]

Interaction between the planes typically involves the control plane programming the data plane via standardized APIs or table installations, where changes like route computations trigger updates to forwarding rules.[1] For instance, in software-defined networking (SDN) environments, a centralized controller pushes match-action policies to distributed switches, allowing dynamic reconfiguration.[15] This model decouples control logic from hardware, facilitating automated orchestration.
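The SDN-style match-action programming described above can be sketched with a toy controller-and-switch model. All names and the rule format are illustrative, not an actual OpenFlow API:

```python
class Switch:
    """Toy data-plane element whose flow table is programmed
    externally by a control-plane entity (the controller)."""

    def __init__(self, name: str):
        self.name = name
        self.flow_table = []  # list of (match, action, priority)

    def install_rule(self, match: dict, action: str, priority: int = 0):
        """Table update pushed down by the controller."""
        self.flow_table.append((match, action, priority))
        self.flow_table.sort(key=lambda r: -r[2])  # highest priority first

    def forward(self, packet: dict) -> str:
        """Match-action lookup: first rule whose fields all match wins."""
        for match, action, _ in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"  # table miss: punt to control plane

# The controller decides policy and pushes it into the switch.
sw = Switch("edge-1")
sw.install_rule({"dst": "203.0.113.9"}, "out:port2", priority=10)
sw.install_rule({"proto": "icmp"}, "drop", priority=20)

print(sw.forward({"dst": "203.0.113.9", "proto": "tcp"}))   # out:port2
print(sw.forward({"dst": "198.51.100.1", "proto": "icmp"}))  # drop
```

The table-miss action, punting unmatched packets to the controller, mirrors how real SDN switches request instructions for unknown flows.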
Historically, early routers integrated control and data functions on shared processors, limiting scalability as traffic grew.[15] The evolution toward logical separation accelerated with the advent of SDN in the early 2010s, where protocols like OpenFlow enabled centralized control over commodity hardware, and modern ASICs in high-end routers further reinforced this divide for programmable, resilient networks.[1]

Unicast Routing Operations
Sources of Routing Information
In unicast IP routing, the control plane populates the routing table with information derived from multiple sources, each contributing candidate routes that are evaluated based on trustworthiness and specificity. These sources include directly connected networks, manually configured static routes, and routes learned through dynamic protocols, ensuring comprehensive coverage of reachable destinations while allowing for prioritized selection.

Local interface information provides the highest-priority routes, representing networks directly attached to the router's interfaces. When an interface is configured with an IP address, such as assigning 192.0.2.1/24 to an Ethernet port, the router automatically installs a connected route for the corresponding subnet (e.g., 192.0.2.0/24) in the routing table, with an administrative distance of 0. These routes are considered the most reliable because they reflect physical layer connectivity and require no intermediary hops.[16]

Static routes offer manually defined paths to specific destinations, configured by network administrators to override or supplement dynamic learning. Each static route specifies a destination prefix and a next-hop IP address or outgoing interface, and carries a default administrative distance of 1 on Cisco devices, making it preferable to most dynamic routes unless explicitly adjusted. For instance, a static route might direct traffic for 203.0.113.0/24 via next-hop 192.0.2.254, providing control in scenarios like default gateways or backup paths.[16]

Dynamic routing protocols enable automated discovery and exchange of routing information between routers, adapting to network changes without manual intervention. These protocols fall into categories such as distance-vector (e.g., RIP, defined in RFC 2453, which uses hop count as a metric), link-state (e.g., OSPF, per RFC 2328, which computes shortest paths based on link costs derived from interface bandwidth), and path-vector (e.g., BGP, per RFC 4271, which selects paths using attributes like AS-path length for inter-domain routing). Routes from these protocols arrive with associated metrics and administrative distances, such as 120 for RIP and 110 for OSPF, allowing the router to compare and select optimal paths within the same protocol domain.[16]

To resolve conflicts among candidate routes, routers apply two key selection criteria: administrative distance for source trustworthiness and longest prefix match for specificity. Among routes to the same prefix, the lower administrative distance wins (e.g., a connected route at 0 overrides a static route at 1, which in turn overrides OSPF at 110); if distances are equal, the protocol's metric (e.g., OSPF's cumulative cost) breaks the tie. At forwarding time, the longest prefix match then selects the most specific installed entry, as mandated by IP forwarding standards, ensuring traffic for 192.0.2.64/26 uses a /26 route rather than a broader /24 covering the same range. This hierarchical process maintains routing accuracy and efficiency.[16][17]

Building the Unicast Routing Table
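The longest-prefix-match rule referenced above can be illustrated with Python's `ipaddress` module; the routes and next-hop labels here are illustrative:

```python
import ipaddress

# Installed routes for overlapping prefixes.
routes = {
    ipaddress.ip_network("192.0.2.0/24"): "next-hop-A",
    ipaddress.ip_network("192.0.2.64/26"): "next-hop-B",
}

def lookup(destination: str):
    """Forward by longest prefix match: among all prefixes that
    cover the destination, the most specific one wins."""
    addr = ipaddress.ip_address(destination)
    covering = [p for p in routes if addr in p]
    if not covering:
        return None
    return routes[max(covering, key=lambda p: p.prefixlen)]

# 192.0.2.70 falls inside both prefixes, so the /26 wins;
# 192.0.2.10 is covered only by the /24.
print(lookup("192.0.2.70"))  # next-hop-B
print(lookup("192.0.2.10"))  # next-hop-A
```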
The unicast routing table is constructed by aggregating routing information from multiple sources, including directly connected interfaces, statically configured routes, and dynamically learned routes from protocols such as OSPF and BGP.[18] This process involves selecting the best route for each destination prefix based on administrative preference, which prioritizes routes from more reliable sources (e.g., connected interfaces over dynamic protocols), followed by the lowest metric within the same preference level.[19] For instance, in OSPF, routes are preferred based on the lowest cumulative link cost, where cost is inversely proportional to interface bandwidth and configurable per link.[20]

The resulting table consists of entries for each destination, typically including the network prefix (with subnet mask or length), next-hop address, associated metric or cost, and the originating protocol or source.[21] Entries support route summarization to reduce table size, such as aggregating multiple /24 subnets into a single /16 prefix when contiguous and policy allows, enabling efficient CIDR-based aggregation without loss of specificity for longest-match forwarding.[21]

Conflicts between candidate routes are resolved through a hierarchical selection process: among routes to the same prefix, first by administrative preference (e.g., static routes often assigned lower values than dynamic ones) and then by metric comparison, while overlapping routes of different prefix lengths are both installed and disambiguated at forwarding time by longest prefix match.[22] In BGP, for example, when higher-priority attributes are equal, the route with the shortest AS-path length is selected as a tie-breaker to favor more direct inter-domain paths.[23]

For equal-cost paths, equal-cost multipath (ECMP) allows load-sharing across multiple next hops, distributing traffic to improve utilization, with implementations commonly supporting up to 8 such paths.[24] Table updates occur either periodically, as in RIP's scheduled advertisements every 30 seconds, or event-driven, such as recomputation following a link failure detected by Bidirectional Forwarding Detection (BFD), which provides sub-second fault detection to trigger rapid route recalculation.[25][26]

Installing Unicast Routes
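The per-flow hashing commonly used for ECMP load-sharing, mentioned above, can be sketched as follows. The use of SHA-256 here is purely illustrative; real routers use fast hardware hash functions:

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
    """Pick one of several equal-cost next hops by hashing the flow's
    5-tuple; every packet of the same flow maps to the same path,
    which preserves in-order delivery within a flow."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

paths = ["ge-0/0/1", "ge-0/0/2", "ge-0/0/3"]
# The same flow always hashes to the same outgoing interface.
a = ecmp_next_hop("10.0.0.1", "203.0.113.9", 6, 49152, 443, paths)
b = ecmp_next_hop("10.0.0.1", "203.0.113.9", 6, 49152, 443, paths)
assert a == b
```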
The installation process for unicast routes involves selecting the optimal path from the routing information base (RIB) based on criteria such as administrative distance and longest prefix match (LPM), where the route with the most specific prefix length is chosen to ensure precise forwarding decisions.[27] Once selected, the route is translated and installed into the forwarding information base (FIB) or equivalent hardware structures like ternary content-addressable memory (TCAM) for high-speed lookups in the data plane.[28] This installation often requires recursion to resolve indirect next hops; for instance, if a route specifies a next-hop IP address that is not directly connected, the control plane performs a recursive lookup in the RIB to find the outbound interface and updated next hop, repeating as needed until a directly connected route is reached.[29]

Optimization techniques during installation aim to streamline the FIB for efficiency and reduced resource consumption. Redundant entries are pruned through route aggregation, where multiple more-specific routes are consolidated into a single summary route, suppressing detailed paths that are covered by the aggregate to minimize table size while maintaining reachability.[30] Floating static routes serve as backups by configuring them with a higher administrative distance than primary dynamic routes, ensuring they are only installed and used if the preferred route becomes invalid, such as during link failures.[31]

Error handling ensures routing stability by promptly invalidating affected routes upon detecting failures. For example, when an interface goes down, all static and dynamic routes dependent on that interface are removed from the RIB and FIB to prevent blackholing of traffic.[32] In dynamic protocols like OSPF, graceful restart mitigates disruptions during control plane restarts by allowing the router to inform neighbors via grace LSAs, enabling them to retain forwarding entries for a configurable period (up to 1800 seconds) without purging routes, thus preserving data plane continuity until the restarting router reconverges.[33]

Vendor implementations often incorporate policy mechanisms for customized installation. In Cisco devices, route maps enable policy-based routing (PBR), allowing administrators to match traffic criteria (e.g., source IP or protocol) and set specific next hops or interfaces, overriding standard RIB selections for tailored forwarding behavior.[34]

Data Structures and Interaction
Routing Table vs. Forwarding Information Base
The routing table, formally known as the Routing Information Base (RIB), serves as a comprehensive logical data structure in the control plane of network routers. It aggregates and stores all routing information obtained from routing protocols, static configurations, and connected interfaces, including multiple paths to destinations with detailed attributes such as metrics, administrative distances, policy-based tags, and preference indicators. Accessed primarily by the router's CPU for route selection and policy enforcement, the RIB enables flexible computation without hardware constraints, allowing it to accommodate large volumes of routes limited mainly by available software memory and processing resources.[35][36][37]

In contrast, the Forwarding Information Base (FIB) is a streamlined, data-plane-oriented structure optimized for rapid packet forwarding at line rates. Derived from the RIB, it includes only the best active routes, typically one primary path per destination prefix, along with essential forwarding details like next-hop IP addresses, outgoing interfaces, and encapsulation information, excluding extraneous attributes to minimize lookup overhead. Implemented in specialized hardware such as Ternary Content-Addressable Memory (TCAM) or algorithmic hash tables, the FIB supports parallel, high-speed prefix matching to forward packets without CPU intervention, ensuring low-latency performance in high-throughput environments.[35][36][38]

The primary distinction between the RIB and FIB lies in their scope, accessibility, and optimization goals: the RIB prioritizes completeness and policy richness for control-plane decision-making, while the FIB emphasizes compactness and speed for data-plane operations, often resulting in a significantly smaller dataset focused solely on forwarding actions. This separation allows the control plane to handle complex route computations independently of the data plane's real-time requirements, with the FIB acting as a distilled, installable subset of RIB entries selected through best-path algorithms. As detailed in route installation processes, only FIB-eligible routes with resolved next hops are programmed into the forwarding hardware.[35][36][37]

Synchronization from the RIB to the FIB is orchestrated by the control plane's RIB manager, which pushes route updates to the data plane either incrementally, to apply changes efficiently without disrupting ongoing forwarding, or via full table dumps during system initialization, failover, or bulk reprogramming. This process ensures consistency, with mechanisms like bulk content downloaders facilitating scalable distribution across line cards in modular routers; any temporary discrepancies, such as those from route flapping, are mitigated through route dampening policies that penalize unstable paths in the RIB before propagation to the FIB, promoting network stability.[36][39]

Performance implications arise from these architectural differences, particularly in scale: while the software-based RIB can theoretically support millions of routes constrained by CPU and memory, the hardware-bound FIB faces strict limits imposed by TCAM capacity or algorithmic efficiency, with modern routers typically accommodating 1 to 2 million IPv4 entries depending on the platform. For instance, Cisco Nexus 7000 series XL modules support up to 900,000 IPv4 unicast FIB entries via 900K TCAM, beyond which overflow may require aggregation techniques or route filtering to prevent forwarding failures. These constraints underscore the need for careful route management to balance control-plane flexibility with data-plane throughput.[36][37][40]

Multicast Routing
Multicast Routing Tables
Multicast routing tables, often referred to as the Tree Information Base (TIB) in protocols like PIM-SM, maintain forwarding state for multicast groups to enable efficient one-to-many or many-to-many data distribution in the control plane.[41] These tables consist of entries keyed by source and group identifiers, such as (S,G) for source-specific forwarding trees, where S is the source IP address and G is the multicast group address, or (*,G) for shared trees that aggregate traffic from multiple sources to group G via a rendezvous point (RP).[41] Each entry includes an incoming interface determined by reverse path forwarding (RPF) and an outgoing interface list (OIF), which specifies the interfaces over which multicast packets are replicated and forwarded to downstream receivers.[41] The OIF is dynamically computed using macros likeimmediate_olist(S,G), which includes interfaces with active Join state minus those lost to asserts, ensuring precise control over traffic distribution.[41]
The building process for multicast routing tables relies on RPF checks to establish loop-free paths and dynamic membership signaling to populate the OIF.[41] An RPF check verifies that an incoming packet from source S arrives on the interface indicated by the unicast routing table as the path to S; if not, the packet is discarded to prevent loops, with the RPF neighbor computed as the next hop toward S in the multicast routing information base (MRIB).[41] For dynamic membership, pruning removes interfaces from the OIF when no downstream interest exists, triggered by Prune messages and maintained via Prune-Pending states with override timers (default 3 seconds) to allow grafting.[41] Grafting, conversely, adds interfaces to the OIF through Join messages when receiver interest reemerges, propagating upstream to restore traffic flow along the tree.[41] State machines, including downstream (e.g., Join, Prune-Pending) and upstream (e.g., Joined, NotJoined), manage these transitions, with timers like the Join Timer (default 60 seconds) ensuring periodic refreshes.[41]
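The RPF check itself reduces to comparing a packet's arrival interface with the unicast (or MRIB) route back toward its source. A minimal sketch with a hypothetical interface table:

```python
# Unicast/MRIB view: which interface leads back toward each source.
rpf_interface = {
    "10.1.1.1": "eth0",   # the route toward source 10.1.1.1 is via eth0
    "10.2.2.2": "eth1",
}

def rpf_check(source: str, arrival_interface: str) -> bool:
    """Accept a multicast packet only if it arrived on the interface
    the unicast table would use to reach its source; otherwise the
    packet is discarded to prevent forwarding loops."""
    return rpf_interface.get(source) == arrival_interface

assert rpf_check("10.1.1.1", "eth0")       # passes: expected interface
assert not rpf_check("10.1.1.1", "eth1")   # fails: wrong interface, drop
```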
Unlike unicast routing tables, which aggregate destination prefixes for point-to-point forwarding, multicast routing tables employ group-based addressing in the IPv4 range 224.0.0.0/4 (equivalent to 1110 in the high-order four bits) and require stateful maintenance for each active (S,G) or (*,G) entry to track per-group receiver memberships and tree branches.[42][41] This results in a more distributed and tree-oriented structure, where the control plane must handle replication states rather than simple longest-prefix matches, often referencing unicast tables only for RPF computations.[41]
Scalability challenges arise from potential state explosion in environments with numerous sources and large groups, as each active (S,G) entry consumes resources for OIF maintenance across the network.[43] In inter-domain scenarios, this is exacerbated by the need to discover remote sources without flooding every (S,G) state globally; the Multicast Source Discovery Protocol (MSDP) mitigates this by enabling rendezvous points to exchange source-active (SA) messages via peer-RPF flooding, limiting cached states through filters and SA limits to prevent denial-of-service impacts.[43]
| Entry Type | Description | Key Components |
|---|---|---|
| (S,G) | Source-specific tree state for traffic from a single source S to group G. | Incoming interface via RPF to S; OIF with source-tree joins.[41] |
| (*,G) | Shared tree state aggregating multiple sources to group G via RP. | Incoming interface via RPF to RP; OIF with group joins.[41] |
| (S,G,rpt) | Prune state on the RP tree to suppress specific source traffic. | Derived from (*,G); OIF excludes pruned interfaces.[41] |