OpenFlow
OpenFlow is a communications protocol that gives access to the forwarding plane of a network switch or router over the network.[1]
Description
OpenFlow enables network controllers to determine the path of network packets across a network of switches. The controllers are distinct from the switches. This separation of the control from the forwarding allows for more sophisticated traffic management than is feasible using access control lists (ACLs) and routing protocols. Also, OpenFlow allows switches from different vendors — often each with their own proprietary interfaces and scripting languages — to be managed remotely using a single, open protocol. The protocol's inventors consider OpenFlow an enabler of software-defined networking (SDN).
OpenFlow allows remote administration of a layer 3 switch's packet forwarding tables, by adding, modifying and removing packet matching rules and actions. This way, routing decisions can be made periodically or ad hoc by the controller and translated into rules and actions with a configurable lifespan, which are then deployed to a switch's flow table, leaving the actual forwarding of matched packets to the switch at wire speed for the duration of those rules. Packets that the switch cannot match can be forwarded to the controller. The controller can then decide to modify existing flow table rules on one or more switches or to deploy new rules, to prevent a continuous stream of unmatched traffic between switch and controller. It could even decide to forward the traffic itself, provided that it has told the switch to forward entire packets instead of just their headers.
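This reactive pattern — table miss, packet-in to the controller, rule installation, then wire-speed forwarding — can be sketched in a few lines of Python. All names here are hypothetical stand-ins for the switch's flow table and the controller's packet-in handler, not a real OpenFlow API:

```python
# Minimal sketch of reactive flow installation (hypothetical names).
# The dict stands in for a switch's flow table of controller-installed rules.

flow_table = {}   # destination -> output port

def packet_in(dst: str) -> int:
    """Controller-side handler: pick a route for the new flow and push
    the matching rule down to the switch (a "flow mod")."""
    out_port = 2 if dst.startswith("10.") else 1   # placeholder policy
    flow_table[dst] = out_port
    return out_port

def forward(dst: str) -> int:
    """Switch-side lookup: forward on a table hit, otherwise hand the
    packet to the controller and use its decision."""
    if dst in flow_table:
        return flow_table[dst]        # fast path, no controller involved
    return packet_in(dst)             # table miss: packet-in message

first = forward("10.0.0.7")   # miss: decided by the controller
again = forward("10.0.0.7")   # hit: forwarded by the switch alone
print(first, again)           # 2 2
```

In a real deployment the installed rule would also carry a priority and idle/hard timeouts, so stale entries expire rather than linger.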
The OpenFlow protocol is layered on top of the Transmission Control Protocol (TCP) and prescribes the use of Transport Layer Security (TLS). Controllers should listen on TCP port 6653 for switches that want to set up a connection. Earlier versions of the OpenFlow protocol unofficially used port 6633.[2][3] Some network control plane implementations use the protocol to manage the network forwarding elements.[4] OpenFlow is mainly used between the switch and controller on a secure channel.[5]
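Every OpenFlow message exchanged over this TCP connection begins with a fixed 8-byte header — version, type, length, and transaction ID, in network byte order. A minimal sketch of building and parsing a Hello message (the first message sent on a new connection), using the wire version 0x04 for OpenFlow 1.3 and message type 0 for Hello as given in the specification:

```python
import struct

# The fixed OpenFlow header: version (u8), type (u8), length (u16), xid (u32),
# all big-endian ("network byte order").
OFP_HEADER = struct.Struct("!BBHI")

def make_hello(xid: int = 1) -> bytes:
    # A bare Hello is just the header; length covers the whole message.
    return OFP_HEADER.pack(0x04, 0, OFP_HEADER.size, xid)

def parse_header(data: bytes) -> dict:
    version, msg_type, length, xid = OFP_HEADER.unpack_from(data)
    return {"version": version, "type": msg_type,
            "length": length, "xid": xid}

msg = make_hello(xid=42)
print(parse_header(msg))
# {'version': 4, 'type': 0, 'length': 8, 'xid': 42}
```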
History
The Open Networking Foundation (ONF), a user-led organization dedicated to promotion and adoption of software-defined networking (SDN),[6] manages the OpenFlow standard.[7] ONF defines OpenFlow as the first standard communications interface defined between the control and forwarding layers of an SDN architecture. OpenFlow allows direct access to and manipulation of the forwarding plane of network devices such as switches and routers, both physical and virtual (hypervisor-based). It is the absence of an open interface to the forwarding plane that has led to the characterization of today's networking devices as monolithic, closed, and mainframe-like. A protocol like OpenFlow is needed to move network control out of proprietary network switches and into control software that's open source and locally managed.[8]
A number of network switch and router vendors have announced intent to support OpenFlow or are shipping switches that support it, including Alcatel-Lucent,[9] Big Switch Networks,[10] Brocade Communications,[11] and Radisys.[12]
Development
Version 1.1 of the OpenFlow protocol was released on 28 February 2011, and further development of the standard was managed by the ONF.[13] In December 2011, the ONF board approved OpenFlow version 1.2 and published it in February 2012.[14] The current version of OpenFlow is 1.5.1.[15] Version 1.6 has been available since September 2016, but only to ONF members.
In May 2011, Marvell and Larch Networks announced the availability of an OpenFlow-enabled, fully featured switching solution based on Marvell's networking control stack and the Prestera family of packet processors.[16][17]
In May 2011, Indiana University launched an SDN Interoperability Lab in conjunction with the ONF to test how well different vendors' software-defined networking and OpenFlow products work together.[18]
In June 2012, Infoblox released LINC, an open-source software switch compliant with OpenFlow versions 1.2 and 1.3.[19]
In February 2012, Big Switch Networks released Project Floodlight, an Apache-licensed open-source software OpenFlow Controller,[20] and announced its OpenFlow-based SDN Suite in November of that year, which contains a commercial controller, and virtual switching and tap monitoring applications.[21]
In February 2012, HP said it was supporting the standard on 16 of its Ethernet switch products.[22]
In April 2012, Google's Urs Hölzle described how the company's internal network had been completely re-designed over the previous two years to run under OpenFlow with substantial efficiency improvement.[23]
In January 2013, NEC unveiled a virtual switch for Microsoft's Windows Server 2012 Hyper-V hypervisor, which is designed to bring OpenFlow-based software-defined networking and network virtualisation to those Microsoft environments.[24]
Security concerns
- Covert communications[25]
- Denial of service[25]
- Man-in-the-middle attack
- Potential single point of attack and failure[26][27]
- Programming and communication channel security issues reported from OpenFlow deployment experience[28]
References
- ^ McKeown, Nick; et al. (April 2008). "OpenFlow: Enabling innovation in campus networks". ACM SIGCOMM Computer Communication Review. 38 (2): 69–74. doi:10.1145/1355734.1355746. S2CID 1153326. Retrieved 2 November 2009.
- ^ "OpenFlow Switch Errata v1.0.2-rc1" (PDF). Open Networking Foundation. 4 October 2013.
- ^ "Service Name and Transport Protocol Port Number Registry". IANA.
- ^ Koponen, Teemu; et al. (4 October 2010). "Onix: A Distributed Control Platform for Large-scale Production Networks". USENIX. Retrieved 1 October 2010.
- ^ McKeown, Nick; et al. (April 2008). "OpenFlow: Enabling innovation in campus networks". ACM SIGCOMM Computer Communication Review. 38 (2): 69–74. doi:10.1145/1355734.1355746. S2CID 1153326. Retrieved 2 November 2009.
- ^ Greene, Kate (March–April 2009). "TR10: Software-Defined Networking". MIT Technology Review. Retrieved 7 October 2011.
- ^ "Open Networking Foundation: SDN Defined". Open Networking Foundation. 23 March 2013.
- ^ "Software-Defined Networking (SDN): The New Norm for Networks". Open Networking Foundation. Archived from the original on 18 August 2014. Retrieved 22 May 2013.
- ^ Solomon, Howard (11 December 2013). "Alcatel Now Supports OpenFlow, OpenStack on Switches". IT World Canada.
- ^ Metz, Cade (26 March 2013). "You Can't Have Google's Pluto Switch, But You Can Have This". Wired.
- ^ Radda, Pavel (22 March 2011). "Brocade Leads OpenFlow Adoption to Accelerate Network Virtualization and Cloud Application Development". Reuters. Archived from the original on 4 November 2013. Retrieved 29 November 2011.
- ^ "FlowEngine:Intelligent Flow Management". Radisys. 20 February 2016. Archived from the original on 16 April 2016. Retrieved 11 February 2016.
- ^ "Open Networking Foundation Press Release". Open Networking Foundation. 20 March 2011. Archived from the original on 26 March 2011.
- ^ "OpenFlow v1.2" (PDF). Open Networking Foundation. Archived from the original (PDF) on 9 November 2016. Retrieved 13 June 2013.
- ^ "OpenFlow v1.5.1" (PDF). Open Networking Foundation.
- ^ "Marvell Introduces OpenFlow-enabled Switches". Marvell. 10 May 2011. Retrieved 28 June 2015.
- ^ "OpenFlow – Innovate in Your Network". Larch Networks. 6 May 2011. Archived from the original on 30 June 2015. Retrieved 28 June 2015.
- ^ "SDN Interoperability Lab - InCNTRE". IU.edu. 5 June 2012. Archived from the original on 5 June 2012.
- ^ "Project Floodlight". www.openflowhub.org.
- ^ Cole, Bernard (2 February 2012). "Big Switch releases open source controller for OpenFlow". EE Times. Retrieved 2 February 2012.
- ^ Kerner, Sean Michael (13 November 2012). "Big Switch Emerges with Commercial SDN Portfolio". Enterprise Networking Planet.
- ^ Neagle, Colin (2 February 2012). "HP takes giant first step into OpenFlow: HP is announcing its first effort to support OpenFlow standard on its Ethernet switches". Network World. Archived from the original on 13 May 2013. Retrieved 28 April 2013.
- ^ Levy, Steven (17 April 2012). "Going With the Flow: Google's Secret Switch to the Next Wave of Networking". Wired. Retrieved 17 April 2012.
- ^ Duffy, Jim (22 January 2013). "NEC rolls out OpenFlow for Microsoft Hyper-V: NEC virtual switch adds IPv6 support to SDN controller". Network World. Archived from the original on 3 April 2013. Retrieved 28 April 2013.
- ^ a b "OpenFlow protocol has a switch authentication vulnerability". The Register.
- ^ "OpenFlow Vulnerability Assessment" (PDF). Indiana.edu. Archived from the original (PDF) on 4 March 2016. Retrieved 23 June 2014.
- ^ "OpenFlow security: Does OpenFlow secure software-defined networks?". TechTarget.
- ^ Natarajan, Sriram; et al. (2013). "A Software defined Cloud-Gateway automation system using OpenFlow". 2013 IEEE 2nd International Conference on Cloud Networking (Cloud Net). pp. 219–226. doi:10.1109/CloudNet.2013.6710582. ISBN 978-1-4799-0568-3. S2CID 16248079.
OpenFlow

Introduction
Definition and Purpose
OpenFlow is a communications protocol that provides a standardized interface for external software controllers to directly access and manipulate the forwarding plane of network switches and routers across the network.[5] It operates by establishing a secure channel between the controller and the switch, allowing the controller to install, modify, or delete flow rules that dictate how packets are processed and forwarded.[6] The core purpose of OpenFlow is to decouple the network's control plane—responsible for routing decisions and network intelligence—from the data plane, which handles high-speed packet forwarding.[5] This separation enables centralized management of network behavior through software, promoting programmability and flexibility in handling traffic without relying on vendor-specific hardware configurations.[6] By shifting control logic to external applications, OpenFlow facilitates dynamic adaptation to changing network conditions and supports the broader paradigm of software-defined networking (SDN).

In operation, OpenFlow switches maintain one or more flow tables populated with rules from the controller; incoming packets are matched against these rules based on header fields, port, and other attributes, then subjected to specified actions such as forwarding, modifying, or dropping.[5] Unlike traditional switches with fixed forwarding logic embedded in hardware, OpenFlow devices forward packets solely according to these programmable flow rules, with unmatched packets typically forwarded to the controller for further decision-making.[6] The protocol was initially motivated by the need to overcome limitations in proprietary network hardware, which hindered researchers from experimenting with novel protocols on production networks carrying real traffic.[6] OpenFlow addresses this by providing a uniform, open interface that allows innovation in network architectures, such as testing alternative routing schemes or security mechanisms, without requiring custom-built equipment or disrupting existing infrastructure.[6]

Role in Software-Defined Networking
Software-Defined Networking (SDN) represents an architectural approach to networking that decouples the control plane from the data plane, allowing software-based controllers to direct traffic across network devices in a programmable and centralized manner.[7] This separation enables network administrators to manage and optimize resources dynamically, abstracting the underlying hardware for applications and services.[8] Within this framework, OpenFlow serves as the primary southbound interface, providing a standardized protocol for SDN controllers to interact with and configure forwarding devices such as switches and routers.[7][8]

OpenFlow enables SDN by acting as the core communication protocol between centralized controllers—such as NOX and Floodlight—and OpenFlow-compatible switches, facilitating the installation and modification of flow rules to enforce network policies in real time.[9][8] This interaction allows controllers to maintain a global view of the network and issue instructions that direct how packets are processed, promoting interoperability across multi-vendor environments.[7] By standardizing this southbound communication, OpenFlow supports dynamic policy enforcement, where network behaviors can be adjusted on-the-fly without requiring hardware reconfiguration.[10]

In SDN deployments, OpenFlow contributes to enhanced scalability by enabling efficient handling of large-scale traffic through centralized decision-making and automated flow management, reducing the need for manual interventions in expansive networks.[9] It also provides flexibility for diverse applications, such as traffic engineering to optimize paths and bandwidth, load balancing to distribute workloads evenly, and intrusion detection to monitor and mitigate threats via programmable flow rules that inspect and redirect suspicious packets.[8][9] These capabilities stem from OpenFlow's flow-based paradigm, which allows fine-grained control over network behavior tailored to specific use cases.[7]

Compared to traditional networking, where control logic is distributed across vendor-specific devices using proprietary protocols, OpenFlow shifts management to a centralized, open model that simplifies operations in large-scale environments by standardizing instructions and fostering vendor neutrality.[8] This transition addresses the rigidity and complexity of legacy systems, where inconsistent policies and hardware dependencies often hinder innovation and increase operational costs.[2][9] As a result, OpenFlow-based SDN reduces network complexity, enabling faster deployment of services and greater adaptability to evolving demands.[8]

Technical Architecture
Separation of Control and Data Planes
In OpenFlow, the separation of the control plane and data plane represents a foundational architectural principle that decouples network decision-making from packet forwarding, enabling more programmable and flexible networking. The control plane is responsible for handling routing decisions, policy enforcement, and the installation of flow rules; it is centralized in an external controller that communicates with switches over the OpenFlow protocol. This centralization allows the controller to maintain a global view of the network and dynamically manage traffic policies across multiple devices. In contrast, the data plane focuses solely on high-speed packet forwarding based on the rules pre-installed by the controller, implemented in commodity switches that lack embedded intelligence for complex decision-making. This division ensures that data plane elements operate at line-rate speeds without the overhead of control logic, processing packets according to predefined actions such as forwarding to specific ports or dropping them.[2][7]

The mechanism of this separation relies on the OpenFlow protocol, which serves as a standardized interface to carry instructions from the controller to the switch's data plane, effectively replacing traditional integrated designs where control and forwarding logic were tightly coupled within each device. Through a secure channel, the controller installs, modifies, or removes flow entries in the switch's flow table, directing how packets matching specific headers (e.g., Ethernet source/destination or IP addresses) are handled. This protocol-based decoupling abstracts the underlying switch hardware, allowing a single controller to orchestrate multiple switches as if they were a unified fabric. For instance, when a packet arrives at a switch without a matching flow rule, it can be encapsulated and sent to the controller for processing, after which the appropriate rule is installed for future handling—illustrating the interactive flow between planes without embedding control functions in the data path.[2][11]

This architectural split offers significant advantages, particularly in fostering innovation by permitting rapid experimentation with control logic through software while leveraging cost-effective, high-performance hardware for data forwarding. By externalizing control, network operators can implement custom policies, such as load balancing or security measures, without vendor-specific modifications to switches, reducing dependency on proprietary systems and accelerating deployment cycles. The separation also enhances scalability, as controllers can handle thousands of flow installations per second across distributed data planes, supporting diverse applications from campus networks to large-scale data centers. Overall, it promotes vendor neutrality and interoperability, as evidenced by widespread adoption in commercial environments where OpenFlow-enabled switches process traffic at wire speeds under centralized orchestration.[7][12][2]

OpenFlow Switch Components
An OpenFlow switch provides a logical abstraction that separates the data plane from the control plane, presenting a programmable interface for packet processing through a set of flow tables connected to a controller via a secure channel, along with a pipeline for sequential packet handling.[1] This abstraction allows the controller to install flow rules dynamically, enabling centralized management of network behavior without direct hardware intervention.[1]

The key components of an OpenFlow switch include flow tables, group tables, and meter tables, each serving distinct roles in packet processing and traffic management. Flow tables consist of match fields, priority levels, counters, and action sets that enable the switch to classify incoming packets based on header fields and apply corresponding forwarding or modification instructions, such as dropping or outputting to specific ports.[1] Group tables extend flow table actions by supporting multicast and load balancing through predefined group entries that contain multiple action buckets, allowing packets to be replicated or selectively forwarded across ports based on group types like "all" for broadcasting or "select" for hashing-based distribution.[1] Meter tables facilitate rate limiting and quality-of-service enforcement by associating flow entries with meter identifiers, where each meter applies bandwidth constraints via bands that drop or remark packets exceeding specified rates.[1]

Packet processing in an OpenFlow switch occurs through a pipeline that directs ingress packets starting at the first flow table (table 0), with subsequent tables accessed via instructions that may resubmit packets for further matching or apply final actions at the pipeline's end.[1] This multi-table pipeline, supported in versions from 1.1 onward, allows for staged processing where each table can modify packet headers or metadata to influence downstream decisions, providing flexibility for complex forwarding logic such as access control followed by routing.[1] Egress processing may involve additional tables for output-specific handling, ensuring comprehensive traversal before packets exit the switch.[1]

The secure channel forms the critical link between the OpenFlow switch and the external controller, utilizing SSL/TLS protocols to encrypt all control messages and protect against unauthorized access or eavesdropping.[1] This connection supports asynchronous event notifications from the switch, such as port status changes, and ordered delivery of controller commands through mechanisms like barriers, maintaining isolation of the control plane from data traffic.[1] Multiple controllers can connect via primary and auxiliary roles, with the channel configurable for failover and role negotiation to ensure reliable operation.[1]

Protocol Specifications
Message Types and Flow
OpenFlow employs a message-based protocol for communication between the controller and the switch, utilizing a secure channel to exchange control information. Messages are structured with a fixed header containing fields such as version, type, length, and transaction ID, followed by a variable body specific to each message subtype. The protocol classifies messages into three primary categories: controller-to-switch, asynchronous (switch-to-controller), and symmetric, enabling directed management, event reporting, and bidirectional connection maintenance, respectively. This classification supports both proactive and reactive network control paradigms.[1]

Controller-to-switch messages allow the controller to configure and query the switch's operation. The Flow Mod message is central, enabling the installation, modification, or deletion of flow entries in the switch's flow tables, with commands such as ADD, MODIFY, MODIFY_STRICT, or DELETE, and optional timeouts for idle or hard expiration. The Packet-Out message injects packets into the switch for transmission on specified ports, often including actions and referencing buffered packets via a buffer ID. Multipart requests, sent as OFPT_MULTIPART_REQUEST, gather statistics or configuration data, such as flow, table, or port statistics, with the switch responding via corresponding replies to support monitoring and debugging. These messages facilitate centralized control over forwarding rules and data plane behavior.[1]

Asynchronous messages are generated by the switch and sent unsolicited to the controller to report events or seek guidance. The Packet-In message forwards packets to the controller, typically for table misses or specific action instructions, including packet data or metadata like the ingress port and reason code (e.g., OFPR_TABLE_MISS). The Flow Removed message notifies the controller when a flow entry expires or is evicted, providing details such as duration, packet/byte counts, and the reason (e.g., idle timeout or hard timeout). Error messages alert the controller to processing failures, such as invalid instructions or bad type errors, categorized by error types like bad request or bad action. These messages enable reactive flow management and error handling in dynamic network environments.[1]

Symmetric messages support bidirectional communication without directional dependency, primarily for connection lifecycle management. The handshake process begins with Hello messages (OFPT_HELLO) exchanged upon connection establishment to negotiate the protocol version, using a version bitmap in later specifications for compatibility. Echo Request and Echo Reply messages monitor link liveness, with the controller or switch sending requests and expecting timely replies to detect failures. Experimenter messages, used for vendor-defined extensions, are also symmetric. This category ensures reliable, ongoing interaction between the endpoints.[1]

The overall protocol flow commences with the establishment of a secure channel, typically over TCP on port 6653, optionally secured with TLS for encryption, initiated by the switch connecting to the controller. Following connection, the initial handshake occurs via Hello messages to agree on the protocol version, preventing mismatches. The controller then sends a Features Request to query the switch's capabilities, such as supported actions or port configurations, with the switch replying via Features Reply. Once established, the connection enters an operational state where the controller issues Flow Mod messages for rule updates, Packet-Out for traffic injection, and multipart requests for monitoring, while the switch responds asynchronously with Packet-In, Flow Removed, or Error messages as events arise. Echo messages periodically verify connectivity, and Barrier messages ensure message ordering if needed. This flow maintains a state transition from unconnected to negotiated, capable, and active, supporting continuous adaptation of the data plane without interrupting forwarding.[1]

Flow Tables and Matching
In OpenFlow, flow tables serve as the core mechanism for packet classification and forwarding decisions within the data plane of an OpenFlow-enabled switch. Each flow table consists of a set of flow entries that define rules for matching incoming packets against specific header fields. These entries enable the switch to process traffic based on programmable criteria, decoupling forwarding logic from hardware constraints.[13][2]

A flow entry is structured with three primary components: match fields, priority, and counters. Match fields specify the packet headers to inspect, such as ingress port, Ethernet source and destination addresses, EtherType, VLAN ID and priority, IP source and destination addresses, IP ToS bits, IP protocol, and TCP/UDP source and destination ports—totaling up to 12 fields in early specifications. Later versions expand this flexibility through the OpenFlow Extensible Match (OXM) format, incorporating additional fields like TCP flags, tunnel IDs, IPv6 Flow Label, and MPLS labels to support more complex matching scenarios. Priority levels, ranging from 0 to 65535 with higher values taking precedence, resolve conflicts among overlapping entries; exact matches inherently receive the highest priority. Counters track per-entry statistics, including received packets, byte counts, and duration, typically using 64-bit values that wrap around upon overflow, alongside aggregate counters for tables, ports, and queues.[13][1]

The matching process examines packet headers against flow entries in priority order, supporting exact, wildcard, and longest-prefix matching techniques. Exact matching requires identical values with no wildcards, offering precise control for specific flows. Wildcard matching uses "ANY" or bitmasks to ignore certain bits, such as subnet masks for IP addresses, allowing broader rules for aggregated traffic.
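A toy Python sketch of the exact and wildcard matching just described, treating an IP prefix as a masked wildcard on the destination address. The table layout and function names are illustrative, not a switch's internal representation:

```python
import ipaddress

# Illustrative flow table: (priority, destination prefix, actions).
# A /32 entry is an exact match; shorter prefixes wildcard the host bits;
# the 0.0.0.0/0 entry plays the role of the table-miss rule.
flow_table = [
    (65535, ipaddress.ip_network("10.0.1.5/32"), ["output:1"]),   # exact
    (100,   ipaddress.ip_network("10.0.1.0/24"), ["output:2"]),   # wildcard
    (0,     ipaddress.ip_network("0.0.0.0/0"),   ["controller"]), # miss
]

def classify(dst: str) -> list:
    """Return the actions of the highest-priority matching entry; as the
    text notes, exact matches are given the top priority."""
    hits = [(prio, acts) for prio, net, acts in flow_table
            if ipaddress.ip_address(dst) in net]
    return max(hits, key=lambda h: h[0])[1]

print(classify("10.0.1.5"))     # ['output:1']
print(classify("10.0.1.77"))    # ['output:2']
print(classify("8.8.8.8"))      # ['controller']
```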
Longest-prefix matching applies specifically to IP fields, selecting the entry with the longest matching prefix to emulate traditional routing behavior. Switches may organize tables with exact-match entries preceding wildcard ones for efficiency, processing packets through a single table in initial versions or multiple tables in pipelines introduced later.[13][1]

When a packet does not match any entry in the flow table—a table miss—the switch handles it according to a default rule, typically sending the packet (or a portion of it) to the controller via a Packet-In message for further processing or dropping it. This miss entry can also output the packet to the next table in a multi-table pipeline or apply other predefined actions, with configurable parameters like the maximum byte length for controller transmission (default 128 bytes).[13][1]

Flow entries are installed, modified, or deleted by the controller using Flow Mod messages, such as ADD for new entries or DELETE for removal, with options to check for overlaps or apply strict matching. Entries expire via idle timeouts, which remove inactive flows after a specified period of no matching packets, or hard timeouts, which enforce a maximum lifetime regardless of activity; both can be set to zero for permanent persistence. In cases of table overflow, eviction policies prioritize removal based on factors like entry importance, lifetime, or installation order, ensuring resource management while minimizing disruptions.[13][1]

Actions and Instructions
In OpenFlow, actions define the operations performed on packets that match flow entries, enabling forwarding, modification, and other processing decisions within the switch pipeline. Instructions, attached to flow entries, specify how these actions are applied and control packet progression through the multiple flow tables. These mechanisms allow for flexible packet handling, separating the decision logic from the actual execution to support programmable networking behaviors.[1] Basic actions include outputting a packet to a specific port, such as a physical port, logical port, or reserved ports like the controller or flood to all ports; dropping the packet implicitly if no output or group action is present; enqueuing the packet to a designated queue on a port for quality-of-service control; and modifying packet fields, for example, setting a VLAN ID using the OXM_OF_VLAN_VID field or decrementing the IP TTL with the OFPAT_DEC_NW_TTL action, which also updates the checksum accordingly. These actions are encoded in structures like ofp_action_output for output (16 bytes, specifying port and maximum length) and ofp_action_set_queue for enqueue (8 bytes, setting queue ID). Such operations provide the foundational tools for traffic engineering and header manipulation in software-defined networks.[1]
OpenFlow instructions dictate the application of actions and pipeline flow, with key types including apply-actions, which immediately executes a list of actions on the matching packet using the OFPIT_APPLY_ACTIONS instruction; write-actions, which merges a list of actions into the packet's action set, overwriting any duplicates with the new ones via OFPIT_WRITE_ACTIONS; clear-actions, which empties the entire action set using OFPIT_CLEAR_ACTIONS (required for table-miss entries); and goto-table, which advances the packet to a specified next table (table ID greater than the current one) through OFPIT_GOTO_TABLE, facilitating multi-stage processing except in the final table. These instructions, defined in variable-length structures like ofp_instruction_actions, enable both immediate and deferred action execution to build complex processing pipelines.[1]
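A toy sketch of how apply-actions and goto-table drive a packet through the multi-table pipeline described above. The table encoding (predicates and instruction dictionaries) is purely illustrative, not the OpenFlow wire format:

```python
# Hypothetical two-table pipeline: each entry pairs a match predicate
# with instructions; "apply" runs actions immediately, and "goto" sends
# the packet to a later table. The pipeline ends when "goto" is absent.

tables = {
    0: [  # table 0: admission control
        (lambda p: p.get("tcp_dst") == 22, {"apply": ["drop"]}),
        (lambda p: True,                   {"apply": ["count"], "goto": 1}),
    ],
    1: [  # table 1: forwarding
        (lambda p: p.get("ipv4_dst") == "10.0.0.2", {"apply": ["output:2"]}),
        (lambda p: True,                            {"apply": ["controller"]}),
    ],
}

def run_pipeline(packet: dict) -> list:
    """Match in each table in turn, executing apply-actions and following
    goto-table until no further table is specified."""
    actions, table_id = [], 0
    while table_id is not None:
        for predicate, instr in tables[table_id]:
            if predicate(packet):
                actions += instr.get("apply", [])
                table_id = instr.get("goto")  # None ends the pipeline
                break
    return actions

print(run_pipeline({"ipv4_dst": "10.0.0.2", "tcp_dst": 80}))
# ['count', 'output:2']
```

Note how staged processing falls out naturally: table 0 filters and counts, then hands matching packets to table 1 for the routing decision, mirroring the access-control-then-routing split mentioned earlier.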
Advanced features extend basic actions with group actions for coordinated outputs and experimenter actions for customization. Group actions, invoked via the OFPAT_GROUP action (8 bytes, referencing a group ID), process packets through group entries containing buckets of actions; supported types include all (executes all buckets for flooding or multicast, required), indirect (executes a single bucket for simple next-hop routing, required), and fast-failover (selects the highest-priority live bucket based on liveness monitoring, optional). Experimenter actions, using OFPAT_EXPERIMENTER (multiple of 8 bytes, with a unique experimenter ID like an IEEE OUI), allow vendors to define proprietary extensions while maintaining protocol compatibility. These capabilities support scalable multicast, load balancing, and innovation without altering the core specification.[1]
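The "all" and "select" group types can be sketched as follows. The structures are hypothetical, with zlib.crc32 standing in for whatever hash function a switch uses to pick a bucket in a select group:

```python
import zlib

# Illustrative group table: group id -> (type, list of action buckets).
groups = {
    1: ("all",    [["output:1"], ["output:2"], ["output:3"]]),
    2: ("select", [["output:4"], ["output:5"]]),
}

def apply_group(group_id: int, packet: dict) -> list:
    """Return the actions executed for this packet by the group entry."""
    gtype, buckets = groups[group_id]
    if gtype == "all":        # multicast/flood: every bucket runs
        return [act for bucket in buckets for act in bucket]
    if gtype == "select":     # load balancing: one bucket, chosen by hash
        key = zlib.crc32(str(sorted(packet.items())).encode())
        return buckets[key % len(buckets)]
    raise ValueError(f"unsupported group type: {gtype}")

print(apply_group(1, {"ipv4_dst": "10.0.0.9"}))
# ['output:1', 'output:2', 'output:3']
```

Hashing on the packet's header fields keeps the choice deterministic per flow, so all packets of one connection take the same bucket while different flows spread across the buckets.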
During pipeline traversal, write-actions and similar instructions accumulate selected actions into an action set stored in packet metadata, limiting to one action per type to avoid conflicts; this set is applied only at the pipeline's end—after the last table or when no further goto-table is specified—following a fixed order: copy-TTL inwards, pop tags, push new tags (MPLS, PBB, VLAN), copy-TTL outwards, decrement TTL, set fields, apply QoS (e.g., enqueue), invoke group, and finally output. This deferred application ensures consistent processing across the multi-table pipeline, with egress handling starting from the ingress port's output action if needed. The design, introduced in early OpenFlow concepts for experimental protocol deployment, has evolved to handle modern network demands efficiently.[1][2]
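A condensed sketch of the action-set semantics just described: later write-actions overwrite earlier actions of the same type, and the accumulated set is applied in a fixed order at the pipeline's end. The order list here is abridged and the encoding is illustrative:

```python
# Abridged version of the fixed application order described above.
ACTION_ORDER = ["pop_vlan", "push_vlan", "dec_ttl",
                "set_field", "set_queue", "group", "output"]

def write_actions(action_set: dict, actions: dict) -> dict:
    """Merge new actions into the set, keeping at most one action per
    type; duplicates from later tables overwrite earlier ones."""
    return {**action_set, **actions}

def apply_action_set(action_set: dict) -> list:
    """Emit the accumulated actions in the specification's fixed order,
    regardless of the order in which tables wrote them."""
    return [(kind, action_set[kind])
            for kind in ACTION_ORDER if kind in action_set]

s = {}
s = write_actions(s, {"output": 1, "set_field": ("vlan_vid", 10)})
s = write_actions(s, {"output": 2})          # later table overwrites
print(apply_action_set(s))
# [('set_field', ('vlan_vid', 10)), ('output', 2)]
```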
