Maximum transmission unit
In computer networking, the maximum transmission unit (MTU) is the size of the largest protocol data unit (PDU) that can be communicated in a single network layer transaction.[1]: 25 The MTU relates to, but is not identical to, the maximum frame size that can be transported on the data link layer, e.g., an Ethernet frame.
A larger MTU is associated with reduced overhead, while smaller MTU values can reduce network delay. In many cases, MTU is dependent on underlying network capabilities and must be adjusted manually or automatically so as not to exceed these capabilities. MTU parameters may appear in association with a communications interface or standard. Some systems may decide the MTU at connect time, e.g. using Path MTU Discovery.
Applicability
MTUs apply to communications protocols and network layers. The MTU is specified in terms of bytes or octets of the largest PDU that the layer can pass onwards. MTU parameters usually appear in association with a communications interface (NIC, serial port, etc.). Standards (Ethernet, for example) can fix the size of an MTU; or systems (such as point-to-point serial links) may decide MTU at connect time.
Underlying data link and physical layers usually add overhead to the network layer data to be transported, so for a given maximum frame size of a medium, one needs to subtract the amount of overhead to calculate that medium's MTU. For example, with Ethernet, the maximum frame size is 1518 bytes, 18 bytes of which are overhead (header and frame check sequence), resulting in an MTU of 1500 bytes.
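The subtraction described above can be checked with simple arithmetic; the following Python sketch (the helper name `link_mtu` is illustrative, not a standard API) reproduces the Ethernet figures:

```python
# Derive a medium's MTU by subtracting link-layer overhead from its
# maximum frame size, as described above (illustrative helper).
def link_mtu(max_frame_size: int, overhead: int) -> int:
    return max_frame_size - overhead

# Untagged Ethernet: 1518-byte maximum frame, 18 bytes of header + FCS.
print(link_mtu(1518, 18))  # 1500
# 802.1Q-tagged Ethernet: 1522-byte maximum frame, 22 bytes of overhead.
print(link_mtu(1522, 22))  # 1500
```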
Tradeoffs
A larger MTU brings greater efficiency because each network packet carries more user data while protocol overheads, such as headers or underlying per-packet delays, remain fixed; the resulting higher efficiency means an improvement in bulk protocol throughput. A larger MTU also requires processing of fewer packets for the same amount of data. In some systems, per-packet-processing can be a critical performance limitation.
However, this gain is not without a downside. Large packets occupy a link for longer than smaller packets, delaying subsequent packets and increasing network delay and delay variation. For example, a 1500-byte packet, the largest allowed by Ethernet at the network layer, ties up a 14.4k modem for about one second.
Large packets are also problematic in the presence of communications errors. If no forward error correction is used, corruption of a single bit in a packet requires that the entire packet be retransmitted, which can be costly. At a given bit error rate, larger packets are more susceptible to corruption. Their greater payload makes retransmissions of larger packets take longer. Despite the negative effects on retransmission duration, large packets can still have a net positive effect on end-to-end TCP performance.[2]
Internet protocol
The Internet protocol suite was designed to work over many different networking technologies, each of which may use packets of different sizes. While a host will know the MTU of its own interface and possibly that of its peers (from initial handshakes), it will not initially know the lowest MTU in a chain of links to other peers. Another potential problem is that higher-level protocols may create packets larger than even the local link supports.
IPv4 allows fragmentation which divides the datagram into pieces, each small enough to accommodate a specified MTU limitation. This fragmentation process takes place at the internet layer. The fragmented packets are marked so that the IP layer of the destination host knows it should reassemble the packets into the original datagram.
All fragments of a packet must arrive for the packet to be considered received. If the network drops any fragment, the entire packet is lost.
When the number of packets that must be fragmented or the number of fragments is great, fragmentation can cause unreasonable or unnecessary overhead. For example, various tunneling situations may exceed the MTU by very little as they add just a header's worth of data. The addition is small, but each packet now has to be sent in two fragments, the second of which carries very little payload. The same amount of payload is being moved, but every intermediate router has to forward twice as many packets.
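The fragmentation arithmetic can be illustrated with a simplified Python sketch. It models only sizes, 8-octet offsets, and the More Fragments flag; real IPv4 fragmentation also copies the header and selected options into each fragment, and the `fragment` helper here is hypothetical:

```python
# Simplified sketch of IPv4 fragmentation: split a datagram's payload
# into fragments that fit a given MTU. Only sizes, offsets (in 8-octet
# units), and the More Fragments (MF) flag are modeled.
IPV4_HEADER = 20  # minimum IPv4 header, no options

def fragment(total_length: int, mtu: int):
    payload = total_length - IPV4_HEADER
    # Every fragment's data except the last must be a multiple of 8 octets.
    max_data = (mtu - IPV4_HEADER) // 8 * 8
    fragments = []
    offset = 0
    while offset < payload:
        data = min(max_data, payload - offset)
        more = offset + data < payload
        fragments.append({"offset": offset // 8, "data": data, "mf": more})
        offset += data
    return fragments

# A 4000-byte datagram over a 1500-byte MTU link becomes three fragments:
# 1480 + 1480 + 1020 bytes of data, at offsets 0, 185, and 370.
for frag in fragment(4000, 1500):
    print(frag)
```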
The Internet Protocol requires that hosts must be able to process IP datagrams of at least 576 bytes (for IPv4) or 1280 bytes (for IPv6). However, this does not preclude link layers with an MTU smaller than this minimum MTU from conveying IP data. For example, according to IPv6's specification, if a particular link layer cannot deliver an IP datagram of 1280 bytes in a single frame, then the link layer must provide its own fragmentation and reassembly mechanism, separate from the IP fragmentation mechanism, to ensure that a 1280-byte IP datagram can be delivered, intact, to the IP layer.
MTUs for common media
In the context of Internet Protocol, MTU refers to the maximum size of an IP packet that can be transmitted without fragmentation over a given medium. The size of an IP packet includes IP headers but excludes headers from the link layer. In the case of an Ethernet frame this adds a protocol overhead of 18 bytes, or 22 bytes with an IEEE 802.1Q tag for VLAN tagging or class of service.
The MTU should not be confused with the minimum datagram size (in one piece or in fragments) that all hosts must be prepared to accept. This is 576 bytes for IPv4[1]: 24 and 1280 bytes for IPv6.[3]: 25
| Media for IP transport | Maximum transmission unit (bytes) | Notes |
|---|---|---|
| Internet IPv4 path MTU | At least 68,[1]: 24 max of 64 KiB[1]: 12 | Systems may use Path MTU Discovery[4] to find the actual path MTU. Routing from larger MTU to smaller MTU causes IP fragmentation. |
| Internet IPv6 path MTU | At least 1280,[3] max of 64 KiB, but optional jumbograms go up to 4 GiB[5] | Systems should use Path MTU Discovery[3] to find the actual path MTU, unless the minimum MTU (1280 bytes) is not exceeded. Jumbograms are packets with a Jumbo Payload option to allow transmission of payloads between 65,536 and 4,294,967,295 octets in length. |
| X.25 | Minimum 576 (sending) or 1600 (receiving)[6] | |
| Ethernet v2 | 1500[7] | Nearly all IP over Ethernet implementations use the Ethernet II frame format. |
| Ethernet with LLC and SNAP | 1492[8] | |
| Ethernet jumbo frames | 1501–9202[9] or more[10] | The limit varies by vendor. For correct interoperation, frames should be no larger than the maximum frame size supported by any device on the network segment.[11] |
| PPPoE v2 | 1492[12] | Ethernet II MTU (1500) less PPPoE header (8); extensions exist |
| DS-Lite over PPPoE | 1452 | Ethernet II MTU (1500) less PPPoE header (8) and IPv6 header (40) |
| PPPoE jumbo frames | 1493–9190 or more[13] | Ethernet Jumbo Frame MTU (1501–9198) less PPPoE header (8) |
| IEEE 802.11 Wi-Fi (WLAN) | 2304[14] | The maximum MSDU size is 2304 before encryption. WEP will add 8 bytes, WPA-TKIP 20 bytes, and WPA2-CCMP 16 bytes. See also Frame aggregation mechanisms in 802.11n. |
| Token Ring (802.5) | 4464 | |
| FDDI | 4352[4] | |
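Several of the derived rows above follow from subtracting encapsulation headers from the Ethernet II MTU; a quick illustrative check in Python (the constant names are for clarity only, not taken from any API):

```python
# Derived MTUs from the table: each encapsulation layer subtracts its
# header from the Ethernet II MTU (illustrative arithmetic only).
ETHERNET_MTU = 1500
PPPOE_HEADER = 8
IPV6_HEADER = 40

pppoe_mtu = ETHERNET_MTU - PPPOE_HEADER   # PPPoE v2: 1492
ds_lite_mtu = pppoe_mtu - IPV6_HEADER     # DS-Lite over PPPoE: 1452
print(pppoe_mtu, ds_lite_mtu)  # 1492 1452
```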
Ethernet maximum frame size
The IP MTU and Ethernet maximum frame size are configured separately. In Ethernet switch configuration, MTU may refer to Ethernet maximum frame size. In Ethernet-based routers, MTU normally refers to the IP MTU. If jumbo frames are allowed in a network, the IP MTU should also be adjusted upwards to take advantage of this.
Since the IP packet is carried by an Ethernet frame, the Ethernet frame has to be larger than the IP packet. With the normal untagged Ethernet frame overhead of 18 bytes and the 1500-byte payload, the Ethernet maximum frame size is 1518 bytes. If a 1500-byte IP packet is to be carried over a tagged Ethernet connection, the Ethernet frame maximum size needs to be 1522 bytes due to the larger size of an 802.1Q tagged frame. 802.3ac increases the standard Ethernet maximum frame size to accommodate this.
Path MTU Discovery
The Internet Protocol defines the path MTU of an Internet transmission path as the smallest MTU supported by any of the hops on the path between a source and destination. Put another way, the path MTU is the largest packet size that can traverse this path without suffering fragmentation.
Path MTU Discovery is a technique for determining the path MTU between two IP hosts, defined for both IPv4[4] and IPv6[15]. It works by sending packets with the DF (don't fragment) option in the IP header set. Any device along the path whose MTU is smaller than the packet will drop such packets and send back an ICMP Destination Unreachable (Datagram Too Big) message which indicates its MTU. This information allows the source host to reduce its assumed path MTU appropriately. The process repeats until the MTU becomes small enough to traverse the entire path without fragmentation.
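The discovery loop can be illustrated with a toy Python simulation, in which a hop with a smaller MTU stands in for a router returning an ICMP Datagram Too Big message; the hop values and the `discover_path_mtu` helper are hypothetical, and a real implementation probes with live DF-set packets rather than a list:

```python
# Toy simulation of Path MTU Discovery: the sender probes with its
# current MTU estimate; the first hop with a smaller MTU "drops" the
# probe and reports its own MTU (standing in for the ICMP Datagram Too
# Big / Fragmentation Needed message), and the sender shrinks and retries.
def discover_path_mtu(hop_mtus, initial_mtu):
    mtu = initial_mtu
    while True:
        bottleneck = next((h for h in hop_mtus if h < mtu), None)
        if bottleneck is None:
            return mtu       # probe traversed every hop unfragmented
        mtu = bottleneck     # shrink to the reported MTU and retry

# Path with 1500-, 1400-, and 1280-byte links converges on 1280.
print(discover_path_mtu([1500, 1400, 1280, 1500], initial_mtu=9000))  # 1280
```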
Standard Ethernet supports an MTU of 1500 bytes, and Ethernet implementations supporting jumbo frames allow for an MTU of up to 9000 bytes. However, border protocols like PPPoE will reduce this. Path MTU Discovery exposes the difference between the MTU seen by Ethernet end nodes and the path MTU.
Unfortunately, increasing numbers of networks drop ICMP traffic (for example, to prevent denial-of-service attacks), which prevents path MTU discovery from working. Packetization Layer Path MTU Discovery[16][17] is a Path MTU Discovery technique which responds more robustly to ICMP filtering. In an IP network, the path from the source address to the destination address may change in response to various events (load-balancing, congestion, outages, etc.) and this could result in the path MTU changing (sometimes repeatedly) during a transmission, which may introduce further packet drops before the host finds a new reliable MTU.
A failure of Path MTU Discovery can make sites behind badly configured firewalls unreachable. A connection with mismatched MTU may work for low-volume data but fail as soon as a host sends a large block of data. For example, with Internet Relay Chat a connecting client might see the initial messages up to and including the initial ping (sent by the server as an anti-spoofing measure), but get no response after that, because the large set of welcome messages sent at that point exceeds the path MTU. One can sometimes work around this, depending on which part of the network one controls; for example, one can change the MSS (maximum segment size) in the initial packet that sets up the TCP connection at one's firewall.
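The MSS workaround amounts to simple arithmetic: the MSS advertised during the TCP handshake is clamped so that a full segment plus headers fits the path MTU. A minimal Python sketch, assuming IPv4 and TCP headers of 20 bytes each (no options); the `clamp_mss` name is illustrative:

```python
# MSS clamping: the TCP maximum segment size equals the path MTU minus
# the IP and TCP headers (20 bytes each for IPv4/TCP without options).
# A firewall rewriting the SYN's MSS option to this value keeps full
# segments below the path MTU.
def clamp_mss(path_mtu: int, ip_header: int = 20, tcp_header: int = 20) -> int:
    return path_mtu - ip_header - tcp_header

print(clamp_mss(1492))  # 1452, e.g. for a PPPoE-limited path
print(clamp_mss(1500))  # 1460, the usual Ethernet-derived MSS
```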
In other contexts
MTU is sometimes used to describe the maximum PDU sizes in communication layers other than the network layer.
- Cisco Systems and MikroTik use L2 MTU for the maximum frame size.[18][19]
- Dell/Force10 use MTU for the maximum frame size.[20]
- Hewlett-Packard used just MTU for the maximum frame size including the optional IEEE 802.1Q tag.[21]
- Juniper Networks use several MTU terms: Physical Interface MTU (L3 MTU plus some unspecified protocol overhead), Logical Interface MTU (consistent with IETF MTU) and Maximum MTU (maximum configurable frame size for jumbo frames).[22]
The transmission of a packet on a physical network segment that is larger than the segment's MTU is known as jabber. This is almost always caused by faulty devices.[23] Network switches and some repeater hubs have a built-in capability to detect when a device is jabbering.[24][25]
References
- ^ a b c d J. Postel, ed. (September 1981). INTERNET PROTOCOL - DARPA INTERNET PROGRAM PROTOCOL SPECIFICATION. IETF. doi:10.17487/RFC0791. STD 5. RFC 791. IEN 128, 123, 111, 80, 54, 44, 41, 28, 26. Internet Standard 5. Obsoletes RFC 760. Updated by RFC 1349, 2474 and 6864.
- ^ Murray, David; Terry Koziniec; Kevin Lee; Michael Dixon (2012). "Large MTUs and internet performance". 2012 IEEE 13th International Conference on High Performance Switching and Routing. pp. 82–87. doi:10.1109/HPSR.2012.6260832. ISBN 978-1-4577-0833-6. S2CID 232321.
- ^ a b c S. Deering; R. Hinden (July 2017). Internet Protocol, Version 6 (IPv6) Specification. Internet Engineering Task Force. doi:10.17487/RFC8200. STD 86. RFC 8200. Internet Standard 86. Obsoletes RFC 2460.
- ^ a b c J. Mogul; S. Deering (November 1990). Path MTU Discovery. Network Working Group. doi:10.17487/RFC1191. RFC 1191. Draft Standard. Obsoletes RFC 1063.
- ^ D. Borman; S. Deering; R. Hinden (August 1999). IPv6 Jumbograms. Network Working Group. doi:10.17487/RFC2675. RFC 2675. Proposed Standard. Obsoletes RFC 2147.
- ^ A. Malis; D. Robinson; R. Ullmann (August 1992). Multiprotocol Interconnect on X.25 and ISDN in the Packet Mode. Network Working Group. doi:10.17487/RFC1356. RFC 1356. Draft Standard. Obsoletes RFC 877.
- ^ C. Hornig (April 1984). A Standard for the Transmission of IP Datagrams over Ethernet Networks. Network Working Group. doi:10.17487/RFC0894. STD 41. RFC 894. Internet Standard 41.
- ^ IEEE 802.3[page needed]
- ^ Scott Hogg (2013-03-06), Jumbo Frames, Network World, retrieved 2013-08-05, "Most network devices support a jumbo frame size of 9216 bytes."
- ^ Juniper Networks (2020-03-23), Physical Interface Properties, retrieved 2020-05-01
- ^ Joe St Sauver (2003-02-04). "Practical Issues Associated With 9K MTUs" (PDF). uoregon.edu. p. 67. Retrieved 2016-12-15. "you still need to insure that ALL upstream Ethernet switches, including any switches in your campus core, are ALSO jumbo frame capable"
- ^ L. Mamakos; K. Lidl; J. Evarts; D. Carrel; D. Simone; R. Wheeler (February 1999). A Method for Transmitting PPP Over Ethernet (PPPoE). Network Working Group. doi:10.17487/RFC2516. RFC 2516. Informational.
- ^ P. Arberg; D. Kourkouzelis; M. Duckett; T. Anschutz; J. Moisand (September 2006). Accommodating a Maximum Transit Unit/Maximum Receive Unit (MTU/MRU) Greater Than 1492 in the Point-to-Point Protocol over Ethernet (PPPoE). Network Working Group. doi:10.17487/RFC4638. RFC 4638. Informational.
- ^ 802.11-2012, page 413, section 8.3.2.1; page 381 "The Frame Body field is of variable size. The maximum frame body size is determined by the maximum MSDU size (2304 octets), plus the length of the Mesh Control field (6, 12, or 18 octets) if present, the maximum unencrypted MMPDU size excluding the MAC header and FCS (2304 octets) or the maximum A-MSDU size (3839 or 7935 octets, depending upon the STA’s capability), plus any overhead from security encapsulation."
- ^ J. McCann; S. Deering; J. Mogul (July 2017). R. Hinden (ed.). Path MTU Discovery for IP version 6. Internet Engineering Task Force. doi:10.17487/RFC8201. STD 87. RFC 8201. Internet Standard 87. Obsoletes RFC 1981.
- ^ M. Mathis; J. Heffner (March 2007). Packetization Layer Path MTU Discovery. Network Working Group. doi:10.17487/RFC4821. RFC 4821. Proposed Standard. Updated by RFC 8899.
- ^ G. Fairhurst; T. Jones; M. Tüxen; I. Rüngeler; T. Völker (September 2020). Packetization Layer Path MTU Discovery for Datagram Transports. Internet Engineering Task Force. doi:10.17487/RFC8899. ISSN 2070-1721. RFC 8899. Proposed Standard. Updates RFC 4821, 4960, 6951, 8085 and 8261.
- ^ "Configure and Verify Maximum Transmission Unit on Cisco Nexus Platforms". Cisco. 2016-11-29. Document ID:118994. Retrieved 2017-01-04.
- ^ "MTU in RouterOS". MikroTik. 2022-07-08. Retrieved 2022-09-02.
- ^ "How to configure MTU (Maximum Transmission Unit) for Jumbo Frames on Dell Networking Force10 switches". Dell. 2016-06-02. Article ID: HOW10713. Retrieved 2017-01-06.
- ^ "Jumbo Frames". HP Networking 2910al Switches Management and Configuration Guide. Hewlett-Packard. November 2011. P/N 5998-2874.
- ^ "SRX Series Services Gateways for the Branch Physical Interface Modules Reference: MTU Default and Maximum Values for Physical Interface Modules". Juniper. 2014-01-03. Retrieved 2017-01-04.
- ^ jabber, The Network Encyclopedia, retrieved 2016-07-28
- ^ show interfaces, Juniper Networks, retrieved 2016-07-28
- ^ IEEE 802.3 27.3.1.7 Receive jabber functional requirements
External links
- Marc Slemko (January 18, 1998). "Path MTU Discovery and Filtering ICMP". Archived from the original on August 9, 2011. Retrieved 2007-09-02.
- Tweaking your MTU / RWin for Orange Broadband Users
- How to set the TCP MSS value using iptables
- mturoute – a console utility for debugging mtu problems
Fundamentals
Definition and Scope
The maximum transmission unit (MTU) is defined as the maximum sized datagram that can be transmitted through a given network without fragmentation.[1] This represents the largest protocol data unit (PDU) that a network interface or path can handle in a single transaction at the relevant protocol layer.[1] MTU applies across various layers of the network stack, including the data link layer where it governs frame sizes on physical media, and the network layer where it primarily constrains packet transmission for protocols like IP.[9] At the network layer, a distinction exists between the link MTU, which is the hardware-imposed maximum IP packet size (including the IP header but excluding link-layer framing) that can be sent over a single link in one piece, and the path MTU, which is the minimum link MTU along an end-to-end path between source and destination.[10] MTU is typically measured in bytes (or equivalently, octets, which are eight-bit units), encompassing the full PDU size unless specified otherwise.[10] In contexts like IP, this includes protocol headers, though related concepts such as the TCP maximum segment size (MSS) focus on payload excluding transport and network headers to optimize transmission. For instance, just as postal systems impose envelope size limits to ensure efficient handling without splitting contents, network MTUs set boundaries to prevent fragmentation and maintain performance.[1]
Historical Development
The concept of the maximum transmission unit (MTU) originated in early packet-switched networks during the 1970s, particularly with the ARPANET, where link MTUs varied by interface, such as 1006 bytes on ARPANET interfaces to accommodate packet fragmentation across heterogeneous networks.[11] This approach influenced the design of internetworking protocols to handle varying packet sizes without universal standardization at the time. The formalization of MTU in internet protocols came with RFC 791 in 1981, which defined the Internet Protocol (IPv4) and specified that every internet destination must be able to accept datagrams of at least 576 bytes, establishing this as the minimum reassembly buffer size for hosts to support fragmentation and reassembly across diverse networks.[1] Concurrently, the Ethernet standard, initially published as the DIX Ethernet Version 1 specification in 1980 by Digital Equipment Corporation, Intel, and Xerox, set a standard MTU of 1500 bytes for local area networks, balancing error rates and transmission efficiency on 10 Mbps shared media; this was later ratified in IEEE 802.3 in 1983, becoming the foundational MTU for most LAN implementations.[12] Key advancements in the 1990s addressed path variability: RFC 1191 in 1990 introduced Path MTU Discovery (PMTUD), a mechanism for endpoints to dynamically determine the smallest MTU along an internet path, reducing unnecessary fragmentation by allowing senders to adjust packet sizes based on ICMP feedback.[7] The transition to IPv6, outlined in RFC 2460 (1998) and updated in RFC 8200 (2017), raised the minimum link MTU to 1280 bytes to simplify deployment on modern links while prohibiting fragmentation in transit routers, shifting more responsibility to endpoints.[13] Influential standards bodies shaped broader adoption: IEEE 802.3 continued to evolve Ethernet MTU definitions, maintaining 1500 bytes as the baseline while enabling extensions, and ITU-T recommendations, such as those in the 
G-series for transmission systems (e.g., G.7041/Y.1303), incorporated MTU considerations into telecommunications frameworks for optical transport, supporting Ethernet sizes like 1600 octets.[14] In the post-2000 era, support for jumbo frames emerged to enhance efficiency in high-speed environments, with Ethernet implementations allowing MTUs up to 9000 bytes or more in data centers and storage networks, driven by needs for reduced overhead in Gigabit Ethernet and beyond.[15] By the 2020s, 5G networks, governed by 3GPP specifications such as TS 38.323, supported maximum service data unit sizes up to 9000 bytes in the packet data convergence protocol layer, enabling larger MTUs in backhaul and core configurations to optimize throughput for diverse applications like ultra-reliable low-latency communications.[16]
Applicability Across Layers
Data Link Layer Considerations
At the data link layer, the maximum transmission unit (MTU) defines the largest frame size that can be reliably transmitted over a physical medium, encompassing the entire protocol data unit including headers for addressing (such as MAC addresses), control fields, payload, and trailer elements like the frame check sequence (FCS) for integrity verification.[4] This frame-level limit ensures that data is formatted appropriately for the underlying hardware and medium, preventing transmission failures due to oversized units. For instance, in Ethernet networks, the standard maximum frame size is 1518 bytes, which includes 14 bytes for the header and 4 bytes for the FCS, thereby constraining the effective payload to 1500 bytes.[5] The determination of MTU at this layer is heavily influenced by the physical characteristics of the transmission medium, including signal propagation delays, cable lengths, and susceptibility to interference or noise, which can necessitate adjustments to maintain reliable delivery.[5] In environments with high noise or poor signal integrity, such as wireless links, larger frames increase the transmission duration and thus the exposure to errors, often leading to the adoption of smaller MTUs to reduce retransmission overhead and improve overall reliability.[17] Specific media impose distinct limits: Token Ring networks under IEEE 802.5 support a maximum frame size of approximately 4500 bytes on 4 Mbit/s links and up to 18,000 bytes on 16 Mbit/s links, reflecting speed-dependent buffering and timing constraints.[18] In contrast, IEEE 802.11 wireless networks typically limit the maximum service data unit (MSDU) to 2304 bytes, accounting for the challenges of radio signal variability and interference in shared spectrum environments.[19] Hardware components, particularly network interface cards (NICs), enforce these MTU limits through configured buffer capacities and port specifications, dropping or rejecting frames that exceed the supported size to 
avoid processing errors.[20] This enforcement at the data link layer directly bounds the effective MTU available to higher layers, as the network layer must construct packets that fit within the link frame payload after accounting for data link overhead, ensuring seamless encapsulation without mandatory fragmentation at the boundary.[2]
Network Layer Interactions
At the network layer, the maximum transmission unit (MTU) defines the largest size of an IP datagram that can be transmitted over a link without fragmentation, ensuring compatibility across diverse network infrastructures. For IPv4, the minimum link MTU is 68 octets, allowing routers and hosts to forward datagrams of this size without further fragmentation, as specified in the protocol's foundational design.[1] In contrast, IPv6 mandates a higher minimum link MTU of 1280 octets for every link, eliminating reliance on fragmentation at intermediate nodes and promoting end-to-end packet integrity.[13] This distinction reflects IPv6's architectural shift toward larger, fixed-size packets to accommodate modern network demands while simplifying routing processes. Routers handle MTU constraints during packet forwarding by comparing the size of an incoming IP datagram against the MTU of the outgoing interface. If the datagram exceeds the outgoing MTU and the Don't Fragment (DF) bit is clear, the router fragments the datagram into smaller pieces that fit within the limit, adhering to IPv4 fragmentation rules that require minimizing the number of resulting fragments.[21] However, if the DF bit is set, the router drops the datagram and generates an ICMP "Destination Unreachable" message with code 4 (Fragmentation Needed and DF Set), including the next-hop MTU to signal the issue upstream.[21] This mechanism, rooted in core IP specifications, enables adaptive transmission but introduces overhead in processing and reassembly at the destination. In heterogeneous networks, where links support varying MTUs—such as IPv4 datagrams traversing Ethernet segments with a 1500-octet MTU alongside PPP links often limited to 1492 octets due to encapsulation—routers must navigate mismatched path capacities. 
These disparities can necessitate frequent fragmentation or packet drops, complicating end-to-end delivery and increasing latency in mixed environments like those combining wired and dial-up connections.[22] Tunneling protocols exacerbate these challenges by reducing effective MTUs through added headers, potentially leading to undetected mismatches if signaling fails. To mitigate black holes—scenarios where oversized packets are silently discarded without feedback—routers rely on ICMP messages to notify senders of MTU mismatches, allowing adjustments without full path discovery. When a router drops a DF-set packet due to an insufficient outgoing MTU, the ICMP "Fragmentation Needed" response conveys the limiting MTU value, enabling the source to retransmit smaller datagrams. Blocking or loss of these ICMP messages, however, creates persistent black holes, where connections stall as senders fail to adapt, a problem well-documented in TCP implementations over IP paths.[23]
Performance Tradeoffs
Efficiency and Overhead
The efficiency of network transmission is significantly influenced by the MTU size, as larger MTUs generally improve throughput by reducing the relative proportion of header overhead to payload data. For instance, in protocols like TCP/IP, a combined header of 40 bytes (20 bytes for the IPv4 header and 20 bytes for the TCP header) represents approximately 2.7% of a 1500-byte MTU, allowing nearly 97.3% of the packet to carry useful data, whereas the same header constitutes 20% of a 200-byte MTU, severely limiting effective bandwidth utilization.[24][8] This reduction in overhead ratio enables higher overall throughput, particularly in high-bandwidth environments, by minimizing the frequency of header transmissions per unit of data.[24] Overhead can be quantified using the efficiency formula: efficiency = (payload size / total packet size) × 100%, where payload size is the MTU minus the protocol headers. For IPv4 alone, the minimum header is 20 bytes, so for a 1500-byte MTU, the maximum payload is 1480 bytes, yielding an efficiency of about 98.7%; adding a TCP header drops this to 97.3% for the combined 40-byte overhead.[1] In scenarios with small payloads, such as control messages or short bursts, this overhead becomes more pronounced, wasting bandwidth as a larger fraction of each packet is non-data.[25] Beyond bandwidth, processing costs also factor into MTU efficiency tradeoffs, with larger packets requiring more CPU cycles per packet due to increased data handling but fewer overall packets for the same data volume, thus reducing total processing overhead. For example, a 9000-byte jumbo frame replaces six 1500-byte standard frames, eliminating five sets of header processing and associated interrupts, which can lower CPU utilization in high-throughput scenarios.[25][26] Conversely, very small MTUs amplify per-packet processing demands, straining resources on routers and endpoints. 
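The figures above follow directly from the stated efficiency formula; a quick Python check, assuming the combined 40-byte IPv4 plus TCP header used in the text:

```python
# Efficiency = payload / total packet size, expressed as a percentage.
# The default 40-byte overhead assumes minimal IPv4 + TCP headers.
def efficiency(mtu: int, headers: int = 40) -> float:
    return (mtu - headers) / mtu * 100

print(round(efficiency(1500), 1))  # 97.3 — standard Ethernet MTU
print(round(efficiency(200), 1))   # 80.0 — small-MTU case from the text
```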
Guidelines for optimal MTU sizing emphasize balancing these factors based on network scenarios; for typical local area networks (LANs) using Ethernet, an MTU of 1500 bytes provides efficient performance by minimizing overhead without excessive processing demands on standard hardware. In bandwidth-constrained or latency-sensitive environments, slightly smaller MTUs may be preferred to avoid potential delays from larger packet handling, though 1500 bytes remains the default for most general-purpose LAN efficiency.[25]
Fragmentation Challenges
In IPv4, fragmentation occurs when a router encounters a packet larger than the outgoing link's MTU and the Don't Fragment (DF) bit in the IP header's flags field is set to 0. The flags field consists of three bits: the most significant bit is reserved (set to 0), the DF bit (bit 1) indicates whether fragmentation is prohibited, and the More Fragments (MF) bit (bit 2) signals if additional fragments follow. The 16-bit Identification (IP ID) field assigns a unique value to all fragments of the same original datagram for reassembly, while the 13-bit Fragment Offset field denotes each fragment's position relative to the start of the original data, in units of 8 octets. The MF bit is set to 1 in all but the final fragment. IPv6, by design, prohibits routers from fragmenting packets to simplify forwarding and reduce state management; only the source host performs fragmentation if the packet exceeds the path MTU, inserting a Fragment Header into oversized packets. This header includes a 32-bit Identification field (analogous to IPv4's IP ID for matching fragments), a 1-bit M flag (equivalent to MF), and a 13-bit Fragment Offset field, with fragments sized in multiples of 8 octets to fit the path MTU. The first fragment carries the full set of headers up to the upper-layer protocol, while subsequent fragments include only the IPv6 header, routing headers, and payload data.[13] Reassembly of fragments takes place exclusively at the destination host in both IPv4 and IPv6, where the receiver must buffer incoming fragments, use the IP ID (or equivalent) and offset values to order them, and reconstruct the original packet once complete. This process burdens the end host's CPU with significant overhead from memory buffering, fragment matching, and validation, particularly under high traffic loads where multiple datagrams require simultaneous reassembly.
Incomplete fragment sets pose additional risks; if all pieces do not arrive within a configurable timeout—often 15 seconds for IPv4 upon receipt of the first fragment—the buffered fragments are discarded, triggering upper-layer retransmissions and increasing latency.[27] Fragmentation introduces notable challenges, including security vulnerabilities that enable amplification attacks. The Teardrop attack, for example, sends malformed or overlapping IP fragments with inconsistent offsets to exploit reassembly logic flaws, causing the target system to crash or hang during reconstruction due to improper handling of the bogus packet. Performance suffers from the transmission of multiple smaller packets, which amplifies per-packet header overhead and reduces effective bandwidth; in lossy networks, losing even one fragment invalidates the entire datagram, necessitating full retransmission and compounding delays, especially at high data rates where the 16-bit IP ID field risks collisions and duplicate discards.[28][29] Mitigation strategies emphasize avoiding fragmentation altogether by preferring end-to-end non-fragmented transmission through accurate path MTU estimation. In IPv4, setting the DF bit to 1 prevents router fragmentation, prompting oversized packets to be dropped with an ICMP "Destination Unreachable—Fragmentation Needed" message to inform the sender of the limiting MTU for adjustment. RFC 8900 underscores IP fragmentation's inherent fragilities—such as reassembly timeouts, ID exhaustion, and attack surfaces—and advocates for its deprecation in favor of robust Path MTU Discovery and conservative MTU configurations to ensure reliable, secure packet delivery. In practical scenarios, such as VPN clients where encapsulation adds overhead, a common mitigation is to manually lower the MTU value in the client's advanced settings, for example to 1400 bytes, to prevent packet fragmentation.[29][8][30][31]
Protocol-Specific Implementations
MTUs in Common Network Media
The Maximum Transmission Unit (MTU) varies across common network media due to differences in physical layer constraints, encapsulation protocols, and performance optimizations. For instance, traditional Ethernet networks standardize on 1500 bytes for the payload, excluding headers, to balance efficiency on shared media. In contrast, technologies like PPP over Ethernet (PPPoE) reduce this to 1492 bytes to accommodate the 8 bytes of PPPoE encapsulation overhead. Asynchronous Transfer Mode (ATM) networks support larger MTUs, with a default of 9180 bytes for AAL5 frames, enabling efficient handling of variable-length data over fixed-size cells. Multiprotocol Label Switching (MPLS) allows adjustable MTUs up to 9198 bytes, depending on label stacking and underlying media, to support diverse traffic engineering needs.[32][33]

| Network Media/Technology | Standard MTU (Bytes) | Notes |
|---|---|---|
| Ethernet (IEEE 802.3) | 1500 | Payload size; excludes the 14-byte header and 4-byte FCS (18 bytes of overhead). Jumbo frames optional up to 9000+ on supported hardware. |
| PPPoE (over DSL/Ethernet) | 1492 | Accounts for 8-byte PPPoE header; common in broadband wired access. |
| ATM (AAL5) | 9180 | Default IP MTU for user data in the SAR-PDU; cells are fixed at 53 bytes.[32] |
| MPLS | Up to 9198 | Adjustable based on label stack (4 bytes per label); often matches underlying MTU.[33] |
| DSL (e.g., ADSL/VDSL) | 1492 | Typically via PPPoE; wired copper-based access with encapsulation limits. |
| Fiber Optic (10G Ethernet) | 9000+ (jumbo) | Supports larger frames for high-speed backbones; standard 1500 also viable. |
| Satellite (e.g., DVB-S2) | Variable (typically ≤1500) | Influenced by high latency and error correction; often lower to mitigate retransmissions.[34] |
| Wi-Fi (IEEE 802.11) | 2304 (MSDU), typically 1500 for IP | Frame aggregation (A-MSDU) permits larger payloads; IP traffic is usually limited to the Ethernet standard for compatibility.[35] |
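The overhead arithmetic behind several table entries can be reproduced directly. The constants below restate the header sizes given above; nothing here is drawn from an external source:

```python
# Header and trailer sizes from the table and surrounding text, in bytes.
ETH_HEADER, ETH_FCS = 14, 4   # untagged IEEE 802.3 header and frame check sequence
VLAN_TAG = 4                  # IEEE 802.1Q tag
PPPOE_OVERHEAD = 8            # PPPoE encapsulation overhead

eth_mtu = 1500                                     # standard Ethernet payload MTU
pppoe_mtu = eth_mtu - PPPOE_OVERHEAD               # effective IP MTU over PPPoE
standard_frame = eth_mtu + ETH_HEADER + ETH_FCS    # on-wire frame size
tagged_frame = standard_frame + VLAN_TAG           # 802.1Q "baby giant" frame
```

Running these yields 1492 for PPPoE, 1518 for an untagged full-size frame, and 1522 with a VLAN tag, matching the figures quoted throughout this section.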
Ethernet Frame Size Variations
The standard Ethernet frame, as defined in IEEE 802.3, supports a maximum transmission unit (MTU) of 1500 bytes for the payload, resulting in a total frame size of 1518 bytes when including the 14-byte header (destination and source MAC addresses plus length/type field) and 4-byte frame check sequence (FCS).[26] This configuration ensures compatibility across legacy and modern Ethernet implementations while maintaining efficient transmission for typical network traffic. To accommodate VLAN tagging under IEEE 802.1Q, the IEEE 802.3ac amendment extends the maximum frame size to 1522 bytes by adding a 4-byte tag, producing what is often called a "baby giant" frame; this preserves the 1500-byte payload MTU without requiring jumbo frame support.[36] Such extensions are common in environments using virtual LANs for segmentation, trading a slight increase in overhead for greater network flexibility.[26]

Jumbo frames extend the Ethernet payload beyond 1500 bytes, typically up to 9000 bytes in implementations for 10 Gigabit Ethernet (10GbE) and higher speeds, reducing overhead in high-throughput scenarios such as data centers.[26] RFC 4638, published in 2006, accommodates MTUs greater than 1492 bytes in Point-to-Point Protocol over Ethernet (PPPoE), enabling jumbo frame support up to approximately 9000 bytes in encapsulated environments.[37] These larger frames are widely adopted on 10GbE and faster links to minimize CPU processing cycles per byte transferred.[26] In provider bridging networks defined by IEEE 802.1ad, the maximum frame size reaches 9216 bytes to support stacked VLAN tags (QinQ) and service provider scaling, allowing efficient tunneling of customer traffic across multiple domains.[38]

Frame size handling must also account for control traffic, such as IEEE 802.3x pause frames (minimum-size 64-byte frames) and Link Aggregation Control Protocol (LACP) frames under IEEE 802.3ad, which operate within standard sizes but influence buffer configurations in mixed environments.[5] Jumbo frame compatibility relies on manual configuration across all devices, as Ethernet autonegotiation covers only speed and duplex, not MTU; mismatches in mixed jumbo and non-jumbo networks lead to frame drops or fragmentation, so uniform end-to-end support is required to avoid performance degradation.[5]

Path MTU Discovery Mechanisms
Path MTU Discovery (PMTUD) enables a source host to dynamically determine the effective maximum transmission unit (MTU) along the network path to a destination, minimizing fragmentation by adjusting packet sizes accordingly. The core mechanism has the sender set the Don't Fragment (DF) bit in IP headers and transmit packets at its current path MTU estimate. If a packet exceeds the MTU of any link along the path, the router there discards it and sends back an ICMP "Destination Unreachable" message with Type 3 and Code 4 ("Fragmentation Needed"), including the MTU of the constricting next-hop link. The sender then lowers its path MTU estimate to this reported value and may later probe with larger packets to detect path changes; rather than testing every size, RFC 1191 suggests searching a short table of common MTU plateau values for efficiency.[7]

In IPv4 networks, PMTUD supplements the legacy capability of routers to fragment packets, though such fragmentation is now deprecated due to performance overhead and security risks. With PMTUD, the initial estimate is typically the MTU of the first-hop link; absent discovery, IPv4 hosts traditionally fall back to a conservative 576 bytes for nonlocal destinations, a value that ensures compatibility across diverse links. Black hole detection addresses scenarios where ICMP feedback is filtered or lost: if packets time out unacknowledged after multiple retransmissions, the sender assumes a path MTU reduction and lowers its estimate, typically stepping down through common MTU values, until connectivity resumes.[7]

IPv6 mandates stricter adherence to PMTUD, as routers cannot fragment packets; any oversized packet is dropped, and the source must rely solely on discovery to avoid failures. The IPv6 PMTUD algorithm mirrors IPv4's but uses ICMPv6 "Packet Too Big" messages (Type 2) instead of ICMP Type 3 Code 4, with a guaranteed minimum path MTU of 1280 bytes as the protocol's baseline. Probing proceeds similarly, with DF semantics implicit in the IPv6 header since intermediate routers never fragment, and black hole mitigation employs transport-layer timeouts, ensuring end-to-end adaptation without intermediate intervention.[39]

A key extension integrates PMTUD with TCP's Maximum Segment Size (MSS) negotiation. During the TCP three-way handshake, the sender clamps the effective MSS to the current path MTU minus 40 bytes for IPv4 (20-byte IP header plus 20-byte TCP header), or minus 60 bytes for IPv6, preventing segments that would require fragmentation. This adjustment, repeated as the path MTU estimate is refined, operates without altering the core discovery algorithm.[7]

To mitigate vulnerabilities in classic PMTUD, such as its reliance on potentially unreliable or spoofable ICMP messages, Packetization Layer Path MTU Discovery (PLPMTUD) introduces resilience through transport-layer feedback. Standardized in 2007 in RFC 4821, PLPMTUD leverages packetization protocols such as TCP to infer the path MTU from delivery success or loss rather than from ICMP alone: the sender maintains a search range between a safe floor (e.g., 1280 bytes for IPv6) and a ceiling such as 9000 bytes, raising the floor when a probe is acknowledged and lowering the ceiling when a probe is lost. Because probe loss is treated as ordinary transport feedback, the approach is robust against the packet drops that mimic black holes, and it has been widely adopted for its compatibility with datagram transports and reduced dependency on network-layer signals.[10]

In practice, when automated PMTUD is impaired (for example, by filtered ICMP messages leading to black holes) or for performance fine-tuning, manual path MTU determination with diagnostic tools is common. A standard method employs the ping utility with the "don't fragment" flag to identify the largest non-fragmenting packet size.
On Windows systems, the command ping -f -l [size] [destination] (e.g., ping -f -l 1472 8.8.8.8) is used iteratively to find the maximum payload size that succeeds without fragmentation; the path MTU is then this size plus 28 bytes (20-byte IPv4 header + 8-byte ICMP header). This technique is particularly useful for connections with encapsulation overhead, such as PPPoE, where the effective MTU is typically 1492 bytes because the 8-byte PPPoE overhead is subtracted from the standard 1500-byte Ethernet MTU. The PPPoE specification requires that the MTU not exceed 1492 bytes.[40]
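The iterative narrowing described above can be sketched as a binary search against a simulated path. `search_path_mtu` is a hypothetical helper for illustration; in a real probe the `fits` check would be replaced by an actual DF-set ping of the corresponding payload size:

```python
def search_path_mtu(path_mtu, lo=1280, hi=9000):
    """Binary-search the largest packet size a simulated path accepts.

    `path_mtu` stands in for the network: a real implementation would
    replace `fits(size)` with a DF-set probe, e.g. `ping -f -l <size - 28>`
    on Windows (28 bytes cover the IPv4 and ICMP headers).
    """
    def fits(size):
        return size <= path_mtu   # simulated delivery check

    if not fits(lo):
        return None               # path MTU is below the assumed floor
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if fits(mid):
            lo = mid              # probe delivered: raise the floor
        else:
            hi = mid - 1          # probe lost: lower the ceiling
    return lo

# A PPPoE path (MTU 1492) converges in about a dozen probes; the largest
# successful ping payload on such a path is 1492 - 28 = 1464 bytes.
mtu = search_path_mtu(1492)
```

Halving the search range each round mirrors the floor/ceiling narrowing PLPMTUD performs, and needs far fewer probes than stepping the size one byte at a time.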
In specific applications, such as online gaming over PPPoE connections (for example, in Valorant), users have reported reduced lag, desync, and inconsistent performance after manually setting the client interface MTU to the tested maximum non-fragmenting value, commonly 1492 bytes or lower (e.g., 1450–1472 bytes), especially when automatic discovery yields suboptimal results.
