100 Gigabit Ethernet
from Wikipedia

40 Gigabit Ethernet (40GbE) and 100 Gigabit Ethernet (100GbE) are groups of computer networking technologies for transmitting Ethernet frames at rates of 40 and 100 gigabits per second (Gbit/s), respectively. These technologies offer significantly higher speeds than 10 Gigabit Ethernet. The technology was first defined by the IEEE 802.3ba-2010 standard[1] and later by the 802.3bg-2011, 802.3bj-2014,[2] 802.3bm-2015,[3] and 802.3cd-2018 standards. The first succeeding Terabit Ethernet specifications were approved in 2017.[4]

The standards define numerous port types with different optical and electrical interfaces and different numbers of optical fiber strands per port. Short distances (e.g., 7 m) over twinaxial cable are supported, while fiber standards reach up to 80 km.

Standards development


On July 18, 2006, a call for interest for a High Speed Study Group (HSSG) to investigate new standards for high speed Ethernet was held at the IEEE 802.3 plenary meeting in San Diego.[5]

The first 802.3 HSSG study group meeting was held in September 2006.[6] In June 2007, a trade group called "Road to 100G" was formed after the NXTcomm trade show in Chicago.[7]

On December 5, 2007, the Project Authorization Request (PAR) for the P802.3ba 40 Gbit/s and 100 Gbit/s Ethernet Task Force was approved with the following project scope:[8]

The purpose of this project is to extend the 802.3 protocol to operating speeds of 40 Gbit/s and 100 Gbit/s in order to provide a significant increase in bandwidth while maintaining maximum compatibility with the installed base of 802.3 interfaces, previous investment in research and development, and principles of network operation and management. The project is to provide for the interconnection of equipment satisfying the distance requirements of the intended applications.

The 802.3ba task force met for the first time in January 2008.[9] This standard was approved at the June 2010 IEEE Standards Board meeting under the name IEEE Std 802.3ba-2010.[10]

The first 40 Gbit/s Ethernet Single-mode Fibre PMD study group meeting was held in January 2010 and on March 25, 2010, the P802.3bg Single-mode Fibre PMD Task Force was approved for the 40 Gbit/s serial SMF PMD.

The scope of this project is to add a single-mode fiber Physical Medium Dependent (PMD) option for serial 40 Gbit/s operation by specifying additions to, and appropriate modifications of, IEEE Std 802.3-2008 as amended by the IEEE P802.3ba project (and any other approved amendment or corrigendum).

On June 17, 2010, the IEEE 802.3ba standard was approved.[1][11] In March 2011, the IEEE 802.3bg standard was approved.[12] On September 10, 2011, the P802.3bj 100 Gbit/s Backplane and Copper Cable task force was approved.[2]

The scope of this project is to specify additions to and appropriate modifications of IEEE Std 802.3 to add 100 Gbit/s 4-lane Physical Layer (PHY) specifications and management parameters for operation on backplanes and twinaxial copper cables, and specify optional Energy Efficient Ethernet (EEE) for 40 Gbit/s and 100 Gbit/s operation over backplanes and copper cables.

On May 10, 2013, the P802.3bm 40 Gbit/s and 100 Gbit/s Fiber Optic Task Force was approved.[3]

This project is to specify additions to and appropriate modifications of IEEE Std 802.3 to add 100 Gbit/s Physical Layer (PHY) specifications and management parameters, using a four-lane electrical interface for operation on multimode and single-mode fiber optic cables, and to specify optional Energy Efficient Ethernet (EEE) for 40 Gbit/s and 100 Gbit/s operation over fiber optic cables. In addition, to add 40 Gbit/s Physical Layer (PHY) specifications and management parameters for operation on extended reach (>10 km) single-mode fiber optic cables.

Also on May 10, 2013, the P802.3bq 40GBASE-T Task Force was approved.[13]

Specify a Physical Layer (PHY) for operation at 40 Gbit/s on balanced twisted-pair copper cabling, using existing Media Access Control, and with extensions to the appropriate physical layer management parameters.

On June 12, 2014, the IEEE 802.3bj standard was approved.[2]

On February 16, 2015, the IEEE 802.3bm standard was approved.[14]

On May 12, 2016, the IEEE P802.3cd Task Force started working to define a next-generation two-lane 100 Gbit/s PHY.[15]

On May 14, 2018, the PAR for the IEEE P802.3ck Task Force was approved. The scope of this project is to specify additions to and appropriate modifications of IEEE Std 802.3 to add Physical Layer specifications and Management Parameters for 100 Gbit/s, 200 Gbit/s, and 400 Gbit/s electrical interfaces based on 100 Gbit/s signaling.[16]

On December 5, 2018, the IEEE-SA Board approved the IEEE 802.3cd standard.

On November 12, 2018, the IEEE P802.3ct Task Force started working to define a PHY supporting 100 Gbit/s operation on a single wavelength capable of at least 80 km over a DWDM system (using a combination of phase and amplitude modulation with coherent detection).[17]

In May 2019, the IEEE P802.3cu Task Force started working to define single-wavelength 100 Gbit/s PHYs for operation over SMF (Single-Mode Fiber) with lengths up to at least 2 km (100GBASE-FR1) and 10 km (100GBASE-LR1).[18]

In June 2020, the IEEE P802.3db Task Force started working to define a physical layer specification that supports 100 Gbit/s operation over 1 pair of MMF with lengths up to at least 50 m.[19]

On February 11, 2021, the IEEE-SA Board approved the IEEE 802.3cu standard.[20]

On June 16, 2021, the IEEE-SA Board approved the IEEE 802.3ct standard.[21]

On September 21, 2022, the IEEE-SA Board approved the IEEE 802.3ck and 802.3db standards.[22]

Early products


Optical signal transmission over a nonlinear medium is principally an analog design problem. As such, it has evolved more slowly than digital circuit lithography (which generally progressed in step with Moore's law). This explains why 10 Gbit/s transport systems had existed since the mid-1990s while the first forays into 100 Gbit/s transmission happened about 15 years later: a 10× speed increase over 15 years is far slower than the 2× per 1.5 years typically cited for Moore's law.

Nevertheless, at least five firms (Ciena, Alcatel-Lucent, MRV, ADVA Optical and Huawei) had made customer announcements for 100 Gbit/s transport systems by August 2011, with varying capabilities.[23] Although vendors claimed that 100 Gbit/s light paths could use existing analog optical infrastructure, deployment of the high-speed technology was tightly controlled, and extensive interoperability tests were required before moving it into service.

Designing routers or switches that support 100 Gbit/s interfaces is difficult. One reason is the need to process a 100 Gbit/s stream of packets at line rate without reordering within IP/MPLS microflows.

As of 2011, most components in the 100 Gbit/s packet processing path (PHY chips, NPUs, memories) were not readily available off the shelf or required extensive qualification and co-design. Another problem was the low-volume production of 100 Gbit/s optical components, which were also not easily available, especially in pluggable, long-reach or tunable-laser variants.

Backplane


NetLogic Microsystems announced backplane modules in October 2010.[24]

Multimode fiber


In 2009, Mellanox[25] and Reflex Photonics[26] announced modules based on the CFP agreement.

Single mode fiber


Finisar,[27] Sumitomo Electric Industries,[28] and OpNext[29] all demonstrated single-mode 40 or 100 Gbit/s Ethernet modules based on the C form-factor pluggable (CFP) agreement at the European Conference and Exhibition on Optical Communication in 2009. The first lasers for 100GbE were demonstrated in 2008.[30]

Compatibility


Optical fiber IEEE 802.3ba implementations were not compatible with the numerous 40 and 100 Gbit/s line-rate transport systems because they used different optical layers and modulation formats, as the IEEE 802.3ba interface types show. In particular, existing 40 Gbit/s transport solutions that used dense wavelength-division multiplexing to pack four 10 Gbit/s signals into one optical medium were not compatible with the IEEE 802.3ba standard, which used either coarse WDM in the 1310 nm wavelength region with four 25 Gbit/s or ten 10 Gbit/s channels, or parallel optics with four or ten optical fibers per direction.

Test and measurement

  • Quellan announced a test board in 2009.[31]
  • Ixia developed Physical Coding Sublayer Lanes[32] and demonstrated a working 100GbE link through a test setup at NXTcomm in June 2008.[33] Ixia announced test equipment in November 2008.[34][35]
  • Discovery Semiconductors introduced optoelectronics converters for 100 Gbit/s testing of the 10 km and 40 km Ethernet standards in February 2009.[36]
  • JDS Uniphase (now VIAVI Solutions) introduced test and measurement products for 40 and 100 Gbit/s Ethernet in August 2009.[37]
  • Spirent Communications introduced test and measurement products in September 2009.[38]
  • EXFO demonstrated interoperability in January 2010.[39]
  • Xena Networks demonstrated test equipment at the Technical University of Denmark in January 2011.[40][41]
  • Calnex Solutions introduced 100GbE Synchronous Ethernet synchronisation test equipment in November 2014.[42]
  • Spirent Communications introduced the Attero-100G for 100GbE and 40GbE impairment emulation in April 2015.[43][44]
  • VeEX[45] introduced its CFP-based UX400-100GE and 40GE test and measurement platform in 2012,[46] followed by CFP2, CFP4, QSFP28 and QSFP+ versions in 2015.[47][48]

Mellanox Technologies


Mellanox Technologies introduced the ConnectX-4 100GbE single- and dual-port adapter in November 2014.[49] In the same period, Mellanox announced availability of 100GbE copper and fiber cables.[50] In June 2015, Mellanox introduced the Spectrum 10, 25, 40, 50 and 100GbE switch models.[51]

Aitia


Aitia International introduced the C-GEP FPGA-based switching platform in February 2013.[52] Aitia also produces 100G/40G Ethernet PCS/PMA+MAC IP cores for FPGA developers and academic researchers.[53]

Arista


Arista Networks introduced the 7500E switch (with up to 96 100GbE ports) in April 2013.[54] In July 2014, Arista introduced the 7280E switch (the world's first top-of-rack switch with 100G uplink ports).[55]

Extreme Networks


Extreme Networks introduced a four-port 100GbE module for the BlackDiamond X8 core switch in November 2012.[56]

Dell


Dell's Force10 switches support 40 Gbit/s interfaces. These 40 Gbit/s fiber-optic interfaces using QSFP+ transceivers can be found on the Z9000 distributed core switches, the S4810 and S4820,[57] as well as on the MXL blade switches and the IO Aggregator. The Dell PowerConnect 8100 series switches also offer 40 Gbit/s QSFP+ interfaces.[58]

Chelsio


Chelsio Communications introduced 40 Gbit/s Ethernet network adapters (based on the fifth generation of its Terminator architecture) in June 2013.[59]

Telesoft Technologies Ltd


Telesoft Technologies announced the dual 100G PCIe accelerator card, part of the MPAC-IP series.[60] Telesoft also announced the STR 400G (Segmented Traffic Router)[61] and the 100G MCE (Media Converter and Extension).[62]

Commercial trials and deployments


Unlike the "race to 10 Gbit/s" that was driven by the imminent need to address growth pains of the Internet in the late 1990s, customer interest in 100 Gbit/s technologies was mostly driven by economic factors. The common reasons to adopt the higher speeds were:[63]

  • to reduce the number of optical wavelengths ("lambdas") used and the need to light new fiber
  • to utilize bandwidth more efficiently than 10 Gbit/s link aggregate groups
  • to provide cheaper wholesale, internet peering and data center connectivity
  • to skip the relatively expensive 40 Gbit/s technology and move directly from 10 to 100 Gbit/s

Alcatel-Lucent


In November 2007, Alcatel-Lucent held the first field trial of 100 Gbit/s optical transmission. Completed over a live, in-service 504 kilometre portion of the Verizon network, it connected the Florida cities of Tampa and Miami.[64]

100GbE interfaces for the 7450 ESS/7750 SR service routing platform were first announced in June 2009, with field trials with Verizon,[65] T-Systems and Portugal Telecom taking place in June–September 2010. In September 2009, Alcatel-Lucent combined the 100G capabilities of its IP routing and optical transport portfolio in an integrated solution called Converged Backbone Transformation.[66]

In June 2011, Alcatel-Lucent introduced a packet processing architecture known as FP3, advertised for 400 Gbit/s rates.[67] Alcatel-Lucent announced the XRS 7950 core router (based on the FP3) in May 2012.[68][69]

Brocade


Brocade Communications Systems introduced their first 100GbE products (based on the former Foundry Networks MLXe hardware) in September 2010.[70] In June 2011, the new product went live at the AMS-IX traffic exchange point in Amsterdam.[71]

Cisco


Cisco Systems and Comcast announced their 100GbE trials in June 2008.[72] However, it is doubtful that this transmission could approach 100 Gbit/s speeds when using a 40 Gbit/s per slot CRS-1 platform for packet processing. Cisco's first deployment of 100GbE at AT&T and Comcast took place in April 2011.[73] In the same year, Cisco tested the 100GbE interface between the CRS-3 and a new generation of their ASR9K edge router model.[74] In 2017, Cisco announced a 32-port 100GbE Cisco Catalyst 9500 Series switch,[75] and in 2019 the modular Catalyst 9600 Series switch with a 100GbE line card.[76]

Huawei


In October 2008, Huawei presented its first 100GbE interface for the NE5000e router.[77] In September 2009, Huawei also demonstrated an end-to-end 100 Gbit/s link.[78] Huawei's products used the self-developed "Solar 2.0 PFE2A" NPU and pluggable CFP optics.

In a mid-2010 product brief, the NE5000e linecards were given the commercial name LPUF-100 and credited with using two Solar-2.0 NPUs per 100GbE port in opposite (ingress/egress) configuration.[79] Nevertheless, in October 2010, the company referred to shipments of the NE5000e to the Russian cell operator "Megafon" as a "40 GBPS/slot" solution with "scalability up to" 100 Gbit/s.[80]

In April 2011, Huawei announced that the NE5000e had been updated to carry 2×100GbE interfaces per slot using LPU-200 linecards.[81] In a related solution brief, Huawei reported 120 thousand Solar 1.0 integrated circuits shipped to customers, but gave no Solar 2.0 numbers.[82] Following the August 2011 trial in Russia, Huawei reported paying customers for 100 Gbit/s DWDM, but no 100GbE shipments on the NE5000e.[83]

Juniper


Juniper Networks announced 100GbE for its T-series routers in June 2009.[84] The 1×100GbE option followed in November 2010, when a joint press release with the academic backbone network Internet2 marked the first production 100GbE interfaces going live in a real network.[85]

In the same year, Juniper demonstrated 100GbE operation between core (T-series) and edge (MX 3D) routers.[86] In March 2011, Juniper announced the first shipments of 100GbE interfaces to a major North American service provider, Verizon.[87]

In April 2011, Juniper deployed a 100GbE system on the UK education network JANET.[88] In July 2011, Juniper announced 100GbE with the Australian ISP iiNet on its T1600 routing platform.[89] Juniper started shipping the MPC3E line card for the MX router, a 100GbE CFP MIC, and 100GbE LR4 CFP optics in March 2012.[citation needed] In spring 2013, Juniper Networks announced the availability of the MPC4E line card for the MX router, which includes two 100GbE CFP slots and eight 10GbE SFP+ interfaces.[citation needed]

In June 2015, Juniper Networks announced the availability of its CFP-100GBASE-ZR module, a plug-and-play solution that brings 80 km 100GbE to MX- and PTX-based networks.[90] The CFP-100GBASE-ZR module uses DP-QPSK modulation and coherent receiver technology with an optimized DSP and FEC implementation. The low-power module can be directly retrofitted into existing CFP sockets on MX and PTX routers.

Standards


The IEEE 802.3 working group is concerned with the maintenance and extension of the Ethernet data communications standard. Additions to the 802.3 standard[91] are performed by task forces which are designated by one or two letters. For example, the 802.3z task force drafted the original Gigabit Ethernet standard.

802.3ba is the designation given to the higher-speed Ethernet task force, which completed its work of modifying the 802.3 standard to support speeds higher than 10 Gbit/s in 2010.

The speeds chosen by 802.3ba were 40 and 100 Gbit/s, to support endpoint and link-aggregation needs respectively. This was the first time two different Ethernet speeds were specified in a single standard. The decision to include both speeds came from pressure to support the 40 Gbit/s rate for local server applications and the 100 Gbit/s rate for internet backbones. The standard was announced in July 2007[92] and was ratified on June 17, 2010.[10]

A 40G-SR4 transceiver in the QSFP form factor

The 40/100 Gigabit Ethernet standards encompass a number of different Ethernet physical layer (PHY) specifications. A networking device may support different PHY types by means of pluggable modules. Optical modules are not standardized by any official standards body but are specified in multi-source agreements (MSAs). One agreement that supports 40 and 100 Gigabit Ethernet is the CFP MSA,[93] which was adopted for distances of 100+ meters. QSFP and CXP connector modules support shorter distances.[94]

The standard supports only full-duplex operation.[95] Other objectives include:

  • Preserve the 802.3 Ethernet frame format utilizing the 802.3 MAC
  • Preserve minimum and maximum frame size of current 802.3 standard
  • Support a bit error rate (BER) better than or equal to 10⁻¹² at the MAC/PLS service interface
  • Provide appropriate support for OTN
  • Support MAC data rates of 40 and 100 Gbit/s
  • Provide physical layer specifications (PHY) for operation over single-mode optical fiber (SMF), laser optimized multi-mode optical fiber (MMF) OM3 and OM4, copper cable assembly, and backplane.

The following nomenclature is used for the physical layers:[2][3][96]

| Physical layer | 40 Gigabit Ethernet | 100 Gigabit Ethernet |
|---|---|---|
| Backplane | | 100GBASE-KP4 |
| Improved backplane | 40GBASE-KR4 | 100GBASE-KR4, 100GBASE-KR2 |
| 7 m over twinax copper cable | 40GBASE-CR4 | 100GBASE-CR10, 100GBASE-CR4, 100GBASE-CR2 |
| 30 m over Category 8 twisted pair | 40GBASE-T | |
| 100 m over OM3 MMF / 125 m over OM4 MMF[94] | 40GBASE-SR4 | 100GBASE-SR10, 100GBASE-SR4, 100GBASE-SR2 |
| 500 m over SMF, serial | | 100GBASE-DR |
| 2 km over SMF, serial | 40GBASE-FR | 100GBASE-FR1 |
| 10 km over SMF | 40GBASE-LR4 | 100GBASE-LR4, 100GBASE-LR1 |
| 40 km over SMF | 40GBASE-ER4 | 100GBASE-ER4 |
| 80 km over SMF | | 100GBASE-ZR |

The 100 m laser-optimized multi-mode fiber (OM3) objective was met by parallel ribbon cable with 850 nm 10GBASE-SR-like optics (40GBASE-SR4 and 100GBASE-SR10). The backplane objective was met with four lanes of 10GBASE-KR-type PHYs (40GBASE-KR4). The copper cable objective was met with 4 or 10 differential lanes using SFF-8642 and SFF-8436 connectors. The 10 and 40 km 100 Gbit/s objectives were met with four wavelengths (around 1310 nm) of 25 Gbit/s optics (100GBASE-LR4 and 100GBASE-ER4), and the 10 km 40 Gbit/s objective with four wavelengths (around 1310 nm) of 10 Gbit/s optics (40GBASE-LR4).[97]

In January 2010 another IEEE project authorization started a task force to define a 40 Gbit/s serial single-mode optical fiber standard (40GBASE-FR). This was approved as standard 802.3bg in March 2011.[12] It used 1550 nm optics, had a reach of 2 km, and was capable of receiving both 1550 nm and 1310 nm wavelengths of light. The capability to receive 1310 nm light allows it to interoperate with a longer-reach 1310 nm PHY, should one ever be developed. 1550 nm was chosen as the wavelength for 802.3bg transmission to make it compatible with existing test equipment and infrastructure.[98]

In December 2010, a 10x10 multi-source agreement (10x10 MSA) began to define an optical Physical Medium Dependent (PMD) sublayer and establish compatible sources of low-cost, low-power, pluggable optical transceivers based on 10 optical lanes at 10 Gbit/s each.[99] The 10x10 MSA was intended as a lower-cost alternative to 100GBASE-LR4 for applications that do not require a link length longer than 2 km. It was intended for use with standard single-mode G.652.C/D-type low-water-peak cable with ten wavelengths ranging from 1523 to 1595 nm. The founding members were Google, Brocade Communications, JDSU and Santur.[100] Other member companies of the 10x10 MSA included MRV, Enablence, Cyoptics, AFOP, Oplink, Hitachi Cable America, AMS-IX, EXFO, Huawei, Kotura, Facebook and Effdon when the 2 km specification was announced in March 2011.[101] The 10x10 MSA modules were intended to be the same size as the CFP specification.

On June 12, 2014, the 802.3bj standard was approved. The 802.3bj standard specifies 100 Gbit/s 4x25G PHYs - 100GBASE-KR4, 100GBASE-KP4 and 100GBASE-CR4 - for backplane and twin-ax cable.

On February 16, 2015, the 802.3bm standard was approved. The 802.3bm standard specifies a lower-cost optical 100GBASE-SR4 PHY for MMF and a four-lane chip-to-module and chip-to-chip electrical specification (CAUI-4). The detailed objectives for the 802.3bm project can be found on the 802.3 website.

On May 14, 2018, the 802.3ck project was approved. This has objectives to:[102]

  • Define a single-lane 100 Gbit/s Attachment Unit interface (AUI) for chip-to-module applications, compatible with PMDs based on 100 Gbit/s per lane optical signaling (100GAUI-1 C2M)
  • Define a single-lane 100 Gbit/s Attachment Unit Interface (AUI) for chip-to-chip applications (100GAUI-1 C2C)
  • Define a single-lane 100 Gbit/s PHY for operation over electrical backplanes supporting an insertion loss ≤ 28 dB at 26.56 GHz (100GBASE-KR1).
  • Define a single-lane 100 Gbit/s PHY for operation over twin-axial copper cables with lengths up to at least 2 m (100GBASE-CR1).

On November 12, 2018, the IEEE P802.3ct Task Force started working to define a PHY supporting 100 Gbit/s operation on a single wavelength capable of at least 80 km over a DWDM system (100GBASE-ZR), using a combination of phase and amplitude modulation with coherent detection.

On December 5, 2018, the 802.3cd standard was approved. The 802.3cd standard specifies PHYs using 50 Gbit/s lanes (100GBASE-KR2 for backplane, 100GBASE-CR2 for twinax cable, 100GBASE-SR2 for MMF) and, using 100 Gbit/s signalling, 100GBASE-DR for SMF.

In June 2020, the IEEE P802.3db Task Force started working to define a physical layer specification that supports 100 Gbit/s operation over 1 pair of MMF with lengths up to at least 50 m.[19]

On February 11, 2021, the IEEE 802.3cu standard was approved. The IEEE 802.3cu standard defines single-wavelength 100 Gbit/s PHYs for operation over SMF (Single-Mode Fiber) with lengths up to at least 2 km (100GBASE-FR1) and 10 km (100GBASE-LR1).

100G interface types

Legend for fibre-based PHYs[103]

| Fibre type | Introduced | Performance |
|---|---|---|
| MMF FDDI 62.5/125 µm | 1987 | 160 MHz·km @ 850 nm |
| MMF OM1 62.5/125 µm | 1989 | 200 MHz·km @ 850 nm |
| MMF OM2 50/125 µm | 1998 | 500 MHz·km @ 850 nm |
| MMF OM3 50/125 µm | 2003 | 1500 MHz·km @ 850 nm |
| MMF OM4 50/125 µm | 2008 | 3500 MHz·km @ 850 nm |
| MMF OM5 50/125 µm | 2016 | 3500 MHz·km @ 850 nm + 1850 MHz·km @ 950 nm |
| SMF OS1 9/125 µm | 1998 | 1.0 dB/km @ 1300/1550 nm |
| SMF OS2 9/125 µm | 2000 | 0.4 dB/km @ 1300/1550 nm |

1st generation (10GbE-based). Data rate: 100 Gbit/s; line code: 64b/66b × NRZ; line rate: 10 × 10.3125 GBd = 103.125 GBd; full duplex.[104][105][106]

| Name | Standard | Status | Media | Connector | Transceiver module | Reach | # Media (⇆) | # Lambdas (→) | # Lanes (→) | Notes |
|---|---|---|---|---|---|---|---|---|---|---|
| 100GBASE-CR10 (Direct Attach) | 802.3ba-2010 (CL85) | phase-out | twinaxial, balanced | CXP (SFF-8642), CFP2, CFP4, QSFP+ | CXP, CFP2, CFP4, QSFP+ | 7 m | 20 | N/A | 10 | Data centres (inter-rack); the CXP connector uses the center 10 of its 12 channels |
| 100GBASE-SR10 | 802.3ba-2010 (CL82/86) | phase-out | Fibre, 850 nm | MPO/MTP (MPO-24) | CXP, CFP, CFP2, CFP4, CPAK | OM3: 100 m; OM4: 150 m | 20 | 1 | 10 | |
| 10×10G | proprietary (MSA, Jan 2010) | phase-out | Fibre, 1523/1531/1539/1547/1555/1563/1571/1579/1587/1595 nm | LC | CFP | OSx: 2 km / 10 km / 40 km | 2 | 10 | 10 | WDM; multi-vendor standard[107] |

2nd generation (25GbE-based). Data rate: 100 Gbit/s; line code: 256b/257b × RS-FEC(528,514) × NRZ; line rate: 4 × 25.78125 GBd = 103.125 GBd; full duplex.[104][105][106][108]

| Name | Standard | Status | Media | Connector | Transceiver module | Reach | # Media (⇆) | # Lambdas (→) | # Lanes (→) | Notes |
|---|---|---|---|---|---|---|---|---|---|---|
| 100GBASE-KR4 | 802.3bj-2014 (CL93) | current | Cu backplane | | | 1 m | 8 | N/A | 4 | PCBs; total insertion loss of up to 35 dB at 12.9 GHz |
| 100GBASE-KP4 | 802.3bj-2014 (CL94) | current | Cu backplane | | | 1 m | 8 | N/A | 4 | PCBs; line code: RS-FEC(544,514) × PAM4 with 92/90 framing and 31320/31280 lane identification; line rate: 4 × 13.59375 GBd = 54.375 GBd; total insertion loss of up to 33 dB at 7 GHz |
| 100GBASE-CR4 (Direct Attach) | 802.3bj-2014 (CL92) | current | twinaxial, balanced | QSFP28 (SFF-8665) | CFP2, CFP4 | 5 m | 8 | N/A | 4 | Data centres (inter-rack) |
| 100GBASE-SR4 | 802.3bm-2015 (CL95) | current | Fibre, 850 nm | MPO/MTP (MPO-12) | QSFP28, CFP2, CFP4, CPAK | OM3: 70 m; OM4: 100 m | 8 | 1 | 4 | |
| 100GBASE-SR2-BiDi (bidirectional) | proprietary (non-IEEE) | current | Fibre, 850 nm + 900 nm | LC | QSFP28 | OM3: 70 m; OM4: 100 m; OM5: 150 m | 2 | 2 | 2 | WDM; line rate: 2 × (2 × 26.5625 GBd with PAM4); duplex fiber with both strands used to transmit and receive; its major selling point is the ability to run over existing LC multi-mode fiber (easy 10G/25G-to-100G migration); not to be confused with (and not compatible with) 100GBASE-SR1.2 (below) |
| 100GBASE-SWDM4 | proprietary (MSA, Nov 2017) | current | Fibre, 844–858 / 874–888 / 904–918 / 934–948 nm | LC | QSFP28 | OM3: 75 m; OM4: 100 m; OM5: 150 m | 2 | 4 | 4 | SWDM[109] |
| 100GBASE-LR4 | 802.3ba-2010 (CL88) | current | Fibre, 1295.56 / 1300.05 / 1304.59 / 1309.14 nm | LC | QSFP28, CFP, CFP2, CFP4, CPAK | OSx: 10 km | 2 | 4 | 4 | WDM; line code: 64b/66b × NRZ |
| 100GBASE-ER4 | 802.3ba-2010 (CL88) | current | Fibre, same wavelengths as -LR4 | LC | QSFP28, CFP, CFP2 | OSx: 40 km | 2 | 4 | 4 | WDM; line code: 64b/66b × NRZ |
| 100GBASE-PSM4 | proprietary (MSA, Jan 2014) | current | Fibre, 1310 nm | MPO/MTP (MPO-12) | QSFP28, CFP4 | OSx: 500 m | 8 | 1 | 4 | Data centres; line code: 64b/66b × NRZ or 256b/257b × RS-FEC(528,514) × NRZ; multi-vendor standard[110] |
| 100GBASE-CWDM4 | proprietary (MSA, Mar 2014) | current | Fibre, 1271 / 1291 / 1311 / 1331 nm (±6.5 nm each) | LC | QSFP28, CFP2, CFP4 | OSx: 2 km | 2 | 4 | 4 | Data centres; WDM; multi-vendor standard[111][112] |
| 100GBASE-4WDM-10 | proprietary (MSA, Oct 2018) | current | | | QSFP28, CFP4 | OSx: 10 km | 2 | 4 | 4 | WDM; multi-vendor standard[113] |
| 100GBASE-4WDM-20 | proprietary (MSA, Jul 2017) | current | Fibre, 1295.56 / 1300.05 / 1304.58 / 1309.14 nm (±1.03 nm each) | | | OSx: 20 km | | | | WDM; multi-vendor standard[114] |
| 100GBASE-4WDM-40 | proprietary (non-IEEE) (MSA, Jul 2017) | current | | | | OSx: 40 km | | | | WDM; multi-vendor standard[114] |
| 100GBASE-CLR4 | proprietary (MSA, Apr 2014) | current | Fibre, 1271 / 1291 / 1311 / 1331 nm (±6.5 nm each) | | QSFP28 | OSx: 2 km | 2 | 4 | 4 | Data centres; WDM; line code: 64b/66b × NRZ or 256b/257b × RS-FEC(528,514) × NRZ; interoperable with 100GBASE-CWDM4 when using RS-FEC; multi-vendor standard[111][115] |
| 100GBASE-CWDM4 (OCP) | proprietary (OCP MSA, Mar 2014) | current | Fibre, 1504–1566 nm | LC | QSFP28 | OSx: 2 km | 2 | 4 | 4 | Data centres; WDM; line code: 64b/66b × NRZ or 256b/257b × RS-FEC(528,514) × NRZ; derived from 100GBASE-CWDM4 to allow cheaper transceivers; multi-vendor standard[116] |

3rd generation (50GbE-based). Data rate: 100 Gbit/s; line code: 256b/257b × RS-FEC(544,514) × PAM4; line rate: 2 × 26.5625 GBd × 2 = 106.25 GBd; full duplex.[105][106]

| Name | Standard | Status | Media | Connector | Transceiver module | Reach | # Media (⇆) | # Lambdas (→) | # Lanes (→) | Notes |
|---|---|---|---|---|---|---|---|---|---|---|
| 100GBASE-KR2 | 802.3cd-2018 (CL137) | current | Cu backplane | | | 1 m | 4 | N/A | 2 | PCBs |
| 100GBASE-CR2 | 802.3cd-2018 (CL136) | current | twinaxial, balanced | QSFP28, microQSFP, QSFP-DD, OSFP (SFF-8665) | | 3 m | 4 | N/A | 2 | Data centres (in-rack) |
| 100GBASE-SR2 | 802.3cd-2018 (CL138) | current | Fibre, 850 nm | MPO, 4 fibres | QSFP28 | OM3: 70 m; OM4: 100 m | 4 | 1 | 2 | |
| 100GBASE-SR1.2 (bidirectional) | 802.3bm-2015 | current | Fibre, 850 nm + 900 nm | LC | QSFP28 | OM3: 70 m; OM4: 100 m; OM5: 100 m | 2 | 2 | 2 | WDM; line rate: 2 × (2 × 26.5625 GBd with PAM4);[117] duplex fiber with both strands used to transmit and receive; its major selling point is the ability to run over existing LC multi-mode fiber (easy 10G/25G-to-100G migration); compatible with breakout from 400GBASE-4.2 but not with 100GBASE-SR2-BiDi (above)[118] |

4th generation (100GbE-based). Data rate: 100 Gbit/s; line code: 256b/257b × RS-FEC(544,514) × PAM4; line rate: 1 × 53.125 GBd × 2 = 106.25 GBd; full duplex.

| Name | Standard | Status | Media | Connector | Transceiver module | Reach | # Media (⇆) | # Lambdas (→) | # Lanes (→) | Notes |
|---|---|---|---|---|---|---|---|---|---|---|
| 100GBASE-KR1 | 802.3ck-2022 (CL163) | current | Cu backplane | | | | 2 | N/A | 1 | Total insertion loss ≤ 28 dB at 26.56 GHz |
| 100GBASE-CR1 | 802.3ck-2022 (CL162) | current | twinaxial, balanced | SFP112, SFP-DD112, DSFP, QSFP112, QSFP-DD800, OSFP | | 2 m | 2 | N/A | 1 | |
| 100GBASE-VR1 | 802.3db-2022 (CL167) | current | Fibre, 842–948 nm | LC | QSFP28 | OM3: 30 m; OM4: 50 m | 2 | 1 | 1 | |
| 100GBASE-SR1 | 802.3db-2022 (CL167) | current | Fibre, 844–863 nm | LC | QSFP28 | OM3: 60 m; OM4: 100 m | 2 | 1 | 1 | |
| 100GBASE-DR | 802.3cd-2018 (CL140) | current | Fibre, 1311 nm | LC | QSFP28 | OSx: 500 m | 2 | 1 | 1 | |
| 100GBASE-FR1 | 802.3cu-2021 (CL140) | current | Fibre, 1311 nm | LC | QSFP28 | OSx: 2 km | 2 | 1 | 1 | Multi-vendor standard[119] |
| 100GBASE-LR1 | 802.3cu-2021 (CL140) | current | Fibre, 1311 nm | LC | QSFP28 | OSx: 10 km | 2 | 1 | 1 | Multi-vendor standard[119] |
| 100GBASE-LR1-20 | proprietary (MSA, Nov 2020) | current | Fibre, 1311 nm | LC | QSFP28 | OSx: 20 km | 2 | 1 | 1 | Multi-vendor standard[120] |
| 100GBASE-ER1-30 | proprietary (MSA, Nov 2020) | current | Fibre, 1311 nm | LC | QSFP28 | OSx: 30 km | 2 | 1 | 1 | Multi-vendor standard[120] |
| 100GBASE-ER1-40 | proprietary (MSA, Nov 2020) | current | Fibre, 1311 nm | LC | QSFP28 | OSx: 40 km | 2 | 1 | 1 | Multi-vendor standard[120] |
| 100GBASE-ZR | 802.3ct-2021 (CL153/154) | current | Fibre, 1546.119 nm | LC | CFP | OS2: 80 km+ | 2 | 1 | 1 | Line code: DP-DQPSK × SC-FEC; line rate: 27.9525 GBd; reduced bandwidth and line rate for ultra-long distances[121] |

Coding schemes

10.3125 Gbaud with NRZ ("PAM2") and 64b66b on 10 lanes per direction
One of the earliest codings used, this widens the scheme used in single-lane 10GE and quad-lane 40G to 10 lanes. Due to the low symbol rate, relatively long ranges can be achieved, at the cost of using a lot of cabling.
This also allows breakout to 10×10GE, provided that the hardware supports splitting the port.
25.78125 Gbaud with NRZ ("PAM2") and 64b66b on 4 lanes per direction
A sped-up variant of the above, this directly corresponds to 10GE/40GE signalling at 2.5× speed. The higher symbol rate makes links more susceptible to errors.
If the device and transceiver support dual-speed operation, it is possible to reconfigure a 100G port to downspeed to 40G or 4×10G. There is no autonegotiation protocol for this, so manual configuration is necessary. Similarly, a port can be broken out into 4×25G if the hardware implements it. This is applicable even for CWDM4, if a CWDM demultiplexer and CWDM 25G optics are used appropriately.
25.78125 Gbaud with NRZ ("PAM2") and RS-FEC(528,514) on 4 lanes per direction
To address the higher susceptibility to errors at these symbol rates, an application of Reed–Solomon error correction was defined in IEEE 802.3bj / Clause 91. This replaces the 64b66b encoding with a 256b257b encoding followed by the RS-FEC application, which combines to the exact same overhead as 64b66b. To the optical transceiver or cable, there is no distinction between this and 64b66b; some interface types (e.g. CWDM4) are defined "with or without FEC."
26.5625 Gbaud with PAM4 and RS-FEC(544,514) on 2 lanes per direction
This achieves a further doubling in bandwidth per lane (used to halve the number of lanes) by employing pulse-amplitude modulation with 4 distinct analog levels, making each symbol carry 2 bits. To maintain error margins, the FEC overhead is doubled from 2.7% to 5.8%, which explains the slight rise in symbol rate.
53.125 Gbaud with PAM4 and RS-FEC(544,514) on 1 lane per direction
Further pushing silicon limits, this is a double rate variant of the previous, giving full 100GE operation over 1 medium lane.
30.14475 Gbaud with DP-DQPSK and SD-FEC on 1 lane per direction
Mirroring OTN4 developments, DP-DQPSK (dual polarization differential quadrature phase shift keying) employs polarization to carry one axis of the DP-QPSK constellation. Additionally, new soft decision FEC algorithms take additional information on analog signal levels as input to the error correction procedure.
13.59375 Gbaud with PAM4, KP4 specific coding and RS-FEC(544,514) on 4 lanes per direction
A half-speed variant of 26.5625 Gbaud with RS-FEC, with a 31320/31280 step encoding the lane number into the signal, and further 92/90 framing.
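The arithmetic behind the rates quoted above is compact enough to verify directly. Below is a minimal Python sketch (illustrative names only, not from any standard text) that recomputes the net MAC rate for each scheme; note that 256b/257b combined with RS(528,514) has exactly the same 32/33 efficiency as 64b/66b, while RS(544,514) raises the FEC overhead from about 2.7% to about 5.8%.

```python
from fractions import Fraction as F

# (lanes, symbol rate in GBd, bits per symbol, overall coding efficiency)
schemes = {
    "10 x 10.3125 GBd NRZ, 64b/66b":               (10, F("10.3125"),  1, F(64, 66)),
    "4 x 25.78125 GBd NRZ, 64b/66b":               (4,  F("25.78125"), 1, F(64, 66)),
    "4 x 25.78125 GBd NRZ, 256b/257b+RS(528,514)": (4,  F("25.78125"), 1, F(256, 257) * F(514, 528)),
    "2 x 26.5625 GBd PAM4, 256b/257b+RS(544,514)": (2,  F("26.5625"),  2, F(256, 257) * F(514, 544)),
    "1 x 53.125 GBd PAM4, 256b/257b+RS(544,514)":  (1,  F("53.125"),   2, F(256, 257) * F(514, 544)),
}

for name, (lanes, baud, bits, eff) in schemes.items():
    net = lanes * baud * bits * eff        # net MAC rate in Gbit/s
    print(f"{name}: net {float(net):.1f} Gbit/s")   # 100.0 for every scheme

# FEC overhead relative to the payload:
print(f"RS(528,514): {float(F(528, 514) - 1):.3f}")  # ~0.027
print(f"RS(544,514): {float(F(544, 514) - 1):.3f}")  # ~0.058
```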

40G interface types

Legend for fibre-based PHYs[103]

| Fibre type | Introduced | Performance |
|---|---|---|
| MMF FDDI 62.5/125 µm | 1987 | 160 MHz·km @ 850 nm |
| MMF OM1 62.5/125 µm | 1989 | 200 MHz·km @ 850 nm |
| MMF OM2 50/125 µm | 1998 | 500 MHz·km @ 850 nm |
| MMF OM3 50/125 µm | 2003 | 1500 MHz·km @ 850 nm |
| MMF OM4 50/125 µm | 2008 | 3500 MHz·km @ 850 nm |
| MMF OM5 50/125 µm | 2016 | 3500 MHz·km @ 850 nm + 1850 MHz·km @ 950 nm |
| SMF OS1 9/125 µm | 1998 | 1.0 dB/km @ 1300/1550 nm |
| SMF OS2 9/125 µm | 2000 | 0.4 dB/km @ 1300/1550 nm |

40 Gigabit Ethernet (40 GbE). Data rate: 40 Gbit/s; line code: 64b/66b × NRZ; line rate: 4 × 10.3125 GBd = 41.25 GBd; full duplex.[104][105][122][123]

| Name | Standard | Status | Media | Connector | Transceiver module | Reach | # Media (⇆) | # Lambdas (→) | # Lanes (→) | Notes |
|---|---|---|---|---|---|---|---|---|---|---|
| 40GBASE-KR4 | 802.3ba-2010 (CL82/84) | phase-out | Cu backplane | | | 1 m | 8 | N/A | 4 | PCBs; possible breakout/lane separation to 4× 10G through a splitter cable (QSFP+ to 4× SFP+); involves CL73 for auto-negotiation and CL72 for link training |
| 40GBASE-CR4 (Direct Attach) | 802.3ba-2010 (CL82/85) | phase-out | twinaxial, balanced | QSFP+ (SFF-8635) | QSFP+ | 10 m | 8 | N/A | 4 | Data centres (inter-rack); possible breakout/lane separation to 4× 10G through a splitter cable (QSFP+ to 4× SFP+); involves CL73 for auto-negotiation and CL72 for link training |
| 40GBASE-SR4 | 802.3ba-2010 (CL82/86) | phase-out | Fibre, 850 nm | MPO/MTP (MPO-12) | CFP, QSFP+ | OM3: 100 m; OM4: 150 m | 8 | 1 | 4 | Possible breakout/lane separation to 4× 10G through a splitter cable (MPO/MTP to 4× LC pairs) |
| 40GBASE-eSR4 | proprietary (non-IEEE) | phase-out | | | QSFP+ | OM3: 300 m; OM4: 400 m | | | | Possible breakout/lane separation to 4× 10G through a splitter cable (MPO/MTP to 4× LC pairs) |
| 40GBASE-SR2-BiDi (bidirectional) | proprietary (non-IEEE) | phase-out | Fibre, 850 nm + 900 nm | LC | QSFP+ | OM3: 100 m; OM4: 150 m | 2 | 2 | 2 | WDM; duplex fiber, each strand used to transmit and receive on two wavelengths; its major selling point is the ability to run over existing 10G multi-mode fiber (easy 10G-to-40G migration) |
| 40GBASE-SWDM4 | proprietary (MSA, Nov 2017) | phase-out | Fibre, 844–858 / 874–888 / 904–918 / 934–948 nm | LC | QSFP+ | OM3: 240 m; OM4: 350 m; OM5: 440 m | 2 | 4 | 4 | SWDM[109] |
| 40GBASE-LR4 | 802.3ba-2010 (CL82/87) | phase-out | Fibre, 1271 / 1291 / 1311 / 1331 nm (±6.5 nm each) | LC | CFP, QSFP+ | OSx: 10 km | 2 | 4 | 4 | WDM |
| 40GBASE-ER4 | 802.3bm-2015 (CL82/87) | phase-out | | | QSFP+ | OSx: 40 km | | | | WDM |
| 40GBASE-LX4 / -LM4 | proprietary (non-IEEE) | phase-out | | | QSFP+ | OM3: 140 m; OM4: 160 m; OSx: 10 km | | | | WDM; primarily designed for single mode (-LR4), so this mode of operation is out of specification for some transceivers |
| 40GBASE-PLR4 (parallel -LR4) | proprietary (non-IEEE) | phase-out | Fibre, 1310 nm | MPO/MTP (MPO-12) | QSFP+ | OSx: 10 km | 8 | 1 | 4 | Possible breakout/lane separation to 4× 10G through a splitter cable (MPO/MTP to 4× LC pairs) |
| 40GBASE-FR | 802.3bg-2011 (CL82/89) | phase-out | Fibre, 1550 nm | LC | CFP | OSx: 2 km | 2 | 1 | 1 | Line rate: 41.25 GBd; can receive 1310 nm light besides 1550 nm, allowing interoperation with a longer-reach 1310 nm PHY (TBD); use of 1550 nm implies compatibility with existing test equipment and infrastructure |
Additional note for 40GBASE-CR4/-KR4:

CL73 allows communication between the two PHYs to exchange technical capability pages so that both PHYs settle on a common speed and media type. Completion of CL73 initiates CL72, which allows each of the four lanes' transmitters to adjust pre-emphasis via feedback from the link partner.

40GBASE-T
40GBASE-T is a port type for 4-pair balanced twisted-pair Cat 8 copper cabling up to 30 m, defined in IEEE 802.3bq.[124] The IEEE 802.3bq-2016 standard was approved by the IEEE-SA Standards Board on June 30, 2016.[125] It uses 16-level PAM signaling over four lanes at 3,200 MBd each, scaled up from 10GBASE-T (a rough rate check follows the table below).
Comparison of twisted-pair-based Ethernet physical transport layers (TP-PHYs)[126]

| Name | Standard | Status | Speed (Mbit/s) | Pairs required | Lanes per direction | Bits per hertz | Line code | Symbol rate per lane (MBd) | Bandwidth (MHz) | Max distance (m) | Cable | Cable rating (MHz) | Usage |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 40GBASE-T | 802.3bq-2016 (CL113) | current | 40000 | 4 | 4 | 6.25 | PAM-16, RS-FEC(192,186), LDPC | 3200 | 1600 | 30 | Cat 8 | 2000 | LAN, data centres |
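As a rough capacity check for 40GBASE-T (a sketch built from the figures in the table above; the exact net rate is set by the 802.3bq coding chain with RS-FEC(192,186) and LDPC, which is not modeled here):

```python
import math

lanes = 4                            # one lane per wire pair
baud_mbd = 3200                      # symbol rate per lane (MBd)
bits_per_symbol = math.log2(16)      # PAM-16 -> 4 bits per symbol

raw_gbps = lanes * baud_mbd * bits_per_symbol / 1000   # 51.2 Gbit/s raw
net_gbps = 40.0                                        # net MAC rate
print(f"raw {raw_gbps:.1f} Gbit/s -> net {net_gbps:.1f} Gbit/s "
      f"(aggregate coding efficiency {net_gbps / raw_gbps:.3f})")
# Per-pair spectral efficiency: 40 Gbit/s / (4 pairs x 1600 MHz) = 6.25 bit/Hz,
# matching the "bits per hertz" column above.
```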

Chip-to-chip/chip-to-module interfaces

CAUI-10
CAUI-10 is a 100 Gbit/s 10-lane electrical interface defined in 802.3ba.[1]
CAUI-4
CAUI-4 is a 100 Gbit/s 4-lane electrical interface defined in 802.3bm Annex 83E, with a nominal signaling rate of 25.78125 GBd per lane using NRZ modulation.[3]
100GAUI-4
100GAUI-4 is a 100 Gbit/s 4-lane electrical interface defined in 802.3cd Annex 135D/E, with a nominal signaling rate of 26.5625 GBd per lane using NRZ modulation and RS-FEC(544,514), making it suitable for use with the 100GBASE-CR2, 100GBASE-KR2, 100GBASE-SR2, 100GBASE-DR, 100GBASE-FR1 and 100GBASE-LR1 PHYs.
100GAUI-2
100GAUI-2 is a 100 Gbit/s 2-lane electrical interface defined in 802.3cd Annex 135F/G, with a nominal signaling rate of 26.5625 GBd per lane using PAM4 modulation and RS-FEC(544,514), making it suitable for use with the 100GBASE-CR2, 100GBASE-KR2, 100GBASE-SR2, 100GBASE-DR, 100GBASE-FR1 and 100GBASE-LR1 PHYs.
100GAUI-1
100GAUI-1 is a 100 Gbit/s 1-lane electrical interface defined in 802.3ck Annex 120F/G, with a nominal signaling rate of 53.125 GBd per lane using PAM4 modulation and RS-FEC(544,514), making it suitable for use with the 100GBASE-CR1, 100GBASE-KR1, 100GBASE-SR1, 100GBASE-DR, 100GBASE-FR1 and 100GBASE-LR1 PHYs.
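Each AUI generation trades lane count against per-lane symbol rate and modulation while delivering the same net 100 Gbit/s after coding. A small consistency check (an illustrative sketch, not normative):

```python
# Net rate = lanes x GBd x bits/symbol x coding efficiency; the efficiencies
# are those of 64b/66b and 256b/257b+RS-FEC(544,514) from the coding section.
EFF_66  = 64 / 66
EFF_544 = (256 / 257) * (514 / 544)

auis = {
    "CAUI-10":   (10, 10.3125,  1, EFF_66),
    "CAUI-4":    (4,  25.78125, 1, EFF_66),
    "100GAUI-4": (4,  26.5625,  1, EFF_544),
    "100GAUI-2": (2,  26.5625,  2, EFF_544),
    "100GAUI-1": (1,  53.125,   2, EFF_544),
}

for name, (lanes, baud, bits, eff) in auis.items():
    print(f"{name}: {lanes * baud * bits * eff:.2f} Gbit/s net")  # 100.00 each
```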

Pluggable optics standards

40G Transceiver Form Factors
The QSFP+ form factor is specified for use with 40 Gigabit Ethernet. Copper direct-attach cables (DACs) or optical modules are supported; see Figure 85-20 in the 802.3 spec. QSFP+ modules at 40 Gbit/s can also be used to provide four independent ports of 10 Gigabit Ethernet.[1]
100G Transceiver Form Factors
CFP modules use the 10-lane CAUI-10 electrical interface.
CFP2 modules use the 10-lane CAUI-10 electrical interface or the 4-lane CAUI-4 electrical interface.
CFP4 modules use the 4-lane CAUI-4 electrical interface.[127]
QSFP28 modules use the CAUI-4 electrical interface.
SFP-DD or Small Form-factor Pluggable – Double Density modules use the 100GAUI-2 electrical interface.
Cisco's CPAK optical module uses the four-lane CEI-28G-VSR electrical interface.[128][129]
There are also CXP and HD module standards.[130] CXP modules use the CAUI-10 electrical interface.

Optical connectors


Short reach interfaces use Multiple-Fiber Push-On/Pull-off (MPO) optical connectors.[1]: 86.10.3.3  40GBASE-SR4 and 100GBASE-SR4 use MPO-12, while 100GBASE-SR10 uses MPO-24, with one optical lane per fiber strand.

Long reach interfaces use duplex LC connectors with all optical lanes multiplexed with WDM.

from Grokipedia
100 Gigabit Ethernet (100GbE) is a high-speed wired networking standard that enables the transmission of Ethernet frames at a rate of 100 gigabits per second (Gbps), supporting full-duplex operation for efficient data transfer in demanding environments. Developed by the Institute of Electrical and Electronics Engineers (IEEE), it forms part of the IEEE 802.3 family of standards, specifically introduced through the IEEE Std 802.3ba-2010 amendment, which was approved on June 17, 2010. This standard builds on previous Ethernet generations, such as 10GbE and 40GbE, to address escalating bandwidth needs in data-intensive applications.

The development of 100GbE originated from the IEEE Higher Speed Study Group (HSSG), established in 2006 to explore speeds beyond 10 Gbps, leading to the formation of the IEEE P802.3ba Task Force in January 2008. Key challenges addressed during standardization included maintaining signal integrity at ultra-high speeds, managing power consumption, and ensuring compatibility with existing infrastructure, with a focus on the physical (PHY) layer and media access control (MAC). Subsequent amendments to IEEE 802.3, such as 802.3ck-2022, 802.3db-2022, and 802.3ct-2021, have extended 100GbE capabilities to include electrical interfaces, multimode fiber at 100 Gb/s per wavelength, and dense wavelength-division multiplexing (DWDM) systems supporting distances up to 80 km.

At its core, 100GbE employs 64b/66b encoding across up to 20 physical coding sublayer (PCS) lanes, allowing flexible interface widths of 1, 2, 4, 5, 10, or 20 channels to optimize for various media types. Physical media dependent (PMD) sublayers support diverse transmission media, including:
  • 100GBASE-CR10: Up to 7 meters over copper twinaxial cable for short-reach backplane applications.
  • 100GBASE-SR10: Up to 100 meters on OM3 multimode fiber (MMF) or 150 meters on OM4 MMF for data center interconnects.
  • 100GBASE-LR4: Up to 10 kilometers on single-mode fiber (SMF) using four wavelengths.
  • 100GBASE-ER4: Up to 40 kilometers on SMF for longer-haul links.
Primarily deployed for network aggregation in data centers and service provider environments, 100GbE facilitates high-bandwidth tasks such as video streaming, cloud computing, and telecommunications backhaul. As of 2025, it remains integral to the Ethernet ecosystem—where AI fabrics are driving 70–80% CAGR bandwidth demand toward one billion data-center ports—supporting emerging demands in artificial intelligence (AI), high-performance computing (HPC), 5G infrastructure, and enterprise networks, with ongoing roadmap evolutions paving the way for seamless integration with higher speeds like 400GbE and beyond.

History and Standards Development

Timeline of Development

The development of 100 Gigabit Ethernet (100G Ethernet) began in response to escalating bandwidth demands in data centers and high-performance computing environments during the mid-2000s, where 10 Gigabit Ethernet was becoming insufficient for aggregating traffic from numerous servers and storage systems. In July 2006, the IEEE 802.3 working group established the Higher Speed Study Group (HSSG) to explore speeds beyond 10 Gb/s, identifying the need for 40 Gb/s and 100 Gb/s Ethernet to support exponential growth in data traffic. This initiative was driven by projections of server bandwidth doubling every 18-24 months, fueled by virtualization, cloud computing, and video streaming applications.

By January 2008, the IEEE P802.3ba Task Force was formally created to draft the amendment for 40 Gb/s and 100 Gb/s Ethernet, building on the HSSG's objectives and incorporating input from industry stakeholders on physical layer technologies for both electrical and optical media. The task force focused on defining MAC parameters, physical layer specifications, and management features to ensure compatibility with existing Ethernet infrastructure.

Key milestones in 2009 included early demonstrations of pre-standard 100G Ethernet technologies at the Optical Fiber Communication Conference and National Fiber Optic Engineers Conference (OFC/NFOEC). For instance, Finisar showcased next-generation optical transceivers supporting 100 Gb/s links, highlighting advancements in form factors like QSFP for multimode fiber. Similarly, Anritsu and Sumitomo Electric demonstrated a 100 GbE optical interface compliant with emerging IEEE 802.3ba drafts, validating transmission over short distances.

The IEEE 802.3ba-2010 amendment was ratified on June 17, 2010, officially specifying 40GBASE and 100GBASE variants for LAN, WAN, and metropolitan applications, marking the first IEEE standard to support 100 Gb/s Ethernet operation. This ratification enabled the transition from prototypes to production, with initial commercial products becoming available in 2011 from vendors like Ciena and Juniper Networks, including deployments in Eastern European networks and core routers.

Subsequent amendments refined 100G Ethernet for broader adoption. In 2014, IEEE 802.3bj added specifications for 100 Gb/s operation over electrical backplanes and twinaxial copper cables, including optional Energy Efficient Ethernet to reduce power consumption in high-density systems. IEEE 802.3bm, ratified in 2015, enhanced optical interfaces by defining 100 Gb/s over multimode fiber and 40 Gb/s over single-mode fiber, along with a four-lane CAUI-4 attachment unit interface. Post-2010 evolution included IEEE 802.3bs in 2017, which introduced 200 Gb/s and 400 Gb/s Ethernet as scalable extensions, leveraging 100G building blocks like 25 Gb/s lanes to accelerate 100G deployment in hyperscale data centers and carrier networks. These developments were influenced by ongoing traffic growth, with 100G Ethernet adoption surging by 2017 to meet terabit-scale aggregation needs.

Key IEEE Standards

The IEEE 802.3ba-2010 standard, ratified on June 17, 2010, serves as the foundational amendment to IEEE Std 802.3 for 100 Gigabit Ethernet, defining Physical Layer (PHY) specifications and management parameters to support data rates of 100 Gb/s. It introduces key physical medium dependent (PMD) variants, including 100GBASE-SR10 for short-reach multimode fiber using 10 parallel lanes at 10.3125 Gb/s each over OM3 fiber up to 100 meters or over OM4 fiber up to 150 meters, 100GBASE-LR4 for longer-reach single-mode fiber employing four wavelength-division multiplexed lanes at 25.78125 Gb/s each over 10 km, 100GBASE-ER4 for up to 40 km on single-mode fiber, and 100GBASE-CR10 for short-reach applications over copper twinaxial cable up to 7 meters. The standard specifies a gross bit rate of 103.125 Gb/s to accommodate overhead, while maintaining Ethernet frame formats and Media Access Control (MAC) parameters consistent with prior generations, including support for frame sizes from 64 bytes to the 1518-byte maximum.

Subsequent amendments have extended the 100G Ethernet specifications. The IEEE 802.3bj-2014 amendment adds support for 100GBASE-KR4 backplane Ethernet, utilizing four lanes of non-return-to-zero (NRZ) signaling at 25.78125 Gb/s each with forward error correction (FEC) to achieve reliable operation over electrical backplanes up to 1 meter, and introduces optional Energy Efficient Ethernet (EEE) mechanisms for 100 Gb/s operation over backplanes and twinaxial copper cables. It also defines 100GBASE-KP4 for extended backplane reach using advanced FEC. Similarly, IEEE 802.3cd-2018 expands the portfolio with additional 100 Gb/s PHY specifications for shorter-reach optics, such as 100GBASE-SR2 using two parallel multimode lanes at 53.125 Gb/s each over OM3 fiber up to 70 meters or over OM4 fiber up to 100 meters, alongside MAC parameters for 50 Gb/s and PHYs for 200 Gb/s to enable scalable deployments.

The architectural structure of these standards is outlined in Clauses 80 through 91 of IEEE 802.3, which detail the Physical Coding Sublayer (PCS) in Clause 82 for 64b/66b encoding and lane distribution, the Physical Medium Attachment (PMA) sublayer for serialization and alignment across multiple lanes, and the PMD sublayer for media-specific signaling and interfaces. All 100G Ethernet PHYs operate exclusively in full-duplex mode, eliminating the need for carrier sense multiple access with collision detection (CSMA/CD) used in half-duplex legacy Ethernet. Backward compatibility is ensured through alignment with 10 Gb/s and 40 Gb/s Ethernet MAC layers, allowing 100G interfaces to process frames and control parameters from lower-speed systems without modification, facilitating incremental network upgrades. Performance requirements include a bit error ratio (BER) better than 10⁻¹² post-FEC for reliable data transmission, with power budgets tailored to media types (for instance, approximately 7 dB optical budget for 100GBASE-SR10 multimode links and up to 10.5 dB for 100GBASE-LR4 single-mode) to account for attenuation, dispersion, and connector losses.

The IEEE 802.3ba-2010 standard introduced 40 Gigabit Ethernet specifications as precursors to 100G Ethernet, including 40GBASE-KR4 for backplane applications, 40GBASE-SR4 for short-range multimode fiber, and 40GBASE-LR4 for longer-range single-mode fiber, each utilizing four parallel lanes aggregated at 10 Gb/s per lane to achieve the total 40 Gb/s rate. These configurations leveraged existing 10G Ethernet technology for cost-effective scaling, enabling seamless integration in enterprise and data center environments before the full deployment of 100G variants within the same amendment.

Building on these foundations, the IEEE 802.3bs-2017 amendment extended Ethernet capabilities to 200 Gb/s and 400 Gb/s, introducing variants such as 200GBASE and 400GBASE that incorporate pulse amplitude modulation with four levels (PAM4) to double the signaling efficiency over the prior non-return-to-zero (NRZ) schemes used in 100G Ethernet. This shift to PAM4 allowed for higher per-lane data rates, with 200GBASE typically using four lanes at 53.125 Gb/s each and 400GBASE employing eight such lanes or four at 106.25 Gb/s, facilitating denser interconnects in high-performance computing and cloud infrastructures. Post-2020 developments have further advanced electrical interfaces through IEEE 802.3ck-2022, which specifies 100 Gb/s, 200 Gb/s, and 400 Gb/s operations based on 100 Gb/s per-lane PAM4 signaling for chip-to-module and backplane applications, enhancing signal integrity for active copper cables and direct-attach copper solutions. Looking ahead, IEEE 802.3df, ratified in 2024, specifies 800 Gb/s Ethernet based on 100 Gb/s per-lane signaling, with 200 Gb/s-per-lane rates and 1.6 Tb/s Ethernet being developed in the follow-on IEEE P802.3dj project, driven by demands for terabit-scale fabrics in AI-driven networks.

Interoperability between 40G and 100G Ethernet is achieved through lane aggregation strategies, where 40G's four 10 Gb/s lanes can scale to 100G via either ten 10 Gb/s lanes (as in early 100GBASE-SR10/CR10) or more efficiently to four 25 Gb/s lanes (as in 100GBASE-SR4/CR4), allowing backward compatibility and gradual upgrades without full infrastructure overhauls. The 4x25G approach has become preferred for its reduced lane count and power efficiency compared to 10x10G, supporting breakout configurations like 100G to 4x25G or even 40G to 4x10G in pluggable modules.

Market drivers for transitioning from 100G to 400G and beyond in hyperscale data centers include explosive growth in AI workloads and cloud services, which necessitated a projected 10-fold increase in 400G port shipments from 2023 to 2025 to handle bandwidth demands exceeding 100G capacities. By 2025, hyperscalers like those operated by major cloud providers are prioritizing 400G for its scalability and energy efficiency, reducing operational costs in environments where data throughput has surged due to machine learning training and real-time analytics.

Technical Overview

Physical Layer Specifications

The physical layer (PHY) of 100 Gigabit Ethernet, as defined in IEEE Std 802.3ba-2010, comprises several sublayers that enable high-speed data transmission over various media. The reconciliation sublayer (RS) serves as the interface between the media access control (MAC) sublayer and the physical coding sublayer (PCS), ensuring compatibility with the MAC parameters for 100 Gb/s operation. The PCS employs 64b/66b block encoding to add minimal overhead while providing synchronization, error detection, and lane distribution across multiple parallel paths, distributing the 100 Gb/s data stream into 20 logical lanes before aggregation. The physical medium attachment (PMA) sublayer handles serialization and deserialization, multiplexing the PCS lanes into fewer physical lanes (such as 4 or 10) and performing clock alignment and retiming to interface with the physical medium dependent (PMD) sublayer. The PMD sublayer is media-specific, defining the optical or electrical transceivers tailored to the transmission medium, including modulation and signal conditioning.

The 100G Ethernet PHY operates exclusively in full-duplex mode, eliminating the need for carrier sense multiple access with collision detection (CSMA/CD) used in earlier half-duplex Ethernet variants, which simplifies design and maximizes throughput. Lane configurations vary by PHY type: most implementations, such as 100GBASE-SR4, 100GBASE-LR4, and 100GBASE-KR4, aggregate 100 Gb/s using 4 parallel lanes of 25.78125 Gb/s each (accounting for encoding overhead), while alternatives like 100GBASE-SR10 and 100GBASE-CR10 use 10 parallel lanes of 10.3125 Gb/s for broader compatibility with existing 10G infrastructure.

Transmission reaches depend on the medium and PHY variant; for example, 100GBASE-SR4 supports up to 100 m over OM4 multimode fiber (MMF) using parallel optics at 850 nm, 100GBASE-LR4 achieves 10 km over single-mode fiber (SMF) via wavelength-division multiplexing (WDM) with four 25 Gb/s lanes at approximately 1310 nm, and 100GBASE-KR4 targets backplane applications with effective reaches up to 1 m over electrical traces. Power consumption and port density are optimized for data center environments, with typical QSFP28 pluggable modules consuming up to 3.5 W for short-reach variants like SR4, though full port implementations in switches often range from 15-20 W to account for transceiver, driver, and cooling overhead. Form factors such as QSFP28 enable high-density deployments, supporting up to 36 ports in a single rack unit.

Signal integrity specifications, outlined in IEEE Std 802.3ba-2010 clauses, include insertion loss budgets tailored to media types (for instance, up to 9.5 dB for direct-attach copper cables in 100GBASE-CR4 at the Nyquist frequency) and return loss requirements exceeding 12 dB to minimize reflections and ensure robust signal recovery across lanes. These parameters, combined with forward error correction (FEC) optionally applied at the PCS, maintain bit error rates below 10⁻¹² under specified channel conditions.
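The 20-lane PCS distribution described above can be sketched as a round-robin of 66-bit blocks over logical lanes, which the PMA then groups onto 4 or 10 physical lanes. The snippet below is a conceptual illustration only; alignment markers, deskew, and the actual bit-level multiplexing are omitted:

```python
PCS_LANES = 20

def distribute(blocks):
    """Round-robin 66-bit blocks across the 20 logical PCS lanes."""
    lanes = [[] for _ in range(PCS_LANES)]
    for i, block in enumerate(blocks):
        lanes[i % PCS_LANES].append(block)
    return lanes

def to_physical(lanes, physical_lanes):
    """Group PCS lanes onto physical lanes (20/4 = 5 or 20/10 = 2 per lane)."""
    per_phys = PCS_LANES // physical_lanes
    return [lanes[i * per_phys:(i + 1) * per_phys] for i in range(physical_lanes)]

lanes = distribute([f"blk{i}" for i in range(40)])
assert len(to_physical(lanes, 4)) == 4 and len(to_physical(lanes, 10)) == 10
```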
The Media Access Control (MAC) layer for 100 Gigabit Ethernet, as defined in IEEE Std 802.3ba-2010, retains the core frame format established in prior Ethernet standards, ensuring compatibility and simplicity in higher-speed implementations. Each MAC frame consists of a 7-byte preamble followed by a 1-byte start frame delimiter (SFD), 6-byte destination and source MAC addresses, a 2-byte EtherType or length field, a variable-length payload (46 to 1500 bytes for standard frames), and a 4-byte frame check sequence (FCS) for error detection. This structure operates in full-duplex mode without carrier sense multiple access with collision detection (CSMA/CD), as collision domains are eliminated at these speeds, allowing the MAC to focus solely on frame delineation and addressing at 100 Gbit/s rates.

Flow control mechanisms at the MAC layer prevent buffer overflow and congestion, with support for both standard pause frames and priority-based variants tailored to high-performance environments. IEEE 802.3x pause frames, using MAC control opcodes, allow a receiver to request a temporary halt in transmission by specifying a pause quanta duration, applicable across the full 100 Gbit/s link to manage general congestion. For data center fabrics requiring lossless operation, Priority-based Flow Control (PFC) per IEEE 802.1Qbb enables per-priority pausing, where up to eight virtual lanes (corresponding to 802.1p priorities) can be independently paused without affecting other traffic classes, thus supporting converged storage and compute networks.

Link aggregation enhances bandwidth and redundancy by bundling multiple 100 Gbit/s links into a single logical interface, governed by IEEE 802.1AX (formerly 802.3ad). Up to eight 100G links can form a Link Aggregation Group (LAG), providing up to 800 Gbit/s aggregate capacity with load balancing and fault tolerance via the Link Aggregation Control Protocol (LACP), which dynamically negotiates bundle membership and monitors link status.

Maximum Transmission Unit (MTU) handling remains consistent with lower-speed Ethernet, supporting a standard payload of 1500 bytes while optionally accommodating jumbo frames up to 9000 bytes or more through vendor-specific extensions, which reduce overhead in bulk data transfers common in 100G data centers. In switch implementations, MAC layer processing contributes to low-latency forwarding, with many 100G Ethernet switches achieving sub-microsecond cut-through latencies for minimal frame sizes, enabling real-time applications like high-frequency trading.
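To make the frame layout concrete, here is a minimal Python sketch (illustrative only, not a production implementation) that assembles and verifies a minimum-size frame. The preamble and SFD are omitted because the PHY prepends them, and zlib's CRC-32 uses the same polynomial as the Ethernet FCS, transmitted least-significant byte first:

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble dst(6) + src(6) + EtherType(2) + payload(>=46) + FCS(4)."""
    payload = payload.ljust(46, b"\x00")                   # pad to 46-byte minimum
    header = dst + src + struct.pack("!H", ethertype)      # addresses + EtherType
    fcs = struct.pack("<I", zlib.crc32(header + payload))  # CRC-32 FCS, LSB first
    return header + payload + fcs

def fcs_ok(frame: bytes) -> bool:
    body, fcs = frame[:-4], frame[-4:]
    return struct.pack("<I", zlib.crc32(body)) == fcs

frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x0800, b"hello")
assert len(frame) == 64 and fcs_ok(frame)   # 64 bytes: the Ethernet minimum
```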

Coding and Error Correction Schemes

In 100 Gigabit Ethernet, the Physical Coding Sublayer (PCS) employs 64b/66b block coding to prepare data for transmission, appending a 2-bit synchronization header to each 64-bit data block. This scheme ensures sufficient transitions for clock recovery, maintains DC balance, and limits the maximum run length to prevent baseline wander, with a coding overhead of approximately 3.125%. In the multi-lane PCS used by 100GBASE-R PHYs, data is encoded and scrambled to randomize the bit stream and enhance error detection before being distributed across the parallel lanes.

Modulation schemes in 100G Ethernet are selected based on reach and channel characteristics, with non-return-to-zero (NRZ) signaling used for shorter-distance applications such as multimode and single-mode optical interfaces. In NRZ, binary signaling at 25.78125 GBd per lane transmits 25 Gb/s of data (after 64b/66b overhead) using two amplitude levels, providing robust performance over reaches up to 100 meters in multimode fiber or 10 kilometers in single-mode fiber. For backplane applications with higher insertion loss, such as 100GBASE-KP4, pulse amplitude modulation with four levels (PAM4) is adopted, encoding two bits per symbol at 13.59375 GBd to carry 27.1875 Gb/s per lane across four lanes, doubling spectral efficiency compared to NRZ at the cost of reduced signal-to-noise margin. PAM4's multilevel nature requires precise equalization to mitigate intersymbol interference, and its symbol error rate (SER) is modeled using Gaussian noise assumptions on eye heights: for equally spaced levels under Gray coding, the probability of symbol error is approximately P_s ≈ (3/2)·Q(d_min/(2σ)), where d_min is the minimum distance between levels and σ is the noise standard deviation; Gray coding ensures that most symbol errors affect only one bit.

Forward error correction (FEC) enhances reliability in 100G Ethernet by correcting errors introduced by channel impairments, with Reed-Solomon codes specified in IEEE 802.3bj. For optical PHYs like 100GBASE-SR4 and 100GBASE-LR4, the RS(528,514) code operates on 528-symbol codewords with 514 information symbols over GF(2^10), capable of correcting up to 7 symbol errors to achieve a post-FEC bit error rate (BER) of 10^-13 or better from a pre-FEC BER of approximately 2.4×10^-4. The coding rate is 514/528 ≈ 0.973, adding about 2.7% overhead to the line rate. For backplane PHYs like 100GBASE-KP4, the stronger RS(544,514) code (also known as KP4 FEC) is used, correcting up to 15 symbol errors per 544-symbol codeword to support PAM4 signaling over channels with up to 33 dB loss at 7 GHz.

These coding and FEC schemes enable higher data density and extended reaches but introduce trade-offs in implementation. The 64b/66b and RS-FEC overheads increase the effective signaling rate, contributing to higher power dissipation (typically 200-300 mW per port for FEC processing in 28 nm CMOS) and added latency from encoding, decoding, and interleaving, bounded at around 250 ns to meet application timing requirements. PAM4 further amplifies power needs due to enhanced equalization and analog-to-digital conversion demands, though it reduces lane count for the same aggregate bandwidth compared to NRZ.
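
The Reed-Solomon parameters and the PAM4 error model above can be reproduced with a few lines of arithmetic. In the sketch below, the RS code parameters come from IEEE 802.3bj, while the symbol-error estimate uses the Gaussian Q-function approximation with an assumed (arbitrary) eye opening and noise level:

    # Illustrative calculations for the coding schemes discussed above.
    from math import erfc, sqrt

    def q_func(x: float) -> float:
        """Gaussian tail probability Q(x)."""
        return 0.5 * erfc(x / sqrt(2.0))

    def rs_summary(n: int, k: int, name: str) -> None:
        t = (n - k) // 2                     # correctable symbol errors
        rate = k / n
        print(f"{name}: t = {t} symbols, rate = {rate:.4f}, "
              f"overhead = {(n / k - 1) * 100:.2f}%")

    rs_summary(528, 514, "RS(528,514) KR4 FEC")   # t = 7,  ~2.72% overhead
    rs_summary(544, 514, "RS(544,514) KP4 FEC")   # t = 15, ~5.84% overhead

    # PAM4 symbol error rate for equally spaced levels under Gray coding:
    #   P_s ~= (3/2) * Q(d_min / (2 * sigma))
    d_min, sigma = 1.0, 0.12                 # assumed eye opening and noise
    p_s = 1.5 * q_func(d_min / (2 * sigma))
    print(f"PAM4 SER ~= {p_s:.3e} at d_min/sigma = {d_min / sigma:.1f}")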

Interface Types and Technologies

Electrical Interfaces (Backplane and Chip-to-Chip)

The electrical interfaces for 100 Gigabit Ethernet enable high-speed data transmission over copper backplanes and short intra-system connections, addressing the challenges of signal integrity in dense networking environments. The 100GBASE-KR4 Physical Layer (PHY) specification, defined in IEEE Std 802.3bj-2014, supports operation over electrical backplanes using four parallel lanes, each at a signaling rate of 25.78125 GBd ±100 ppm, for an aggregate data rate of 100 Gb/s. This interface is designed for reaches up to 1 meter, accommodating typical backplane traces within chassis or midplanes. To mitigate inter-symbol interference and attenuation in these channels, 100GBASE-KR4 mandates adaptive equalization, including continuous-time linear equalization (CTLE) and decision feedback equalization (DFE) at the receiver, complemented by transmitter equalization with pre-emphasis (pre-cursor tap) and post-emphasis (post-cursor tap) via a 3-tap finite impulse response filter. The standard specifies a maximum channel insertion loss of 35 dB at the Nyquist frequency of 12.89 GHz to ensure reliable performance.

More recent amendments, such as IEEE 802.3ck-2022, introduce single-lane electrical interfaces operating at 100 Gb/s per lane using PAM4 modulation (100GBASE-KR1), supporting backplane reaches up to 1 m and chip-to-chip distances up to 30 cm with reduced lane count and power consumption compared to multi-lane NRZ variants. These interfaces use 112 Gbps-class PAM4 SerDes with RS(544,514) FEC for enhanced performance in dense systems.

For chip-to-chip and chip-to-module connections, the 100 Gigabit Attachment Unit Interface (CAUI-4), specified in IEEE Std 802.3bm-2015 Annex 83D, provides a low-loss electrical interface using four lanes at 25.78125 Gbps each, suitable for distances up to approximately 25 cm of PCB trace with one connector. An alternative, CAUI-10, employs ten lanes at 10.3125 Gbps for ASIC-to-PHY links, offering flexibility in implementation while maintaining the 100 Gbps aggregate rate. Both CAUI variants support equalization techniques similar to those of 100GBASE-KR4, with compliance testing focused on lower insertion loss budgets (e.g., 15-20 dB at Nyquist) to prioritize power and simplicity in intra-board applications.

Implementations of these interfaces in 28 nm CMOS processes emphasize power efficiency, with serializer/deserializer (SerDes) units achieving energy consumption below 10 pJ/bit. For instance, a multistandard 28 Gb/s transceiver compliant with 100GBASE-KR4 demonstrates 3.2 pJ/bit efficiency at the receiver while supporting backplane channels. These electrical interfaces find primary application in server-to-switch interconnects within network chassis and modular systems, where copper paths suffice for short distances but are not viable for long-haul transmission due to excessive attenuation. Reliability is further enhanced by integrating Reed-Solomon forward error correction, as outlined in the coding specifications above.
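
As a quick check on the figures above, the Nyquist frequency of an NRZ lane is half its symbol rate, which is where the insertion loss budget is specified. The sketch below compares some invented channel losses against the KR4 budget; only the 25.78125 GBd rate and the 35 dB limit are taken from the text.

    # Illustrative budget check for 100GBASE-KR4-style channels. The
    # channel loss values are made-up examples, not measured data.
    def nyquist_ghz(baud_gbd: float) -> float:
        """Nyquist frequency is half the symbol rate."""
        return baud_gbd / 2.0

    KR4_BAUD_GBD = 25.78125          # NRZ symbol rate per lane
    KR4_LOSS_BUDGET_DB = 35.0        # max insertion loss at Nyquist (802.3bj)

    fn = nyquist_ghz(KR4_BAUD_GBD)   # 12.890625 GHz
    print(f"KR4 Nyquist frequency: {fn:.3f} GHz")

    # Hypothetical backplane channels: (name, insertion loss at Nyquist)
    for name, loss_db in [("short midplane", 18.2),
                          ("long backplane", 33.9),
                          ("worst-case trace", 36.5)]:
        verdict = "within budget" if loss_db <= KR4_LOSS_BUDGET_DB else "exceeds budget"
        print(f"{name}: {loss_db:.1f} dB -> {verdict}")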

Optical Interfaces (Multimode and Single-Mode Fiber)

Optical interfaces for 100 Gigabit Ethernet enable high-speed data transmission over fiber optic cables, utilizing either multimode fiber (MMF) for shorter distances or single-mode fiber (SMF) for longer reaches. These interfaces are defined in IEEE 802.3 standards, which specify physical layer parameters including wavelength, modulation, and reach to ensure interoperability. Multimode variants leverage parallel optics at shorter wavelengths, while single-mode options employ wavelength-division multiplexing (WDM) or, in later standards, single-wavelength PAM4 signaling for extended distances. Forward error correction (FEC) is typically integrated to enhance link reliability across both fiber types.

Multimode fiber interfaces, such as 100GBASE-SR4 (introduced in the IEEE 802.3bm-2015 amendment), support transmission over OM4 MMF up to 100 meters using parallel optics with four lanes of 25 Gbps each, employing vertical-cavity surface-emitting lasers (VCSELs) at 850 nm and MPO connectors for eight fibers (four transmit and four receive). This configuration minimizes modal dispersion, the primary limitation in MMF, where multiple light paths propagate at different speeds, spreading the signal pulse and restricting reach. The earlier 100GBASE-SR10 variant, defined in IEEE 802.3ba-2010, uses ten lanes of 10 Gbps over MMF for similar short-reach applications in high-performance computing environments. Transmitter power specifications for 100GBASE-SR4 range from -6.3 dBm to 2.3 dBm per lane, providing an illustrative link budget sufficient for the specified distances while accounting for insertion loss and penalties.

Single-mode fiber interfaces address longer distances by mitigating chromatic dispersion, caused by varying light wavelengths traveling at different speeds in SMF, which broadens pulses over extended links. The 100GBASE-LR4 standard achieves 10 km reach using LAN-WDM, with a multiplexer combining four 25 Gbps lanes from electro-absorption modulated lasers (EMLs) at wavelengths in the 1295-1310 nm range. For even greater distances, 100GBASE-ER4 extends to 40 km, employing the same closely spaced LAN-WDM grid with cooled EMLs to maintain signal integrity against dispersion and attenuation. These SMF interfaces typically use duplex LC connectors, carrying each direction of the link on one fiber of the pair.

Subsequent standards introduced single-lane PAM4 optical interfaces for simplified deployments. 100GBASE-DR (IEEE 802.3cd-2018) supports 500 m over SMF using a single 1310 nm wavelength with LC connectors and RS-FEC. Similarly, 100GBASE-FR extends to 2 km and 100GBASE-LR (IEEE 802.3cu-2021) to 10 km, all employing 100 Gb/s PAM4 modulation per wavelength for higher spectral efficiency.

A notable complement to the IEEE standards is the 100G CWDM4 Multi-Source Agreement (MSA), which standardizes 2 km reach over SMF using CWDM at wavelengths around 1310 nm with four 25 Gbps lanes and RS(528,514) FEC, gaining widespread adoption in data centers by 2020 for its cost-effective interoperability.
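
The reach figures above ultimately reduce to power budgets: minimum transmitter launch power minus receiver sensitivity must cover fiber attenuation, connector losses, and path penalties. The sketch below works through one such budget; every numeric value in it is an assumption chosen for illustration, not a figure from any standard.

    # Sketch of an optical link-budget check in the style of the PMDs above.
    def link_margin_db(tx_min_dbm: float, rx_sens_dbm: float,
                       fiber_km: float, atten_db_per_km: float,
                       connector_loss_db: float, penalties_db: float) -> float:
        """Remaining margin after fiber, connector, and path penalties."""
        budget = tx_min_dbm - rx_sens_dbm
        losses = fiber_km * atten_db_per_km + connector_loss_db + penalties_db
        return budget - losses

    # Hypothetical LR4-style lane: 10 km of SMF at 0.4 dB/km near 1310 nm,
    # two connector pairs at 0.5 dB each, 1.5 dB of dispersion penalties.
    margin = link_margin_db(tx_min_dbm=-4.3, rx_sens_dbm=-11.5,
                            fiber_km=10.0, atten_db_per_km=0.4,
                            connector_loss_db=1.0, penalties_db=1.5)
    print(f"Link margin: {margin:.1f} dB")   # negative would mean the link fails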

Pluggable Modules and Optical Connectors

Pluggable modules for 100 Gigabit Ethernet are designed as hot-swappable transceivers that enable flexible, high-density connectivity in network equipment such as switches and routers. These modules adhere to Multi-Source Agreement (MSA) specifications to ensure interoperability across vendors. The primary form factor for 100G applications is QSFP28, which measures approximately 8.5 mm in height, 18.4 mm in width, and 72.4 mm in depth, allowing for compact integration into faceplates while supporting four 25 Gbps lanes for aggregate 100 Gbps throughput. QSFP28 modules are hot-pluggable, facilitating maintenance without system downtime, and typically consume 3.5 to 4.5 W of power.

The evolution of pluggable form factors for 100G Ethernet began with the CFP (C Form-factor Pluggable) module in 2011, which was developed by the CFP MSA to support initial 100G deployments but was notably bulky, with dimensions around 82 mm x 13.6 mm x 144.8 mm, limiting port density due to its larger footprint and power requirements of up to 24 W. By 2013, the QSFP28 form factor, defined under the SFF-8665 specification, emerged as a more compact alternative, reducing size and power while maintaining compatibility with existing QSFP cages, which accelerated adoption in data centers. For applications requiring higher thermal dissipation, the OSFP (Octal Small Form-factor Pluggable) form factor was introduced later, supporting up to 15 W of power consumption to handle more demanding cooling needs in dense environments.

Optical standards for 100G pluggable modules are governed by MSAs to standardize interfaces over fiber. The CWDM4 MSA, established in 2014, defines a four-lane coarse wavelength division multiplexing (CWDM) interface operating at center wavelengths of 1271 nm, 1291 nm, 1311 nm, and 1331 nm, enabling 100 Gbps transmission over duplex single-mode fiber for reaches up to 2 km. Similarly, the PSM4 MSA, also from 2014, specifies a parallel single-mode four-lane interface using a 1310 nm wavelength across eight single-mode fibers (four transmit, four receive), supporting reaches up to 500 m and leveraging existing parallel fiber infrastructure for cost-effective upgrades. For multimode applications, these modules commonly use MPO-12 or MTP connectors to interface with parallel multimode fiber ribbons, aligning with short-reach specifications like 100GBASE-SR4 for distances up to 100 m over OM4 fiber.

For short-reach copper connections, 100G direct-attach cables (DACs) commonly use the QSFP28 form factor, while the QSFP-DD (Quad Small Form-factor Pluggable Double Density) form factor doubles the lane density to eight lanes and remains backward compatible with QSFP28: existing modules and cables plug directly into QSFP-DD ports. QSFP-DD supports hybrid electrical-optical configurations, enabling direct-attach copper cables or backplane traces for intra-shelf interconnects with low latency and power efficiency, often consuming under 12 W for 100G-class variants.

MPO connectors, standardized under IEC 61754-7, are integral to these pluggable modules for multimode and parallel single-mode interfaces, featuring a push-pull design with a 12-fiber ferrule for high-density terminations. These connectors achieve low insertion loss of less than 0.5 dB to minimize signal degradation in 100G links, with specifications ensuring alignment tolerances under 0.5 μm for reliable performance. Proper maintenance, including cleaning protocols to remove contaminants and polarity management (e.g., Type A, B, or C configurations) to ensure correct fiber mapping, is essential for sustaining link integrity in deployments.
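
The Type A, B, and C polarity configurations mentioned above determine how fiber positions map from one end of an MPO trunk to the other; an incorrect choice swaps transmit and receive lanes. A small sketch of the three standard mappings for a 12-fiber ribbon, following the TIA-568 Method A/B/C conventions (the function itself is hypothetical):

    # Far-end fiber position for each near-end position on an MPO trunk.
    def mpo_map(polarity: str, position: int, fibers: int = 12) -> int:
        """Map a 1-based fiber position through an A/B/C polarity trunk."""
        if polarity == "A":              # straight-through: 1->1 ... 12->12
            return position
        if polarity == "B":              # reversed: 1->12, 2->11, ...
            return fibers + 1 - position
        if polarity == "C":              # pair-flipped: 1->2, 2->1, 3->4, ...
            return position + 1 if position % 2 else position - 1
        raise ValueError("polarity must be 'A', 'B', or 'C'")

    for p in "ABC":
        print(f"Type {p}:", [mpo_map(p, i) for i in range(1, 13)])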
As of 2025, trends in pluggable modules for 100G Ethernet are shifting toward 400G compatibility, with form factors like QSFP-DD and OSFP enabling upgrades through higher lane speeds (e.g., 50 Gbps per lane) and enhanced thermal designs that support Ethernet roadmap evolutions beyond 100G. The LPO MSA has released specifications for linear pluggable optics supporting 100 Gb/s per lane without a DSP, targeting short-reach applications with reduced power (under 5 W) and latency while retaining the QSFP-DD and OSFP form factors.

Implementation and Compatibility

Hardware Components and Early Products

The development of 100 Gigabit Ethernet (100GbE) hardware relied on key components such as application-specific integrated circuits (ASICs) and serializer/deserializer (SerDes) technologies to achieve high-speed data transmission. In November 2010, Broadcom introduced the BCM88600 series, the industry's first Ethernet switching silicon capable of supporting scalable systems up to 100 terabits per second (Tbps), enabling early 100G port configurations in switches with integrated Layer 2-4 processing. Complementing these ASICs, SerDes components were critical for electrical signaling; Avago Technologies (now part of Broadcom) demonstrated the first 28 nm 25 Gbps SerDes in 2012, compliant with the Common Electrical Interface (CEI) standard, which facilitated four-lane 100G operation over copper backplanes and chip-to-chip links. Network interface cards (NICs) also emerged early, with Mellanox offering initial 100G Ethernet adapters based on ConnectX technology in 2014, supporting QSFP28 interfaces for server connectivity in high-performance computing environments.

Early commercial products focused on modular switches and routers to integrate 100G interfaces. Cisco announced the ASR 9000 series expansion in June 2011, including the ASR 9922 router with high-density 100G ports, providing up to 96 Tbps system capacity for carrier-grade aggregation and edge applications. In February 2012, Cisco extended 100G support to the Nexus 7000 series with new line cards, such as the 2-port 100G module, enabling data center fabrics with up to 32x100G ports in a chassis. For backplane implementations, the 100GBASE-KR4 electrical interface, defined in IEEE 802.3bj and supporting four 25 Gbps lanes over 1 meter of copper, was first realized in products like Inphi's 100G CMOS PHY SerDes gearbox announced in September 2012, targeting backplane and chip-to-chip connectivity in switches.

Optical components were equally pivotal for early adoption. Finisar demonstrated the first CFP pluggable module for 100GBASE-LR4 at the Optical Fiber Communication Conference (OFC) in March 2011, supporting 10 km reaches over single-mode fiber using distributed feedback lasers across four wavelengths. Switches like the Mellanox Spectrum SN2100, announced in June 2015 but building on earlier prototypes, featured 16x100G QSFP28 ports with 3.2 Tbps throughput, serving as a compact top-of-rack solution.

Key milestones included Broadcom's 2010 prototype of a 100G switch silicon, marking the transition from 10G to multi-lane 100G architectures. By 2015, port costs had dropped significantly, from over $20,000 per 100G port in early modules (as of 2011) to around $2,500, driven by CMOS integration and volume production, making 100GbE viable for broader data center use. Following these early developments, 100GbE hardware evolved further; as of 2025, costs had decreased to under $500 per port, with modern ASICs like Broadcom's Tomahawk 5 supporting high-density 100G configurations alongside 400G transitions in data centers and AI infrastructure.

Compatibility with Lower-Speed Ethernet

100 Gigabit Ethernet maintains backward compatibility with lower-speed Ethernet standards at the MAC layer, allowing seamless integration of 1G, 10G, and 40G networks through standardized frame formats and rate adaptation mechanisms. The Ethernet MAC frame structure, defined in IEEE 802.3, remains consistent across speeds from 1G to 100G, enabling 100G ports to process and forward frames from slower interfaces without modification. Rate adaptation occurs at the physical coding sublayer (PCS), where 64b/66b encoding, identical to that used in 10G and 40G Ethernet, facilitates the distribution of data across multiple lanes while preserving frame integrity.

Physical interoperability is achieved through breakout cables, such as QSFP28 to 4x SFP28 configurations, which split a single 100G port into four 25G lanes for connectivity to lower-speed devices. These cables comply with Multi-Source Agreements (MSAs) such as SFF-8665 for QSFP28 and SFF-8402 for SFP28, and are available as direct-attach copper (DAC) or active optical cable (AOC) variants for short-reach applications in data centers. This approach allows a 100G QSFP28 port to interface directly with four 25G SFP28 ports, bridging the speed gap without requiring additional transceivers.

In backplane and chip-to-chip environments, the multi-lane PCS defined in IEEE 802.3 Clause 82 distributes encoded data in a round-robin fashion across PCS lanes (also called virtual lanes), which the PMA then maps onto the physical lanes of PHYs such as 100GBASE-KR4. Virtual lanes decouple the logical data flow from physical lane assignments, allowing 100G Ethernet to coexist with 10G or 40G links in the same fabric while maintaining low latency and error correction via forward error correction (FEC). This mechanism supports interoperability in aggregated systems, such as switches combining multiple speed tiers.

Integrating 100G with lower speeds introduces challenges, including oversubscription ratios that can reach 3:1 in data centers, where aggregate downstream traffic exceeds upstream capacity, potentially leading to congestion during peak loads. Auto-negotiation, as specified in IEEE 802.3 Clause 73 for backplane Ethernet, addresses these by negotiating parameters such as speed and FEC capability across mixed environments, though implementation requires careful configuration to avoid link failures.

The IEEE 802.3by standard for 25 Gigabit Ethernet serves as a critical bridge between 10G and 100G, providing single-lane 25 Gb/s operation over twinaxial copper, backplanes, and multimode fiber to enable incremental upgrades. Because its PCS is derived from a single lane of the 100GBASE-R architecture, 802.3by facilitates hybrid deployments, such as breaking out 100G ports to 4x25G for cost-effective scaling in enterprise and data center networks.
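
A toy model can make the Clause 82 round-robin distribution concrete. The sketch below deals dummy 66-bit blocks across 20 PCS lanes and then groups them onto 4 or 10 physical lanes; the function names are invented, and a compliant PCS would additionally scramble the stream and insert alignment markers, while the PMA bit-multiplexes rather than moving whole blocks.

    # Toy model of 100GBASE-R block distribution (not a compliant PCS).
    PCS_LANES = 20

    def distribute_blocks(blocks: list, pcs_lanes: int = PCS_LANES) -> list:
        """Deal 66-bit blocks round-robin onto the PCS (virtual) lanes."""
        lanes = [[] for _ in range(pcs_lanes)]
        for i, block in enumerate(blocks):
            lanes[i % pcs_lanes].append(block)
        return lanes

    def mux_to_physical(lanes: list, phys_lanes: int) -> list:
        """Group PCS lanes onto physical lanes (20 -> 4 means 5 each)."""
        per_phys = len(lanes) // phys_lanes
        return [lanes[i * per_phys:(i + 1) * per_phys]
                for i in range(phys_lanes)]

    blocks = [f"blk{i}" for i in range(40)]      # dummy 66-bit blocks
    pcs = distribute_blocks(blocks)
    for n in (4, 10):                            # KR4/SR4-style vs. SR10-style
        groups = mux_to_physical(pcs, n)
        print(f"{n} physical lanes carry {len(groups[0])} PCS lanes each")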

Testing and Measurement Challenges

Testing 100 Gigabit Ethernet systems presents significant challenges due to the high data rates, parallel lane architectures, and stringent signal integrity requirements, necessitating specialized methods to validate performance across the physical and protocol layers. At the physical layer, bit error rate (BER) testing is essential to ensure reliable transmission, typically employing pseudorandom binary sequence (PRBS) patterns such as PRBS31 to simulate real-world traffic and detect errors without assuming perfect internal component functionality. The pattern is checked at the 5.15625 Gb/s physical coding sublayer (PCS) lane level rather than at the higher-rate CAUI lanes, allowing comprehensive module evaluation with per-lane error counters to identify issues such as gearbox splitting in 100G implementations. Oscilloscopes are commonly used to capture eye diagrams for 25 Gbps lanes, assessing signal quality through metrics like eye width and height to confirm compliance with BER targets, often below 10^-12 before forward error correction (FEC).

Key tools for these validations include bit error rate testers (BERTs), which generate and analyze stressed signals to measure BER under various conditions, such as frequency offsets or loopbacks, supporting rates up to 100 Gbps and patterns from PRBS9 to PRBS31. For optical fiber links, optical time-domain reflectometers (OTDRs) characterize impairments such as chromatic dispersion (CD) and polarization mode dispersion (PMD), critical for multimode and single-mode deployments where tight insertion loss tolerances (e.g., ≤1.0 dB over 150 m OM4 fiber) heighten failure risks. Protocol-level testing relies on analyzers from vendors like Keysight (formerly Ixia), which emulate traffic flows and verify Layer 2/3 compliance, including skew monitoring across lanes to detect differential delays that could degrade performance.

Major hurdles arise from high-speed signal degradation, including total jitter (TJ) requirements often below 0.4 unit intervals (UI) at a BER of 10^-12 to maintain timing margins in 25 Gbps lanes, where even minor deterministic or random jitter components can cause eye closure. Backplane implementations face additional crosstalk challenges, with bounded uncorrelated jitter (BUJ) from N-1 aggressors needing evaluation using patterns like PRBS31 to simulate multi-lane interference without exceeding channel operating margin (COM) limits. FEC stress testing further complicates validation, requiring dynamic error injection to assess correction capabilities under non-linear effects or noise, ensuring post-FEC BER remains below 10^-15 despite pre-FEC error rates up to 10^-4.

Link training for 100GBASE-KR4 backplane Ethernet is specified in IEEE 802.3 Clause 93, while physical layer compliance parameters, including jitter budgets, eye diagram masks, and BER testing protocols, are outlined in Clause 82 (the 100GBASE-R PCS), media-specific PMD clauses such as Clause 95 (100GBASE-SR4, added by 802.3bm), and related clauses of IEEE 802.3bj. For electrical interfaces, the Optical Internetworking Forum's Common Electrical I/O (OIF-CEI) agreements, such as CEI-28G, define jitter tolerances and signaling for chip-to-module links supporting 100 Gbps aggregates via 4x25 Gbps lanes, enabling low-power, clockless designs.

Since 2020, automation has advanced testing for 100G and higher-speed Ethernet through machine learning (ML)-based anomaly detection, integrating AI into test equipment to identify subtle impairments such as unexpected jitter spikes or FEC failures in real time, reducing manual intervention in complex data center validations.
These ML models, often using unsupervised techniques on traffic metrics, enable predictive maintenance and scalable protocol analysis, addressing the growing complexity of AI-driven hyperscale networks. In 2025, Anritsu introduced testing support for 100ZR coherent optical transceivers, enabling full performance evaluation for data center interconnect (DCI) and access networks, with interoperability verified at ECOC 2025.
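
To make the PRBS-based BER measurements described above concrete, here is a minimal software model of a PRBS31 generator (polynomial x^31 + x^28 + 1) and an error counter. Real BERTs implement this in hardware at line rate; the stream length, injected error position, and function names below are arbitrary illustrations.

    # Minimal PRBS31 generator and bit-error counter (illustrative only).
    from itertools import islice

    def prbs31(seed: int = 0x7FFFFFFF):
        """Yield PRBS31 bits from a 31-bit LFSR (seed must be nonzero)."""
        state = seed & 0x7FFFFFFF
        while True:
            new_bit = ((state >> 30) ^ (state >> 27)) & 1   # taps 31 and 28
            state = ((state << 1) | new_bit) & 0x7FFFFFFF
            yield new_bit

    def count_bit_errors(tx_bits, rx_bits) -> int:
        """Compare transmitted and received bit streams."""
        return sum(a != b for a, b in zip(tx_bits, rx_bits))

    tx = list(islice(prbs31(), 1_000_000))
    rx = tx.copy()
    rx[1234] ^= 1                        # inject a single bit error
    ber = count_bit_errors(tx, rx) / len(tx)
    print(f"Measured BER: {ber:.1e}")    # 1.0e-06 for this toy run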

Commercial Deployments and Vendors

Initial Trials and Deployments

The initial trials of 100 Gigabit Ethernet began in 2011, with notable interoperability testing conducted in Japan by NTT Communications Corporation (NTT Com) alongside Internet Initiative Japan Inc. (IIJ), Internet Multifeed Co. (MF), and vendors including Brocade, Cisco, Juniper, and Toyo Corporation. This joint test at an Internet Exchange Point in Japan verified the compatibility of 100G Ethernet interfaces across routers, switches, and packet generators from multiple manufacturers, demonstrating readiness for deployment in exchange environments.

By 2012, the U.S. Department of Energy's Energy Sciences Network (ESnet) advanced 100G deployments to support scientific research grids, rolling out the world's fastest science network at 100 gigabits per second, ten times faster than its predecessor, across its nationwide backbone. This upgrade, part of the ESnet5 initiative, connected DOE laboratories and enabled high-speed data transfers for projects in high-energy physics and other fields, with initial 100G testbeds activated earlier that year for protocol research. In 2013, ESnet extended 100G capabilities internationally through the Advanced North Atlantic 100G Pilot (ANA-100G) project, establishing the first intercontinental 100 Gbps link for research and education in collaboration with European networks. Concurrently in Europe, the GÉANT backbone network underwent a major upgrade in 2013, achieving 100 Gbps speeds across its core infrastructure to handle growing data demands from scientific collaborations. This transition, completed in phases by mid-2013, increased total capacity to 2 terabits per second and supported disciplines like high-energy physics by providing low-latency, high-bandwidth connectivity across 24 points of presence.

Early deployments faced significant challenges, including high initial costs for equipment and infrastructure (estimated in the tens of thousands of dollars per kilometer for long-haul links) and constraints from limited fiber availability in certain regions, necessitating careful optimization of existing dark fiber assets. These hurdles were gradually addressed through scalable packet-optical technologies that improved cost per bit per kilometer. A key milestone came in 2013 when Ciena enabled the first wide-area network (WAN) deployments of 100G using dense wavelength-division multiplexing (DWDM), as seen in CenturyLink's global network upgrade and RASCOM's trans-Russia link to Western Europe, which boosted capacity and enabled service offerings like 100GE wavelengths.

In high-performance computing (HPC), 100G adoption progressed by 2014, with Oak Ridge National Laboratory (ORNL) integrating 100G connectivity via ESnet to its leadership computing facility, facilitating production data transfers for simulations on systems like Titan and supporting the shift to exascale-era requirements. This integration resolved perimeter hardware limitations and enabled sustained high-throughput links to the 100G router backbone. By 2015, Asia-Pacific regions demonstrated strong early uptake, driven by deployments in carrier and research networks, contributing significantly to global 100G port growth amid concentrated market leadership by key vendors.

Major Vendor Contributions

Cisco introduced significant advancements in 100 Gigabit Ethernet through its Nexus 9000 series switches, launching models with 100G support in 2016 that integrated with the Application Centric Infrastructure (ACI) fabric for software-defined networking (SDN). These switches featured VXLAN encapsulation to enable scalable overlay networks, allowing seamless integration of 100G ports in data center environments while maintaining compatibility with ACI policies for automated provisioning and orchestration.

Arista Networks contributed to low-latency 100G Ethernet deployments starting with its 7050X series, initially released in 2012 and later enhanced with 100G capabilities in models like the 7050X3, which achieved cut-through forwarding latencies as low as 800 nanoseconds. Running the Extensible Operating System (EOS), these switches were optimized for high-frequency trading applications, providing wire-speed Layer 2/3/4 performance and dynamic buffer allocation to handle bursty financial workloads without packet loss.

Juniper Networks advanced 100G Ethernet interoperability with the QFX5100 series, introduced in 2013, which supported 100GbE ports alongside rich Layer 2/3 features in the Junos operating system, including Ethernet VPN (EVPN) for multi-tenant data center fabrics. The QFX5100 enabled high-density 100G uplinks with low-latency forwarding, facilitating EVPN-based VXLAN overlays for virtualized environments and ensuring standards-compliant operation across diverse network topologies.

Huawei bolstered 100G Ethernet for carrier-grade applications with the CloudEngine 16800 series, a modular data center switch featuring high-density 100G/400G ports and intelligent lossless forwarding algorithms tailored for 5G backhaul networks. These switches supported massive east-west traffic in 5G fronthaul and midhaul scenarios, incorporating AI-driven optimization to minimize latency and enhance reliability in ultra-dense deployments.

Broadcom's Jericho family of switch chips played a pivotal role in enabling scalable 100G Ethernet architectures, with models like Jericho2 and Jericho3 providing up to 10 Tbps of throughput per device and support for 100GbE ports through integrated SerDes and deep buffering. These merchant silicon solutions powered third-party routers and switches, allowing flexible, high-scale implementations in service provider and enterprise networks while adhering to IEEE 802.3 standards.

Mellanox, now part of NVIDIA, developed InfiniBand-to-Ethernet bridges such as the Skyway GA100 appliance, which facilitated 100G connectivity between high-performance computing clusters using InfiniBand and Ethernet-based storage or networks. This gateway supported sub-microsecond latencies and RDMA over Converged Ethernet (RoCE), bridging legacy InfiniBand fabrics to 100GbE infrastructures for hybrid AI and cloud workloads.

In the 2020s, Dell enhanced 100G Ethernet for AI-driven fabrics with its PowerSwitch Z-series, including models like the Z9664F-ON, which delivered high-density 100G/400G ports optimized for distributed AI training and inference. These switches integrated with NVIDIA technologies to form lossless Ethernet fabrics, supporting adaptive routing and congestion control for massive-scale GPU interconnects in data centers.

Extreme Networks contributed to metro Ethernet evolution with the SLX 9740 router, a compact fixed-form-factor device offering 80 x 100GbE ports for high-density border routing in service provider networks. Launched in 2021, it featured ultra-deep buffers and IPv4/IPv6 dual-stack support, enabling efficient 100G aggregation in metro access and peering scenarios while maintaining low power consumption.

Applications in Data Centers and Networks

100 Gigabit Ethernet (100G Ethernet) has become integral to data center architectures, particularly in hyperscale environments where high-bandwidth, low-latency interconnects are essential for handling massive data workloads. In spine-leaf topologies, widely adopted by providers like Amazon Web Services (AWS) and Microsoft Azure, 100G uplinks serve as the backbone for aggregating traffic from thousands of servers, enabling seamless scaling for cloud computing and storage services. This configuration reduces end-to-end latency to approximately 1 μs in optimized setups, supporting real-time applications such as virtual desktop infrastructure and big data analytics. For example, in 2025, Hetzner established inter-data center fiber links at 100 Gbit/s and 400 Gbit/s connecting its locations in Nuremberg, Falkenstein, Frankfurt, and Helsinki, demonstrating ongoing 100GbE deployments for network aggregation.

In telecommunications networks, 100G Ethernet facilitates efficient 5G deployments, especially in cloud radio access networks (C-RAN), where it handles fronthaul traffic between remote radio heads and centralized baseband units, accommodating the high data rates required for massive MIMO and beamforming. Additionally, in metro Ethernet networks, 100G links support the surging demand for video streaming services, delivering uncompressed 4K and 8K content over fiber infrastructures with minimal buffering.

High-performance computing (HPC) and artificial intelligence (AI) clusters leverage 100G Ethernet for its balance of performance and cost, as seen in NVIDIA DGX systems that use 100G Remote Direct Memory Access over Converged Ethernet (RoCE) to interconnect GPUs for distributed training of large language models. These implementations target energy efficiency improvements, aiming for power consumption as low as 0.5 pJ/bit in transceiver designs to sustain exascale computing without excessive operational costs.

Market adoption of 100G Ethernet has grown rapidly, with global deployments exceeding 10 million ports by 2023, driven by the need for upgraded infrastructure in edge and core networks. Although migrations to 400G are accelerating as of 2025, 100G persists in edge computing scenarios due to its maturity and lower deployment barriers. As of 2024, the global 100G Ethernet market was valued at approximately $19 billion, with continued growth projected into 2025, driven by AI and 5G applications. Key benefits include sufficient bandwidth for emerging 4K/8K video applications, with the cost per Gbps for short-reach optics dropping to around $0.70 by 2024 through economies of scale in optics and silicon.
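
The oversubscription ratios discussed earlier follow from simple port arithmetic in a spine-leaf fabric. The sketch below works the numbers for a hypothetical leaf switch; the port counts are invented for illustration.

    # Oversubscription: aggregate server-facing vs. spine-facing bandwidth.
    def oversubscription(downlinks: int, down_gbps: float,
                         uplinks: int, up_gbps: float) -> float:
        """Ratio of downlink to uplink capacity on a leaf switch."""
        return (downlinks * down_gbps) / (uplinks * up_gbps)

    # Hypothetical leaf: 48 x 25G server ports, 4 x 100G uplinks to the spine.
    ratio = oversubscription(48, 25, 4, 100)
    print(f"Oversubscription: {ratio:.1f}:1")   # 3.0:1, the ratio commonly
                                                # cited for data center designs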
