Epyc
General information
  • Launched: June 20, 2017
  • Marketed by: AMD
  • Designed by: AMD
Performance
  • Max. CPU clock rate: 2.7 GHz to 5.7 GHz
Architecture and classification
  • Technology node: 14 nm to 3 nm
  • Instruction set: AMD64 (x86-64)
Physical specifications
  • Cores: up to 192 cores/384 threads per socket
  • Memory (RAM): up to 12 memory channels at 6400 MT/s
Products, models, variants
  • Core names: Naples, Rome, Milan, Genoa, Bergamo, Siena, Raphael, Turin
History
  • Predecessor: Opteron

Epyc (stylized as EPYC) is a brand of multi-core x86-64 microprocessors designed and sold by AMD, based on the company's Zen microarchitecture. Introduced in June 2017, they are targeted at the server and embedded system markets.[1]

Epyc processors share the same microarchitecture as their regular desktop-grade counterparts, but have enterprise-grade features such as higher core counts, more PCI Express lanes, support for larger amounts of RAM, support for ECC memory, and larger CPU cache. They also support multi-chip and dual-socket system configurations by using the Infinity Fabric interconnect.

History

  • In March 2017, AMD announced plans to re-enter the server market with a platform based on the Zen microarchitecture, codenamed Naples, and officially revealed it under the brand name Epyc in May.[2] That June, AMD launched the Epyc 7001 series processors, offering up to 32 cores per socket and performance competitive with Intel's Xeon Scalable product line.[3] In August 2019, the Epyc 7002 "Rome" series processors, based on the Zen 2 microarchitecture, launched, doubling the core count per socket to 64 and dramatically increasing per-core performance over the previous generation.
  • In March 2021, AMD launched the Epyc 7003 "Milan" series, based on the Zen 3 microarchitecture.[4] Epyc Milan brought the same 64 cores as Epyc Rome, but with much higher per-core performance, with the Epyc 7763 beating the Epyc 7702 by up to 22 percent despite having the same number of cores and threads.[5] A refresh of the Epyc 7003 "Milan" series with 3D V-Cache, named Milan-X, launched on March 21, 2022, using the same cores as Milan, but with an additional 512 MB of cache stacked onto the compute dies, bringing the total amount of cache per CPU to 768 MB.[6]
  • In September 2021, Oak Ridge National Laboratory partnered with AMD and HPE Cray to build Frontier, a supercomputer with 9,472 Epyc 7453 CPUs and 37,888 Instinct MI250X GPUs, becoming operational by May 2022. As of November 2023, it is the most powerful supercomputer in the world according to the TOP500, with a peak performance of over 1.6 exaFLOPS.
  • In November 2021, AMD detailed the upcoming generations of Epyc and unveiled the new LGA-6096 SP5 socket that would support them. Codenamed Genoa, these CPUs are based on the Zen 4 microarchitecture and built on TSMC's N5 node, supporting up to 96 cores and 192 threads per socket, alongside 12 channels of DDR5[7] and 128 PCIe 5.0 lanes. Genoa also became the first x86 server CPU to support Compute Express Link 1.1 (CXL),[8] allowing for further expansion of memory and other devices over a high-bandwidth interface built on PCIe 5.0. AMD also shared information regarding the sister chip of Genoa, codenamed Bergamo. Bergamo is based on a modified version of Zen 4 named Zen 4c, designed to allow much higher core counts and efficiency at the cost of lower single-core performance, targeting cloud providers and cloud-native workloads rather than traditional high-performance computing workloads.[9] It is compatible with Socket SP5 and supports up to 128 cores and 256 threads per socket.[10]
  • In November 2022, AMD launched their 4th generation Epyc "Genoa" series of CPUs. Some tech reviewers and customers had already received hardware for testing and benchmarking, so third-party benchmarks of Genoa parts were immediately available. The flagship part, the 96-core Epyc 9654, set records for multi-core performance, showing up to 4× the performance of Intel's flagship part, the Xeon Platinum 8380. High memory bandwidth and extensive PCIe connectivity removed many bottlenecks, allowing all 96 cores to be utilized in workloads where previous-generation Milan chips would have been I/O-bound.
  • In June 2023, AMD began shipping the 3D V-Cache enabled Genoa-X lineup, a variant of Genoa that uses the same 3D die stacking technology as Milan-X to enable up to 1152 MB of L3 cache, a 50% increase over Milan-X, which had a maximum of 768 MB of L3 cache.[11] On the same day, AMD also announced the release of their cloud optimized Zen 4c SKUs, codenamed Bergamo, offering up to 128 cores per socket, utilizing a modified version of the Zen 4 core that was optimized for power efficiency and to reduce die space. Zen 4c cores do not have any instructions removed compared to standard Zen 4 cores; instead, the amount of L3 cache per CCX is reduced from 32 MB to 16 MB, and the frequency of the cores is reduced.[12] Bergamo is socket compatible with Genoa, using the same SP5 socket and supporting the same CXL, PCIe, and DDR5 capacity as Genoa.[13]
  • In September 2023, AMD launched their low power and embedded 8004 series of CPUs, codenamed Siena. Siena utilizes a new socket, called SP6, which has a smaller footprint and pin count than the SP5 socket of its contemporary Genoa processors. Siena utilizes the same Zen 4c core architecture as Bergamo cloud native processors, allowing up to 64 cores per processor, and the same 6 nm I/O die as Bergamo and Genoa, although certain features have been cut down, such as reducing the memory support from 12 channels of DDR5 to only 6, and removing dual socket support.[14]
  • In May 2024, AMD launched the new 4004 series of CPUs, codenamed Raphael, sharing the same AM5 socket as desktop Ryzen CPUs. In contrast to desktop parts, ECC memory is officially supported. Few AM5 motherboard vendors support the 4004 series, so available options are largely limited to boards that are not suitable for desktop use.
  • On October 10, 2024, AMD launched the new 9005 series of CPUs, codenamed Turin. Sharing the same SP5 socket as Genoa and Bergamo, Turin brought numerous platform advancements, including support for up to 6400 MT/s DDR5 memory.[15] Turin also increased core count and frequency offerings, with Turin offering 128 Zen 5 cores per socket and Turin Dense offering 192 Zen 5c cores per socket; the highest-frequency SKU, the Epyc 9575F, boosts up to 5 GHz.[16]

AMD Epyc CPU codenames follow the naming scheme of Italian cities, including Milan, Rome, Naples, Genoa, Bergamo, Siena, Turin and Venice.

CPU generations

AMD Epyc CPU generations[17][18][19][20][21]
Gen | Year | Codename | Product line | Cores | Socket | Memory
Server
1st | 2017 | Naples | 7001 series | 32 × Zen | SP3 (LGA) | DDR4
2nd | 2019 | Rome | 7002 series | 64 × Zen 2 | SP3 (LGA) | DDR4
3rd | 2021 | Milan | 7003 series | 64 × Zen 3 | SP3 (LGA) | DDR4
3rd | 2022 | Milan-X | 7003 series | 64 × Zen 3 | SP3 (LGA) | DDR4
4th | 2022 | Genoa | 9004 series | 96 × Zen 4 | SP5 (LGA) | DDR5
4th | 2023 | Genoa-X | 9004 series | 96 × Zen 4 | SP5 (LGA) | DDR5
4th | 2023 | Bergamo | 9004 series | 128 × Zen 4c | SP5 (LGA) | DDR5
4th | 2023 | Siena | 8004 series | 64 × Zen 4c | SP6 (LGA) | DDR5
4th | 2024 | Raphael | 4004 series | 16 × Zen 4 | AM5 (LGA) | DDR5
5th | 2024 | Turin | 9005 series | 128 × Zen 5 | SP5 (LGA) | DDR5
5th | 2024 | Turin Dense | 9005 series | 192 × Zen 5c | SP5 (LGA) | DDR5
5th | 2025 | Grado | 4005 series | 16 × Zen 5 | AM5 (LGA) | DDR5
Embedded
1st | 2018 | Snowy Owl | Embedded 3001 series | 16 × Zen | SP4 (BGA) | DDR4
1st | 2019 | Naples | Embedded 7001 series | 32 × Zen | SP3 (BGA) | DDR4
2nd | 2021 | Rome | Embedded 7002 series | 64 × Zen 2 | SP3 (BGA) | DDR4
3rd | 2022 | Milan | Embedded 7003 series | 64 × Zen 3 | SP3 (BGA) | DDR4
4th | 2023 | Genoa | Embedded 9004 series | 96 × Zen 4 | SP5 (BGA) | DDR5
4th | 2023 | Siena | Embedded 8004 series | 64 × Zen 4c | SP6 (BGA) | DDR5
5th | 2025 | Turin | Embedded 9005 series | 128 × Zen 5 | SP5 (BGA) | DDR5
5th | 2025 | Turin Dense | Embedded 9005 series | 192 × Zen 5c | SP5 (BGA) | DDR5

Design

A delidded second gen Epyc 7702, showing the die configuration

Epyc CPUs use a multi-chip module design to enable higher yields than traditional monolithic dies. First-generation Epyc CPUs are composed of four 14 nm compute dies, each with up to 8 cores.[22][23] Cores are symmetrically disabled across dies to create lower-binned products with fewer cores but the same I/O and memory footprint. Second- and third-generation Epyc CPUs are composed of eight compute dies built on a 7 nm process node and a large input/output (I/O) die built on a 14 nm process node.[24] Third-generation Milan-X CPUs use through-silicon vias (TSVs) to stack an additional die on top of each of the 8 compute dies, adding 64 MB of L3 cache per die.[25]

Epyc CPUs support both single-socket and dual-socket operation. In a dual-socket configuration, 64 PCIe lanes from each CPU are allocated to AMD's proprietary Infinity Fabric interconnect to provide full bandwidth between the two CPUs.[26] Thus, a dual-socket configuration has the same number of usable PCIe lanes as a single-socket configuration. First-generation Epyc CPUs had 128 PCIe 3.0 lanes, while the second and third generations had 128 PCIe 4.0 lanes. Epyc CPUs through the third generation are equipped with up to eight channels of DDR4 at varying speeds, while Genoa and later generations support up to twelve channels of DDR5.[7][27]
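The lane and bandwidth arithmetic described above can be sketched in a few lines of Python (a back-of-envelope illustration using the nominal figures quoted in this article; note that later platforms such as Turin can expose up to 160 lanes in 2P systems by reallocating some fabric links):

```python
# Usable PCIe lanes: each Epyc CPU exposes 128 lanes; in a dual-socket
# (2P) system, 64 lanes per CPU are repurposed as Infinity Fabric links.
LANES_PER_CPU = 128
LANES_FOR_FABRIC = 64  # per CPU, only in 2P systems

def usable_lanes(sockets: int) -> int:
    if sockets == 1:
        return LANES_PER_CPU
    return sockets * (LANES_PER_CPU - LANES_FOR_FABRIC)

# Peak theoretical memory bandwidth: channels × transfer rate × 8 bytes
# (each DDR channel is 64 bits wide).
def peak_mem_bw_gbs(channels: int, mt_per_s: int) -> float:
    return channels * mt_per_s * 8 / 1000  # MT/s × 8 B = MB/s, then GB/s

print(usable_lanes(1))            # 128 lanes in a 1P system
print(usable_lanes(2))            # 128 lanes in a 2P system
print(peak_mem_bw_gbs(8, 3200))   # 8 × DDR4-3200 → 204.8 GB/s
print(peak_mem_bw_gbs(12, 6400))  # 12 × DDR5-6400 → 614.4 GB/s
```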

Unlike Opteron, Intel equivalents, and AMD's desktop processors (excluding Socket AM1), Epyc processors are chipset-free, functioning as a system on a chip. Most features required for a fully functional server (such as memory, PCI Express, and SATA controllers) are integrated into the processor, eliminating the need for a chipset on the mainboard. Some features may still require additional controller chips.

A near-infrared photograph of a delidded second gen Epyc 7702. Each CCD has two CCXs.

Reception


Initial reception to Epyc was generally positive.[27] Epyc was generally found to outperform Intel CPUs in cases where the cores could work independently, such as in high-performance computing and big-data applications. First generation Epyc fell behind in database tasks compared to Intel's Xeon parts due to higher cache latency.[27] In 2021 Meta Platforms selected Epyc chips for its metaverse data centers.[28]

Epyc Genoa was well received, as it offered improved performance and efficiency compared to previous offerings, though it drew some criticism for launching without validated two-DIMMs-per-channel (2DPC) memory configurations, with some reviewers calling it an "incomplete platform".[29]

List of Epyc processors


Server


First generation Epyc (Naples)


The first generation was composed solely of 7001 series SKUs, all using the same MCM topology with four Zeppelin dies interconnected on the package. Each SoC die contributes two DDR4 memory channels, 32 external PCIe 3.0 lanes, two 4-core core complexes, and associated I/O interfaces such as 4 SATA ports and several USB ports.
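As a quick sanity check, the per-socket totals follow from multiplying out the four Zeppelin dies described above (a minimal sketch using only the numbers given in this section):

```python
# Naples (Epyc 7001): per-socket resources = 4 Zeppelin dies × per-die resources.
DIES = 4
DDR4_CHANNELS_PER_DIE = 2
PCIE3_LANES_PER_DIE = 32
CCX_PER_DIE = 2
CORES_PER_CCX = 4  # fully enabled die

print(DIES * DDR4_CHANNELS_PER_DIE)        # 8 memory channels per socket
print(DIES * PCIE3_LANES_PER_DIE)          # 128 PCIe 3.0 lanes per socket
print(DIES * CCX_PER_DIE * CORES_PER_CCX)  # 32 cores per socket
```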

EPYC 7001 series

Common features:

  • SP3 socket
  • Zen microarchitecture
  • GloFo 14 nm process
  • MCM with four System-on-a-chip (SOC) dies, two core complexes (CCX) per SOC die[30]
  • Eight-channel DDR4-2666 (the 7251 model is limited to DDR4-2400)
  • 128 PCIe 3.0 lanes per socket, 64 of which are used for Infinity Fabric inter-processor links in 2P platforms
  • 7xx1P series models are limited to uniprocessor operation (1P, single-socket)
Model | Cores (threads) | Chiplets | Core config[i] | Base (GHz) | Boost (GHz) | L2 per core | L3 per CCX | Total cache | Socket | Scaling | TDP (W) | Release date | Release price | Embedded options[ii]
7251[31][32]   8 (16) 4[30]   8 × 1 2.1 2.9 512 KB 4 MB 36 MB  SP3 2P 120 Jun 2017[33]   $475 7251
7261[31][34] 2.5 2.9 8 MB 68 MB 2P 155/170 Jun 2018[35]   $570 7261
7281[31][32] 16 (32)   8 × 2 2.1 2.7 4 MB 40 MB 2P 155/170 Jun 2017[33]   $650 7281
7301[31][32] 2.2 2.7 8 MB 72 MB 2P   $800 7301
7351(P)[31][32] 2.4 2.9 2P (1P) $1100 ($750) 7351(735P)
7371[31][36] 3.1 3.8 2P 200 Nov 2018[37] $1550 7371
7401(P)[31][32] 24 (48)   8 × 3 2.0 3.0 8 MB 76 MB 2P (1P) 155/170 Jun 2017[33] $1850 ($1075) 7401(740P)
7451[31][32] 2.3 3.2 2P 180 $2400 7451
7501[31][32] 32 (64)   8 × 4 2.0 3.0 8 MB 80 MB 2P 155/170 Jun 2017[33] $3400 7501
7551(P)[31][32] 2.0 3.0 2P (1P) 180 $3400 ($2100) 7551(755P)
7571[38][39] 2.2 3.0 2P 200 Nov 2018 OEM/AWS --
7601[31][32] 2.2 3.2 2P 180 Jun 2017[33] $4200 7601
  1. ^ Core Complexes (CCX) × cores per CCX
  2. ^ Epyc Embedded 7001 series models have identical specifications as the respective Epyc 7001 series.
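The core config footnote ("Core Complexes (CCX) × cores per CCX") and the cache columns are related by simple arithmetic: total cores = CCX count × cores per CCX, and the Total column equals total L2 (512 KB per core) plus total L3 (L3 per CCX × CCX count). A minimal sketch using the table's own figures:

```python
def epyc7001_totals(ccx: int, cores_per_ccx: int, l3_per_ccx_mb: int):
    """Derive core count and total cache (MB) from a 7001-series core config."""
    cores = ccx * cores_per_ccx
    l2_total_mb = cores * 0.5  # 512 KB of L2 per core
    l3_total_mb = ccx * l3_per_ccx_mb
    return cores, l2_total_mb + l3_total_mb

print(epyc7001_totals(8, 4, 8))  # Epyc 7601: (32, 80.0) → 32 cores, 80 MB total
print(epyc7001_totals(8, 1, 4))  # Epyc 7251: (8, 36.0) → 8 cores, 36 MB total
```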
An Epyc 7001 die configuration
A second generation Epyc CPU in an SP3 socket

Second generation Epyc (Rome)

First generation Epyc processor

In November 2018, AMD announced Epyc 2 at their Next Horizon event, the second generation of Epyc processors, codenamed "Rome" and based on the Zen 2 microarchitecture.[40] The processors feature up to eight 7 nm compute chiplets interconnected via Infinity Fabric with a central 14 nm I/O die providing 128 PCIe 4.0 lanes. The processors support up to 8 channels of DDR4 RAM (up to 4 TB) and introduce support for PCIe 4.0, with up to 64 cores and 128 SMT threads per socket.[41] The 7 nm "Rome" chiplets are manufactured by TSMC.[24] It was released on August 7, 2019,[42] and has 39.5 billion transistors.[43]

In April 2020, AMD launched three new SKUs using Epyc's 7 nm Rome platform. The three processors introduced were the eight-core Epyc 7F32, the 16-core 7F52, and the 24-core 7F72, featuring base clocks up to 3.7 GHz (up to 3.9 GHz with boost) within a TDP range of 180 to 240 watts. The launch was supported by Dell EMC, Hewlett Packard Enterprise, Lenovo, Supermicro, and Nutanix.[44]

EPYC 7002 series

Common features:

  • SP3 socket
  • TSMC 7 nm process for the compute dies, GloFo 14 nm process for the I/O die
  • MCM with one I/O Die (IOD) and multiple Core Complex Dies (CCD) for compute, two core complexes (CCX) per CCD chiplet
  • Eight-channel DDR4-3200
  • 128 PCIe 4.0 lanes per socket, 64 of which are used for Infinity Fabric inter-processor links in 2P platforms
  • 7002P series models are limited to uniprocessor operation (1P, single-socket)
Model | Cores (threads) | Chiplets | Core config[i] | Base (GHz) | Boost (GHz) | L2 per core | L3 per CCX | Total cache | Socket | Scaling | TDP | Release date | Release price
7232P   8 (16) 2 + IOD   4 × 2 3.1 3.2 512 KB   8 MB 36 MB   SP3 1P 120 W Aug 7, 2019   $450
7252   4 × 2 3.1 3.2 16 MB 68 MB 2P   $475
7262 4 + IOD   8 × 1 3.2 3.4 132 MB 155 W   $575
7F32   8 × 1 3.7 3.9 132 MB 180 W Apr 14, 2020[45] $2100
7272 12 (24) 2 + IOD   4 × 3 2.9 3.2 16 MB 70 MB 2P 120 W Aug 7, 2019   $625
7282 16 (32) 2 + IOD   4 × 4 2.8 3.2 16 MB 72 MB 2P 120 W Aug 7, 2019   $650
7302(P) 4 + IOD   8 × 2 3.0 3.3 136 MB 2P (1P) 155 W   $978 ($825)
7F52 8 + IOD 16 × 1 3.5 3.9 264 MB 2P 240 W Apr 14, 2020[45] $3100
7352 24 (48) 4 + IOD   8 × 3 2.3 3.2 16 MB 140 MB 2P 155 W Aug 7, 2019 $1350
7402(P) 2.8 3.35 2P (1P) 180 W $1783 ($1250)
7F72 6 + IOD 12 × 2 3.2 3.7 204 MB 2P 240 W Apr 14, 2020[45] $2450
7452 32 (64) 4 + IOD   8 × 4 2.35 3.35 16 MB 144 MB 2P 155 W Aug 7, 2019 $2025
7502(P) 2.5 3.35 2P (1P) 180 W $2600 ($2300)
7542 2.9 3.4 2P 225 W $3400
7532 8 + IOD 16 × 2 2.4 3.3 272 MB 200 W $3350
7552 48 (96) 6 + IOD 12 × 4 2.2 3.3 16 MB 216 MB 2P 200 W Aug 7, 2019 $4025
7642 8 + IOD 16 × 3 2.3 3.3 280 MB 225 W $4775
7662 64 (128) 8 + IOD 16 × 4 2.0 3.3 16 MB 288 MB 2P 225 W Aug 7, 2019 $6150
7702(P) 2.0 3.35 2P (1P) 200 W $6450 ($4425)
7742 2.25 3.4 2P 225 W $6950
7H12 2.6 3.3 280 W Sep 18, 2019 ---
  1. ^ Core Complexes (CCX) × cores per CCX
The bottom side of an Epyc 7302 mounted in a plastic carrier

Third generation Epyc (Milan)


At the HPC-AI Advisory Council meeting in the United Kingdom in October 2019, AMD disclosed specifications for Milan, Epyc chips based on the Zen 3 microarchitecture.[46] Milan chips would use Socket SP3, with up to 64 cores per package, and support eight-channel DDR4 RAM and 128 PCIe 4.0 lanes.[46] AMD also announced plans for the subsequent generation of chips, codenamed Genoa, based on the Zen 4 microarchitecture and using Socket SP5.[46]

Milan CPUs were launched by AMD on March 15, 2021.[47]

Milan-X CPUs were launched March 21, 2022.[6] They use 3D V-Cache technology to increase the maximum L3 cache per socket capacity from 256 MB to 768 MB.[48][49][50]

EPYC 7003 series

Common features:

  • SP3 socket
  • Zen 3 microarchitecture
  • TSMC 7 nm process for the compute and cache dies, GloFo 14 nm process for the I/O die
  • MCM with one I/O Die (IOD) and multiple Core Complex Dies (CCD) for compute, one core complex (CCX) per CCD chiplet
  • Eight-channel DDR4-3200
  • 128 PCIe 4.0 lanes per socket, 64 of which are used for Infinity Fabric inter-processor links in 2P platforms
  • 7003X series models include 64 MiB L3 cache dies stacked on top of the compute dies (3D V-Cache)
  • 7003P series models are limited to uniprocessor operation (1P, single-socket)
Model | Cores (threads) | Chiplets | Core config[i] | Base (GHz) | Boost (GHz) | L2 per core | L3 per CCX | Total cache | Socket | Scaling | TDP default (range) | Release price
7203(P)   8 (16) 2 + IOD  2 × 4  2.8  3.4  512 KB 32 MB 68 MB   SP3 2P (1P) 120 W (120-150)   $348 ($338)
72F3 8 + IOD  8 × 1  3.7  4.1 260 MB 2P 180 W (165-200) $2468
7303(P) 16 (32) 2 + IOD  2 × 8  2.4  3.4 32 MB 72 MB 2P (1P) 130 W (120-150)   $604 ($594)
7313(P) 4 + IOD  4 × 4  3.0  3.7 136 MB 2P (1P) 155 W (155-180) $1083 ($913)
7343  3.2  3.9 2P 190 W (165-200) $1565
73F3 8 + IOD  8 × 2  3.5  4.0 264 MB 240 W (225-240) $3521
7373X 8* + IOD  3.05  3.8 96 MB 776 MB 240 W (225-280) $4185
7413 24 (48) 4 + IOD  4 × 6  2.65  3.6 32 MB 140 MB 2P 180 W (165-200) $1825
7443(P)  2.85  4.0 2P (1P) 200 W (165-200) $2010 ($1337)
74F3 8 + IOD  8 × 3  3.2  4.0 268 MB 2P 240 W (225-240) $2900
7473X 8* + IOD  2.8  3.7 96 MB 780 MB 240 W (225-280) $3900
7453 28 (56) 4 + IOD  4 × 7  2.75  3.45 16 MB 78 MB 2P 225 W (225-240) $1570
7513 32 (64) 4 + IOD  4 × 8  2.6  3.65 32 MB 144 MB 2P 200 W (165-200) $2840
7543(P) 8 + IOD  8 × 4  2.8  3.7 272 MB 2P (1P) 225 W (225-240) $3761 ($2730)
75F3  2.95  4.0 2P 280 W (225-280) $4860
7573X 8* + IOD  2.8  3.6 96 MB 784 MB $5590
7R13[51] 48 (96) 6 + IOD  6 × 8  2.65  3.7 32 MB 216 MB TBD TBD OEM/AWS
7643(P) 8 + IOD  8 × 6  2.3  3.6 280 MB 2P (1P) 225 W (225-240) $4995 ($2722)
7663 56 (112) 8 + IOD  8 × 7  2.0  3.5 32 MB 284 MB 2P 240 W (225-240) $6366
7663P 1P 240 W (225-280) $3139
7713(P) 64 (128) 8 + IOD  8 × 8  2.0  3.675 32 MB 288 MB 2P (1P) 225 W (225-240) $7060 ($5010)
7763  2.45  3.4 2P 280 W (225-280) $7890
7773X 8* + IOD  2.2  3.5 96 MB 800 MB $8800
  1. ^ Core Complexes (CCX) × cores per CCX

Fourth generation Epyc (Genoa, Bergamo and Siena)


On November 10, 2022, AMD launched the fourth generation of Epyc server and data center processors based on the Zen 4 microarchitecture, codenamed Genoa.[52] At their launch event, AMD announced that Microsoft and Google would be some of Genoa's customers.[53] Genoa features between 16 and 96 cores with support for PCIe 5.0 and DDR5. There was also an emphasis by AMD on Genoa's energy efficiency, which according to AMD CEO Lisa Su, means "lower total cost of ownership" for enterprise and cloud datacenter clients.[54] Genoa uses AMD's new SP5 (LGA 6096) socket.[55]

On June 13, 2023, AMD introduced Genoa-X with 3D V-Cache technology for technical computing performance and Bergamo (9734, 9754 and 9754S) for cloud native computing.[56]

On September 18, 2023, AMD introduced the low-power Siena lineup of processors, based on the Zen 4c microarchitecture. Siena supports up to 64 cores on the new SP6 socket, which is currently only used by Siena processors. Siena uses the same I/O die as Bergamo, though certain features are removed, such as dual-socket support, or reduced, such as memory support dropping from 12 channels to 6.[57]

In May 2024, AMD launched the Raphael lineup of processors, based on the Zen 4 microarchitecture. Raphael supports up to 16 cores on the AM5 socket.

Model | Fab | Cores (threads) | Chiplets | Core config[i] | Base (GHz) | Boost (GHz) | L1 (MB) | L2 (MB) | L3 (MB) | Socket | Socket count | PCIe 5.0 lanes | Memory support (DDR5, ECC) | TDP | Release date | Price (USD)
Entry Level (Zen 4 cores)
4124P TSMC N5 4 (8) 1 × CCD + 1 × I/OD 1 × 4 3.8 5.1 0.256 4 16 AM5 1P 24 DDR5-5200 dual-channel 65 W May 21, 2024 $149
4244P 6 (12) 1 × 6 3.8 0.384 6 32 $229
4344P 8 (16) 1 × 8 3.8 5.3 0.5 8 32 $329
4364P 4.5 5.4 32 105 W $399
4464P 12 (24) 2 × CCD + 1 × I/OD 2 × 6 3.7 5.4 0.768 12 64 65 W $429
4484PX 4.4 5.6 128 120 W $599
4564P 16 (32) 2 × 8 4.5 5.7 1 16 64 170 W $699
4584PX 4.2 5.7 128 120 W
Low Power & Edge (Zen 4c cores)
8024P TSMC N5 8 (16) 1 × CCD + 1 × I/OD 2 × 4 2.4 3.0 0.5 8 32 SP6 1P 96 DDR5-4800 six-channel 90 W Sep 18, 2023 $409
8024PN 2.05 80 W $525
8124P 16 (32) 2 × CCD + 1 × I/OD 4 × 4 2.45 1 16 64 125 W $639
8124PN 2.0 100 W $790
8224P 24 (48) 4 × 6 2.55 1.5 24 160 W $855
8224PN 2.0 120 W $1,015
8324P 32 (64) 4 × CCD + 1 × I/OD 8 × 4 2.65 2 32 128 180 W $1,895
8324PN 2.05 130 W $2,125
8434P 48 (96) 8 × 6 2.5 3.1 3 48 200 W $2,700
8434PN 2.0 3.0 155 W $3,150
8534P 64 (128) 8 × 8 2.3 3.1 4 64 200 W $4,950
8534PN 2.0 175 W $5,450
Mainstream Enterprise (Zen 4 cores)
9124 TSMC N5 16 (32) 4 × CCD + 1 × I/OD 4 × 4 3.0 3.7 1 16 64 SP5 1P/2P 128 DDR5-4800 twelve-channel 200 W Nov 10, 2022 $1,083
9224 24 (48) 4 × 6 2.5 3.7 1.5 24 200 W $1,825
9254 4 × 6 2.9 4.15 128 220 W $2,299
9334 32 (64) 4 × 8 2.7 3.9 2 32 210 W $2,990
9354 8 × CCD + 1 × I/OD 8 × 4 3.25 3.75 256 280 W $3,420
9354P 1P $2,730
Performance Enterprise (Zen 4 cores)
9174F TSMC N5 16 (32) 8 × CCD + 1 × I/OD 8 × 2 4.1 4.4 1 16 256 SP5 1P/2P 128 DDR5-4800 twelve-channel 320 W Nov 10, 2022 $3,850
9184X 3.55 4.2 768 Jun 13, 2023 $4,928
9274F 24 (48) 8 × 3 4.05 4.3 1.5 24 256 Nov 10, 2022 $3,060
9374F 32 (64) 8 × 4 3.85 4.3 2 32 $4,860
9384X 3.1 3.9 768 Jun 13, 2023 $5,529
9474F 48 (96) 8 × 6 3.6 4.1 3 48 256 360 W Nov 10, 2022 $6,780
High Performance Computing (Zen 4 cores)
9454 TSMC N5 48 (96) 8 × CCD + 1 × I/OD 8 × 6 2.75 3.8 3 48 256 SP5 1P/2P 128 DDR5-4800 twelve-channel 290 W Nov 10, 2022 $5,225
9454P 1P $4,598
9534 64 (128) 8 × 8 2.45 3.7 4 64 1P/2P 280 W $8,803
9554 3.1 3.75 360 W $9,087
9554P 1P $7,104
9634 84 (168) 12 × CCD + 1 × I/OD 12 × 7 2.25 3.7 5.25 84 384 1P/2P 290 W $10,304
9654 96 (192) 12 × 8 2.4 3.7 6 96 360 W $11,805
9654P 1P $10,625
9684X 2.55 3.7 1152 1P/2P 400 W Jun 13, 2023 $14,756
Cloud (Zen 4c cores)
9734 TSMC N5 112 (224) 8 × CCD + 1 × I/OD 16 × 7 2.2 3.0 7 112 256 SP5 1P/2P 128 DDR5-4800 twelve-channel 340 W Jun 13, 2023 $9,600
9754S 128 (128) 16 × 8 2.25 3.1 8 128 360 W $10,200
9754 128 (256) $11,900
  1. ^ Core Complexes (CCX) × cores per CCX

Fifth generation Epyc (Grado, Turin and Turin Dense)


The fifth generation of Epyc processors was showcased by AMD at Computex 2024 on June 3. Named the Epyc 9005 series, it comes in two variants:[58]

  • Zen 5 based, up to 128 cores and 256 threads, built on TSMC N4X process
  • Zen 5c based, up to 192 cores and 384 threads, built on TSMC N3E process

Both variants are officially referred to under the Turin codename by AMD, although the nickname of "Turin Dense" has also been used to refer to the Zen 5c based CPUs.[59]

Turin Dense supports the x2AVIC CPU feature.

Both of these processor series are compatible with the SP5 socket used by Genoa and Bergamo. The Epyc 9005 series was launched on October 10, 2024, at AMD's Advancing AI 2024 event.[60]

In May 2025, AMD announced the Epyc 4005 series of processors, codenamed Grado. They are based on the Zen 5 microarchitecture and support up to 16 cores.[61] Unlike the 9005 series, these processors are Socket AM5 compatible.

Model | Fab | Cores (threads) | Chiplets | Core config[i] | Base (GHz) | Boost (GHz) | L1 per core | L2 per core | L3 (shared) | Socket | Socket count | PCIe 5.0 lanes | Memory support | TDP | Release date | Release price (USD)
Turin Dense (Zen 5c cores)
9645 TSMC N3E 96 (192) 8 × CCD + 1 × I/OD 8 × 12 2.3 3.7 80 KB 1 MB 256 MB SP5 1P/2P 128 (160 in 2-socket systems) DDR5-6400 twelve-channel 320 W Oct 10, 2024 $11048
9745 128 (256) 8 × 16 2.4 400 W $12141
9825 144 (288) 12 × CCD + 1 × I/OD 12 × 12 2.2 384 MB 390 W $13006
9845 160 (320) 10 × CCD + 1 × I/OD 10 × 16 2.1 320 MB 390 W $13564
9965 192 (384) 12 × CCD + 1 × I/OD 12 × 16 2.25 384 MB 500 W $14813
Turin (Zen 5 cores)
9015 TSMC N4X 8 (16) 2 × CCD + 1 × I/OD 2 × 4 3.6 4.1 80 KB 1 MB 64 MB SP5 1P/2P 128 (160 in 2-socket systems) DDR5-6400 twelve-channel 125 W Oct 10, 2024 $527
9115 16 (32) 2 × 8 2.6 4.1 125 W $726
9135 16 (32) 3.65 4.3 200 W $1214
9175F 16 (32) 16 × CCD + 1 × I/OD 16 × 1 4.2 5.0 512 MB 320 W $4256
9255 24 (48) 4 × CCD + 1 × I/OD 4 × 6 3.25 4.3 128 MB 200 W $2495
9275F 24 (48) 8 × CCD + 1 × I/OD 8 × 3 4.1 4.8 256 MB 320 W $3439
9335 32 (64) 4 × CCD + 1 × I/OD 4 × 8 3.0 4.4 128 MB 210 W $3178
9355P 32 (64) 8 × CCD + 1 × I/OD 8 × 4 3.55 4.4 256 MB 1P 128 280 W $2998
9355 32 (64) 3.55 4.4 1P/2P 128 (160 in 2-socket systems) 280 W $3694
9375F 32 (64) 3.8 4.8 320 W $5306
9365 36 (72) 6 × CCD + 1 × I/OD 6 × 6 3.4 4.3 192 MB 300 W $4341
9455P 48 (96) 8 × CCD + 1 × I/OD 8 × 6 3.15 4.4 256 MB 1P 128 300 W $4819
9455 48 (96) 3.15 4.4 1P/2P 128 (160 in 2-socket systems) 300 W $5412
9475F 48 (96) 3.65 4.8 400 W $7592
9535 64 (128) 8 × 8 2.4 4.3 300 W $8992
9555P 64 (128) 3.2 4.4 1P 128 360 W $7983
9555 64 (128) 3.2 4.4 1P/2P 128 (160 in 2-socket systems) 360 W $9826
9575F 64 (128) 3.3 5.0 400 W $11791
9565 72 (144) 12 × CCD + 1 × I/OD 12 × 6 3.15 4.3 384 MB 400 W $10468
9655P 96 (192) 12 × 8 2.5 4.5 1P 128 400 W $10811
9655 96 (192) 2.5 4.5 1P/2P 128 (160 in 2-socket systems) 400 W $11852
9755 128 (256) 16 × CCD + 1 × I/OD 16 × 8 2.7 4.1 512 MB 500 W $12984
Grado (Zen 5 cores)
4245P TSMC N4X 6 (12) 1 × CCD + 1 × I/OD 1 × 6 3.9 5.4 80 KB 1 MB 32 MB AM5 1P 28 DDR5-5600 dual-channel 65 W May 13, 2025 $239
4345P 8 (16) 1 × 8 3.8 5.5 $329
4465P 12 (24) 2 × CCD + 1 × I/OD 2 × 6 3.4 5.4 64 MB $399
4545P 16 (32) 2 × 8 3.0 $549
4565P 4.3 5.7 170 W $589
4585PX 128 MB $699
  1. ^ Core Complexes (CCX) × cores per CCX

Embedded


First generation Epyc (Snowy Owl)


In February 2018, AMD announced the Epyc 3000 series of embedded Zen CPUs.[62]

Common features of EPYC Embedded 3000 series CPUs:

  • Socket: SP4 (31xx and 32xx models use SP4r2 package).
  • All the CPUs support ECC DDR4-2666 in dual-channel mode (3201 supports only DDR4-2133), while 33xx and 34xx models support quad-channel mode.
  • L1 cache: 96 KB (32 KB data + 64 KB instruction) per core.
  • L2 cache: 512 KB per core.
  • All the CPUs support 32 PCIe 3.0 lanes per CCD (max 64 lanes).
  • Fabrication process: GlobalFoundries 14 nm.
Model | Cores (threads) | Base (GHz) | All-core boost (GHz) | Max boost (GHz) | L3 cache (total) | TDP | Chiplets | Core config[i] | Release date
3101[63] 4 (4) 2.1 2.9 2.9 8 MB 35 W 1 × CCD 1 × 4 Feb 2018
3151[63] 4 (8) 2.7 16 MB 45 W 2 × 2
3201[63] 8 (8) 1.5 3.1 3.1 30 W 2 × 4
3251[63] 8 (16) 2.5 55 W
3255[64] 25–55 W Dec 2018
3301[63] 12 (12) 2.0 2.15 3.0 32 MB 65 W 2 × CCD 4 × 3 Feb 2018
3351[63] 12 (24) 1.9 2.75 60–80 W
3401[63] 16 (16) 1.85 2.25 85 W 4 × 4
3451[63] 16 (32) 2.15 2.45 80–100 W
  1. ^ Core Complexes (CCX) × cores per CCX

Later embedded models


Starting with Zen 2, the embedded options simply share the same names as their socketed equivalents, hence the EPYC Embedded 7002, 7003, 8004, 9004, and 9005 series.[65]

Chinese variants


A variant created for the Chinese server market by Hygon Information Technology is the Hygon Dhyana system on a chip.[66][67] It is a variant of AMD's Epyc, so similar that "there is little to no differentiation between the chips".[66] Linux kernel support required "less than 200 lines of new kernel code", and the Dhyana is "mostly a re-branded Zen CPU for the Chinese server market".[67] Later benchmarks showed that certain floating-point instructions perform worse, probably to comply with US export restrictions.[68] AES and other Western cryptography algorithms are replaced by Chinese variants throughout the design.[68]

References

Revisions and contributorsEdit on WikipediaRead on Wikipedia
from Grokipedia
EPYC is a brand of multi-core x86-64 server microprocessors designed and marketed by Advanced Micro Devices (AMD) for enterprise data centers, cloud computing, and high-performance computing applications, utilizing the company's Zen microarchitecture in a chiplet-based design that enables high core counts and scalability.[1][2]
Introduced in 2017 with the first-generation 7001 series (codename Naples) based on Zen cores, EPYC processors have evolved through multiple generations, including the second-generation 7002 series (Rome) in 2019, third-generation 7003 series (Milan) in 2021, fourth-generation 9004 series (Genoa) in 2022, and fifth-generation 9005 series (Turin) launched in October 2024, each iteration incorporating advancements such as increased core densities up to 192 cores per socket, higher clock speeds reaching 5 GHz boosts, support for DDR5 memory with up to 12 channels, and expanded PCIe lanes for enhanced I/O connectivity.[3][4][5]
Key defining characteristics include AMD's Infinity Fabric interconnect for multi-chiplet coherence, which facilitates cost-effective scaling of compute resources while maintaining performance, and optimizations for workloads like AI inference, virtualization, and technical computing, where EPYC has demonstrated leadership in benchmarks with over 250 world records in areas such as throughput, energy efficiency, and total cost of ownership reductions compared to prior generations and competitors.[6][7][3]

History

Origins and development

The origins of AMD EPYC trace back to the Zen microarchitecture project, initiated in the early 2010s as a response to the commercial and performance shortcomings of AMD's prior Bulldozer and Piledriver architectures, which had eroded market share against Intel's offerings. AMD rehired CPU architect Jim Keller in August 2012 to lead the effort, resulting in a ground-up redesign emphasizing higher instructions per clock through features like wider execution units, improved branch prediction, and a 4-wide out-of-order pipeline.[8][9] Lisa Su's ascension to CEO in October 2014 marked a pivotal refocus, with AMD divesting non-core assets and channeling resources into Zen's completion amid financial pressures, including near-bankruptcy risks. This commitment yielded tape-out in late 2015 and initial Zen-based consumer Ryzen launches in March 2017, validating the architecture's ~52% IPC uplift over Excavator cores in independent tests.[10][8] EPYC's server-specific development diverged by adopting a multi-chip module (MCM) with chiplet integration from inception, motivated by yield limitations on monolithic dies at GlobalFoundries' 14 nm node and the need for scalable core counts beyond 8-16. Each EPYC "Naples" processor combined up to four 8-core Zen chiplets with a centralized I/O die via Infinity Fabric links and a silicon interposer, enabling configurations from 8 to 32 cores at TDPs up to 225 W. This design, finalized for data center demands like NUMA-aware memory and PCIe 3.0 expansion, culminated in the EPYC 7001 series launch on June 20, 2017.[11][12]

Initial launch and market entry

The first-generation AMD EPYC processors, codenamed Naples and branded as the EPYC 7001 series, were previewed by AMD on March 7, 2017, as part of its re-entry into the high-performance server market after years of diminished presence against Intel's dominant Xeon line.[13] These processors used the Zen microarchitecture, fabricated on GlobalFoundries' 14 nm process, and supported up to 32 cores per socket, with dual-socket configurations enabling up to 64 cores in a system.[14] AMD positioned EPYC as offering superior per-socket core density and the Infinity Fabric interconnect for scalable multi-chip performance compared with contemporaneous Intel offerings.[15]

Official availability began on June 20, 2017, with initial pricing ranging from approximately $400 for entry-level 8-core models such as the EPYC 7251 (2.1 GHz base, 120 W TDP) to over $4,000 for high-end 32-core parts such as the EPYC 7601 (2.2 GHz base, 180 W TDP).[16][17] Launch-day support came from major original equipment manufacturers (OEMs) including Dell, Hewlett Packard Enterprise (HPE), Lenovo, and Supermicro, which introduced compatible server platforms emphasizing EPYC's advantages in virtualization, database, and HPC workloads.[11] Independent benchmarks at launch highlighted EPYC's competitiveness, with single-socket systems outperforming dual-socket Intel Xeon Scalable equivalents in SPEC integer-rate tests by up to 50% in some configurations, though real-world adoption was tempered by ecosystem maturity and Intel's entrenched position of more than 95% share in x86 servers.[16][15]

Market entry faced challenges from limited initial validation cycles and supply constraints, but EPYC gained traction in cost-sensitive segments such as cloud and edge computing, with early adopters citing lower total cost of ownership (TCO) from higher core-per-dollar ratios.[17] By late 2017, volume shipments ramped through OEM channels and hyperscalers, marking AMD's first meaningful server revenue resurgence since the Opteron era, though the brand captured under 5% market share in its first year amid Intel's response with new Xeon launches.[14]

Evolution through generations

The first-generation AMD EPYC processors, codenamed Naples and marketed as the EPYC 7001 series, launched on June 20, 2017. Built on the Zen microarchitecture using a 14 nm process node, these processors supported up to 32 cores and 64 threads per socket on the SP3 socket, with eight DDR4 memory channels and 128 PCIe 3.0 lanes.[18] This debut marked AMD's re-entry into the server market after the Opteron line, emphasizing a chiplet-based design to scale core counts cost-effectively compared to monolithic dies.

The second generation, the EPYC 7002 series codenamed Rome, arrived on August 7, 2019. Shifting to the Zen 2 microarchitecture with 7 nm compute chiplets (CCDs) alongside a 14 nm I/O die, it doubled the maximum core count to 64 cores and 128 threads per socket while retaining the SP3 socket.[19] Key advancements included higher clock speeds, an improved Infinity Fabric interconnect for better multi-chiplet coherence, and continued support for eight DDR4 channels, now paired with 128 PCIe 4.0 lanes, yielding up to 1.8 times the performance of Naples in certain workloads.[20]

Third-generation EPYC 7003 processors, codenamed Milan, launched on March 15, 2021, incorporating the Zen 3 microarchitecture on refined 7 nm CCDs. Retaining SP3 compatibility, they kept the 64-core maximum but delivered a 19% instructions-per-clock (IPC) uplift over Zen 2, alongside a unified L3 cache per chiplet for reduced latency.[21][22] A variant, Milan-X, introduced in 2022 with 3D V-Cache stacking up to 768 MB of L3 per socket, targeted cache-sensitive applications such as databases and HPC simulations.[23]
| Generation | Codename | Launch date | Microarchitecture | Max cores/threads | Socket | Key process nodes |
|---|---|---|---|---|---|---|
| 1st | Naples | June 20, 2017 | Zen 1 | 32/64 | SP3 | 14 nm |
| 2nd | Rome | August 7, 2019 | Zen 2 | 64/128 | SP3 | 7 nm CCD / 14 nm IOD |
| 3rd | Milan | March 15, 2021 | Zen 3 | 64/128 | SP3 | 7 nm |
| 4th | Genoa / Bergamo | November 10, 2022 (Genoa); June 13, 2023 (Bergamo) | Zen 4 / Zen 4c | 96/192 (Genoa); 128/256 (Bergamo) | SP5 | 5 nm CCD / 6 nm IOD |
| 5th | Turin | October 10, 2024 | Zen 5 | 192/384 | SP5 | 4 nm / 3 nm variants |
The fourth generation transitioned to the EPYC 9004 series with a new SP5 socket, launching Genoa on November 10, 2022, using Zen 4 on 5 nm CCDs and a 6 nm I/O die for up to 96 cores and 12 DDR5 memory channels.[24][25] Bergamo, released June 13, 2023, employed compact Zen 4c cores for density-optimized workloads, achieving 128 cores per socket.[26] These models expanded to 128 PCIe 5.0 lanes and integrated data accelerators for AI inference.

Fifth-generation EPYC 9005 series processors, codenamed Turin, debuted October 10, 2024, leveraging Zen 5 for up to 16% IPC gains and scaling to 192 cores using denser chiplet configurations on advanced nodes including 4 nm and 3 nm elements.[27][28] Supporting 12 DDR5 channels and enhanced Infinity Fabric, Turin emphasizes AI, cloud, and HPC efficiency, with top models reaching 5 GHz boost clocks.[29] Each iteration has refined the chiplet paradigm, prioritizing scalable parallelism, lower power per core, and ecosystem compatibility over raw monolithic scaling.

Architecture and design

Core microarchitecture

The core microarchitecture of AMD EPYC processors is derived from the company's Zen family of x86-64 designs, optimized for server workloads with an emphasis on per-core performance, multithreading via simultaneous multithreading (SMT), and scalability in multi-socket configurations. Each generation brings iterative improvements in instructions per clock (IPC), branch prediction, cache hierarchies, and execution pipelines, enabling higher throughput for compute-intensive tasks such as virtualization, databases, and high-performance computing (HPC).[1][6]

First-generation EPYC processors (7001 series, codenamed Naples), introduced in June 2017, employed the initial Zen microarchitecture on a 14 nm process node, supporting up to 32 cores and 64 threads per socket with 512 KB of L2 cache per core and 8 MB of shared L3 cache per four-core complex (CCX). These cores featured a 4-wide decode/rename stage, execution resources spanning four integer ALUs, two address-generation units, and four floating-point pipes, and a 192-entry reorder buffer, marking AMD's return to competitiveness in server CPUs through balanced integer and floating-point performance.[30]

Second-generation EPYC (7002 series, Rome), launched in August 2019, adopted Zen 2 cores fabricated on a 7 nm node, doubling the L3 cache to 16 MB per CCX and introducing chiplet-based scaling for up to 64 cores per socket. Zen 2 raised IPC by approximately 15% over Zen 1 through wider dispatch, a TAGE-based second-level branch predictor, and doubled floating-point throughput from dual 256-bit FMA units, while adding PCIe 4.0 support for better I/O integration.[31][32]

Third-generation EPYC (7003 series, Milan), released in March 2021, used Zen 3 cores on a refined 7 nm process, merging the two 16 MB L3 slices on each chiplet into a single 32 MB pool shared by all eight cores, reducing latency for cross-core access. Key advancements included restructured integer and floating-point schedulers, a level-1 branch target buffer doubled to 1,024 entries, and up to 19% IPC uplift, enabling configurations of up to 64 cores with sustained boost clocks above 3.5 GHz even in high-core-count variants.[33][34]

Fourth-generation EPYC (9004 series, Genoa), announced in November 2022, incorporated Zen 4 cores on a 5 nm node, supporting up to 96 cores per socket, with variants using dense Zen 4c cores that halve the L3 cache per core to pack more cores per die for cost-sensitive workloads. Zen 4 delivered around 13% IPC gains through an enlarged front end and op cache, AVX-512 support executed double-pumped on 256-bit datapaths, and VNNI/BF16 extensions for AI inference, alongside full PCIe 5.0 compatibility.[35][36]

Fifth-generation EPYC (9005 series, Turin), unveiled in October 2024, employs Zen 5 cores on TSMC's 4 nm process, achieving up to 17% IPC improvement in general workloads and up to 37% in specific AI/HPC tasks via a reworked out-of-order engine, wider 8-wide dispatch, and a full 512-bit AVX-512 datapath on server parts. Dense Zen 5c variants scale to 192 cores per socket by trading cache for density, maintaining compatibility with existing Infinity Fabric interconnects while prioritizing efficiency in large-scale cloud and edge deployments.[37][5][6]
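The generational IPC figures quoted above compound multiplicatively. As a rough illustration (assuming the approximate vendor-quoted uplifts of 15%, 19%, 13%, and 17%, and ignoring frequency, core-count, and workload differences), the cumulative uplift of Zen 5 over Zen 1 can be estimated as:

```python
# Compound the approximate generational IPC uplifts cited above.
# Illustrative arithmetic only; real gains are workload-dependent.
uplifts = {"Zen 2": 0.15, "Zen 3": 0.19, "Zen 4": 0.13, "Zen 5": 0.17}

ipc = 1.0  # normalize Zen 1 IPC to 1.0
for gen, gain in uplifts.items():
    ipc *= 1.0 + gain
    print(f"{gen}: ~{ipc:.2f}x Zen 1 IPC")
```

Under these assumptions, per-core throughput at equal clocks roughly compounds to 1.8x across the five generations, before any contribution from higher frequencies or core counts.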

Chiplet-based multi-chip module

AMD EPYC processors from the second generation (Rome) onward use a chiplet-based multi-chip module (MCM) architecture, integrating multiple specialized dies into a single package (socket SP3 through the third generation, SP5 from the fourth). This design separates compute logic into core complex dies (CCDs) and system interfaces into a dedicated I/O die (IOD), connected via AMD Infinity Fabric links operating at up to 40 GT/s in recent implementations.[38] The approach enables scalable core counts by adding CCDs, reaching up to 12 CCDs in fifth-generation Turin processors for configurations exceeding 128 cores.[6]

Each CCD typically houses eight full Zen cores with 32 MB of L3 cache in standard variants, or denser Zen "c" cores with less cache per core for higher thread density, fabricated on leading-edge nodes such as TSMC's 5 nm for fourth-generation Genoa or 3 nm-class for Turin.[32] The central IOD, produced on more mature processes such as 12 nm (Rome and Milan) or 6 nm (Genoa and Turin), integrates twelve DDR5 memory controllers (eight DDR4 controllers in earlier generations), up to 128 PCIe 5.0 lanes, and CXL connectivity, exposing these resources uniformly across the package to minimize NUMA penalties.[39] Inter-die communication relies on the Global Memory Interconnect (GMI), a subset of Infinity Fabric, with each CCD linking directly to the IOD through dedicated high-bandwidth ports for low-latency access to shared resources.[38]

This modular structure improves manufacturing yields: a defective CCD can be discarded without scrapping the entire processor, and smaller dies are statistically less likely to contain a defect in the first place.[12] It also facilitates heterogeneous integration, mixing process technologies to optimize cost and performance (advanced nodes for core density, cost-effective nodes for I/O), resulting in processors such as the EPYC 9755 with 128 Zen 5 cores at a 500 W TDP. AMD's deployment data show this approach scales beyond what monolithic designs allow, enabling 96-core Genoa parts at yields unattainable with a single-die equivalent.[40] The trade-off is a minor latency overhead on inter-chiplet traffic, mitigated by Infinity Fabric's low-hop topology and clock-domain synchronization.[41]
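The yield argument can be made concrete with the standard Poisson die-yield model, Y = exp(-A·D). The defect density and die areas below are illustrative assumptions for the sake of the sketch, not AMD or foundry data:

```python
import math

def die_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson model: probability a die of the given area is defect-free."""
    return math.exp(-area_cm2 * defects_per_cm2)

D = 0.1                 # assumed defect density (defects per cm^2)
mono_area = 8.0         # hypothetical monolithic die, 8 cm^2
chiplet_area = 1.0      # eight 1 cm^2 chiplets with the same total area

y_mono = die_yield(mono_area, D)       # most large dies contain a defect
y_chiplet = die_yield(chiplet_area, D) # most small dies are clean

print(f"monolithic yield:  {y_mono:.1%}")   # ~44.9%
print(f"per-chiplet yield: {y_chiplet:.1%}") # ~90.5%
```

Because chiplets are tested and binned individually, a single defect costs one small die rather than an entire processor's worth of silicon, which is the effect described for the 96-core Genoa parts.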

Interconnect and fabric technology

AMD EPYC processors employ Infinity Fabric, a proprietary scalable interconnect that provides high-bandwidth, low-latency data transfer and cache coherency across chiplets within a socket and between sockets in multi-processor configurations. The fabric replaces traditional on-die buses with a modular die-to-die linking system derived from earlier technologies such as HyperTransport, using sensors embedded in each die to manage power and data flow dynamically.[42][43]

In EPYC's chiplet-based design, Infinity Fabric connects up to 12 core complex dies (CCDs), each housing multiple Zen cores and shared L3 cache, to a central I/O die (IOD) responsible for memory controllers, PCIe interfaces, and other peripherals, creating a unified NUMA domain per socket. Links operate with asymmetric bandwidth, providing 32 bytes per cycle for reads and 16 bytes for writes at the Infinity Fabric clock (FCLK), which typically ranges from 1.0 to 1.8 GHz depending on configuration and cooling. The intra-socket topology is a star with the IOD as the hub, so data access between any two CCDs takes at most two hops.[41][44][42]

Early implementations in first-generation EPYC (Naples, 2017) delivered point-to-point die bandwidth of 10.65 GB/s, scaling to aggregate throughputs of 41.4 GB/s per die in dual-socket setups. Subsequent generations enhanced performance: second-generation EPYC (Rome, Zen 2) kept similar per-link specifications but optimized the FCLK-to-memory-clock ratio for better efficiency; the third generation (Milan, Zen 3) introduced Infinity Fabric 3.0 with support for coherent GPU integration; the fourth generation (Genoa, Zen 4) doubled CPU-to-CPU connectivity speeds to up to 36 Gb/s per link (using PCIe Gen 5 physical layers with custom protocols) and reduced NUMA latency variance relative to predecessors; and the fifth generation (Turin, Zen 5) further boosts inter-die throughput with dual links per CCD in optimized models, reaching 72 Gb/s aggregate per die.[43][44][42][6]

For multi-socket scalability, external Infinity Fabric links (xGMI) enable direct peer-to-peer communication without full-mesh overhead, with up to four 32 Gbps links per socket in fourth- and fifth-generation models supporting configurations such as 2P systems with 512 GB/s of theoretical inter-socket bandwidth across the four links. These advancements prioritize workload balance in data centers, though effective bandwidth depends on NUMA-aware software tuning to mitigate remote-access penalties.[45][42]
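The quoted 2P figure can be reconstructed with simple link arithmetic. The x16 link width below is an assumption for illustration (the text specifies only the four-link count and a per-lane rate), and encoding/protocol overhead is ignored:

```python
# Back-of-envelope xGMI inter-socket bandwidth, counting both directions.
links = 4             # external Infinity Fabric (xGMI) links per socket
lanes_per_link = 16   # assumed link width (not given in the text)
gbps_per_lane = 32    # per-lane signaling rate

per_direction_gbs = links * lanes_per_link * gbps_per_lane / 8  # GB/s
bidirectional_gbs = 2 * per_direction_gbs

print(f"per direction: {per_direction_gbs:.0f} GB/s")   # 256 GB/s
print(f"bidirectional: {bidirectional_gbs:.0f} GB/s")   # 512 GB/s
```

Under these assumptions the 512 GB/s figure corresponds to aggregate bidirectional bandwidth, with 256 GB/s available in each direction.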

Memory subsystem and I/O capabilities

The memory subsystem of AMD EPYC processors uses integrated memory controllers (located on each compute die in the first generation and consolidated into the central I/O die from the second generation onward) to provide high-bandwidth access via multiple DDR channels per socket, optimized for memory-intensive server workloads such as virtualization and databases. First- through third-generation models (EPYC 7001, 7002, and 7003 series, based on Zen, Zen 2, and Zen 3 cores respectively) support eight channels of DDR4 memory at speeds up to 3200 MT/s, with maximum capacities of 4 TB per socket using registered DIMMs (RDIMMs) or load-reduced DIMMs (LRDIMMs).[46] This configuration delivers aggregate bandwidth exceeding 200 GB/s with balanced channel population, and each channel supports up to two DIMMs for flexibility in 1DPC (one DIMM per channel) or 2DPC setups.[47]

Fourth-generation EPYC 9004 series processors (Genoa, Zen 4-based) advanced to twelve channels of DDR5-4800 memory, raising per-channel data rates by 50% over DDR4-3200 and adding four channels, while supporting up to 6 TB of capacity including 3D-stacked DRAM options.[2] The fifth-generation EPYC 9005 series (Turin, Zen 5-based, launched October 2024) further increases speeds to DDR5-6400 on the same twelve channels, enabling up to 9 TB per socket and aggregate bandwidths approaching 500 GB/s in optimal configurations, while maintaining compatibility with existing DDR5 RDIMMs, LRDIMMs, and NVDIMMs.[38][48] Across generations, AMD recommends populating channels evenly to minimize latency and maximize throughput, with BIOS options for fine-tuning NUMA domains and prefetch behavior.[49]

I/O capabilities center on the IOD's extensive PCIe integration, providing up to 128 lanes per socket for connectivity to GPUs, storage, and networking devices, with bifurcation support down to x1 for flexible endpoint allocation. First-generation processors offered 128 PCIe 3.0 lanes at 8 GT/s, while second- and third-generation models upgraded to PCIe 4.0 at 16 GT/s, doubling per-lane bandwidth.[46] Fourth- and fifth-generation processors deliver 128 PCIe 5.0 lanes at 32 GT/s, and dual-socket systems can expose up to 160 lanes via additional fabric links, enabling high-density accelerator deployments.[38][6] Starting with the fourth generation, support for Compute Express Link (CXL) 1.1+ allows up to 48 dedicated lanes for coherent memory pooling and fabric-attached devices, facilitating disaggregated memory expansion beyond local DRAM limits.[39] Embedded variants such as the EPYC 4004/8004 series scale down to 28-96 lanes while retaining Gen5 compatibility for edge and dense server use cases.[50]
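The bandwidth figures quoted above follow directly from channel count, transfer rate, and the 64-bit (8-byte) data width of a DDR channel. A minimal sketch of the theoretical peaks (sustained bandwidth is always lower in practice):

```python
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak DRAM bandwidth in GB/s: channels * MT/s * bytes."""
    return channels * mt_per_s * bytes_per_transfer / 1000

print(peak_bandwidth_gbs(8, 3200))    # Naples..Milan (DDR4-3200): 204.8 GB/s
print(peak_bandwidth_gbs(12, 4800))   # Genoa (DDR5-4800): 460.8 GB/s
print(peak_bandwidth_gbs(12, 6400))   # Turin (DDR5-6400): 614.4 GB/s
```

The Turin theoretical peak of about 614 GB/s sits above the roughly 500 GB/s figure in the text because sustained, measured bandwidth falls short of the pin-rate maximum.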

Performance and efficiency

Benchmark comparisons

The AMD EPYC processor family has shown marked advantages over Intel Xeon processors in multi-threaded benchmarks, driven by higher core counts, improved IPC from successive Zen architectures, and efficient chiplet designs. In SPEC CPU 2017 integer-rate tests, the fifth-generation EPYC 9965 (192 cores) achieves a peak score of over 1,000 in multi-threaded configurations, surpassing comparable Intel Xeon 6 series results submitted to SPEC.[51] Similarly, the 128-core EPYC 9754 from the fourth-generation Bergamo lineup set records with integer-rate peaks around 1,780, representing a 2.8-fold improvement over Intel's prior Ice Lake Xeon baselines in equivalent setups.[52][53]

Phoronix Test Suite evaluations, spanning compilation, encoding, simulation, and database workloads, further highlight EPYC's edge. The EPYC 9965 Turin variant delivers geometric-mean performance uplifts of 30-50% against the Intel Xeon 6980P across more than 140 multi-threaded tests on Ubuntu platforms, with particular dominance in AVX-512-accelerated tasks such as scientific simulations.[54] In AWS EC2 cloud instances, EPYC Turin-powered m8a configurations outperform Intel Granite Rapids m8i equivalents in CPU-bound Phoronix benchmarks, yielding higher throughput per vCPU even though each vCPU maps to a physical core rather than an SMT thread. AWS disables SMT on many of its EC2 instances powered by fourth- and fifth-generation AMD EPYC processors (including the general-purpose m7a/m8a and HPC hpc7a/hpc8a instances) to improve performance consistency, workload isolation, and predictability: mapping each vCPU directly to a physical core eliminates resource sharing between threads (e.g., execution units and cache), reduces intra-core interference, and provides more stable throughput for demanding workloads such as high-performance computing (HPC), machine learning inference, and transactional databases.[55][56][57]
| Benchmark suite | EPYC model (generation) | Comparable Xeon | Performance edge (EPYC) |
|---|---|---|---|
| SPEC CPU 2017 integer rate (peak) | 9654 (Genoa, 4th) | Ice Lake Platinum | ~2.8x[58] |
| Phoronix geometric mean (multi-threaded) | 9965 (Turin, 5th) | 6980P (Granite Rapids) | 30-50%[54] |
| AWS EC2 CPU-bound tasks | Turin m8a | Granite Rapids m8i | Higher throughput per core[55] |
These results stem from EPYC's scalable core counts and Infinity Fabric interconnect, though Intel retains leads in select single-threaded or latency-sensitive scenarios not emphasized in aggregate server benchmarks.[54] The trend of generational leadership in core-heavy environments began with second-generation Rome (EPYC 7002 series, launched in 2019), which generally outperformed Intel's Cascade Lake (2nd-generation Xeon Scalable) processors in server workloads thanks to stronger multi-threaded performance, higher core counts in flagship models (64 cores in the EPYC 7742 versus 28 in the Xeon Platinum 8280), greater memory bandwidth (eight DDR4-3200 channels versus six DDR4-2933), larger L3 caches, and PCIe 4.0 support (versus PCIe 3.0). Phoronix benchmarks from 2020 showed dual EPYC 7742 systems delivering approximately 14% better average performance than dual Xeon Platinum 8280 across many Linux workloads.[59] Even at comparable 24-core counts, the EPYC 7402 provided a higher base frequency (2.8 GHz versus 2.5 GHz on the Xeon Platinum 8255C), a much larger L3 cache (128 MB versus 35.75 MB), and improved I/O capabilities. Third-generation Milan extended this lead further, exceeding Intel's 3rd-generation Xeon Scalable by up to 50% in SPEC integer-rate results.[60] TPC-style transaction benchmarks are less commonly published for direct head-to-head comparisons, but EPYC's multi-core strength aligns with superior TPC-C throughput in vendor-submitted configurations.[1]

Power consumption and thermal characteristics

AMD EPYC processors carry Thermal Design Power (TDP) ratings that scale with core count and generation, generally ranging from 65 W in entry-level variants such as the EPYC 4004 series up to 500 W in high-end fifth-generation models of the 9005 lineup; lower-power options exist, such as the EPYC 9115 with a default TDP of 125 W and a configurable TDP (cTDP) range of 120 W to 155 W.[61][62][63] Earlier generations, including the fourth-generation 9004 series, feature TDPs from around 200 W for mid-range SKUs such as the 64-core EPYC 8534P up to 400 W for dense configurations.[64] These ratings represent the maximum sustained power dissipation under typical workloads and drive server design for power budgeting and cooling infrastructure.

Actual power draw often deviates from TDP: benchmarks show average consumption under load ranging from 221 W to peaks exceeding 355 W for models such as the EPYC 9554 in performance-oriented modes.[65] In dual-socket configurations with 128-core processors per socket, such as the EPYC 9754 or 9755, total system power draw under load typically reaches 500-800 W or more, accounting for CPU packages, memory, and other components.[66][67] Idle power consumption is notably higher than in consumer AMD Ryzen counterparts, typically 50-110 W for EPYC systems depending on generation, BIOS settings, and peripheral load, because the server-oriented architecture prioritizes scalability over minimal quiescent draw.[68] Successive generations improve performance per watt; for instance, fifth-generation EPYC processors achieve up to 37% better instructions-per-clock efficiency in HPC workloads than fourth-generation equivalents, enabling lower overall energy use for equivalent throughput.[69][70]

Thermal management relies on AMD's Infinity Power Management system, integrated via the System Management Unit (SMU), which dynamically adjusts voltage, frequency, and power allocation across chiplets while monitoring die temperatures to prevent throttling.[71] Operating junction temperatures are rated to sustain up to 95°C under load for standard models, while extended variants such as the EPYC 8004 series "PN" models support ambient environments from -5°C to 85°C for edge deployments.[72] Air-cooled heatsinks suffice for most configurations, as validated by third-party tests in which Noctua TR4-SP3-compatible coolers maintained sub-throttle temperatures on first-generation and later EPYC packages.[73] However, high-TDP SKUs approaching or exceeding 400 W, particularly in dense multi-socket setups, may require liquid cooling to manage heat densities over 700 W per node when paired with accelerators.[74] Future iterations, such as the anticipated EPYC Venice, could surpass 1,000 W, pushing reliance on advanced direct-to-chip liquid cooling beyond traditional air-cooling limits.[75]

Workload-specific optimizations

AMD EPYC processors feature architectural enhancements and BIOS-configurable parameters tailored to high-performance computing (HPC) workloads, including support for up to 192 Zen 5 cores per socket in the 9005 series, which deliver a geometric-mean instructions-per-cycle (IPC) uplift of about 1.37x over the prior generation across selected HPC benchmarks.[6] These optimizations leverage Infinity Fabric interconnects for low-latency scaling across chiplets, enabling efficient parallel processing in simulations and scientific modeling; BIOS settings such as NUMA nodes per socket (NPS) are typically set to NPS1 for compute-bound tasks to minimize remote-memory-access latency.[76]

For artificial intelligence (AI) and machine learning inference, frequency-optimized variants such as the EPYC 9575F prioritize clock speed over core density, achieving over 10x lower latency in model serving than equivalent Intel Xeon processors in latency-constrained environments.[77] Integration with accelerators via up to 160 PCIe Gen5 lanes supports GPU offloading, while the ZenDNN libraries accelerate vectorized operations for computer vision and natural language processing; simultaneous multithreading (SMT), when enabled in the BIOS, can boost throughput by 30-60% in thread-parallel inference workloads.[78][79]

Database and analytics workloads benefit from high memory bandwidth via 12-channel DDR5 support and large last-level caches, with 3D V-Cache-equipped models (denoted by an 'X' suffix) providing up to 768 MB of L3 cache per socket to reduce cache misses in query-intensive operations.[1] BIOS tuning guides recommend NPS4 for memory-bound databases to align NUMA domains with data locality, improving OLTP and OLAP throughput by optimizing data placement across the chiplet-based dies.[80]

Virtualization environments leverage EPYC's high core counts and Secure Encrypted Virtualization (SEV) for isolated VM execution, with BIOS options enabling maximum-performance mode and all cores active to support dense VDI deployments.[81] For network-intensive virtualization, NIC throughput-intensive profiles adjust Infinity Fabric clocking to sustain high packet rates without dynamic downclocking.[82] Financial-services workloads further optimize via compiler flags for AVX-512 utilization and library tuning, yielding measurable gains in risk modeling and transaction processing on 9005 series processors.[83]
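The NPS trade-off described above is easiest to see as a partitioning of cores and memory channels. The sketch below assumes a hypothetical 96-core, 12-channel (Genoa-like) part for illustration; the actual domain layout is defined by firmware:

```python
# How BIOS "NUMA nodes per socket" (NPS) settings partition one socket.
CORES = 96      # assumed core count (Genoa-like, illustrative)
CHANNELS = 12   # assumed DDR5 channel count

for nps in (1, 2, 4):
    cores_per_domain = CORES // nps
    channels_per_domain = CHANNELS // nps
    print(f"NPS{nps}: {nps} NUMA domain(s), "
          f"{cores_per_domain} cores / {channels_per_domain} channels each")
# NPS1: one uniform domain (the compute-bound default); NPS4: four small
# domains so NUMA-aware software can keep data close to the cores using it.
```

Interleaving across all twelve channels (NPS1) maximizes uniform bandwidth, while smaller domains (NPS4) reward software that pins threads and memory together, matching the database guidance above.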

Market reception and impact

Adoption by enterprise and hyperscalers

AMD EPYC processors have seen rapid adoption among hyperscalers, driven by their performance in AI inference and cloud workloads and by their cost efficiency. Major providers including Meta, Google, Amazon, and Oracle expanded EPYC-based instances by approximately 27% in 2024, exceeding 1,000 instances across their platforms as they scaled data center operations. Meta, a leading adopter, has deployed over 1.5 million EPYC CPUs, using them for training, fine-tuning, and running inference on large models such as its 405-billion-parameter Llama 3.1. Amazon Web Services (AWS) disables simultaneous multithreading on many of its EPYC-based EC2 instances, including the hpc7a, hpc8a, and m7a families, mapping each vCPU to a physical core for more consistent, better-isolated performance in tightly coupled HPC, inference, and database workloads.[56][84][85][86] In the second quarter of 2025, the largest hyperscalers introduced more than 100 new AMD-powered instances, reflecting sustained momentum in EPYC integration for high-density computing. OVHcloud, a hyperscale cloud provider, uses EPYC for flexible, high-performance platforms supporting cutting-edge workloads.

Enterprise adoption has accelerated similarly, with EPYC enabling efficiency gains in private data centers, virtualization, and AI applications. Kakao Enterprise, a South Korean cloud provider, reduced its data center footprint by 50% while increasing performance by 30% after migrating to EPYC CPUs. Cybersecurity firm Rubrik integrated 5th Gen EPYC processors across its data security platform in June 2025 to enhance AI-ready cloud deployments. Partnerships with OEMs have broadened access: Dell Technologies offers PowerEdge R6715 and R7715 servers with 5th Gen EPYC, delivering up to 37% more drive capacity; HPE ProLiant Gen11 servers incorporate EPYC for AI, HPC, and virtualization; and Supermicro expanded its MicroBlade portfolio with the EPYC 4005 series in October 2025 for dense edge and enterprise configurations. Governments and organizations worldwide select EPYC-based servers for big data analytics and secure processing.

This uptake correlates with EPYC capturing a record 41% revenue share of the data center CPU market in Q2 2025, according to Mercury Research, up from near zero in 2017, reflecting a shift toward chiplet designs offering superior core counts and memory bandwidth for enterprise-scale deployments. Enterprise adoption tripled year over year in 2024, fueled by EPYC's optimizations for virtual machines, databases, and hybrid cloud environments.

Competition dynamics with Intel Xeon

AMD EPYC processors entered the server market in June 2017 with the first-generation Naples series, challenging Intel's longstanding dominance of the x86 data center CPU segment, where Xeon held over 95% share before AMD's re-entry. AMD's chiplet-based architecture enabled higher core counts at competitive prices, disrupting Intel's premium pricing model sustained by limited competition. By offering up to 32 cores per socket initially, surpassing Intel's then-maximum of 28, EPYC targeted the parallel workloads prevalent in cloud and HPC, where additional cores translate directly into throughput.

Subsequent generations amplified this advantage. Second-generation Rome (2019) reached 64 cores per socket and generally outperformed Intel's contemporaneous Cascade Lake processors on multi-threaded server workloads through higher core counts, greater memory bandwidth, larger L3 caches, and PCIe 4.0 support.[59] Third-generation Milan (2021) added Zen 3 IPC improvements, and fourth-generation Genoa (2022) scaled to 96 cores with Zen 4 efficiency. Intel responded with delayed monolithic designs such as Ice Lake (2021, 40 cores maximum) and Sapphire Rapids (2023, 60 cores), hampered by process-node struggles that limited density and power efficiency. AMD's 5 nm-class processes in later EPYC iterations delivered superior performance per watt, often 1.5-2x in multi-threaded benchmarks against equivalent Xeon SKUs, because modular chiplets allow cost-effective scaling without monolithic-die yield problems. This forced Intel into pricing adjustments, with the Xeon 6 series (launched late 2024) seeing MSRP cuts of up to 30% by January 2025 to counter EPYC 9005 Turin's 192-core density and lower total cost of ownership.[87]

Market share dynamics reflect these technical edges: AMD's server CPU revenue share climbed from under 10% in 2018 to approximately 33% by June 2025, eroding Intel's from over 90% to 62%, per Mercury Research estimates, with hyperscalers favoring EPYC for cost-sensitive, core-heavy deployments.[88] In Q1 2025, AMD reached 39.4% unit share, up 6.5 points quarter over quarter, fueled by EPYC's adoption in AI training and virtualization, where thread-level parallelism outweighs single-thread latency.[89] Intel retained leads in latency-critical enterprise applications through optimized libraries and broader ecosystem maturity, but AMD's value proposition of more cores at 20-50% lower effective pricing shifted economics toward commoditization, prompting Intel's hybrid E-core approach in Sierra Forest to match core density.[90] By mid-2025, both vendors were discounting flagship models by up to 50% amid softening demand, underscoring the intensified rivalry.[91]

Influence on data center economics and AI workloads

AMD EPYC processors have driven significant cost reductions in data center operations through higher core densities and improved energy efficiency compared to competing Intel Xeon offerings, enabling greater workload consolidation and reduced total cost of ownership (TCO). For instance, the chiplet-based design supports up to 192 cores per socket in the 5th generation EPYC (Turin), allowing operators to achieve equivalent performance with fewer servers, which lowers capital expenditures on hardware and rack space while decreasing power and cooling demands.[27] Independent analyses indicate that EPYC deployments can consolidate infrastructure such that refreshed servers require 31% fewer cores for the same workloads, contributing to rack density improvements and operational savings.[92] Hyperscale providers have accelerated adoption, with over 100 new AMD-powered cloud instances launched in Q2 2025 alone, reflecting a market share of 36.5% for AMD's x86 server CPUs that year, driven by these economic advantages.[93][94] Specific case studies underscore these benefits: Twitter (now X) reported a 25% TCO reduction after deploying 2nd generation EPYC processors across its data centers in 2019, primarily from enhanced virtualization efficiency and reduced hardware footprint.[95] In virtualization environments, EPYC has enabled up to 42% lower VMware licensing costs through superior density, yielding CapEx payback periods of approximately two months.[96] AMD's TCO estimator tools further quantify potential savings, showing EPYC systems offsetting costs via reduced energy consumption and emissions compared to Intel equivalents, with full data center builds potentially paying for themselves through efficiency gains.[97] For AI workloads, EPYC processors enhance economics by providing a scalable CPU foundation that optimizes GPU utilization, data preparation, and inference serving, often at lower power draw than alternatives. 
The 4th and 5th generation models deliver over 10x better performance in latency-sensitive inference compared to Intel Xeon, balancing CPU-GPU ecosystems to minimize idle resources and operational expenses.[77] This efficiency supports "everyday AI" tasks like analytics and machine learning preprocessing, where EPYC's high memory bandwidth (up to 12 channels of DDR5) and extensive PCIe lanes (up to 128) reduce the need for additional accelerators, cutting cloud AI costs by improving performance per watt and enabling fewer nodes for training or serving.[98][99] Enterprises report OPEX reductions from EPYC's role in consolidating AI infrastructure, as its core density allows hyperscalers to handle surging demand with optimized power budgets rather than proportional hardware scaling.[100]

Variants and adaptations

Embedded and edge computing variants

AMD develops EPYC Embedded processors specifically for applications requiring long product lifecycles, such as industrial control, networking appliances, storage systems, and edge inference, with availability guarantees extending up to 10 years to support embedded deployments.[101] These variants leverage the same Zen-based microarchitectures as mainstream EPYC server processors but incorporate optimizations like configurable TDP, enhanced reliability features, and support for ruggedized systems to meet the demands of non-data-center environments.[102] The inaugural EPYC Embedded 3000 series, released in 2018 and based on the first-generation Zen core, targets single-socket embedded systems with models ranging from 4 to 16 cores, base clocks up to 2.14 GHz, and configurable TDPs from 45W to 180W.[102] It supports dual- or quad-channel DDR4-2666 ECC memory up to 1 TB, up to 128 PCIe 3.0 lanes, and integrated features like dual 10GbE MACs for networking efficiency, making it suitable for storage controllers and telecom edge nodes.[103] Later revisions in 2020 added models like the 16-core EPYC Embedded 3451 with 32 threads and 64 MB L3 cache, emphasizing power efficiency for industrial applications.[104] Subsequent embedded variants align with EPYC server generations for scalability.
The EPYC Embedded 7002 series (Zen 2, Rome) introduced higher core densities up to 64 cores, PCIe 4.0 support, and improved per-core performance for edge analytics and real-time processing.[101] The fourth-generation EPYC Embedded 9004 series (Zen 4, Genoa) added DDR5 memory channels and up to 96 cores, enhancing bandwidth for AI inference at the edge while maintaining enterprise-grade RAS (reliability, availability, serviceability) features like advanced error correction.[105] In 2025, AMD launched the fifth-generation EPYC Embedded 9005 series (Zen 5, Turin), scaling from 8 to 192 cores with up to 512 MB L3 cache and 160 PCIe 5.0 lanes, optimized for compute-intensive embedded tasks like industrial AI and high-frequency networking.[106] Complementing this, the EPYC Embedded 4005 series, announced on September 16, 2025, focuses on low-power edge computing with up to 16 cores, energy-efficient designs under 100W TDP, AM5 socket compatibility for easier integration, and low-latency optimizations for real-time data processing in compact appliances.[107][108] For broader edge deployments, AMD positions the EPYC 8004 series (Zen 4c, Siena) as a dense, power-optimized option with up to 64 cores in single-socket configurations, delivering cost-effective performance for GPU-accelerated edge workloads while supporting up to 1.152 TB of DDR5 memory.[109] These variants collectively enable edge computing by providing scalable x86 performance in thermally constrained, space-limited environments, outperforming prior embedded solutions in throughput per watt for tasks like video analytics and 5G baseband processing.[110]

Dense and specialized server variants

AMD EPYC processors feature dense variants tailored for high core-density deployments in scale-out data center environments, prioritizing core count over per-core performance to maximize throughput in virtualized and cloud-native workloads. The Bergamo subfamily within the 4th-generation EPYC 9004 series employs Zen 4c cores, which are physically smaller than standard Zen 4 cores while retaining the same instruction set and microarchitecture features, enabling up to 128 cores and 256 threads per socket on the SP5 platform.[111][112] Launched in 2023, these processors support 12-channel DDR5 memory and 128 PCIe 5.0 lanes, facilitating configurations that consolidate workloads onto fewer nodes, thereby reducing rack space, power draw, and operational costs compared to prior generations.[35] Specialized server variants extend this with optimizations for memory-intensive or latency-sensitive applications. The Genoa-X processors, also in the EPYC 9004 series, incorporate stacked 3D V-Cache technology to expand L3 cache capacity to over 1 GB per socket—specifically up to 1.152 GB in models like the EPYC 9684X—accelerating data access in high-performance computing, in-memory databases, and simulation tasks where cache misses dominate bottlenecks.[113] These differ from standard Genoa by trading some core count for cache density, with up to 96 cores but enhanced hit rates that AMD claims deliver up to 2x performance in select HPC benchmarks.[35] Additionally, frequency-optimized F-series SKUs such as the EPYC 9474F raise base and boost clocks (up to 4.1 GHz boost) for low-latency transactional processing while remaining compatible with standard dual-socket SP5 systems.[37] The EPYC 8004 series (Siena), introduced in 2023, offers specialized dense options for cost-sensitive, power-constrained server designs, with Zen 4c cores scaled to a maximum of 64 cores and TDP ratings from 70 W to 225 W in single-socket (SP6) configurations for compact rack deployments in telco or regional data centers.[114] This series uses a reduced I/O die configuration with six DDR5 channels and 96 PCIe 5.0 lanes, targeting efficiency in generalized compute without the full-scale resources of 9004 models, as evidenced by benchmarks showing competitive performance per watt in virtualization tests.[35] Across these variants, AMD emphasizes chiplet-based scalability, with independent tests measuring 20–50% gains in core-density workloads over Intel equivalents in comparable power envelopes.[115]
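The 1.152 GB Genoa-X figure follows directly from the chiplet layout: 96 cores imply twelve 8-core CCDs, each carrying its base on-die L3 plus a stacked V-Cache die. A quick sanity check, using the per-CCD cache figures described for Genoa and Milan-X above:

```python
# Genoa-X L3 capacity from the chiplet topology described in the text.
ccds = 96 // 8        # twelve 8-core CCDs in a 96-core part
base_l3_mb = 32       # on-die L3 per CCD
vcache_mb = 64        # stacked 3D V-Cache per CCD
total_l3_mb = ccds * (base_l3_mb + vcache_mb)
print(total_l3_mb)    # 1152 MB, marketed as "1.152 GB"
```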

Region-specific modifications

AMD's joint venture (THATIC) with Chinese partners, including Hygon Information Technology, enabled the production of region-specific EPYC-compatible processors under license for the Chinese market. These Dhyana-series CPUs, such as the Hygon Dhyana C86-7395, replicate the core Zen 1 microarchitecture of first-generation EPYC (Naples) processors but incorporate modifications, most notably substituting Chinese national cryptographic standards (SM2, SM3, SM4) for some Western algorithms and reporting a distinct vendor identifier, to satisfy local requirements.[116][117] The alterations maintain pin-compatibility with the SP3 socket, allowing deployment in standard EPYC server platforms without hardware changes. Production began around 2018, targeting enterprise and government sectors restricted from importing high-performance U.S.-made chips.[116] In 2019, the U.S. government added THATIC and its subsidiaries to the Entity List, halting licensing of newer microarchitectures; Hygon has consequently remained on Zen 1 equivalents, while Chinese firms pivot toward indigenous architectures for newer server needs. No equivalent hardware modifications exist for other regions, such as Europe or Asia-Pacific markets outside China, where standard EPYC SKUs prevail without regional adaptations.[117]

Criticisms and challenges

Hardware errata and reliability issues

AMD EPYC processors feature comprehensive reliability, availability, and serviceability (RAS) mechanisms, including advanced error correction and fault isolation, contributing to low field failure rates comparable to competing Intel Xeon processors in data center environments.[118][119] However, like other high-density server CPUs, EPYC generations include documented silicon errata—deviations from specifications that can lead to hangs, resets, or reduced reliability under specific conditions.[120] These are detailed in AMD's official revision guides, with most addressed via BIOS, firmware, or software workarounds rather than silicon fixes, as no hardware revisions are planned for production parts.[121] In the first-generation EPYC (Naples, Zen 1), systems could experience hangs or crashes after approximately 1044 days of uptime due to a core failing to exit low-power states properly, similar to issues in later generations.[122] Production errata were less publicly highlighted compared to successors, though early prototypes faced booting challenges resolved prior to volume shipment.[123] Second-generation EPYC (Rome, Zen 2) processors exhibited several errata prone to system hangs or resets. Erratum 1474 causes a core to fail exiting CC6 low-power state after roughly 1044 days from last reset, potentially hanging the system depending on spread spectrum clocking and workload; mitigation involves periodic reboots or disabling CC6 via MSR programming.[120][124] Other issues include Erratum 1140, where Data Fabric transaction loss leads to hangs (mitigated by fabric register programming), Erratum 1290 causing GMI link hangs from retraining failures after CRC errors, and Erratum 1315 triggering hangs in dual-socket 3-link configurations.[120] These primarily affect I/O and interconnect reliability in multi-socket setups. 
Third-generation EPYC (Milan, Zen 3) introduced errata such as ID 1446, where improper on-die regulator initialization during power-up results in permanent boot failure, rendering the processor inoperable.[121] ID 1431 permits core hangs during bus locks with SMT enabled, potentially causing watchdog-induced resets, while ID 1441 risks DMA write data corruption in memory.[121] ID 1462 hinders reboot or shutdown after fatal errors, exacerbating recovery in error-prone scenarios.[121] Some Milan systems reported random OS shutdowns or soft resets, attributed to underlying hardware sensitivities.[125] Fourth-generation EPYC (Genoa, Zen 4) errata include hangs from CXL.mem transaction timeouts (no workaround) and system instability with poisoned PCIe data lacking error logs.[126] Erratum 1560 risks hangs when Data Fabric C-states interact with CXL Type 1 devices (mitigated by disabling DF C-states), and Erratum 1483 generates unexpected fatal errors on uncorrectable DRAM ECC faults.[126] AMD disputed claims that the memory subsystem required a systemic redesign, and no widespread reliability degradation has been confirmed.[127] Fifth-generation EPYC (Turin, Zen 5) processors have had fewer publicized functional errata to date, though an RDSEED instruction flaw affects random number generation reliability for cryptographic seeding, potentially impacting security-dependent workloads.[128] Overall, EPYC errata do not indicate higher aggregate failure rates than peers, with third-party testing confirming robust long-term stability in enterprise deployments.[129][118]
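Uptime-triggered failures like the ~1044-day hang are characteristic of internal counter wraparound. AMD has not published the exact mechanism, so the following is purely illustrative arithmetic: a hypothetical 54-bit tick counter incremented at 200 MHz wraps after roughly the reported thousand-day mark.

```python
# Back-of-the-envelope counter-wraparound estimate (illustrative only;
# the counter width and tick rate below are assumptions, not AMD data).
bits = 54                     # hypothetical counter width
tick_hz = 200e6               # hypothetical 200 MHz tick rate
seconds_to_wrap = 2**bits / tick_hz
days_to_wrap = seconds_to_wrap / 86_400
print(round(days_to_wrap))    # ~1042 days, the same order as the reported ~1044
```

This is why the documented mitigations are periodic reboots (resetting the counter) or disabling the CC6 state whose exit path consults it.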

Security vulnerabilities

AMD EPYC processors have been affected by several hardware-level security vulnerabilities, primarily side-channel attacks exploiting speculative execution and microarchitectural features, as well as flaws in virtualization technologies like Secure Encrypted Virtualization (SEV). These issues, common to modern x86 CPUs, enable potential data leakage across processes or virtual machines, though exploitation often requires local access or specific privileges. AMD has addressed most through microcode updates and firmware patches, distributed via BIOS vendors or operating systems, with varying performance impacts.[130] A notable vulnerability was Zenbleed (CVE-2023-20593), disclosed on July 24, 2023, affecting second-generation EPYC Rome processors based on the Zen 2 architecture. This flaw stems from improper clearing of vector registers (YMM) during speculative execution, allowing cross-process information leaks of up to 30-40 bytes per iteration, potentially exposing sensitive data like passwords or keys. Google researcher Tavis Ormandy demonstrated practical exploitation, prompting AMD to release a microcode patch (AMD-SB-7008); however, applying it reduced performance by up to 15% in vector-heavy workloads on affected EPYC systems.[131][132] EPYC platforms are also vulnerable to Spectre variants (e.g., Spectre v1 and v2), which leverage branch prediction and speculative execution for unauthorized memory access, though AMD processors are not vulnerable to the original Meltdown attack due to architectural differences. Mitigations, including retpoline and microcode updates, were rolled out starting January 2018, with ongoing refinements; EPYC users in data centers were advised to apply them to prevent kernel-to-user data leaks in virtualized environments.[130][133] In August 2024, the Sinkclose vulnerability (CVE-2023-31315)
was revealed, affecting multiple AMD architectures including EPYC by bypassing System Management RAM (SMRAM) protections via flawed caching mechanisms, enabling deep code execution in privileged firmware regions. Exploitation required kernel-level (ring 0) access, but allowed attackers to implant code in firmware regions that can survive operating system reinstallation; AMD mitigated it via microcode updates.[134] Server-specific concerns include SEV-related flaws: in February 2025, a firmware bug (AMD-SB-3009) allowed privileged attackers to read unencrypted guest memory, compromising confidential computing isolation on EPYC with SEV enabled; it was fixed via updated SEV firmware. Another SEV issue (AMD-SB-3019), due to improper microcode signature verification, permitted malicious microcode loading, potentially undermining encryption guarantees; patches were issued concurrently.[135][136] July 2025 brought Transient Scheduler Attack (TSA) vulnerabilities, disclosed by Microsoft researchers and affecting EPYC across generations via combined timing side-channels in scheduler operations, enabling chained information disclosure akin to Spectre/Meltdown. Individually low-severity, their exploitation could leak data in multi-tenant servers; AMD recommended software mitigations and promised microcode fixes.[137][138] August 2025 advisories (AMD-SB-3014) highlighted IOMMU and SEV-SNP weaknesses in EPYC platforms, potentially allowing DMA attacks or nested paging bypasses in virtualized setups, with patches focusing on enhanced memory isolation. AMD maintains a product security incident response team (PSIRT) for ongoing disclosures, emphasizing that while no widespread exploits have been reported for EPYC-specific cases, virtualization-heavy deployments warrant prompt patching to preserve trust in server ecosystems.[139]
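Administrators commonly gate mitigations like these on CPUID family/model identification. A minimal sketch of such a screen for Zenbleed-class advisories follows; the model cutoff below is an illustrative assumption for distinguishing Zen 2 within family 0x17, and the authoritative affected list is AMD's bulletin (AMD-SB-7008), not this heuristic:

```python
def possibly_zenbleed_affected(vendor: str, family: int, model: int) -> bool:
    """Rough screen: Zen 2 parts report AMD family 0x17 with higher model
    numbers (cutoff assumed here for illustration); consult AMD-SB-7008
    for the official affected-model list."""
    return vendor == "AuthenticAMD" and family == 0x17 and model >= 0x30

# EPYC 7002 "Rome" reports family 0x17, model 0x31 in /proc/cpuinfo:
print(possibly_zenbleed_affected("AuthenticAMD", 0x17, 0x31))  # True
print(possibly_zenbleed_affected("GenuineIntel", 0x06, 0x55))  # False
```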

Ecosystem and compatibility limitations

Despite its adherence to the x86 instruction set architecture, ensuring binary compatibility with software developed for Intel processors, the AMD EPYC platform presents ecosystem challenges stemming from its multi-chiplet design and historical market position. This architecture divides each socket into multiple NUMA (Non-Uniform Memory Access) domains—configurable via BIOS settings like Nodes per Socket (NPS) modes (NPS1, NPS2, and NPS4)—which can lead to suboptimal performance in workloads not explicitly tuned for such topologies. For instance, untuned applications may incur higher latency due to remote memory access across chiplets, necessitating manual optimizations such as NUMA-aware scheduling or pinning, as outlined in AMD's tuning guides for EPYC 9004 series processors.[80] Dell Technologies documentation highlights that while NPS configurations mitigate inter-domain bandwidth penalties, they require workload-specific validation to avoid up to 20-30% performance degradation in latency-sensitive tasks compared to monolithic designs.[140] In virtualization environments, EPYC's NUMA complexity exacerbates compatibility hurdles.
Hypervisors like VMware ESXi initially faced NUMA-related inefficiencies on early EPYC generations, such as improper vNUMA exposure leading to VM migration failures or reduced throughput, though patches like adjusted locality/weight affinity settings resolved many issues by ESXi 6.7.[141] Mixed AMD-Intel clusters encounter live migration incompatibilities due to differing CPU models and microarchitectures; for example, in Epic EHR deployments on vSphere, migrating VMs between EPYC and Xeon hosts is unsupported, complicating high-availability setups and requiring homogeneous clusters.[142] Similar constraints appear in Proxmox VE, where assigning EPYC-specific CPU models to VMs can trigger startup errors unless host passthrough or custom topologies are configured.[143] Enterprise software certification lags contribute to adoption barriers, as many legacy applications—optimized over decades for Intel's ecosystem—undergo delayed validation for EPYC. Independent analyses note that while EPYC supports all major OSes and hypervisors post-certification, Intel maintains an edge in "certified for Intel" badges for thousands of business-critical apps, simplifying procurement for risk-averse IT departments and reducing validation timelines.[144] For Red Hat Enterprise Linux 7, AMD EPYC Zen 3 (Milan) processors lack official support, forcing upgrades to RHEL 8 or later despite binary compatibility.[145] Hardware ecosystem limitations include fewer validated third-party peripherals at launch compared to Xeon, though AMD's PCIe validation program has expanded compatibility; early generations saw sporadic issues with add-in cards assuming single-die topologies.[146] These factors, while diminishing with EPYC's market share growth—reaching over 30% in servers by 2024—persist in conservative sectors prioritizing seamless Intel interoperability over EPYC's core-count advantages.[147]
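NUMA-aware pinning of the kind the tuning guides recommend can be done from userspace. The sketch below assumes each NUMA node exposes a contiguous CPU range, an illustrative simplification that holds for common NPS layouts without SMT sibling interleaving; on real hardware the topology should be read from `lscpu` or `/sys/devices/system/node`:

```python
import os

def node_cpus(node: int, cpus_per_node: int) -> list:
    # Assumes contiguous CPU numbering per NUMA node (simplification;
    # verify the real layout under /sys/devices/system/node).
    start = node * cpus_per_node
    return list(range(start, start + cpus_per_node))

def pin_to_node(node: int, cpus_per_node: int) -> None:
    # Restrict the current process to one NUMA domain (Linux only).
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, set(node_cpus(node, cpus_per_node)))

# Under NPS4 on a 96-core Genoa socket, each node holds 24 cores:
print(node_cpus(1, 24))  # CPUs 24..47
```

Tools such as `numactl` achieve the same effect without code changes; the point is that untuned workloads pay the remote-access penalty unless something performs this placement.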

Processor generations

First generation (Naples, Zen 1)

The first-generation AMD EPYC processors, designated the 7000 series and codenamed Naples, marked AMD's re-entry into the server CPU market following a decade-long absence. Built on the Zen 1 microarchitecture and fabricated on GlobalFoundries' 14 nm process node, these processors were officially launched on June 20, 2017, after an announcement in May 2017.[148][149] They utilized Socket SP3 and supported dual-socket configurations, targeting data center workloads with emphasis on core density and I/O bandwidth.[150] Naples employed a multi-chip module (MCM) design consisting of four "Zeppelin" dies—each containing two core complexes (CCXs) for a total of eight cores per die—interconnected via AMD's Infinity Fabric for on-package communication.[149] This chiplet-like approach enabled scalability to 32 cores and 64 threads per socket, with 16 MB of L2 cache and 64 MB of L3 cache distributed across the dies, though it introduced non-uniform memory access (NUMA) domains that could impact latency-sensitive applications.[149] Each processor supported eight channels of DDR4-2666 memory (up to 2 TB total capacity) and 128 lanes of PCIe 3.0, providing substantial bandwidth for storage and networking peripherals.[151] The lineup spanned entry-level 8-core models like the EPYC 7251 (base clock 2.1 GHz, TDP 120 W), high-end 32-core variants such as the EPYC 7601 (base 2.2 GHz, boost up to 3.2 GHz, TDP 180 W), and single-socket-optimized parts like the EPYC 7551P.[150] Thermal design power ranged from 120 W to 180 W across models, with pricing reaching $4,200 for flagship 32-core units at launch.[151] These processors competed directly with Intel's Xeon Scalable lineup by offering higher core counts at lower per-core costs, though initial benchmarks revealed mixed results in single-threaded performance due to Zen 1's IPC limitations compared to contemporary Intel architectures.[149]
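The headline Naples figures fall out of the package topology: four dies, two CCXs per die, four cores per CCX, with (assuming Zen 1's 512 KB L2 per core and 8 MB L3 per CCX) the cache totals following directly. As a quick check:

```python
# Naples per-socket totals from the package topology described above.
dies, ccx_per_die, cores_per_ccx = 4, 2, 4
cores = dies * ccx_per_die * cores_per_ccx   # 32 cores
threads = cores * 2                          # SMT: 2 threads per core
l2_mb = cores * 512 / 1024                   # 512 KB L2 per core -> 16 MB
l3_mb = dies * ccx_per_die * 8               # 8 MB L3 per CCX -> 64 MB
print(cores, threads, l2_mb, l3_mb)          # 32 64 16.0 64
```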

Second generation (Rome, Zen 2)

The second-generation AMD EPYC processors, codenamed Rome, utilize the Zen 2 microarchitecture and were released on August 7, 2019.[152] These server CPUs employ a multi-chiplet design comprising up to eight 7 nm compute chiplet dies (CCDs), each with eight cores, interconnected via a central 14 nm I/O die that handles memory controllers, PCIe lanes, and inter-die communication through Infinity Fabric links.[153][154] This architecture enables scalability to 64 cores and 128 threads per socket while maintaining compatibility with the SP3 socket used in the prior Naples generation.[155] The EPYC 7002 series supports eight channels of DDR4-3200 memory, accommodating up to 4 TB capacity, and provides 128 lanes of PCIe 4.0, doubling the bandwidth of PCIe 3.0 in the first generation.[31] Cache hierarchy includes up to 256 MB of shared L3 cache across chiplets, with each core featuring 512 KB L2 cache and improved branch prediction and floating-point units in Zen 2.[156] Thermal design power (TDP) ranges from 120 W to 225 W for most models, with select dual-socket variants rated up to 280 W.[152] Relative to the Zen 1-based Naples processors, Rome delivers enhanced single-threaded performance through approximately 15-20% higher instructions per clock (IPC) on average, with AMD reporting up to 29% uplift in specific integer and floating-point workloads at iso-frequency, alongside better power efficiency from the smaller 7 nm process node.[157][158] The shift to PCIe 4.0 and faster memory reduces I/O bottlenecks, enabling up to 2x overall socket performance in bandwidth-sensitive tasks like HPC simulations and database operations.[159] Security enhancements include AMD Infinity Guard, featuring hardware root of trust and memory encryption capabilities.[31] The lineup comprises 19 models, spanning 8 to 64 cores, such as the 64-core EPYC 7742 at 2.25 GHz base (boost to 3.4 GHz) and the 32-core EPYC 7502 at 2.5 GHz base (boost to 3.35 GHz), targeting diverse workloads from cloud
virtualization to technical computing.[160][161] Compared to Intel's Cascade Lake-based Xeon processors, the second-generation EPYC generally outperformed contemporaries in server workloads, with advantages in multi-threaded performance, memory bandwidth (eight channels of DDR4-3200 versus six channels of DDR4-2933), larger L3 cache, and PCIe 4.0 support versus PCIe 3.0. For example, the EPYC 7402 (24 cores, 2.8 GHz base frequency with boost to 3.35 GHz, 180 W TDP, 128 MB L3 cache, eight memory channels) provided superior capabilities in cache size, memory bandwidth, and I/O over the Intel Xeon Platinum 8255C (24 cores/48 threads, 2.5 GHz base with boost to 3.9 GHz, 165 W TDP, 35.75 MB L3 cache, six memory channels).[162][163] Benchmarks, such as Phoronix tests in 2020, showed that dual-socket EPYC 7742 configurations delivered approximately 14% better average performance than dual Xeon Platinum 8280 systems in many Linux workloads.[59] Independent benchmarks confirmed leadership in multi-threaded throughput, with Rome systems outperforming Intel counterparts in SPEC CPU2017 rates by up to 50% in certain configurations.[164]
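Socket-level claims like "up to 2x" combine per-core IPC uplift with the doubled core count. An idealized composition, using the ~15% IPC midpoint quoted above and assuming perfect multi-threaded scaling at equal clocks (real workloads fall short of this):

```python
# Idealized Rome-vs-Naples socket throughput estimate (assumes perfect
# scaling and equal clocks; an upper bound, not a measured result).
ipc_uplift = 1.15              # ~15% higher IPC per core
core_ratio = 64 / 32           # Rome vs. Naples cores per socket
ideal_socket_uplift = ipc_uplift * core_ratio
print(ideal_socket_uplift)     # 2.3x under these assumptions
```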

Third generation (Milan, Zen 3)

The third-generation AMD EPYC processors, codenamed Milan and based on the Zen 3 microarchitecture, were launched on March 15, 2021.[165] These processors maintained the maximum of 64 cores and 128 threads per socket from the prior Rome generation while introducing architectural enhancements such as a unified 32 MB L3 cache per eight-core chiplet, improved branch prediction, and higher instructions per clock (IPC) uplift of approximately 19%.[166] Manufactured on TSMC's 7 nm process, Milan processors support eight-channel DDR4-3200 memory and 128 lanes of PCIe 4.0, with thermal design power (TDP) ratings ranging from 180 W to 280 W depending on the model.[167] Key models in the EPYC 7003 series include the flagship EPYC 7763 with 64 cores at a 2.45 GHz base frequency and 3.50 GHz boost, alongside options like the EPYC 7713 (64 cores, 2.00 GHz base, 3.67 GHz boost, 225 W TDP) and lower-core variants such as the EPYC 72F3 (8 cores, optimized for frequency).[168] Independent benchmarks demonstrated average performance gains of 14-17.5% over equivalent Rome models in compute-intensive workloads, attributed to per-core efficiency rather than core count increases.[168] In dual-socket configurations, Milan delivered up to 1.43x better results in fluid dynamics simulations compared to contemporary Intel counterparts.[169] A variant line, the EPYC 7003X "Milan-X" series, extended the architecture with stacked 3D V-Cache technology, adding up to 768 MB of L3 cache per socket for enhanced bandwidth in memory-sensitive applications; the top-end EPYC 7773X features 64 cores and demonstrated 20% average improvements over standard Milan in cache-dependent tasks.[170] These processors emphasized security features like AMD Infinity Guard, including hardware-based memory encryption, while maintaining compatibility with SP3 sockets and existing EPYC ecosystems.[165]
Model                  Cores/Threads   Base/Boost Freq. (GHz)   L3 Cache (MB)   TDP (W)
EPYC 7763              64/128          2.45/3.50                256             280
EPYC 7713              64/128          2.00/3.67                256             225
EPYC 7773X (Milan-X)   64/128          2.20/3.50                768             280

Fourth generation (Genoa, Bergamo, Siena, Raphael; Zen 4/Zen 4c)

The fourth-generation AMD EPYC processors, launched from late 2022 to 2024, are built on the Zen 4 microarchitecture and its density-optimized variant, Zen 4c, utilizing TSMC's 5 nm process for compute cores and supporting DDR5 memory with up to 12 channels in high-end models.[35] These processors emphasize scalability for data center workloads, featuring up to 128 cores and 256 threads, PCIe 5.0 support with 128 lanes, and integrated AVX-512 instructions for enhanced vector processing efficiency.[35] Independent benchmarks indicate Zen 4 delivers approximately 13% higher instructions per clock (IPC) over Zen 3 in integer workloads, with floating-point gains up to 96% in optimized scenarios, though real-world uplift varies by application.[171] The Genoa series (EPYC 9004) represents the flagship variant, announced on November 10, 2022, and featuring up to 96 Zen 4 cores across 12 chiplet dies (CCDs), each with 8 cores.[172] Configurations range from 16 to 96 cores, with thermal design powers (TDPs) from 155 W to 400 W; the 96-core EPYC 9654 runs a 2.4 GHz base clock and boosts to 3.7 GHz.[173] Genoa supports dual-socket SP5 platforms and excels in general-purpose computing, with reviews showing up to 2x throughput over prior generations in memory-bound tasks due to doubled bandwidth from DDR5-4800.[115] However, latency-sensitive applications may experience variability from the chiplet design's NUMA topology, mitigated by configurable NUMA-per-socket (NPS) modes.[174] Bergamo (EPYC 97x4 subset of the 9004 series) introduces Zen 4c cores, a compact derivative with identical IPC to Zen 4 but roughly 35% smaller die area per core, enabling 128 cores via eight 16-core CCDs at lower clock speeds (the 128-core EPYC 9754 runs a 2.25 GHz base and 3.10 GHz boost) and TDPs up to 360 W.[111] Launched in June 2023, it targets cloud-native and high-density workloads, offering up to 2.8x performance in Java-based applications compared to third-generation EPYC, per AMD benchmarks,
while maintaining power efficiency through reduced L3 cache per core (2 MB versus 4 MB on standard Zen 4, with L2 unchanged at 1 MB per core).[64] Zen 4c avoids hybrid "big.LITTLE" designs by preserving full feature parity, including AVX-512, but prioritizes core count over per-core frequency for scalable throughput in containerized environments.[112] Siena (EPYC 8004 series), released in September 2023, adapts Zen 4c for edge and space-constrained deployments on the new single-socket SP6 platform, with up to 64 cores, 6 DDR5 channels, and TDPs from 70 W to 200 W. Models like the EPYC 8434P deliver performance comparable to Intel's 32-core Sapphire Rapids in multi-threaded tasks at lower power, suitable for telecom and embedded systems with NEBS-compliant variants offering extended temperature ranges. The SP6 socket reduces footprint and cost versus SP5, supporting fewer PCIe lanes (96) but retaining full Zen 4c capabilities for efficient inference and general compute.[175] The EPYC 4004 series (codenamed Raphael), launched in 2024, extends the Zen 4 architecture to entry-level single-socket servers on Socket AM5. These processors support dual-channel DDR5-5200 memory, up to 16 cores and 32 threads, and TDPs ranging from 65 W to 170 W. They target affordable entry-level server workloads for small businesses and hosting providers, complementing the main server-oriented Genoa, Bergamo, and Siena lines on SP5 and SP6 sockets by providing a cost-effective option with high per-core performance, though with reduced I/O (28 PCIe 5.0 lanes) compared to high-end models.[176][177] A representative example is the EPYC 4564P, with 16 cores, 32 threads, a base clock of 4.5 GHz, maximum boost up to 5.7 GHz, 64 MB L3 cache, and 170 W TDP, designed for efficient handling of general-purpose entry-level server tasks.[178][177]

Fifth generation (Turin, Grado; Zen 5/Zen 5c)

The fifth-generation AMD EPYC processors, released starting in late 2024, incorporate the Zen 5 and Zen 5c microarchitectures to deliver enhanced instructions per clock performance, with the Zen 5 cores providing up to 17% higher IPC for enterprise and cloud workloads relative to the prior Zen 4 generation.[27] The lineup includes the high-end EPYC 9005 series (codenamed Turin) for datacenter-scale deployments on the SP5 socket and the entry-level EPYC 4005 series (codenamed Grado) for smaller server environments on the AM5 socket.[37][179] These processors emphasize scalability for AI inference, with up to 2x throughput improvement over previous generations in certain workloads, alongside support for dense core configurations via Zen 5c variants optimized for energy efficiency and higher thread counts.[27] The EPYC 9005 Turin series, launched on October 10, 2024, supports up to 128 cores using standard Zen 5 chiplet dies or up to 192 cores with Zen 5c for high-density applications, enabling configurations with as many as 384 threads per socket.[27][37] Representative models include the EPYC 9965 (192 Zen 5c cores, 384 threads, 2.25 GHz base clock, 3.70 GHz max boost, 500 W TDP, 384 MB L3 cache, 1kU list price of $11,988 USD)[180] for maximum density and the EPYC 9755 (128 Zen 5 cores, 256 threads, 2.70 GHz base, 4.10 GHz boost, 500 W TDP, 512 MB L3 cache) for balanced performance; high-frequency options like the EPYC 9575F offer 64 cores at up to 5.00 GHz boost within a 400 W TDP. 
Efficiency-focused variants include the EPYC 9115 (16 Zen 5 cores, 32 threads, 2.6 GHz base clock, 4.1 GHz max boost, 64 MB L3 cache, 125 W default TDP with configurable TDP ranging from 120 W to 155 W), targeting value applications and power-constrained environments requiring high per-core performance.[61][48] TDP ratings span 125 W to 500 W across the family, with support for 12 DDR5-6400 memory channels (up to 6 TB capacity in 2DPC configurations) and up to 128 PCIe 5.0 lanes in single-socket setups.[48] The architecture maintains compatibility with the SP5 platform from prior generations, facilitating upgrades while introducing optimizations for AI data preprocessing and inference acceleration.[37] The EPYC 4005 Grado series, introduced on May 13, 2025, targets cost-sensitive entry-level servers with up to 16 Zen 5 cores and 32 threads per processor, using the consumer-oriented AM5 socket for easier integration into compact systems.[181] These models operate at TDPs from 65 W up to 170 W, supporting dual-channel DDR5 memory and emphasizing power efficiency for small to medium business workloads like virtualization and edge computing.[182] Unlike the Turin series, Grado processors lack the multi-chiplet scaling of SP5 but leverage the same Zen 5 IPC gains for superior per-core performance in lighter-duty environments.[183]
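The peak memory bandwidth implied by 12 channels of DDR5-6400 follows from each 64-bit channel moving 8 bytes per transfer. This is the theoretical per-socket peak, not sustained throughput:

```python
# Theoretical peak memory bandwidth for 12-channel DDR5-6400.
channels = 12
transfers_per_s = 6400e6      # DDR5-6400: 6400 mega-transfers/second
bytes_per_transfer = 8        # 64-bit channel width
peak_gb_s = channels * transfers_per_s * bytes_per_transfer / 1e9
print(peak_gb_s)              # 614.4 GB/s theoretical peak per socket
```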

References
