Emerald Rapids
From Wikipedia
| General information | |
|---|---|
| Launched | December 14, 2023[1] |
| Marketed by | Intel |
| Designed by | Intel |
| Common manufacturer | Intel |
| Product code | 80722[2] |
| Performance | |
| Max. CPU clock rate | 1.9 GHz to 4.2 GHz |
| UPI speeds | 16 GT/s to 20 GT/s |
| DMI speeds | 16 GT/s |
| Physical specifications | |
| Cores | Up to 64 |
| Package | FCLGA4677 |
| Socket | LGA 4677 |
| Cache | |
| L1 cache | 80 KB per core (32 KB instructions + 48 KB data) |
| L2 cache | 2 MB per core |
| L3 cache | 5 MB per core |
| Architecture and classification | |
| Application | Server, Embedded |
| Technology node | Intel 7 (previously known as 10ESF) |
| Microarchitecture | Raptor Cove |
| Instruction set | x86-64 |
| Instructions | MMX, SSE, SSE2, SSE3, SSSE3, SSE4, SSE4.1, SSE4.2, AVX, AVX2, FMA3, AVX-512, AVX-VNNI, TSX, AMX |
| Products, models, variants | |
| Product code name | Emerald Rapids-SP |
| Model | Xeon 3500, 4500, 5500, 6500, 8500 series |
| Brand names | Xeon Bronze, Xeon Silver, Xeon Gold, Xeon Platinum |
| History | |
| Predecessor | Sapphire Rapids |
| Successors | Granite Rapids (P-cores), Sierra Forest (E-cores) |
Emerald Rapids is the codename for Intel's fifth generation Xeon Scalable server processors based on the Intel 7 node.[3][4] Emerald Rapids CPUs are designed for data centers; the roughly contemporary Raptor Lake is intended for desktop and mobile usage.[5][6] Nevine Nassif is a chief engineer for this generation.[7]
Features
CPU
- Up to 64 Raptor Cove CPU cores per package[8]
- Up to 32 cores per tile, reducing the maximum tile count to two
- 5 MB of L3 cache per core (up from 1.875 MB in Sapphire Rapids)
- Speed Select Technology that supports high and low priority cores
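The L3 growth quoted above is easy to sanity-check. The following sketch uses only the per-core figures stated in this article (5 MB for Emerald Rapids, 1.875 MB for Sapphire Rapids) and a top-end 64-core part:

```python
# Back-of-the-envelope check of the L3 cache figures quoted above.
EMR_L3_PER_CORE_MB = 5.0      # Emerald Rapids, per the feature list
SPR_L3_PER_CORE_MB = 1.875    # Sapphire Rapids, per the feature list

cores = 64
emr_total = cores * EMR_L3_PER_CORE_MB    # total L3 at 64 cores
spr_total = cores * SPR_L3_PER_CORE_MB
growth = EMR_L3_PER_CORE_MB / SPR_L3_PER_CORE_MB

print(f"EMR total L3: {emr_total:.0f} MB, SPR total L3: {spr_total:.0f} MB, "
      f"per-core growth: {growth:.2f}x")
```

This reproduces the 320 MB flagship cache seen in the SKU tables below and a roughly 2.67x per-core increase.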
I/O
- DDR5 memory support up to 8-channel DDR5-5600[8]
- Up to 80 PCI Express 5.0 lanes[8]
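The peak bandwidth these I/O specs imply can be estimated with standard formulas. The channel count, transfer rate, and lane count come from the list above; the 64-bit DDR5 data path and PCIe 5.0's 128b/130b encoding are standard assumptions, and the result is an illustrative ceiling, not a measured figure:

```python
# DDR5: channels × transfer rate (MT/s) × 8 bytes per 64-bit transfer.
channels, rate_mt_s = 8, 5600
mem_bw_gb_s = channels * rate_mt_s * 8 / 1000        # ≈ 358.4 GB/s

# PCIe 5.0: 32 GT/s per lane with 128b/130b line encoding.
lanes, rate_gt_s = 80, 32
lane_gb_s = rate_gt_s * (128 / 130) / 8              # ≈ 3.94 GB/s per lane, each direction
pcie_bw_gb_s = lanes * lane_gb_s                     # ≈ 315 GB/s aggregate

print(f"DDR5 peak: {mem_bw_gb_s:.1f} GB/s, PCIe 5.0 aggregate: {pcie_bw_gb_s:.0f} GB/s")
```

The 358.4 GB/s memory figure matches the theoretical peak cited later in the article.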
List of Emerald Rapids processors
Emerald Rapids-SP (Scalable Performance)
CPUs in italics are actually Sapphire Rapids processors and still have 1.875 MB of L3 cache per core.
Model number suffixes denote:[9]
- +: Includes 1 of each of the four accelerators: DSA, IAA, QAT, DLB
- H: Database and analytics workloads, supports 4S (Xeon Gold) and/or 8S (Xeon Platinum) configurations and includes all of the accelerators
- M: Media transcode workloads
- N: Network/5G/Edge workloads (High TPT/Low Latency), some are uniprocessor
- P: Cloud and infrastructure as a service (IaaS) workloads
- Q: Liquid cooling
- S: Storage & Hyper-converged infrastructure (HCI) workloads
- T: Long-life use/High thermal case
- U: Uniprocessor (some workload-specific SKUs may also be uniprocessor)
- V: Optimized for cloud and software as a service (SaaS) workloads, some are uniprocessor
- Y: Speed Select Technology-Performance Profile (SST-PP) enabled (some workload-specific SKUs may also support SST-PP)
- Y+: Speed Select Technology-Performance Profile (SST-PP) enabled and includes 1 of each of the accelerators.
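The suffix scheme above lends itself to a small lookup helper. This is my own illustration (the function name and structure are hypothetical, only the meanings table is transcribed from the list):

```python
# Hypothetical helper mapping the SKU suffixes listed above to their meanings.
SUFFIX_MEANINGS = {
    "+":  "Includes one of each accelerator (DSA, IAA, QAT, DLB)",
    "H":  "Database/analytics, 4S/8S support, all accelerators",
    "M":  "Media transcode workloads",
    "N":  "Network/5G/edge workloads",
    "P":  "Cloud/IaaS workloads",
    "Q":  "Liquid cooling",
    "S":  "Storage & HCI workloads",
    "T":  "Long-life use / high thermal case",
    "U":  "Uniprocessor",
    "V":  "Cloud/SaaS optimized",
    "Y":  "SST-PP enabled",
    "Y+": "SST-PP enabled plus one of each accelerator",
}

def decode_sku(model: str) -> tuple[str, str]:
    """Split a model string like '8568Y+' into (number, suffix meaning)."""
    digits = "".join(ch for ch in model if ch.isdigit())
    suffix = model[len(digits):]
    return digits, SUFFIX_MEANINGS.get(suffix, "base model (no suffix)")

print(decode_sku("8593Q"))   # ('8593', 'Liquid cooling')
```

For example, `decode_sku("8568Y+")` identifies an SST-PP part with the full accelerator set, while `decode_sku("8580")` falls through to the base-model case.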
Blank cells repeat the value from the row above (the original table used merged cells).

| Model number | Cores (threads) | Base clock | All-core turbo | Max turbo | Smart Cache | TDP | Max. scalability | Registered DDR5 w/ ECC | UPI links | Release MSRP (USD) |
|---|---|---|---|---|---|---|---|---|---|---|
| Xeon Platinum (8500) | ||||||||||
| 8593Q | 64 (128) | 2.2 GHz | 3.0 GHz | 3.9 GHz | 320 MB | 385 W | 2S | 5600 MT/s | 4 | $12400 |
| 8592+ | 1.9 GHz | 2.9 GHz | 350 W | $11600 | ||||||
| 8592V | 2.0 GHz | 330 W | 4800 MT/s | 3 | $10995 | |||||
| 8581V | 60 (120) | 2.6 GHz | 300 MB | 270 W | 1S | 0 | $7568 | |||
| 8580 | 2.9 GHz | 4.0 GHz | 350 W | 2S | 5600 MT/s | 4 | $10710 | |||
| 8573C | ? | ? | 3.0 GHz | ? | ? | ? | ? | ? | ? | ? |
| 8571N | 52 (104) | 2.4 GHz | 3.0 GHz | 4.0 GHz | 300 MB | 350 W | 1S | 4800 MT/s | 0 | $6839 |
| 8570 | 56 (112) | 2.1 GHz | 2S | 5600 MT/s | 4 | $9595 | ||||
| 8568Y+ | 48 (96) | 2.3 GHz | 3.2 GHz | $6497 | ||||||
| 8562Y+ | 32 (64) | 2.8 GHz | 3.8 GHz | 4.1 GHz | 60 MB | 300 W | 3 | $5945 | ||
| 8558 | 48 (96) | 2.1 GHz | 3.0 GHz | 4.0 GHz | 260 MB | 330 W | 5200 MT/s | 4 | $4650 | |
| 8558P | 2.7 GHz | 3.2 GHz | 350 W | 5600 MT/s | 3 | $6759 | ||||
| 8558U | 2.0 GHz | 2.9 GHz | 300 W | 1S | 5200 MT/s | 0 | $3720 | |||
| Xeon Gold (5500 and 6500) | ||||||||||
| 6558Q | 32 (64) | 3.2 GHz | 4.1 GHz | 4.1 GHz | 60 MB | 350 W | 2S | 5200 MT/s | 3 | $6416 |
| 6554S | 36 (72) | 2.2 GHz | 3.0 GHz | 4.0 GHz | 180 MB | 270 W | 4 | $3157 | ||
| 6548Y+ | 32 (64) | 2.5 GHz | 3.5 GHz | 4.1 GHz | 60 MB | 250 W | 3 | $3726 | ||
| 6548N | 2.8 GHz | 3.5 GHz | $3875 | |||||||
| 6544Y | 16 (32) | 3.6 GHz | 4.1 GHz | 45 MB | 270 W | $3622 | ||||
| 6542Y | 24 (48) | 2.9 GHz | 3.6 GHz | 60 MB | 250 W | $2878 | ||||
| 6538Y+ | 32 (64) | 2.2 GHz | 3.3 GHz | 4.0 GHz | 225 W | $3141 | ||||
| 6538N | 2.1 GHz | 2.9 GHz | 4.1 GHz | 205 W | $3875 | |||||
| 6534 | 8 (16) | 3.9 GHz | 4.2 GHz | 4.2 GHz | 22.5 MB | 195 W | 4800 MT/s | $2816 | ||
| 6530 | 32 (64) | 2.1 GHz | 2.7 GHz | 4.0 GHz | 160 MB | 270 W | $2128 | |||
| 6526Y | 16 (32) | 2.8 GHz | 3.5 GHz | 3.9 GHz | 37.5 MB | 195 W | 5200 MT/s | $1517 | ||
| 5520+ | 28 (56) | 2.0 GHz | 3.0 GHz | 4.0 GHz | 52.5 MB | 205 W | 4800 MT/s | $1640 | ||
| 5515+ | 8 (16) | 3.2 GHz | 3.6 GHz | 4.1 GHz | 22.5 MB | 165 W | $1099 | |||
| 5512U | 28 (56) | 2.1 GHz | 3.0 GHz | 3.7 GHz | 185 W | 1S | 0 | $1230 | ||
| Xeon Silver (4500) | ||||||||||
| 4516Y+ | 24 (48) | 2.2 GHz | 2.9 GHz | 3.7 GHz | 45 MB | 185 W | 2S | 4400 MT/s | 2 | $1295 |
| 4514Y | 16 (32) | 2.0 GHz | 2.6 GHz | 3.4 GHz | 30 MB | 150 W | $780 | |||
| 4510T | 12 (24) | 2.8 GHz | 3.7 GHz | 115 W | $624 | |||||
| 4510 | 2.4 GHz | 3.3 GHz | 4.1 GHz | 150 W | $563 | |||||
| 4509Y | 8 (16) | 2.6 GHz | 3.6 GHz | 22.5 MB | 125 W | $563 | ||||
| Xeon Bronze (3500) | ||||||||||
| 3508U | 8 (8) | 2.1 GHz | 2.2 GHz | 22.5 MB | 125 W | 1S | 4400 MT/s | 0 | $415-$425 | |
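Because the italic markup distinguishing the rebadged Sapphire Rapids SKUs is not preserved in this rendering, the per-core cache ratio is the easiest way to tell them apart. A quick consistency check on values transcribed from the table above:

```python
# Genuine Emerald Rapids SKUs carry 5 MB of L3 per core; the Sapphire
# Rapids-based parts noted earlier keep 1.875 MB per core.
emerald = {"8593Q": (64, 320), "8581V": (60, 300), "6554S": (36, 180), "6530": (32, 160)}
sapphire_based = {"8562Y+": (32, 60), "6558Q": (32, 60), "6548Y+": (32, 60)}

for model, (cores, cache_mb) in emerald.items():
    assert cache_mb == cores * 5, model          # 5 MB per core
for model, (cores, cache_mb) in sapphire_based.items():
    assert cache_mb == cores * 1.875, model      # 1.875 MB per core
print("all cache totals consistent")
```

Some low-core SKUs (e.g. 6544Y at 45 MB for 16 cores) follow neither ratio exactly, reflecting binning rather than the simple per-core rule.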
References
[edit]- ^ "The Intel Innovation 2023 Keynote Live Blog". AnandTech. September 19, 2023. Archived from the original on September 19, 2023. Retrieved September 22, 2023.
- ^ "Intel Xeon Silver 4514Y Processor 30M Cache 2.00 GHz Product Specifications". Intel ARK. December 14, 2023. Archived from the original on December 16, 2023. Retrieved December 16, 2023.
- ^ Cutress, Ian (February 17, 2022). "Intel Discloses Multi-Generation Xeon Scalable Roadmap: New E-Core Only Xeons in 2024". AnandTech. Archived from the original on February 17, 2022. Retrieved February 17, 2022.
- ^ Mujtaba, Hassan (October 14, 2019). "Intel Sapphire Rapids & Granite Rapids Xeons Are LGA 4677 Compatible". Wccftech. Archived from the original on March 10, 2020. Retrieved March 14, 2020.
- ^ Mujtaba, Hassan (August 19, 2021). "Intel Emeralds Rapids-SP, Granite Rapids-SP & Diamond Rapids-SP Xeon CPUs Detailed – 64 Golden Cove Cores in 2023, Raptor Cove in 2024, Next-Gen By 2025". Wccftech. Archived from the original on November 1, 2022. Retrieved November 10, 2022.
- ^ Mujtaba, Hassan (January 21, 2022). "Intel Next-Gen Xeon CPU Rumors: 10nm Emerald Rapids, 7nm Granite Rapids, 5nm Diamond Rapids Detailed, Up To 144 Lion Cove Cores by 2025". Wccftech. Archived from the original on January 23, 2023. Retrieved November 10, 2022.
- ^ "Crafting and Testing the Do-Anything Server Chip". Intel Newsroom (Press release). February 7, 2023. Archived from the original on June 25, 2024. Retrieved May 17, 2024.
- ^ Liu, Zhiye (September 3, 2022). "Intel's Xeon Emerald Rapids CPUs Could Wield Up To 64 Cores". Tom's Hardware. Archived from the original on January 15, 2023. Retrieved January 15, 2023.
- ^ "Intel Xeon Scalable Processor Reference for Lenovo ThinkSystem Servers". Lenovo Press. Lenovo. January 19, 2024. Archived from the original on February 23, 2024. Retrieved February 23, 2024.
Emerald Rapids
From Grokipedia
Development
Background and Announcement
Emerald Rapids represents the fifth generation of Intel's Xeon Scalable processors, succeeding the fourth-generation Sapphire Rapids lineup as an incremental refresh designed to enhance performance in data center environments.[7] This positioning allows Emerald Rapids to extend the lifecycle of existing server platforms while introducing optimizations for emerging workloads, building on the foundational architecture established in prior generations such as Ice Lake and Sapphire Rapids.[8]

Intel first revealed details of Emerald Rapids at its Innovation event on September 19, 2023, where CEO Pat Gelsinger showcased the processor as part of the company's broader push into AI-optimized computing.[9] The official launch occurred on December 14, 2023, marking the availability of these processors for deployment in high-performance computing (HPC) and artificial intelligence (AI) applications.[10] This timeline aligned with Intel's strategy to address intensifying market competition from AMD's EPYC processors and Arm-based server solutions, which had gained traction in energy-efficient data centers.[4]

The development of Emerald Rapids was driven by accelerating demand in the AI and HPC sectors, where workloads require higher throughput and efficiency to handle generative AI models and scientific simulations.[7] By focusing on these areas, Intel aimed to maintain leadership in scalable computing platforms amid rivals' advances in core density and power optimization.[11]

Initial shipments of Emerald Rapids began in the fourth quarter of 2023 to select customers and cloud service providers, enabling early validation in production environments, with general availability starting on December 14, 2023.[12] This phased rollout facilitated rapid integration into existing infrastructures, supporting Intel's goal of delivering immediate value in competitive data center markets.[13]

Design and Manufacturing
Emerald Rapids processors are fabricated using Intel's Intel 7 process node, an enhanced 10nm-class technology optimized for high-performance computing with improvements in density and power efficiency over prior iterations.[14] This node enables the integration of advanced Raptor Cove cores while maintaining compatibility with existing LGA 4677 sockets from the Sapphire Rapids generation.[15]

The architecture adopts a modular tile-based design, featuring two large compute tiles interconnected via Intel's Embedded Multi-Die Interconnect Bridge (EMIB) technology, which uses high-density silicon bridges to enable efficient die-to-die communication with reduced latency.[3] Each compute tile measures approximately 763 mm² and incorporates up to 33 physical cores, with one core per tile typically disabled to enhance yields, resulting in 32 active cores per tile for the highest-end configurations.[3] This dual-tile approach, connected by three EMIB bridges, simplifies the overall package compared to the four-tile design of Sapphire Rapids, reducing interconnect complexity and silicon bridge area to about 5.8% of total die space.[2]

Design goals emphasize scalability for data center workloads, supporting up to 64 cores per socket in single-socket systems and 128 cores across dual-socket configurations, facilitated by Ultra Path Interconnect (UPI) links operating at 20 GT/s.[1]

Manufacturing occurs in-house at Intel's facilities, leveraging Intel 7 optimizations such as refined layout and binning strategies that improve yields over Sapphire Rapids by utilizing slightly smaller total die area (1,526 mm² versus 1,576 mm²) and fewer disabled cores in production.[3] These enhancements address prior yield challenges on the same process node, enabling higher effective output for multi-core variants without external foundry partnerships.[16]

Architecture
CPU Cores
The Emerald Rapids processors employ Raptor Cove performance cores (P-cores) as their primary compute units, enabling configurations of up to 64 cores per socket in high-density variants.[14] These cores are designed for high-performance server workloads, building on the architecture introduced in prior generations while incorporating optimizations for multi-socket scalability and power efficiency.[16]

The Raptor Cove microarchitecture features a 6-wide decode stage, capable of processing up to six x86 instructions per cycle to sustain high instruction throughput.[17] It utilizes 12 execution ports in the out-of-order execution engine, supporting parallel dispatch of integer, floating-point, load/store, and vector operations for balanced performance across diverse computational tasks.[18] Compared to the Golden Cove microarchitecture in Alder Lake, Raptor Cove includes refined branch prediction mechanisms, such as larger branch target buffers and improved indirect branch handling, to reduce misprediction penalties in server-oriented code paths.[19]

Each Raptor Cove core supports Hyper-Threading, allowing two threads per core for a total of 128 threads in a 64-core socket configuration.[20] The cores fully implement the AVX-512 instruction set extensions, enabling 512-bit vector operations for accelerated compute-intensive applications like AI training and scientific simulations.[21] Clock speeds vary by model and core count, with base frequencies ranging from 1.9 GHz to 3.9 GHz and max turbo boosts up to 4.2 GHz; all-core turbo frequencies vary, reaching up to around 3.0 GHz in high-core-count configurations to maintain thermal limits.[6][20] These cores integrate seamlessly with the processor's cache hierarchy to minimize latency in thread execution.[1]

Cache and Memory Subsystem
The cache hierarchy in Emerald Rapids processors features private 2 MB L2 caches per core, providing dedicated storage for each Raptor Cove core to minimize latency for frequently accessed data.[22] The shared L3 cache, serving as the last-level cache, scales with core count at 5 MB per core, resulting in totals such as 120 MB for 24-core variants up to 320 MB for 64-core configurations, enabling efficient data sharing across cores in multi-threaded workloads.[23] This design presents the L3 as a logically monolithic structure to all cores, despite distribution across the on-die mesh, to support seamless access and reduce inter-core communication overhead.[24]

The L3 cache incorporates victim cache functionality, where lines evicted from the L2 caches are temporarily stored to facilitate prefetching and improve hit rates for sequential or strided access patterns common in server applications.[25] Prefetchers integrated into the L3 hierarchy anticipate data needs based on access patterns, loading anticipated cache lines to enhance performance in bandwidth-sensitive tasks without excessive off-chip traffic.[26]

Memory support centers on eight-channel DDR5 controllers, enabling up to DDR5-5600 operation in single-DIMM-per-channel configurations for optimal bandwidth.[23] Maximum capacity reaches 4 TB per socket using 256 GB RDIMMs across two DIMMs per channel, with DDR5's on-die error correction complemented by full-channel ECC for data integrity in enterprise environments.[2] Advanced RAS features, including patrol scrubbing, memory mirroring, and rank-level error isolation, ensure high availability by detecting and mitigating errors proactively.[23] Theoretical peak memory bandwidth achieves 358 GB/s with DDR5-5600, a 17% uplift over Sapphire Rapids' DDR5-4800, optimized for low-latency access in high-performance computing through interleaved addressing across channels.[5] This configuration supports HPC workloads by prioritizing sustained throughput while maintaining compatibility with CXL 1.1/2.0 for memory expansion.[27]

Cache coherency relies on an enhanced 2D mesh interconnect architecture, which facilitates efficient snooping and directory-based protocols across core tiles and the two-die design, ensuring consistent data visibility with reduced latency compared to prior generations.[28] The mesh integrates with Intel UPI 2.0 links at up to 20 GT/s for multi-socket systems, maintaining coherence in shared-memory domains.[23]

I/O and Interconnect
The Emerald Rapids processors feature up to 80 lanes of PCIe 5.0 per socket, enabling high-bandwidth connectivity for accelerators such as GPUs and high-performance storage devices.[29] These lanes operate at 32 GT/s and can be configured in various x16, x8, or x4 configurations to optimize for specific workloads, including AI training and data analytics, while maintaining backward compatibility with PCIe 4.0 and earlier generations.[29] In multi-socket configurations, this scales to support up to 160 lanes across two sockets, facilitating dense I/O expansion in server environments.[1]

For multi-socket scaling, Emerald Rapids employs the Ultra Path Interconnect (UPI) 2.0, providing up to four links per socket at 20 GT/s for low-latency, cache-coherent communication between processors.[1] This represents a 25% increase in bandwidth over the previous generation's 16 GT/s, enhancing NUMA-aware applications in up to eight-socket systems without requiring additional bridging hardware.[4]

The integrated I/O die in Emerald Rapids incorporates advanced fabric support, including Compute Express Link (CXL) 2.0 for memory expansion and pooling across heterogeneous devices.[2] This enables Type 3 CXL devices for disaggregated memory, allowing up to 6 TB of DDR5 per socket to be augmented with external high-bandwidth memory (HBM) or persistent memory modules, improving resource utilization in cloud and HPC deployments.[1] Additionally, the I/O die supports configurable networking options, such as up to 4x 100 GbE interfaces via PCIe-attached adapters, optimizing for high-throughput Ethernet in data center fabrics.[30]

Storage interfaces leverage the PCIe 5.0 infrastructure to deliver up to 8 dedicated NVMe lanes per socket, with compatibility for EDSFF (Enterprise and Data Center Standard Form Factor) E3.S and E1.L drives in high-density configurations.[31] This setup supports direct-attached NVMe SSDs for ultra-low latency access, enabling configurations with up to 16 NVMe drives per node without oversubscription, ideal for storage-intensive workloads like databases and virtualization.[32] The EDSFF form factor enhances power efficiency and thermal management for PCIe 5.0 NVMe, allowing denser SSD populations while maintaining performance parity with traditional U.2 drives.[31]

Key Features
Compute and Acceleration
The Emerald Rapids processors incorporate Advanced Matrix Extensions (AMX), an x86 instruction set extension designed to accelerate matrix multiplication operations critical for deep learning training and inference. AMX supports low-precision data types such as brain floating-point (BF16) for both training and inference, as well as 8-bit integer (INT8) primarily for inference, enabling efficient handling of AI workloads through dedicated tile registers and matrix multiply-accumulate instructions. Compared to the preceding Sapphire Rapids generation, Emerald Rapids delivers up to 1.4x higher AI inference throughput, attributed to architectural refinements including increased tile memory capacity and optimized execution pipelines that enhance matrix operation performance.[33][34]

Integrated QuickAssist Technology (QAT) provides hardware acceleration for data compression and cryptographic operations, offloading these tasks from CPU cores to dedicated engines within the processor. This integration supports standards like AES for encryption and algorithms such as DEFLATE for compression, reducing latency and CPU utilization in network security, storage, and virtualization scenarios. Emerald Rapids features up to 4x QAT engines per socket in high-core-count variants, enabling scalable offload for high-throughput environments without requiring discrete PCIe cards.[35][36]

Emerald Rapids supports Data Parallel Extensions (DPX) as part of the oneAPI ecosystem, facilitating high-performance computing (HPC) applications through portable, data-parallel programming models. DPX enables developers to leverage vectorized operations across CPU cores for scientific simulations and data analytics, with optimizations in the oneAPI HPC Toolkit that target AVX-512 instructions for enhanced parallelism. This integration promotes cross-architecture compatibility, allowing HPC workloads to scale efficiently on multi-socket configurations.[37][38]

For AI and HPC tasks, Emerald Rapids achieves robust double-precision floating-point (FP64) capabilities using AVX-512 vector extensions, providing suitability for simulations and modeling across up to 64 cores per socket with theoretical peaks up to 6.1 TFLOPS at turbo frequencies.[39]

Power and Efficiency
Emerald Rapids processors support a configurable thermal design power (TDP) ranging from 125 W to 350 W across various SKUs, enabling system designers to balance performance and energy use based on workload demands. Higher-end models, particularly those optimized for liquid cooling, can reach up to 385 W to sustain peak performance in demanding environments. Power management features include dynamic voltage and frequency scaling (DVFS) as well as fine-grained power gating at the tile level, allowing inactive compute tiles to enter low-power states while active ones maintain optimal operation. These mechanisms help mitigate thermal throttling and reduce overall power draw during bursty or idle periods.[1]

Compared to the predecessor Sapphire Rapids, Emerald Rapids delivers efficiency improvements of up to 34% in performance per watt at iso-power configurations, driven by architectural enhancements such as a 2.67x increase in L3 cache per core and refined micro-operation fusion in the front-end pipeline. These optimizations yield a 20-30% uplift in instructions per cycle (IPC) for certain workloads, particularly those benefiting from larger on-die cache and reduced latency, without requiring a process node shrink; both generations utilize the Intel 7 process. Such gains enable sustained high-throughput computing while maintaining comparable power envelopes, making Emerald Rapids suitable for energy-constrained data center deployments.[34][40][3]

For thermal management, high-TDP configurations necessitate advanced cooling solutions, including direct liquid cooling for SKUs exceeding 300 W to ensure reliable operation under prolonged full-load scenarios. Thermal monitoring is facilitated through the Platform Environment Control Interface (PECI), which provides real-time temperature data from on-die sensors to the system management controller, enabling proactive adjustments to fan speeds or power limits.
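The headline "up to 34% performance per watt" claim can be unpacked with a little arithmetic. This is an illustrative model only; the 1.34x ratio is the claimed upper bound, everything else follows from it:

```python
# Simple iso-power / iso-throughput interpretation of the 1.34x claim.
perf_per_watt_uplift = 1.34

# Same power budget -> 34% more throughput.
# Same throughput -> energy needed drops by 1 - 1/1.34, roughly a quarter.
energy_saving = 1 - 1 / perf_per_watt_uplift
print(f"iso-throughput energy saving: {energy_saving:.1%}")   # ≈ 25.4%
```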
In terms of sustainability, Emerald Rapids achieves lower power per core than 7nm-based competitors like AMD's Milan-X in cache-intensive applications, reducing data center electricity consumption and cooling requirements by leveraging denser L3 cache to minimize off-chip memory accesses.[1][2]

Security and Reliability
The Emerald Rapids processors incorporate Intel Software Guard Extensions (SGX) 2.0, enabling application-level isolation for sensitive data processing within secure enclaves. This version supports significantly larger Enclave Page Cache (EPC) sizes, with up to 512 GB per socket, allowing configurations reaching 1 TB in two-socket systems to accommodate demanding confidential workloads.[41]

For broader virtual machine protection, Emerald Rapids features Intel Trust Domain Extensions (TDX), which provides hardware-based confidentiality and integrity for cloud and enterprise deployments through isolated trust domains. TDX extends capabilities to include trusted execution environments for device I/O, facilitating encrypted PCIe communications with peripherals while supporting multi-socket setups up to four processors for scalable confidential computing. As of November 2025, TDX adoption has expanded in cloud platforms like Azure Confidential VMs.[42][43]

Reliability, Availability, and Serviceability (RAS) enhancements in Emerald Rapids ensure high uptime in data center environments, building on advanced machine check architecture to detect and recover from hardware errors in real time. Predictive failure analysis monitors components like memory and interconnects to preemptively identify potential issues, while hot-swap support allows for component replacement without system downtime, minimizing disruptions in mission-critical operations.[44]

To address transient execution vulnerabilities such as Spectre and Meltdown, Emerald Rapids integrates hardware barriers and indirect branch predictors that mitigate side-channel attacks at the architectural level. These processors also receive ongoing microcode updates from Intel to patch emerging variants, ensuring robust protection without requiring full OS or application changes.[45]

Processor Lineup
5th Generation Xeon Scalable Models
The 5th Generation Xeon Scalable models based on the Emerald Rapids microarchitecture are organized into Platinum, Gold, Silver, and Bronze tiers to address high-performance computing, balanced workloads, entry-level data center needs, and basic storage servers, respectively.[6] These models leverage the Raptor Cove performance cores described in the CPU cores section, supporting up to 64 cores per socket with hyper-threading for 128 threads, DDR5-5600 memory, and PCIe 5.0 interfaces.[29] Launched on December 14, 2023, the initial lineup includes approximately 32 SKUs, with core counts ranging from 8 to 64 and thermal design power (TDP) from 125 W to 385 W, enabling scalability in dual-socket configurations up to 128 cores.[46] Pricing starts at around $563 for low-end Silver models, scaling to $11,600 for flagship Platinum variants, reflecting their targeted performance envelopes.[14]

The Platinum tier prioritizes maximum core density and acceleration for demanding AI, HPC, and virtualization tasks, featuring the highest cache sizes up to 320 MB L3 and integrated accelerators like QuickAssist Technology (QAT) and Data Streaming Accelerator (DSA). Representative models include the Xeon Platinum 8592+ with 64 cores at a 1.9 GHz base frequency (up to 3.9 GHz turbo), 350 W TDP, and $11,600 list price, and the Xeon Platinum 8580 with 60 cores at 2.0 GHz base (up to 4.0 GHz turbo), 350 W TDP, and $10,710 price.[1] These processors deliver up to 40% better performance than prior generations in memory-bound workloads, attributed to enhanced DDR5 support and larger caches.[29]

Gold models offer a balanced profile for general-purpose servers, cloud, and database applications, with core counts from 8 to 36 and cache up to 180 MB, often including optimized frequencies for efficiency. Key examples are the Xeon Gold 6548Y+ with 32 cores at 2.5 GHz base (up to 4.1 GHz turbo), 250 W TDP, and $3,726 price, and the Xeon Gold 6554S with 36 cores at 2.2 GHz base (up to 4.0 GHz turbo), 270 W TDP, and $3,157 price.[14] This tier emphasizes power efficiency.

The Silver models target cost-sensitive environments like branch offices and storage, providing entry-level performance with 8 to 24 cores and lower TDPs for energy-constrained setups. Notable SKUs include the Xeon Silver 4514Y with 16 cores at 2.0 GHz base (up to 3.4 GHz turbo), 150 W TDP, and $780 price, and the Xeon Silver 4509Y with 8 cores at 2.6 GHz base (up to 4.1 GHz turbo), 125 W TDP, and $563 price.[14] These deliver reliable throughput for basic virtualization and networking, starting at under $600 to broaden accessibility.[6]

The Bronze tier provides the most affordable entry point for basic server and storage tasks, with limited core counts and features. An example is the Xeon Bronze 3508U with 8 cores (no hyper-threading), 2.1 GHz base (up to 2.2 GHz turbo), 22.5 MB L3 cache, and 125 W TDP.[47]

| Tier | Model Example | Cores/Threads | Base/Turbo Freq (GHz) | L3 Cache (MB) | TDP (W) | List Price (USD) |
|---|---|---|---|---|---|---|
| Platinum | 8592+ | 64/128 | 1.9/3.9 | 320 | 350 | 11,600 |
| Platinum | 8580 | 60/120 | 2.0/4.0 | 300 | 350 | 10,710 |
| Gold | 6548Y+ | 32/64 | 2.5/4.1 | 60 | 250 | 3,726 |
| Gold | 6554S | 36/72 | 2.2/4.0 | 180 | 270 | 3,157 |
| Silver | 4514Y | 16/32 | 2.0/3.4 | 30 | 150 | 780 |
| Silver | 4509Y | 8/16 | 2.6/4.1 | 22.5 | 125 | 563 |
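For a rough sense of positioning across tiers, list price per core can be derived from the table above (values transcribed from the table; a simple illustration, not an official Intel metric):

```python
# List price per core for the representative SKUs in the summary table.
lineup = {          # model: (cores, list_price_usd)
    "8592+":  (64, 11600),
    "8580":   (60, 10710),
    "6548Y+": (32, 3726),
    "6554S":  (36, 3157),
    "4514Y":  (16, 780),
    "4509Y":  (8, 563),
}
per_core = {m: price / cores for m, (cores, price) in lineup.items()}
for model, usd in sorted(per_core.items(), key=lambda kv: -kv[1]):
    print(f"{model:>7}: ${usd:,.2f} per core")
```

Note the non-monotonic pattern at the low end: the 8-core 4509Y costs more per core than the 16-core 4514Y, a common effect of fixed per-package costs.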
Variants and Configurations
The Emerald Rapids processors support multi-socket configurations ranging from dual-processor (2P) to eight-processor (8P) systems, utilizing the Ultra Path Interconnect (UPI) at speeds up to 20 GT/s for low-latency inter-processor communication.[14] This enables scalable deployments, with maximum configurations achieving up to 512 cores across eight sockets using 64-core models like the Platinum 8592+.[1]

Specialized variants include liquid-cooled options (with high TDP ratings up to 385 W) for high-performance computing environments requiring enhanced thermal management, as well as general-purpose "+" models optimized for broad server workloads.[14] These configurations maintain compatibility with existing LGA 4677 sockets and DDR5-5600 memory subsystems, facilitating upgrades from prior generations without major infrastructure changes.[2]

OEM integrations feature reference designs from Dell Technologies and Hewlett Packard Enterprise (HPE), which pair Emerald Rapids with PCIe Gen5 I/O for AI and data center applications; while Intel Gaudi 3 accelerators are primarily validated with subsequent Xeon generations, early ecosystem pairings emphasize GPU acceleration for hybrid workloads.[48]

Emerald Rapids serves as a bridge in Intel's roadmap, succeeded by the Granite Rapids architecture in the sixth-generation Xeon lineup, which introduces higher core densities and advanced process nodes for continued scalability.[7]

References
- https://en.wikichip.org/wiki/intel/mesh_interconnect_architecture
