Emerald Rapids
from Wikipedia

Emerald Rapids

General information
Launched: December 14, 2023 (2023-12-14)[1]
Marketed by: Intel
Designed by: Intel
Common manufacturer: Intel
Product code: 80722[2]

Performance
Max. CPU clock rate: 1.9 GHz to 4.2 GHz
UPI speeds: 16 GT/s to 20 GT/s
DMI speeds: 16 GT/s

Physical specifications
Cores: 8–64
Package: Flip-chip land grid array (FC-LGA)
Socket: LGA 4677

Cache
L1 cache: 80 KB per core (32 KB instruction + 48 KB data)
L2 cache: 2 MB per core
L3 cache: 5 MB per core

Architecture and classification
Application: Server, embedded
Technology node: Intel 7 (previously known as 10ESF)
Microarchitecture: Raptor Cove
Instruction set: x86-64
Instructions: MMX, SSE, SSE2, SSE3, SSSE3, SSE4, SSE4.1, SSE4.2, AVX, AVX2, FMA3, AVX-512, AVX-VNNI, TSX, AMX

Products, models, variants
Product code name: EMR
Model: Emerald Rapids-SP
Brand names: Xeon Bronze, Xeon Silver, Xeon Gold, Xeon Platinum

History
Predecessor: Sapphire Rapids
Successors: Granite Rapids (P-cores), Sierra Forest (E-cores)

Emerald Rapids is the codename for Intel's fifth generation Xeon Scalable server processors based on the Intel 7 node.[3][4] Emerald Rapids CPUs are designed for data centers; the roughly contemporary Raptor Lake is intended for desktop and mobile usage.[5][6] Nevine Nassif is a chief engineer for this generation.[7]

Features


CPU

  • Up to 64 Raptor Cove CPU cores per package[8]
    • Up to 32 cores per tile, reducing the maximum number of tiles to two (from four in Sapphire Rapids)
  • 5 MB of L3 cache per core (up from 1.875 MB in Sapphire Rapids)
  • Speed Select Technology that supports high and low priority cores

I/O


List of Emerald Rapids processors


Emerald Rapids-SP (Scalable Performance)


Some CPUs in the list (shown in italics in the original table) are actually Sapphire Rapids processors, and they still have 1.875 MB of L3 cache per core.

Suffixes denote the following:[9]

  • +: Includes 1 of each of the four accelerators: DSA, IAA, QAT, DLB
  • H: Database and analytics workloads, supports 4S (Xeon Gold) and/or 8S (Xeon Platinum) configurations and includes all of the accelerators
  • M: Media transcode workloads
  • N: Network/5G/Edge workloads (High TPT/Low Latency), some are uniprocessor
  • P: Cloud and infrastructure as a service (IaaS) workloads
  • Q: Liquid cooling
  • S: Storage & Hyper-converged infrastructure (HCI) workloads
  • T: Long-life use/High thermal case
  • U: Uniprocessor (some workload-specific SKUs may also be uniprocessor)
  • V: Optimized for cloud and software as a service (SaaS) workloads, some are uniprocessor
  • Y: Speed Select Technology-Performance Profile (SST-PP) enabled (some workload-specific SKUs may also support SST-PP)
  • Y+: Speed Select Technology-Performance Profile (SST-PP) enabled and includes 1 of each of the accelerators.
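A minimal sketch of how these suffix rules could be applied programmatically (an illustrative helper, not an official Intel tool; note the two-character "Y+" suffix must be checked before the single-character ones):

```python
# Decode the workload suffix of a 5th-gen Xeon Scalable model number.
# Suffix meanings are taken from the list in this article.
SUFFIX_MEANINGS = {
    "Y+": "SST-PP enabled plus 1 of each accelerator",
    "+": "includes 1 of each accelerator (DSA, IAA, QAT, DLB)",
    "H": "database/analytics, 4S/8S capable, all accelerators",
    "M": "media transcode workloads",
    "N": "network/5G/edge workloads",
    "P": "cloud and IaaS workloads",
    "Q": "liquid cooling",
    "S": "storage & HCI workloads",
    "T": "long-life use / high thermal case",
    "U": "uniprocessor",
    "V": "cloud/SaaS optimized",
    "Y": "SST-PP enabled",
}

def decode_suffix(model: str) -> str:
    """Return the meaning of a model number's suffix."""
    # Longest suffix first, so "8568Y+" matches "Y+" rather than "+".
    for suffix in SUFFIX_MEANINGS:
        if model.endswith(suffix):
            return SUFFIX_MEANINGS[suffix]
    return "no suffix (base SKU)"

print(decode_suffix("8568Y+"))  # SST-PP enabled plus 1 of each accelerator
print(decode_suffix("6558Q"))   # liquid cooling
print(decode_suffix("8580"))    # no suffix (base SKU)
```

Since Python 3.7 dict insertion order is preserved, so placing "Y+" first in the mapping guarantees the longest match wins.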
Columns: Model number | Cores (threads) | Base clock | All-core turbo | Max turbo | Smart Cache | TDP | Maximum scalability | Registered DDR5 w/ ECC support | UPI links | Release MSRP (USD). In the rows below, omitted values repeat the entry from the row above (merged cells in the original table).
Xeon Platinum (8500)
8593Q 64 (128) 2.2 GHz 3.0 GHz 3.9 GHz 320 MB 385 W 2S 5600 MT/s 4 $12400
8592+ 1.9 GHz 2.9 GHz 350 W $11600
8592V 2.0 GHz 330 W 4800 MT/s 3 $10995
8581V 60 (120) 2.6 GHz 300 MB 270 W 1S 0 $7568
8580 2.9 GHz 4.0 GHz 350 W 2S 5600 MT/s 4 $10710
8573C ? ? 3.0 GHz ? ? ? ? ? ? ?
8571N 52 (104) 2.4 GHz 3.0 GHz 4.0 GHz 300 MB 350 W 1S 4800 MT/s 0 $6839
8570 56 (112) 2.1 GHz 2S 5600 MT/s 4 $9595
8568Y+ 48 (96) 2.3 GHz 3.2 GHz $6497
8562Y+ 32 (64) 2.8 GHz 3.8 GHz 4.1 GHz 60 MB 300 W 3 $5945
8558 48 (96) 2.1 GHz 3.0 GHz 4.0 GHz 260 MB 330 W 5200 MT/s 4 $4650
8558P 2.7 GHz 3.2 GHz 350 W 5600 MT/s 3 $6759
8558U 2.0 GHz 2.9 GHz 300 W 1S 5200 MT/s 0 $3720
Xeon Gold (5500 and 6500)
6558Q 32 (64) 3.2 GHz 4.1 GHz 4.1 GHz 60 MB 350 W 2S 5200 MT/s 3 $6416
6554S 36 (72) 2.2 GHz 3.0 GHz 4.0 GHz 180 MB 270 W 4 $3157
6548Y+ 32 (64) 2.5 GHz 3.5 GHz 4.1 GHz 60 MB 250 W 3 $3726
6548N 2.8 GHz 3.5 GHz $3875
6544Y 16 (32) 3.6 GHz 4.1 GHz 45 MB 270 W $3622
6542Y 24 (48) 2.9 GHz 3.6 GHz 60 MB 250 W $2878
6538Y+ 32 (64) 2.2 GHz 3.3 GHz 4.0 GHz 225 W $3141
6538N 2.1 GHz 2.9 GHz 4.1 GHz 205 W $3875
6534 8 (16) 3.9 GHz 4.2 GHz 4.2 GHz 22.5 MB 195 W 4800 MT/s $2816
6530 32 (64) 2.1 GHz 2.7 GHz 4.0 GHz 160 MB 270 W $2128
6526Y 16 (32) 2.8 GHz 3.5 GHz 3.9 GHz 37.5 MB 195 W 5200 MT/s $1517
5520+ 28 (56) 2.0 GHz 3.0 GHz 4.0 GHz 52.5 MB 205 W 4800 MT/s $1640
5515+ 8 (16) 3.2 GHz 3.6 GHz 4.1 GHz 22.5 MB 165 W $1099
5512U 28 (56) 2.1 GHz 3.0 GHz 3.7 GHz 185 W 1S 0 $1230
Xeon Silver (4500)
4516Y+ 24 (48) 2.2 GHz 2.9 GHz 3.7 GHz 45 MB 185 W 2S 4400 MT/s 2 $1295
4514Y 16 (32) 2.0 GHz 2.6 GHz 3.4 GHz 30 MB 150 W $780
4510T 12 (24) 2.8 GHz 3.7 GHz 115 W $624
4510 2.4 GHz 3.3 GHz 4.1 GHz 150 W $563
4509Y 8 (16) 2.6 GHz 3.6 GHz 22.5 MB 125 W $563
Xeon Bronze (3500)
3508U 8 (8) 2.1 GHz 2.2 GHz 22.5 MB 125 W 1S 4400 MT/s 0 $415-$425

from Grokipedia
Emerald Rapids is the codename for Intel's fifth-generation Xeon Scalable processors, a family of server central processing units (CPUs) optimized for data center, artificial intelligence (AI), and high-performance computing (HPC) applications. Released on December 14, 2023, these processors support up to 64 cores and 128 threads per socket, with L3 cache sizes scaling up to 320 MB for enhanced access efficiency. Built on Intel's Intel 7 manufacturing process, Emerald Rapids employs a dual-die design consisting of two large monolithic compute tiles, eschewing the many-tile chiplet approach used in some prior and future Intel products for a more integrated package. The processors feature eight-channel DDR5 memory support at speeds up to 5600 MT/s, delivering theoretical bandwidth of up to 358 GB/s per socket, alongside up to 80 PCIe 5.0 lanes for expanded I/O connectivity. Notable enhancements include Advanced Matrix Extensions (AMX) for accelerated AI and machine learning workloads, improved power management with thermal design power (TDP) ratings from 125 W to 385 W, and base clock speeds ranging from 1.9 GHz to 3.9 GHz with turbo boosts up to 4.2 GHz. As a direct successor to the fourth-generation Sapphire Rapids, Emerald Rapids provides incremental performance uplifts through higher memory speeds (a 16.7% increase over Sapphire Rapids' DDR5-4800 support) and expanded cache capacity, enabling better handling of memory-intensive tasks in enterprise environments. The lineup spans multiple series, including Bronze, Silver, Gold, and Platinum models, with pricing starting around $563 for lower-end SKUs and reaching over $10,000 for top-tier configurations like the 64-core Platinum 8592+. These CPUs maintain compatibility with existing LGA 4677 sockets while introducing optimizations for emerging workloads in AI and large-scale simulations.

Development

Background and Announcement

Emerald Rapids represents the fifth generation of Intel's Xeon Scalable processors, succeeding the fourth-generation Sapphire Rapids lineup as an incremental refresh designed to enhance performance in data center environments. This positioning allows Emerald Rapids to extend the lifecycle of existing server platforms while introducing optimizations for emerging workloads, building on the foundation established in prior generations such as Ice Lake and Sapphire Rapids. Intel first revealed details of Emerald Rapids at its Innovation event on September 19, 2023, where CEO Pat Gelsinger showcased the processor as part of the company's broader push into AI-optimized computing. The official launch occurred on December 14, 2023, marking the availability of these processors for deployment in high-performance computing (HPC) and artificial intelligence (AI) applications. This timeline aligned with Intel's strategy to address intensifying market competition from AMD's EPYC processors and Arm-based server solutions, which had gained traction in energy-efficient data centers. The development of Emerald Rapids was driven by accelerating demands in the AI and HPC sectors, where workloads require higher throughput and efficiency to handle generative AI models and scientific simulations. By focusing on these areas, Intel aimed to maintain leadership in scalable computing platforms amid rivals' advances in core density and power optimization. Initial shipments began in the fourth quarter of 2023 to select customers and cloud service providers, enabling early validation in production environments, with general availability starting on December 14, 2023. This phased rollout facilitated rapid integration into existing infrastructures, supporting Intel's goal of delivering immediate value in competitive data center markets.

Design and Manufacturing

Emerald Rapids processors are fabricated on Intel's Intel 7 process node, an enhanced 10nm-class technology with improvements in density and power efficiency over prior iterations. This node enables the integration of advanced Raptor Cove cores while maintaining socket compatibility with the Sapphire Rapids generation. The architecture adopts a modular tile-based design, featuring two large compute tiles interconnected via Intel's Embedded Multi-Die Interconnect Bridge (EMIB) technology, which uses high-density silicon bridges to enable efficient die-to-die communication with reduced latency. Each compute tile measures approximately 763 mm² and incorporates up to 33 physical cores, with one core per tile typically disabled to enhance yields, resulting in 32 active cores per tile for the highest-end configurations. This dual-tile approach, connected by three EMIB bridges, simplifies the overall package compared to the four-tile design of Sapphire Rapids, reducing interconnect complexity and bridge area to about 5.8% of total die space. Design goals emphasize scalability for demanding server workloads, supporting up to 64 cores per socket and 128 cores across dual-socket configurations, facilitated by Ultra Path Interconnect (UPI) links operating at 20 GT/s. Manufacturing occurs in-house at Intel's facilities, leveraging Intel 7 optimizations such as refined layout and binning strategies that improve yields over Sapphire Rapids by utilizing slightly smaller total die area (1,526 mm² versus 1,576 mm²) and fewer disabled cores in production. These enhancements address prior yield challenges on the same process node, enabling higher effective output for multi-core variants without external partnerships.

Architecture

CPU Cores

The Emerald Rapids processors employ Raptor Cove performance cores (P-cores) as their primary compute units, enabling configurations of up to 64 cores per socket in high-density variants. These cores are designed for high-performance server workloads, building on the architecture introduced in prior generations while incorporating optimizations for multi-socket scalability and power efficiency. The Raptor Cove microarchitecture features a 6-wide decode stage, capable of processing up to six x86 instructions per cycle to sustain high instruction throughput. It utilizes 12 execution ports in the out-of-order engine, supporting parallel dispatch of integer, floating-point, load/store, and vector operations for balanced performance across diverse computational tasks. Compared to the Golden Cove microarchitecture in Sapphire Rapids, Raptor Cove includes refined branch prediction mechanisms, such as larger branch target buffers and improved indirect branch handling, to reduce misprediction penalties in server-oriented code paths. Each Raptor Cove core supports Hyper-Threading, allowing two threads per core for a total of 128 threads in a 64-core socket configuration. The cores fully implement the AVX-512 instruction set extensions, enabling 512-bit vector operations for accelerated compute-intensive applications like AI training and scientific simulations. Clock speeds vary by model and core count, with base frequencies ranging from 1.9 GHz to 3.9 GHz and max turbo boosts up to 4.2 GHz; all-core turbo frequencies vary, reaching up to around 3.0 GHz in high-core-count configurations to stay within power limits. These cores integrate tightly with the processor's cache and memory subsystem to minimize latency in thread execution.

Cache and Memory Subsystem

The cache hierarchy in Emerald Rapids features private 2 MB L2 caches per core, providing dedicated storage for each Raptor Cove core to minimize latency for frequently accessed data. The shared L3 cache, serving as the last-level cache, scales with core count at 5 MB per core, resulting in totals from 120 MB for 24-core variants up to 320 MB for 64-core configurations, enabling efficient data sharing across cores in multi-threaded workloads. This design presents the L3 as a logically monolithic structure to all cores, despite its physical distribution across the on-die mesh, to support seamless access and reduce inter-core communication overhead. The L3 cache incorporates victim cache functionality, where lines evicted from the L2 caches are temporarily stored to facilitate prefetching and improve hit rates for sequential or strided access patterns common in server applications. Prefetchers integrated into the cache hierarchy anticipate data needs based on access patterns, loading anticipated cache lines to enhance performance in bandwidth-sensitive tasks without excessive off-chip traffic. Memory support centers on eight-channel DDR5 controllers, enabling up to DDR5-5600 operation in single-DIMM-per-channel configurations for optimal bandwidth. Maximum capacity reaches 4 TB per socket using 256 GB RDIMMs across two DIMMs per channel, with DDR5's on-die error correction complemented by full-channel ECC for data integrity in enterprise environments. Advanced RAS features, including patrol scrubbing, memory mirroring, and rank-level error isolation, proactively detect and mitigate errors. Theoretical peak memory bandwidth reaches 358 GB/s with DDR5-5600, a 16.7% uplift over Sapphire Rapids' DDR5-4800, optimized for low-latency access through interleaved addressing across channels. This configuration supports HPC workloads by prioritizing sustained throughput while maintaining compatibility with CXL 1.1/2.0 for memory expansion.
Cache coherency relies on an enhanced 2D mesh interconnect architecture, which facilitates efficient snooping and directory-based protocols across core tiles and the two-die design, ensuring consistent data visibility with reduced latency compared to prior generations. The mesh integrates with UPI 2.0 links at up to 20 GT/s for multi-socket systems, maintaining coherence in shared-memory domains.
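The 358 GB/s peak bandwidth figure follows directly from the channel count and transfer rate. A quick check of the arithmetic (assuming the standard 64-bit, i.e. 8-byte, DDR5 channel width):

```python
# Peak theoretical DDR5 bandwidth per socket: channels x transfer rate x bytes per transfer.
channels = 8
transfer_rate = 5600e6      # DDR5-5600: 5.6 billion transfers per second
bytes_per_transfer = 8      # 64-bit channel moves 8 bytes per transfer

peak_bw_gbs = channels * transfer_rate * bytes_per_transfer / 1e9
print(f"{peak_bw_gbs:.1f} GB/s")   # 358.4 GB/s

# Uplift over Sapphire Rapids' DDR5-4800 support:
uplift = (5600 - 4800) / 4800
print(f"{uplift:.1%}")             # 16.7%
```

The same formula explains why single-DIMM-per-channel population matters only for achievable speed, not the theoretical ceiling: the peak depends on the rated transfer rate, not DIMM count.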

I/O and Interconnect

The Emerald Rapids processors feature up to 80 lanes of PCIe 5.0 per socket, enabling high-bandwidth connectivity for accelerators such as GPUs and for high-performance storage devices. These lanes operate at 32 GT/s and can be configured in various x16, x8, or x4 arrangements to optimize for specific workloads, including AI training and inference, while maintaining backward compatibility with PCIe 4.0 and earlier generations. In dual-socket systems this scales to 160 lanes, facilitating dense I/O expansion in server environments. For multi-socket scaling, Emerald Rapids employs Ultra Path Interconnect (UPI) 2.0, providing up to four links per socket at 20 GT/s for low-latency, cache-coherent communication between processors. This represents a 25% increase in bandwidth over the previous generation's 16 GT/s, benefiting NUMA-aware applications in up to eight-socket systems without additional bridging hardware. The integrated I/O die incorporates advanced fabric support, including Compute Express Link (CXL) for memory expansion and pooling across heterogeneous devices. This enables Type 3 CXL devices for disaggregated memory, allowing per-socket DDR5 capacity to be augmented with external CXL-attached memory modules, improving resource utilization in cloud and HPC deployments. Additionally, the I/O die supports configurable networking options, such as up to 4x 100 GbE interfaces via PCIe-attached adapters, optimizing for high-throughput Ethernet in data center fabrics. Storage interfaces leverage the PCIe 5.0 infrastructure to deliver up to 8 dedicated NVMe lanes per socket, with compatibility for EDSFF (Enterprise and Data Center Standard Form Factor) E3.S and E1.L drives in high-density configurations. This setup supports direct-attached NVMe SSDs for ultra-low-latency access, enabling configurations with up to 16 NVMe drives per node without oversubscription, ideal for storage-intensive workloads such as databases and analytics.
The EDSFF form factor enhances power efficiency and thermal management for PCIe 5.0 NVMe, allowing denser SSD populations while maintaining performance parity with traditional drives.
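The PCIe 5.0 lane counts above translate into bandwidth with simple arithmetic (assuming the spec's 128b/130b line encoding; packet-level protocol overhead is ignored, so real throughput is somewhat lower):

```python
# Per-lane and aggregate PCIe 5.0 bandwidth, per direction.
gt_per_s = 32e9                      # 32 GT/s raw signaling rate per lane
encoding = 128 / 130                 # 128b/130b line-encoding efficiency
lane_bytes_per_s = gt_per_s * encoding / 8

x16_gbs = 16 * lane_bytes_per_s / 1e9      # a typical accelerator slot
socket_gbs = 80 * lane_bytes_per_s / 1e9   # all 80 lanes on one socket
print(f"x16 link: {x16_gbs:.1f} GB/s each way")
print(f"80 lanes: {socket_gbs:.1f} GB/s each way")
```

This puts a single x16 PCIe 5.0 link at roughly 63 GB/s per direction, which is why a handful of GPU slots can approach the socket's DDR5 bandwidth.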

Key Features

Compute and Acceleration

The Emerald Rapids processors incorporate Advanced Matrix Extensions (AMX), an x86 instruction set extension designed to accelerate matrix operations critical for AI training and inference. AMX supports low-precision data types such as brain floating-point (BF16) for both training and inference, as well as 8-bit integer (INT8) primarily for inference, enabling efficient handling of AI workloads through dedicated tile registers and matrix multiply-accumulate instructions. Compared to the preceding generation, Emerald Rapids delivers up to 1.4x higher AI throughput, attributed to architectural refinements including increased tile memory capacity and optimized execution pipelines that enhance matrix operation performance. Integrated QuickAssist Technology (QAT) provides hardware offload for data compression and cryptographic operations, moving these tasks from CPU cores to dedicated engines within the processor. This integration supports standards like AES for encryption and lossless compression algorithms, reducing latency and CPU utilization in networking, storage, and security scenarios. Emerald Rapids features up to 4x QAT engines per socket in high-core-count variants, enabling scalable offload for high-throughput environments without requiring discrete PCIe cards. Emerald Rapids also supports data-parallel programming through the oneAPI ecosystem, facilitating high-performance computing (HPC) applications via portable programming models. These enable developers to leverage vectorized operations across CPU cores for scientific simulations and modeling, with optimizations in the oneAPI HPC Toolkit that target AVX-512 instructions for enhanced parallelism. This integration promotes cross-architecture compatibility, allowing HPC workloads to scale efficiently on multi-socket configurations. For AI and HPC tasks, Emerald Rapids achieves robust double-precision floating-point (FP64) throughput using AVX-512 vector extensions, suiting it to simulations and modeling across up to 64 cores per socket, with theoretical peaks up to 6.1 TFLOPS at turbo frequencies.
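The ~6.1 TFLOPS FP64 figure can be reproduced from per-core AVX-512 throughput. A sketch of the arithmetic (the two-FMA-unit count per core and the 3.0 GHz sustained all-core clock are assumptions not stated in this section):

```python
# Theoretical peak FP64 throughput for a 64-core socket using AVX-512 FMAs.
cores = 64
fp64_lanes = 512 // 64        # 8 doubles fit in one 512-bit vector
fma_units = 2                 # assumed: two 512-bit FMA ports per core
flops_per_fma = 2             # a fused multiply-add counts as 2 FLOPs

flops_per_core_per_cycle = fp64_lanes * fma_units * flops_per_fma  # 32
clock_ghz = 3.0               # assumed sustained all-core turbo

peak_tflops = cores * flops_per_core_per_cycle * clock_ghz / 1000
print(f"{peak_tflops:.1f} TFLOPS FP64")  # 6.1 TFLOPS
```

Halving the FMA-unit count (as on SKUs with a single 512-bit FMA port) or lowering the sustained clock scales the peak proportionally, which is why vendor TFLOPS figures are sensitive to the chosen frequency.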

Power and Efficiency

Emerald Rapids processors support configurable thermal design power (TDP) ratings ranging from 125 W to 350 W across the SKU stack, enabling system designers to balance performance and energy use based on workload demands. Higher-end models, particularly those optimized for liquid cooling, can reach up to 385 W to sustain peak performance in demanding environments. Power management features include dynamic voltage and frequency scaling (DVFS) as well as fine-grained power gating at the tile level, allowing inactive compute tiles to enter low-power states while active ones maintain optimal operation. These mechanisms help mitigate thermal throttling and reduce overall power draw during bursty or idle periods. Compared to the predecessor Sapphire Rapids, Emerald Rapids delivers efficiency improvements of up to 34% in performance per watt at iso-power configurations, driven by architectural enhancements such as a 2.6x increase in L3 cache per core and refined instruction fusion in the front end. These optimizations yield a 20-30% uplift in instructions per cycle (IPC) for certain workloads, particularly those benefiting from larger on-die cache and reduced memory latency, without requiring a process node shrink; both generations utilize the Intel 7 process. Such gains enable sustained throughput while maintaining comparable power envelopes, making Emerald Rapids suitable for energy-constrained deployments. For thermal management, high-TDP configurations necessitate advanced cooling solutions, including direct liquid cooling for SKUs exceeding 300 W, to ensure reliable operation under prolonged full-load scenarios. Thermal monitoring is facilitated through the Platform Environment Control Interface (PECI), which provides real-time temperature data from on-die sensors to the system management controller, enabling proactive adjustments to fan speeds or power limits.
In terms of sustainability, Emerald Rapids achieves lower power per core than 7nm-based competitors like AMD's Milan-X in cache-intensive applications, reducing data center electricity consumption and cooling requirements by leveraging denser L3 cache to minimize off-chip memory accesses.

Security and Reliability

The Emerald Rapids processors incorporate Intel Software Guard Extensions (SGX), enabling application-level isolation for sensitive data processing within secure enclaves. This implementation supports significantly larger Enclave Page Cache (EPC) sizes, with up to 512 GB per socket, allowing configurations reaching 1 TB in two-socket systems to accommodate demanding confidential workloads. For broader virtual machine protection, Emerald Rapids features Trust Domain Extensions (TDX), which provide hardware-based confidentiality and integrity for cloud and enterprise deployments through isolated trust domains. TDX extends trusted execution to device I/O, facilitating encrypted PCIe communication with peripherals, and supports multi-socket setups of up to four processors for scalable confidential computing. As of November 2025, TDX adoption has expanded in cloud platforms such as Azure Confidential VMs. Reliability, Availability, and Serviceability (RAS) enhancements ensure high uptime in data center environments, building on an advanced machine check architecture to detect and recover from hardware errors in real time. Predictive failure analysis monitors components like memory and interconnects to preemptively identify potential issues, while hot-swap support allows component replacement without system downtime, minimizing disruptions in mission-critical operations. To address transient execution vulnerabilities such as Spectre and Meltdown, Emerald Rapids integrates hardware mitigations, including barriers and hardened indirect branch predictors, that blunt side-channel attacks at the architectural level. These processors also receive ongoing microcode updates from Intel to patch emerging variants, ensuring robust protection without requiring full OS or application changes.

Processor Lineup

5th Generation Xeon Scalable Models

The 5th Generation Xeon Scalable lineup based on Emerald Rapids is organized into Platinum, Gold, Silver, and Bronze tiers to address high-end compute, balanced workloads, entry-level needs, and basic storage servers, respectively. These models use the Raptor Cove performance cores described above, supporting up to 64 cores per socket with Hyper-Threading for 128 threads, DDR5-5600 memory, and PCIe 5.0 interfaces. Launched on December 14, 2023, the initial lineup includes approximately 32 SKUs, with core counts ranging from 8 to 64 and thermal design power (TDP) from 125 W to 385 W, enabling scalability in dual-socket configurations up to 128 cores. Pricing starts at around $563 for low-end Silver models and scales to $11,600 for flagship variants, reflecting their targeted performance envelopes. The Platinum tier prioritizes maximum core density and acceleration for demanding AI, HPC, and virtualization tasks, featuring the highest cache sizes (up to 320 MB L3) and integrated accelerators such as QuickAssist Technology (QAT) and Data Streaming Accelerator (DSA). Representative models include the Platinum 8592+ with 64 cores at a 1.9 GHz base frequency (up to 3.9 GHz turbo), 350 W TDP, and an $11,600 list price, and the Platinum 8580 with 60 cores at 2.0 GHz base (up to 4.0 GHz turbo), 350 W TDP, and a $10,710 price. These processors deliver up to 40% better performance than prior generations in memory-bound workloads, attributed to enhanced DDR5 support and larger caches. Gold models offer a balanced profile for general-purpose servers, virtualization, and database applications, with core counts from 8 to 36 and cache up to 180 MB, often with frequencies optimized for efficiency. Key examples are the Xeon Gold 6548Y+ with 32 cores at 2.5 GHz base (up to 4.1 GHz turbo), 250 W TDP, and a $3,726 price, and the Xeon Gold 6554S with 36 cores at 2.2 GHz base (up to 4.0 GHz turbo), 270 W TDP, and a $3,157 price. This tier emphasizes power efficiency.
The Silver models target cost-sensitive environments like branch offices and storage, providing entry-level performance with 8 to 24 cores and lower TDPs for energy-constrained setups. Notable SKUs include the Xeon Silver 4514Y with 16 cores at 2.0 GHz base (up to 3.4 GHz turbo), 150 W TDP, and a $780 price, and the Xeon Silver 4509Y with 8 cores at 2.6 GHz base (up to 4.1 GHz turbo), 125 W TDP, and a $563 price. These deliver reliable throughput for basic compute and networking, starting at under $600 to broaden accessibility. The Bronze tier provides the most affordable entry point for basic server and storage tasks, with limited core counts and features. An example is the Xeon Bronze 3508U with 8 cores (no Hyper-Threading), a 2.1 GHz base clock (up to 2.2 GHz turbo), 22.5 MB of L3 cache, and a 125 W TDP.
Tier | Model Example | Cores/Threads | Base/Turbo Freq (GHz) | L3 Cache (MB) | TDP (W) | List Price (USD)
Platinum | 8592+ | 64/128 | 1.9/3.9 | 320 | 350 | 11,600
Platinum | 8580 | 60/120 | 2.0/4.0 | 300 | 350 | 10,710
Gold | 6548Y+ | 32/64 | 2.5/4.1 | 60 | 250 | 3,726
Gold | 6554S | 36/72 | 2.2/4.0 | 180 | 270 | 3,157
Silver | 4514Y | 16/32 | 2.0/3.4 | 30 | 150 | 780
Silver | 4509Y | 8/16 | 2.6/4.1 | 22.5 | 125 | 563
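As a rough illustration of how the tiers trade price against core count, the representative SKUs above can be compared on list price per core (an illustrative script using figures copied from the table, not Intel data):

```python
# List price per core for the representative SKUs tabulated above.
# Values: (cores, list price in USD).
skus = {
    "Platinum 8592+": (64, 11600),
    "Platinum 8580":  (60, 10710),
    "Gold 6548Y+":    (32, 3726),
    "Gold 6554S":     (36, 3157),
    "Silver 4514Y":   (16, 780),
    "Silver 4509Y":   (8, 563),
}

for name, (cores, price) in skus.items():
    print(f"{name}: ${price / cores:.2f}/core")
```

The spread (under $100 per core for Silver versus well over $150 per core for Platinum) reflects that top-tier pricing also buys larger caches, higher scalability, and integrated accelerators, not just cores.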

Variants and Configurations

The Emerald Rapids processors support multi-socket configurations ranging from dual-processor (2P) to eight-processor (8P) systems, using the Ultra Path Interconnect (UPI) at speeds up to 20 GT/s for low-latency inter-processor communication. This enables scalable deployments, with maximum configurations reaching 512 cores across eight sockets using 64-core models such as the Platinum 8592+. Specialized variants include liquid-cooled options (with high TDP ratings up to 385 W) for environments requiring enhanced thermal management, as well as general-purpose "+" models optimized for broad server workloads. These configurations maintain compatibility with existing LGA 4677 sockets and DDR5-5600 memory subsystems, facilitating upgrades from prior generations without major platform changes. OEM integrations include reference designs from major server vendors such as Hewlett Packard Enterprise (HPE), pairing Emerald Rapids with PCIe Gen5 I/O for AI and data center applications; while Intel Gaudi 3 accelerators are primarily validated with subsequent generations, early ecosystem pairings emphasize GPU acceleration for hybrid workloads. Emerald Rapids serves as a bridge in Intel's roadmap, succeeded by the Granite Rapids architecture in the sixth-generation lineup, which introduces higher core densities and advanced process nodes for continued scalability.

References

  1. https://en.wikichip.org/wiki/intel/mesh_interconnect_architecture