Hopper (microarchitecture)
from Wikipedia
Hopper
Launched: September 20, 2022
Designed by: Nvidia
Manufactured by: TSMC
Fabrication process: TSMC N4
Product series: Server/datacenter
Specifications
L1 cache: 256 KB (per SM)
L2 cache: 50 MB
Memory support: HBM3
PCIe support: PCI Express 5.0
Media engine
Encoder supported: NVENC
History
Predecessor: Ampere
Variant: Ada Lovelace (consumer and professional)
Successor: Blackwell
Image: four Nvidia H100 GPUs

Hopper is a graphics processing unit (GPU) microarchitecture developed by Nvidia. It is designed for datacenters and is used alongside the Lovelace microarchitecture.

Named for computer scientist and United States Navy rear admiral Grace Hopper, the Hopper architecture was leaked in November 2019 and officially revealed in March 2022. It improves upon its predecessors, the Turing and Ampere microarchitectures, featuring a new streaming multiprocessor, a faster memory subsystem, and a transformer acceleration engine.

Architecture


The Nvidia Hopper H100 GPU is implemented using the TSMC N4 process with 80 billion transistors. It consists of up to 144 streaming multiprocessors.[1] Due to the increased memory bandwidth provided by the SXM5 socket, the Nvidia Hopper H100 offers better performance when used in an SXM5 configuration than in the typical PCIe socket.[2]

Streaming multiprocessor


The streaming multiprocessors for Hopper improve upon the Turing and Ampere microarchitectures, although the maximum number of concurrent warps per streaming multiprocessor (SM) remains the same between the Ampere and Hopper architectures at 64.[3] The Hopper architecture provides a Tensor Memory Accelerator (TMA), which supports bidirectional asynchronous memory transfer between shared memory and global memory.[4] Under TMA, applications may transfer up to 5D tensors. When writing from shared memory to global memory, elementwise reduction and bitwise operators may be used, avoiding registers and SM instructions while enabling users to write warp-specialized code. TMA is exposed through cuda::memcpy_async.[5]
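A minimal sketch of the asynchronous-copy pattern that cuda::memcpy_async exposes, assuming a Hopper-class device and a recent CUDA toolkit; the kernel name, tile size, and the compute step are illustrative, and whether the copy is actually lowered to TMA bulk transfers depends on the compiler, alignment, and target architecture.

```cuda
#include <cooperative_groups.h>
#include <cuda/barrier>

namespace cg = cooperative_groups;

// Illustrative kernel: stage a tile of global memory into shared memory
// asynchronously, wait on a block-scoped barrier, then operate on the tile.
__global__ void scale_tile(const float* __restrict__ in, float* __restrict__ out, int tile)
{
    extern __shared__ float smem[];                       // dynamic shared-memory tile
    auto block = cg::this_thread_block();

    __shared__ cuda::barrier<cuda::thread_scope_block> bar;
    if (block.thread_rank() == 0)
        init(&bar, block.size());                         // one arrival per thread
    block.sync();

    // Collective asynchronous copy: global -> shared, completion tracked by the barrier.
    cuda::memcpy_async(block, smem, in + blockIdx.x * tile, sizeof(float) * tile, bar);
    bar.arrive_and_wait();                                // wait until the tile has landed

    for (int i = block.thread_rank(); i < tile; i += block.size())
        out[blockIdx.x * tile + i] = 2.0f * smem[i];      // placeholder compute
}
```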

When parallelizing applications, developers can use thread block clusters. Thread blocks may perform atomics in the shared memory of other thread blocks within their cluster, otherwise known as distributed shared memory. Distributed shared memory may be used by an SM simultaneously with the L2 cache; when used to communicate data between SMs, this can utilize the combined bandwidth of distributed shared memory and L2. The maximum portable cluster size is 8, although the Nvidia Hopper H100 can support a cluster size of 16 by setting the cudaFuncAttributeNonPortableClusterSizeAllowed attribute, potentially at the cost of a reduced number of active blocks.[6] With L2 multicasting and distributed shared memory, the required bandwidth for dynamic random-access memory reads and writes is reduced.[7]
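A hedged sketch of how a thread block cluster might be launched with the non-portable size of 16 on an H100, assuming CUDA 12.x compiled for sm_90; cluster_kernel, its grid dimensions, and the map_shared_rank usage are illustrative rather than a recommended configuration.

```cuda
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

__global__ void cluster_kernel(float* out)
{
    cg::cluster_group cluster = cg::this_cluster();
    __shared__ float smem[256];
    smem[threadIdx.x] = 0.0f;
    cluster.sync();                                   // all blocks in the cluster have arrived

    // Distributed shared memory: a pointer into block 0's shared memory.
    float* peer = cluster.map_shared_rank(smem, 0);
    atomicAdd(&peer[0], 1.0f);                        // atomic into another block's shared memory
    cluster.sync();
    if (cluster.block_rank() == 0 && threadIdx.x == 0)
        out[blockIdx.x] = peer[0];
}

void launch(float* out)
{
    // Opt in to the non-portable cluster size (up to 16 on H100).
    cudaFuncSetAttribute(cluster_kernel,
                         cudaFuncAttributeNonPortableClusterSizeAllowed, 1);

    cudaLaunchAttribute attr{};
    attr.id = cudaLaunchAttributeClusterDimension;
    attr.val.clusterDim.x = 16;                       // 16 blocks per cluster
    attr.val.clusterDim.y = 1;
    attr.val.clusterDim.z = 1;

    cudaLaunchConfig_t cfg{};
    cfg.gridDim  = dim3(1024, 1, 1);                  // must be a multiple of the cluster size
    cfg.blockDim = dim3(256, 1, 1);
    cfg.attrs    = &attr;
    cfg.numAttrs = 1;
    cudaLaunchKernelEx(&cfg, cluster_kernel, out);
}
```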

Hopper features improved single-precision floating-point (FP32) throughput, with twice as many FP32 operations per cycle per SM as its predecessor. Additionally, the Hopper architecture adds support for new instructions useful in algorithms such as Smith–Waterman.[6] Like Ampere, TensorFloat-32 (TF32) arithmetic is supported; the mapping pattern for both architectures is identical.[8]

Memory


The Nvidia Hopper H100 supports HBM3 and HBM2e memory up to 80 GB; the HBM3 memory system supports 3 TB/s, an increase of 50% over the Nvidia Ampere A100's 2 TB/s. Across the architecture, the L2 cache capacity and bandwidth were increased.[9]

Hopper allows CUDA compute kernels to utilize automatic inline compression, including for individual memory allocations, which allows accessing memory at higher bandwidth. This feature does not increase the amount of memory available to the application, because the data (and thus its compressibility) may change at any time. The compressor automatically chooses between several compression algorithms.[9]
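Compressible allocations are requested through the CUDA virtual memory management driver API rather than cudaMalloc. The sketch below, assuming a device that reports generic compression support, shows the general shape of such an allocation; the helper name is hypothetical and error handling is omitted.

```cuda
#include <cuda.h>

// Hypothetical helper: reserve, back, and map a compressible device buffer.
CUdeviceptr alloc_compressible(size_t bytes, int device)
{
    CUmemAllocationProp prop = {};
    prop.type          = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id   = device;
    prop.allocFlags.compressionType = CU_MEM_ALLOCATION_COMP_GENERIC;  // request compression

    size_t gran = 0;
    cuMemGetAllocationGranularity(&gran, &prop, CU_MEM_ALLOC_GRANULARITY_MINIMUM);
    size_t padded = ((bytes + gran - 1) / gran) * gran;

    CUmemGenericAllocationHandle handle;
    cuMemCreate(&handle, padded, &prop, 0);            // physical allocation

    CUdeviceptr ptr = 0;
    cuMemAddressReserve(&ptr, padded, 0, 0, 0);        // virtual address range
    cuMemMap(ptr, padded, 0, handle, 0);               // map physical into virtual

    CUmemAccessDesc access = {};
    access.location = prop.location;
    access.flags    = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    cuMemSetAccess(ptr, padded, &access, 1);           // grant read/write access
    return ptr;
}
```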

The Nvidia Hopper H100 increases the capacity of the combined L1 cache, texture cache, and shared memory to 256 KB. Like its predecessors, it combines L1 and texture caches into a unified cache designed to be a coalescing buffer. The attribute cudaFuncAttributePreferredSharedMemoryCarveout may be used to define the carveout of the L1 cache. Hopper introduces enhancements to NVLink through a new generation with faster overall communication bandwidth.[10]
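A brief sketch of how the carveout hint is set with the CUDA runtime, using a hypothetical kernel my_kernel; the percentage is only a preference that the driver may round to a supported configuration.

```cuda
#include <cuda_runtime.h>

__global__ void my_kernel(float* data)
{
    extern __shared__ float tile[];          // dynamic shared memory
    tile[threadIdx.x] = data[threadIdx.x];
    __syncthreads();
    data[threadIdx.x] = tile[threadIdx.x] * 0.5f;
}

void configure_and_launch(float* data)
{
    // Prefer to dedicate roughly half of the 256 KB unified L1/shared storage to shared memory.
    cudaFuncSetAttribute(my_kernel, cudaFuncAttributePreferredSharedMemoryCarveout, 50);

    // Kernels needing more than the default 48 KB of dynamic shared memory must also opt in.
    cudaFuncSetAttribute(my_kernel, cudaFuncAttributeMaxDynamicSharedMemorySize, 96 * 1024);

    my_kernel<<<1, 256, 96 * 1024>>>(data);
}
```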

Memory synchronization domains


Some CUDA applications may experience interference when performing fence or flush operations due to memory ordering. Because the GPU cannot know which writes are guaranteed to be visible and which are visible only by chance timing, it may wait on unnecessary memory operations, thus slowing down fence or flush operations. For example, when a kernel performs computations in GPU memory and a parallel kernel performs communication with a peer, the local kernel will flush its writes, resulting in slower NVLink or PCIe writes. In the Hopper architecture, memory synchronization domains allow the GPU to narrow the set of writes that a fence operation must wait on, speeding up these operations.[11]
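A hedged sketch of how a kernel could be assigned to the remote memory-synchronization domain through launch attributes, assuming the CUDA 12 runtime API; comm_kernel and its launch configuration are illustrative.

```cuda
#include <cuda_runtime.h>

__global__ void comm_kernel(int* peer_buf)           // e.g. writes destined for a peer GPU
{
    peer_buf[threadIdx.x] = threadIdx.x;
}

void launch_in_remote_domain(int* peer_buf)
{
    cudaLaunchAttribute attr{};
    attr.id = cudaLaunchAttributeMemSyncDomain;
    attr.val.memSyncDomain = cudaLaunchMemSyncDomainRemote;  // isolate remote traffic

    cudaLaunchConfig_t cfg{};
    cfg.gridDim  = dim3(1);
    cfg.blockDim = dim3(256);
    cfg.attrs    = &attr;
    cfg.numAttrs = 1;

    // Fences issued by kernels in the default domain no longer have to wait
    // on this kernel's outstanding remote writes.
    cudaLaunchKernelEx(&cfg, comm_kernel, peer_buf);
}
```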

DPX instructions


The Hopper architecture math application programming interface (API) exposes functions in the SM such as __viaddmin_s16x2_relu, which performs the per-halfword operation max(min(a + b, c), 0), combining an addition, a minimum, and a clamp to zero. For the Smith–Waterman algorithm, __vimax3_s16x2_relu can be used, a three-way maximum followed by a clamp to zero.[12] Similarly, Hopper speeds up implementations of the Needleman–Wunsch algorithm.[13]
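A short sketch of the packed 16-bit DPX intrinsics named above, assuming the CUDA math API convention of three packed unsigned 32-bit operands, each holding two signed 16-bit lanes; the kernel and data layout are illustrative of a Smith–Waterman-style inner step rather than a full implementation.

```cuda
#include <cuda_runtime.h>

// Each unsigned int packs two signed 16-bit scores (s16x2).
__global__ void dpx_demo(const unsigned* a, const unsigned* b,
                         const unsigned* c, unsigned* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Per-halfword max(min(a + b, c), 0): add, cap at c, clamp negatives to zero.
    unsigned capped = __viaddmin_s16x2_relu(a[i], b[i], c[i]);

    // Per-halfword three-way maximum followed by a clamp to zero.
    out[i] = __vimax3_s16x2_relu(capped, b[i], c[i]);
}
```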

Transformer engine


The Hopper architecture was the first Nvidia architecture to implement the transformer engine.[14] The transformer engine accelerates computations by dynamically reducing them from higher numerical precisions (e.g., FP16) to lower precisions that are faster to perform (e.g., FP8) when the loss in precision is deemed acceptable.[14] The transformer engine can also dynamically allocate bits in the chosen precision to either the mantissa or the exponent at runtime to maximize precision.[5]

Power efficiency


The SXM5 form factor H100 has a thermal design power (TDP) of 700 watts. Owing to its greater asynchrony, the Hopper architecture can attain high degrees of utilization and thus better performance per watt.[15]

Grace Hopper

Grace Hopper GH200
Designed by: Nvidia
Manufactured by: TSMC
Fabrication process: TSMC 4N
Codename: Grace Hopper
Specifications
Compute: GPU: 132 Hopper SMs; CPU: 72 Neoverse V2 cores
Shader clock rate: 1980 MHz
Memory support: GPU: 96 GB HBM3 or 144 GB HBM3e; CPU: 480 GB LPDDR5X

The GH200 combines a Hopper-based H100 GPU with a Grace-based 72-core CPU on a single module. The total power draw of the module is up to 1000 W. CPU and GPU are connected via NVLink, which provides memory coherence between CPU and GPU memory.[16]

History


In November 2019, a well-known Twitter account posted a tweet revealing that the next architecture after Ampere would be called Hopper, named after computer scientist and United States Navy rear admiral Grace Hopper, one of the first programmers of the Harvard Mark I. The account stated that Hopper would be based on a multi-chip module design, which would result in a yield gain with lower wastage.[17]

During the March 2022 Nvidia GTC, Nvidia announced Hopper.[18]

In late 2022, due to US regulations limiting the export of chips to the People's Republic of China, Nvidia adapted the H100 chip to the Chinese market with the H800. This model has lower bandwidth compared to the original H100 model.[19][20] In late 2023, the US government announced new restrictions on the export of AI chips to China, including the A800 and H800 models.[21] This led to Nvidia creating another chip based on the Hopper microarchitecture: the H20, a modified version of the H100. The H20 had become the most prominent chip in the Chinese market as of 2025.[22]

By 2023, during the AI boom, H100s were in great demand. Larry Ellison of Oracle Corporation said that year that at a dinner with Nvidia CEO Jensen Huang, he and Elon Musk of Tesla, Inc. and xAI "were begging" for H100s, "I guess is the best way to describe it. An hour of sushi and begging".[23]

In January 2024, Raymond James Financial analysts estimated that Nvidia was selling the H100 GPU in the price range of $25,000 to $30,000 each, while on eBay, individual H100s cost over $40,000.[24] As of February 2024, Nvidia was reportedly shipping H100 GPUs to data centers in armored cars.[25]

H100 accelerator and DGX H100


Comparison of accelerators used in DGX:[26][27][28]

Model | Architecture | Socket | FP32 CUDA cores | FP64 cores (excl. tensor) | Mixed INT32/FP32 cores | INT32 cores | Boost clock | Memory clock | Memory bus width | Memory bandwidth | VRAM | Single precision (FP32) | Double precision (FP64) | INT8 (non-tensor) | INT8 dense tensor | INT32 | FP4 dense tensor | FP16 | FP16 dense tensor | bfloat16 dense tensor | TensorFloat-32 (TF32) dense tensor | FP64 dense tensor | Interconnect (NVLink) | GPU | L1 cache | L2 cache | TDP | Die size | Transistor count | Process | Launched
P100 | Pascal | SXM/SXM2 | 3584 | 1792 | N/A | N/A | 1480 MHz | 1.4 Gbit/s HBM2 | 4096-bit | 720 GB/sec | 16 GB HBM2 | 10.6 TFLOPS | 5.3 TFLOPS | N/A | N/A | N/A | N/A | 21.2 TFLOPS | N/A | N/A | N/A | N/A | 160 GB/sec | GP100 | 1344 KB (24 KB × 56) | 4096 KB | 300 W | 610 mm2 | 15.3 B | TSMC 16FF+ | Q2 2016
V100 16GB | Volta | SXM2 | 5120 | 2560 | N/A | 5120 | 1530 MHz | 1.75 Gbit/s HBM2 | 4096-bit | 900 GB/sec | 16 GB HBM2 | 15.7 TFLOPS | 7.8 TFLOPS | 62 TOPS | N/A | 15.7 TOPS | N/A | 31.4 TFLOPS | 125 TFLOPS | N/A | N/A | N/A | 300 GB/sec | GV100 | 10240 KB (128 KB × 80) | 6144 KB | 300 W | 815 mm2 | 21.1 B | TSMC 12FFN | Q3 2017
V100 32GB | Volta | SXM3 | 5120 | 2560 | N/A | 5120 | 1530 MHz | 1.75 Gbit/s HBM2 | 4096-bit | 900 GB/sec | 32 GB HBM2 | 15.7 TFLOPS | 7.8 TFLOPS | 62 TOPS | N/A | 15.7 TOPS | N/A | 31.4 TFLOPS | 125 TFLOPS | N/A | N/A | N/A | 300 GB/sec | GV100 | 10240 KB (128 KB × 80) | 6144 KB | 350 W | 815 mm2 | 21.1 B | TSMC 12FFN | —
A100 40GB | Ampere | SXM4 | 6912 | 3456 | 6912 | N/A | 1410 MHz | 2.4 Gbit/s HBM2 | 5120-bit | 1.52 TB/sec | 40 GB HBM2 | 19.5 TFLOPS | 9.7 TFLOPS | N/A | 624 TOPS | 19.5 TOPS | N/A | 78 TFLOPS | 312 TFLOPS | 312 TFLOPS | 156 TFLOPS | 19.5 TFLOPS | 600 GB/sec | GA100 | 20736 KB (192 KB × 108) | 40960 KB | 400 W | 826 mm2 | 54.2 B | TSMC N7 | Q1 2020
A100 80GB | Ampere | SXM4 | 6912 | 3456 | 6912 | N/A | 1410 MHz | 3.2 Gbit/s HBM2e | 5120-bit | 1.52 TB/sec | 80 GB HBM2e | 19.5 TFLOPS | 9.7 TFLOPS | N/A | 624 TOPS | 19.5 TOPS | N/A | 78 TFLOPS | 312 TFLOPS | 312 TFLOPS | 156 TFLOPS | 19.5 TFLOPS | 600 GB/sec | GA100 | 20736 KB (192 KB × 108) | 40960 KB | 400 W | 826 mm2 | 54.2 B | TSMC N7 | —
H100 | Hopper | SXM5 | 16896 | 4608 | 16896 | N/A | 1980 MHz | 5.2 Gbit/s HBM3 | 5120-bit | 3.35 TB/sec | 80 GB HBM3 | 67 TFLOPS | 34 TFLOPS | N/A | 1.98 POPS | N/A | N/A | N/A | 990 TFLOPS | 990 TFLOPS | 495 TFLOPS | 67 TFLOPS | 900 GB/sec | GH100 | 25344 KB (192 KB × 132) | 51200 KB | 700 W | 814 mm2 | 80 B | TSMC 4N | Q3 2022
H200 | Hopper | SXM5 | 16896 | 4608 | 16896 | N/A | 1980 MHz | 6.3 Gbit/s HBM3e | 6144-bit | 4.8 TB/sec | 141 GB HBM3e | 67 TFLOPS | 34 TFLOPS | N/A | 1.98 POPS | N/A | N/A | N/A | 990 TFLOPS | 990 TFLOPS | 495 TFLOPS | 67 TFLOPS | 900 GB/sec | GH100 | 25344 KB (192 KB × 132) | 51200 KB | 1000 W | 814 mm2 | 80 B | TSMC 4N | Q3 2023
B100 | Blackwell | SXM6 | N/A | N/A | N/A | N/A | N/A | 8 Gbit/s HBM3e | 8192-bit | 8 TB/sec | 192 GB HBM3e | N/A | N/A | N/A | 3.5 POPS | N/A | 7 PFLOPS | N/A | 1.98 PFLOPS | 1.98 PFLOPS | 989 TFLOPS | 30 TFLOPS | 1.8 TB/sec | GB100 | N/A | N/A | 700 W | N/A | 208 B | TSMC 4NP | Q4 2024
B200 | Blackwell | SXM6 | N/A | N/A | N/A | N/A | N/A | 8 Gbit/s HBM3e | 8192-bit | 8 TB/sec | 192 GB HBM3e | N/A | N/A | N/A | 4.5 POPS | N/A | 9 PFLOPS | N/A | 2.25 PFLOPS | 2.25 PFLOPS | 1.2 PFLOPS | 40 TFLOPS | 1.8 TB/sec | GB100 | N/A | N/A | 1000 W | N/A | 208 B | TSMC 4NP | —

Export controls and international trade issues


In early 2026, Nvidia’s Hopper-based H200 AI accelerator became a focal point in international trade disputes involving U.S. export policy and Chinese import controls. Although the U.S. government approved the limited export of H200 chips to China under specific security conditions, reports indicated that Chinese customs officials prevented shipments of the processors from entering the country despite the U.S. clearance, leading suppliers to pause production of H200 components amid uncertainty over the import block. Chinese authorities reportedly instructed domestic firms against purchasing the chips unless necessary, though no formal ban was publicly announced and the long-term status of the restrictions remained unclear. The situation highlighted the geopolitical sensitivities surrounding advanced AI hardware exports and the complex interplay between U.S. export regulations and Chinese import policies.[29]

References


from Grokipedia

Hopper is a graphics processing unit (GPU) microarchitecture developed by Nvidia for datacenter computing, succeeding the Ampere architecture and debuting in the H100 Tensor Core GPU. Named after pioneering computer scientist Grace Hopper, the architecture was announced on March 22, 2022, and emphasizes accelerated computing for artificial intelligence, high-performance computing, and data analytics rather than consumer graphics.
The Hopper microarchitecture introduces the Transformer Engine, which combines fourth-generation Tensor Cores with support for FP8 precision to deliver up to 9x faster AI training compared to prior generations, alongside dynamic precision scaling for mixed-precision workloads. Fabricated using TSMC's 4N process with over 80 billion transistors, Hopper GPUs like the H100 enable terabyte-scale accelerated computing through innovations such as confidential computing for secure AI processing and enhanced interconnects for multi-GPU scalability. These advancements position Hopper as a foundational technology for large-scale language models and scientific simulations, powering systems like the Grace Hopper Superchip.

Overview

Design Goals and Innovations

The NVIDIA Hopper microarchitecture was designed to deliver transformative performance for large-scale AI training and inference, particularly for trillion-parameter models, while enabling exascale high-performance computing (HPC) workloads with enhanced security and scalability. Key objectives included achieving up to 30x faster inference on large language models compared to the prior Ampere architecture (A100) through optimizations for transformer-based neural networks, and providing order-of-magnitude improvements in compute throughput, up to 6x overall, via advanced precision formats and interconnects. These goals addressed the escalating demands of AI supercomputing, targeting secure scaling from enterprise deployments to massive clusters supporting 1 exaFLOP of FP8 sparse AI compute across up to 256 GPUs.

Central to Hopper's innovations is the Transformer Engine, an extension of Tensor Core technology that dynamically mixes FP8 and FP16 precisions to accelerate AI model training by up to 9x and inference by up to 30x over A100, while tripling FLOPS performance in formats like TF32, FP64, FP16, and INT8. FP8 precision halves memory requirements and doubles throughput per streaming multiprocessor (SM) compared to FP16, enabling efficient handling of the precision needs of modern neural networks without accuracy loss. Complementing this, DPX instructions optimize dynamic programming algorithms, such as those in bioinformatics (e.g., Smith–Waterman), delivering up to 7x speedup over Ampere GPUs and 40x over dual-socket CPU servers. The architecture also introduces confidential computing via hardware-enforced memory encryption and a secure Tensor Memory Accelerator (TMA), marking the first GPU platform to protect data and models during computation against insider threats or compromised software.

Hopper incorporates TSMC's 4N process node, packing over 80 billion transistors into an 814 mm² die, paired with HBM3 memory offering 3 TB/s bandwidth, double that of A100, and a 50 MB L2 cache for improved data locality. NVLink 4 provides 900 GB/s bidirectional GPU-to-GPU bandwidth (7x PCIe Gen 5), with NVSwitch enabling 57.6 TB/s all-to-all communication in large-scale systems like DGX GH200, facilitating strong scaling and reduced latencies for distributed training. Second-generation Multi-Instance GPU (MIG) supports up to 7 isolated instances per GPU with dedicated compute and memory resources, enhancing workload isolation and efficiency in multi-tenant environments. These features collectively prioritize architectural efficiency, simplifying programming while minimizing overheads for mainstream AI and HPC applications.

Position in NVIDIA's Architecture Lineage

Hopper represents the successor to NVIDIA's Ampere microarchitecture in the company's datacenter GPU lineage, building directly on the compute-oriented advancements introduced with the A100 in May 2020. Whereas Ampere emphasized sparse tensor operations and third-generation Tensor Cores for mixed-precision AI workloads, Hopper refines these elements with fourth-generation Tensor Cores and the introduction of the Transformer Engine, which dynamically scales precision from FP8 to FP16 to optimize transformer model performance without accuracy loss. This progression underscores NVIDIA's shift from graphics-centric designs in earlier architectures like Fermi (2010) and Kepler (2012) toward specialized accelerators for high-performance computing (HPC) and artificial intelligence, as evidenced by Hopper's deployment in systems targeting exascale supercomputing.

Positioned as a datacenter-exclusive architecture, unlike the hybrid consumer/datacenter Turing (2018) and Ampere (2020), Hopper powers the H100 Tensor Core GPU, announced on March 22, 2022, and focuses on multi-instance GPU (MIG) partitioning for secure workload isolation, NVLink 4.0 interconnects for enhanced multi-GPU scaling, and confidential computing features via hardware-rooted trust. It sits between Ampere and the subsequent Blackwell architecture, bringing bandwidth improvements with HBM3 memory; Blackwell, unveiled in March 2024, further scales to support trillion-parameter models with dual-die designs and fifth-generation Tensor Cores. Hopper's emphasis on AI training efficiency, delivering up to 9x performance over A100 on large language models, solidified NVIDIA's lead in accelerated computing amid surging demand for generative AI infrastructure.

In the broader evolutionary context, Hopper continues the post-Volta (2017) trend of privileging tensor-accelerated matrix math over traditional rasterization, with architectures named after computing pioneers: Volta (V), Ampere (A), Hopper (H), and Blackwell (B). This lineage prioritizes causal factors like scaling limits and workload-specific bottlenecks, such as memory bandwidth and precision trade-offs, over the general-purpose versatility seen in Maxwell (2014) or Pascal (2016). Empirical benchmarks confirm Hopper's advancements, with the H100 achieving 4 petaFLOPS of FP8 throughput per GPU, enabling H100-based systems to reach exascale AI performance in 2022.

Development History

Origins and Announcement

The Hopper microarchitecture derives its name from Grace Hopper, a pioneering U.S. computer scientist and rear admiral known for her contributions to programming languages and early computing systems. NVIDIA developed Hopper as the successor to its Ampere architecture to advance capabilities in accelerated computing, particularly for artificial intelligence and high-performance computing applications requiring enhanced tensor processing and memory efficiency. NVIDIA formally announced the Hopper microarchitecture on March 22, 2022, during its GPU Technology Conference (GTC), positioning it as the foundation for its next generation of datacenter GPUs. The announcement highlighted the H100 Tensor Core GPU, fabricated on TSMC's 4N process with over 80 billion transistors, as the inaugural product embodying Hopper's innovations for AI training and inference. This reveal came two years after Ampere's launch, underscoring NVIDIA's accelerated cadence in architecture iterations driven by demand for scalable AI infrastructure.

Engineering Milestones and Production

NVIDIA revealed the Hopper microarchitecture on March 22, 2022, during its GTC keynote, highlighting its design for accelerating large-scale AI and high-performance computing workloads. The architecture powers the GH100 GPU die, fabricated on TSMC's custom 4N process node with a die area of 814 mm² and 80 billion transistors, achieving unprecedented density for datacenter GPUs. A pivotal milestone was the successful implementation of fourth-generation Tensor Cores integrated with the Transformer Engine, enabling dynamic precision management that delivers up to 6x faster AI training compared to prior architectures. This was complemented by the introduction of HBM3 memory support, providing 3 TB/s bandwidth in the H100 SXM variant, marking the first GPU to utilize this high-speed memory standard. The H100 Tensor Core GPU entered full production on September 20, 2022, with initial shipments commencing in October 2022 to enable partner systems and services. Production ramped amid high demand, with NVIDIA anticipating shipment of around 550,000 units in 2023 to meet AI infrastructure needs. Variants include the SXM5 module with 132 streaming multiprocessors and the PCIe card with 114, supporting scalable deployments in DGX systems starting in Q3 2022.

Architectural Components

Streaming Multiprocessor

The Streaming Multiprocessor (SM) in the Hopper microarchitecture serves as the fundamental processing unit, executing parallel thread workloads through arrays of CUDA cores, Tensor Cores, and associated scheduling hardware. Each SM contains 128 FP32 CUDA cores and 4 fourth-generation Tensor Cores, enabling high-throughput scalar and matrix operations. This configuration delivers 2x the clock-for-clock FP64 and FP32 performance per SM compared to the Ampere architecture's SMs, achieved through architectural refinements in execution pipelines and instruction throughput. Hopper SMs introduce independent thread scheduling via thread block clusters, allowing concurrent execution across multiple SMs with hardware-accelerated synchronization barriers that reduce software overhead for cooperative workloads. Distributed shared memory supports direct inter-SM communication within graphics processing clusters (GPCs), minimizing cache round-trips and enhancing multi-instance GPU (MIG) partitioning efficiency by up to 3x in compute density over Ampere. Each SM allocates 256 KB of combined L1 cache and shared memory, 1.33x larger than Ampere's 192 KB, configurable in increments up to 228 KB for flexible workload optimization. Tensor Cores in Hopper SMs support FP8 precision with E4M3 and E5M2 formats, doubling matrix-multiply-accumulate (MMA) throughput relative to FP16/BF16 while halving storage requirements, yielding up to 4x overall rates versus Ampere when sparsity acceleration is applied. The Tensor Memory Accelerator (TMA) integrates into SMs for asynchronous, descriptor-driven data transfers between global memory and shared memory, overlapping compute with memory operations to boost efficiency in large-model training. Additionally, DPX instructions accelerate dynamic programming algorithms, such as Smith–Waterman for sequence alignment, providing up to 7x speedup over Ampere implementations by leveraging dedicated SM hardware paths.

Tensor Cores and Transformer Engine

The fourth-generation Tensor Cores in the Hopper microarchitecture represent an evolution from those in the Ampere architecture, delivering double the raw dense and sparse matrix multiply-accumulate (MMA) throughput per streaming multiprocessor (SM) at equivalent clock speeds. These cores support a range of precisions, including FP64 for high-precision scientific computing, TF32 and FP16/BF16 for AI training, INT8 for inference, and the newly introduced FP8 formats (E4M3 and E5M2) that halve storage while doubling computational throughput relative to FP16. Hopper Tensor Cores also incorporate sparsity acceleration, which exploits structured sparsity in neural networks to achieve up to double the effective performance on compatible workloads. In the H100 GPU, this enables peak FP8 performance of 2000 TFLOPS (scaling to 4000 TFLOPS with sparsity) in the SXM5 variant.

The Transformer Engine integrates these Tensor Cores with specialized software libraries to optimize transformer-based models, which dominate large-scale AI and natural language processing. Introduced as part of Hopper at GTC 2022, it enables dynamic per-layer precision selection, switching between FP8 for compute-intensive operations and higher-precision formats like FP16 to preserve model accuracy, via automated scaling and statistical analysis during forward and backward passes. This hardware-software approach leverages FP8's efficiency without requiring explicit format conversions, reducing memory usage and enabling faster processing of trillion-parameter models. For instance, it supports FP8 on Hopper GPUs to accelerate workloads through the Transformer Engine library, which handles mixed-precision kernels for both training and inference.

Performance benchmarks demonstrate substantial gains: Hopper with the Transformer Engine achieves up to 6x higher AI training throughput without accuracy loss compared to Ampere-based systems, reducing training times for a 395-billion-parameter mixture-of-experts model from 7 days to 20 hours on equivalent hardware. Inference throughput improves by up to 30x for large language models such as Megatron-Turing NLG 530B versus the A100, while maintaining low latency (e.g., 1 second). These advancements triple overall FLOPS rates for TF32, FP16, and related formats relative to the prior generation, positioning Hopper for exascale AI and HPC applications.

Memory System and Bandwidth

The Hopper microarchitecture integrates a high-bandwidth memory subsystem centered on stacked high-bandwidth memory (HBM), with HBM3 employed in premium configurations for superior throughput and HBM2e in cost-optimized variants. In the H100 SXM5 implementation, this comprises 80 GB of HBM3 across five memory stacks, achieving 3 TB/s peak bandwidth, twice the 1.5 TB/s of the Ampere A100's HBM2e, via a widened interface and higher clock rates. The H100 PCIe variant, by contrast, utilizes 80 GB HBM2e with five stacks and 2 TB/s bandwidth to balance performance with PCIe form factor constraints. To mitigate latency from off-chip HBM accesses, Hopper features a 50 MB L2 cache, partitioned across memory partitions for concurrent read/write operations and a 25% capacity increase over Ampere's 40 MB design, thereby caching larger working sets and reducing main memory traffic. At the SM level, each multiprocessor allocates 256 KB for unified L1 cache and configurable shared memory, 33% more than Ampere's 192 KB, supporting finer-grained data reuse in compute-intensive workloads. Reliability enhancements include ECC protection via sideband ECC mechanisms and dynamic memory row remapping to isolate faulty cells without downtime. The Tensor Memory Accelerator (TMA) further optimizes bandwidth utilization by enabling asynchronous, descriptor-driven transfers between HBM and shared memory, minimizing CPU intervention and overlapping memory operations with computation. These elements collectively prioritize sustained high-bandwidth delivery for AI and HPC workloads, where memory bottlenecks often constrain scaling.

Specialized Instructions and Features

The Hopper microarchitecture introduces DPX instructions optimized for dynamic programming algorithms, which perform fused add-min/max operations to accelerate tasks such as DNA sequence alignment via Smith–Waterman and robot path planning. These instructions deliver up to 7x speedup over Ampere-based GPUs and 40x over dual-socket CPU servers for such workloads. Hopper's fourth-generation Tensor Cores support FP8 precision with E4M3 and E5M2 formats, enabling 4x higher matrix-multiply-accumulate throughput compared to 16-bit formats in prior architectures while halving storage requirements. This is integrated into the Transformer Engine, a hardware-software system that dynamically scales precision between FP8 and FP16 for transformer-based models, yielding up to 9x faster AI training and 30x faster inference relative to A100 GPUs on large language models. The Tensor Memory Accelerator (TMA) provides asynchronous, descriptor-driven transfers of 1D to 5D tensors between global and shared memory, minimizing thread launch overhead and supporting diverse layouts like interleaved or planar formats. TMA enhances efficiency in tensor-heavy operations by decoupling data movement from compute kernels, with bi-directional capabilities and integration via async APIs. Additional features include thread block clusters, which extend CUDA cooperation to multiple streaming multiprocessors for finer-grained synchronization and data sharing across up to 8 SMs. Hopper also incorporates hardware support for confidential computing through a secure root of trust and memory encryption, enabling isolated GPU partitions via second-generation Multi-Instance GPU (MIG).

Products and Implementations

H100 Tensor Core GPU Variants

The NVIDIA H100 Tensor Core GPU is produced in multiple variants optimized for distinct environments, differing primarily in form factor, memory configuration, power envelope, interconnect capabilities, and performance tuning. The SXM5 variant targets high-performance multi-GPU configurations in specialized systems like NVIDIA's HGX and DGX platforms, emphasizing maximum compute density and NVLink scaling for training workloads. In contrast, the PCIe variant provides broader compatibility with standard server architectures via PCIe interfaces, while the NVL variant, also PCIe-based, prioritizes inference tasks with enhanced memory capacity and bandwidth for handling large language models.

The H100 SXM5 employs an SXM socketed module form factor, typically integrated into liquid-cooled or high-density air-cooled chassis, with 80 GB of HBM3 memory delivering 3.35 TB/s bandwidth. It supports a TDP of up to 700 W, enabling peak performance metrics such as 67 TFLOPS in FP64 Tensor Core operations and 989 TFLOPS in TF32 Tensor Core operations, facilitated by 900 GB/s bidirectional NVLink interconnects for eight-way GPU scaling. This variant is designed for exascale HPC and AI training, where sustained high power allows elevated clock speeds and transistor utilization across its 80 billion transistors.

The H100 PCIe variant uses a standard dual-slot PCIe Gen 5 x16 card, suitable for off-the-shelf servers, with 80 GB HBM2e memory at 2.0 TB/s bandwidth and a 350 W TDP. It achieves slightly lower peak throughput, such as 60 TFLOPS FP64 Tensor Core and around 835 TFLOPS TF32 Tensor Core, with 600 GB/s NVLink support for moderate multi-GPU connectivity. This configuration trades some performance for easier deployment and lower power requirements, making it viable for hybrid AI/HPC setups without custom interconnects.

The H100 NVL variant, a dual-slot PCIe Gen 5 x16 card with NVLink bridge connectivity, features 94 GB HBM3 per GPU at 3.9 TB/s bandwidth, addressing memory-intensive inference for models like Llama 2 70B. Its TDP ranges from 200 W minimum to over 400 W maximum (configurable to 310 W or higher modes), yielding 60 TFLOPS FP64 Tensor Core and 835 TFLOPS TF32 Tensor Core performance, with 600 GB/s NVLink bridges for intra-card or elastic scaling. Optimized for inference at scale, it incorporates higher-density HBM3 stacks and supports PCIe fallback to Gen 4 x16 or Gen 5 x8, differentiating it from the standard PCIe variant by prioritizing bandwidth and capacity over training-oriented power scaling.
Variant | Form factor | Memory | Bandwidth | TDP | NVLink bandwidth | Key use case
H100 SXM5 | SXM module | 80 GB HBM3 | 3.35 TB/s | Up to 700 W | 900 GB/s | AI training, HPC scaling
H100 PCIe | PCIe x16 | 80 GB HBM2e | 2.0 TB/s | 350 W | 600 GB/s | General servers
H100 NVL | PCIe x16 | 94 GB HBM3 | 3.9 TB/s | 200–400+ W | 600 GB/s | LLM inference
These variants share the core Hopper microarchitecture, including fourth-generation Tensor Cores and the Transformer Engine, but tuning for thermal and power constraints results in measurable performance variances under full load.

Grace Hopper Superchip

The Grace Hopper Superchip (GH200) integrates the Arm-based NVIDIA Grace CPU with an NVIDIA Hopper GPU via a high-bandwidth NVLink chip-to-chip (C2C) interconnect, enabling unified memory access and low-latency data transfer for AI and HPC workloads. This design provides up to 900 GB/s of bidirectional bandwidth between the CPU and GPU, surpassing traditional PCIe connections and reducing data movement overhead by allowing direct GPU access to CPU memory. The Grace CPU component features 72 Neoverse V2 cores based on the Armv9 architecture, supporting up to 480 GB of LPDDR5X memory with error-correcting code (ECC) for reliability in data-intensive tasks. The integrated Hopper GPU includes variants with 96 GB of HBM3 memory or up to 144 GB of HBM3e memory, delivering enhanced bandwidth of up to 10 TB/s in next-generation configurations for accelerating large-scale generative AI models and scientific simulations. Announced as part of NVIDIA's accelerated computing roadmap in 2022, the Superchip entered full production on May 28, 2023, powering systems for complex AI training and HPC applications with coherent memory sharing that eliminates the need for explicit data copies between CPU and GPU address spaces. This integration supports hardware-level unified addressing and page tables, facilitating seamless workload orchestration in environments requiring massive scalability, such as trillion-parameter AI models.

Integrated Systems and Platforms

The NVIDIA HGX H100 platform integrates up to eight H100 Tensor Core GPUs interconnected via fourth-generation NVLink, providing a unified memory domain with aggregate bandwidth exceeding 3 terabytes per second for AI and high-performance computing workloads. Announced on April 21, 2022, HGX H100 serves as a modular building block for server manufacturers, enabling scalable GPU clusters optimized for large language model training and inference. The NVIDIA DGX H100 system builds on HGX by incorporating eight H100 GPUs with dual Intel Xeon Platinum processors, up to 2 terabytes of system memory, and NVIDIA BlueField-3 data processing units for enhanced networking and security. Designed as a turnkey AI factory, DGX H100 delivers over 32 petaFLOPS of FP8 AI performance and supports NVIDIA AI Enterprise software for end-to-end workflows from data preparation to deployment. It forms the core of DGX SuperPOD configurations, which scale to hundreds of GPUs via NVSwitch fabrics for exascale AI training.

Platforms leveraging the Grace Hopper Superchip, such as NVIDIA MGX, combine the GH200's Grace CPU and Hopper GPU via NVLink-C2C for up to 900 gigabytes per second of interconnect bandwidth, targeting HPC simulations and trillion-parameter AI models. Deployments include the Venado supercomputer at Los Alamos National Laboratory, featuring GH200 nodes for AI research and ranking as the 19th-fastest system globally as of August 2025. Large-scale implementations extend to custom clusters, exemplified by xAI's Colossus supercomputer, which interconnects 100,000 Hopper GPUs using Ethernet networking to achieve unprecedented AI training scale, operational as of October 2024. These systems emphasize Hopper's multi-instance GPU partitioning and confidential computing features for secure, efficient resource utilization across enterprise and research environments.

Performance and Efficiency

Compute Throughput and Benchmarks

The Hopper microarchitecture in the H100 SXM GPU delivers peak FP64 Tensor Core performance of 67 teraFLOPS and FP32 performance of 67 teraFLOPS, representing a tripling of double-precision Tensor Core throughput compared to the prior Ampere architecture. Fourth-generation Tensor Cores enable significantly higher rates in reduced-precision formats optimized for AI workloads, including up to 989 teraFLOPS in TF32, 1,979 teraFLOPS in FP16 and BF16, and 3,958 teraFLOPS in FP8, with these figures incorporating structured sparsity acceleration for compatible sparse matrix operations. Integer operations reach 3,958 TOPS in INT8 via Tensor Cores.
Precision | Peak performance (H100 SXM)
FP64 Tensor Core | 67 TFLOPS
FP32 | 67 TFLOPS
TF32 Tensor Core | 989 TFLOPS (sparse)
FP16/BF16 Tensor Core | 1,979 TFLOPS (sparse)
FP8 Tensor Core | 3,958 TFLOPS (sparse)
INT8 Tensor Core | 3,958 TOPS (sparse)
In MLPerf Training v2.1 benchmark results released in June 2023, systems with 432 H100 GPUs completed a GPT-3 175B training task in a record 49 seconds, demonstrating near-linear scaling efficiency. H100-based platforms set records across all eight MLPerf tests, including new generative AI workloads, with up to 4x faster training for large language models like GPT-3 compared to A100 systems. In inference benchmarks, H100 configurations achieved up to 30x higher throughput for large language model inference, such as Megatron-Turing NLG 530B. For HPC-specific MLPerf benchmarks, H100 GPUs delivered up to twice the performance of A100 in AI-accelerated simulations like CosmoFlow. These results underscore Hopper's efficacy in both AI training/inference and hybrid HPC workloads, though real-world throughput varies with precision, model size, and software optimization.

Power Consumption and Optimization

The H100 GPU, implementing the Hopper microarchitecture, has a thermal design power (TDP) of 700 W for the SXM5 variant and 350 W for the PCIe variant, compared to 400 W for the Ampere-based A100. TDP is configurable in certain models, such as the H100 NVL at 350–400 W for power-constrained deployments. Hopper enhances power efficiency through process and architectural advancements. Built on TSMC's 4N process node, it delivers superior performance per watt relative to Ampere's 7 nm node. Fourth-generation Tensor Cores reduce operand delivery power by up to 30%, while HBM3 at 3 TB/s bandwidth and a 50 MB L2 cache, 25% larger than the A100's, minimize energy-intensive accesses by cutting trips to off-chip storage. The Transformer Engine optimizes for transformer-based AI models via FP8 precision support, achieving up to 4x faster training on benchmarks like GPT-3 175B over prior generations and up to 9x training or 30x inference speedups versus A100, yielding higher effective FLOPS per watt in low-precision formats. These features enable Hopper to provide up to 6x overall compute performance gains over Ampere, balancing increased capability with targeted efficiency improvements for AI and HPC workloads.

Comparisons to Ampere and Blackwell

Hopper introduced substantial enhancements over Ampere in Tensor Core capabilities and precision support, enabling up to 6x higher performance in MLPerf training benchmarks for transformer workloads on H100 GPUs compared to A100 GPUs. The architecture features fourth-generation Tensor Cores with native FP8 precision and dynamic scaling for the Transformer Engine, which accelerates transformer model training by handling mixed-precision computations more efficiently than Ampere's third-generation Tensor Cores, which lacked FP8 support and relied on FP16/bfloat16 with sparsity acceleration. Hopper also improves multi-instance GPU (MIG) partitioning with second-generation support, allowing finer-grained isolation than Ampere's first-generation MIG, and enhances cache hierarchies with larger L1/L2 caches to reduce memory latency in data-intensive tasks. In terms of compute throughput, Hopper's streaming multiprocessors deliver approximately 3.2x the dense tensor performance per core over Ampere despite only a 22% increase in core count, driven by architectural optimizations like improved asynchronous execution and better overlap of compute with data movement via DPX instructions. Power efficiency sees gains through these features, with the H100 achieving higher FLOPS per watt in AI training due to reduced precision overhead, though Ampere remains viable for legacy FP64-heavy HPC tasks where Hopper maintains parity but excels in hybrid workloads.

Blackwell builds on Hopper with a dual-chiplet design connecting two GPU dies via a high-bandwidth die-to-die interface, scaling to 208 billion transistors in GB200 configurations and delivering up to 20 petaFLOPS in FP4/FP8 for AI inference and training, roughly 5x Hopper's peak in similar precisions on H100/H200. This enables 30% higher FP64 and FP32 fused multiply-add performance for scientific simulations compared to Hopper, alongside 25x energy efficiency improvements in large-scale inference due to fifth-generation Tensor Cores, decompression engines, and HBM3e supporting up to 192 GB per GPU with 8 TB/s bandwidth, double Hopper's HBM3 capacity and 1.5x the bandwidth of H200 variants. However, Hopper retains advantages in balanced FP64 throughput for certain HPC applications without Blackwell's emphasis on ultra-low-precision inference, and its single-die monolithic design avoids potential inter-die latency penalties observed in early Blackwell prototypes.
Metric | Ampere (A100) | Hopper (H100/H200) | Blackwell (B200/GB200)
Peak FP8 TFLOPS (dense) | ~624 | ~1,979 (with sparsity) | ~9,000+
FP64 TFLOPS | 9.7 | 34 | ~44
Memory capacity | 40–80 GB HBM2e | 80–141 GB HBM3/HBM3e | 192 GB HBM3e
Memory bandwidth (TB/s) | 2 | 3–4.8 | 8
Transistors (billions) | 54.2 | 80 | 208 (dual-die)
These figures highlight Hopper's transitional role: bridging Ampere's general-purpose strengths with AI specialization, though real-world efficiency varies by workload, with independent benchmarks confirming Blackwell's inference edge but Hopper's cost-effectiveness for training at scale as of mid-2025.

Reception and Impact

Adoption in AI and HPC Workloads

The NVIDIA Hopper microarchitecture, powering the H100 Tensor Core GPU, has driven significant adoption in AI workloads, particularly for training and inference of large language models and generative AI systems. Major hyperscalers including Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle Cloud Infrastructure began offering H100-based instances in March 2023 to address surging demand for accelerated AI compute. Companies such as OpenAI, Meta, and Stability AI rapidly integrated H100 clusters for model development, with Meta announcing plans in early 2024 to deploy over 350,000 H100 GPUs across its data centers to support AI research initiatives. By late 2023, estimates indicated deployments exceeding 150,000 H100 equivalents each at Meta and Microsoft, reflecting Hopper's role in scaling trillion-parameter models with up to 4x faster training compared to prior architectures.

In high-performance computing (HPC), the Grace Hopper Superchip, combining Hopper GPUs with Arm-based Grace CPUs, has facilitated adoption in hybrid AI-HPC environments, with over 40 supercomputing projects incorporating it by 2023 for tasks spanning scientific simulation and AI-driven discovery. NVIDIA announced nine Grace Hopper-based supercomputers in May 2024, including France's EXA1-HE (developed by CEA and Eviden), Poland's Helios at Cyfronet, and systems at other sites, collectively targeting up to 200 exaFLOPS for AI-accelerated workloads. Hopper-powered systems have appeared in the TOP500 list, such as NVIDIA's Israel-1 cluster (ranked #34 in 2024 with 936 H100 GPUs across 117 HGX systems) and DGX SuperPOD installations, demonstrating its utility in energy-efficient, high-rank HPC deployments despite competition from AMD-based leaders in pure floating-point benchmarks. Systems vendors including HPE have accelerated Hopper integrations for enterprise HPC, with examples including ASUS-built clusters ranking in both the TOP500 and Green500 for performance per watt in AI workloads as of 2024.

Economic and Technological Achievements

The Hopper microarchitecture introduced the Transformer Engine, a hardware-software co-optimized system that dynamically selects precision for transformer model operations, delivering up to 9x faster AI training and 30x faster inference compared to the prior Ampere architecture on large language models. This innovation leverages fourth-generation Tensor Cores supporting FP8 formats, enabling higher computational throughput for matrix multiply-accumulate operations central to deep learning. Additionally, Hopper incorporates second-generation Multi-Instance GPU (MIG) technology, allowing secure partitioning of a single GPU into up to seven isolated instances for multi-tenant cloud environments, enhancing resource utilization in data centers. Hopper's integration of HBM3 memory and fourth-generation NVLink interconnect provided 3.35 TB/s memory bandwidth and 900 GB/s GPU-to-GPU communication, facilitating scalable multi-GPU systems for high-performance computing (HPC) and AI supercomputers. These advancements enabled breakthroughs in training trillion-parameter models, as evidenced by systems like the DGX H100, which achieved record rankings for AI workloads.

Economically, Hopper-powered GPUs, particularly the H100, drove NVIDIA's data center revenue from $15 billion in fiscal 2023 to $47.5 billion in fiscal 2024, a 217% increase attributed to surging demand for AI training infrastructure. This growth solidified NVIDIA's dominance in AI accelerators, with H100 shipments exceeding 500,000 units in 2023 alone, contributing to the company's overall revenue surpassing $60 billion in fiscal 2024. The architecture's efficiency gains reduced training times for state-of-the-art language models, accelerating enterprise AI adoption and yielding returns on investment through faster time-to-insight across sectors.

Criticisms, Limitations, and Controversies

The Hopper microarchitecture, as implemented in the H100 GPU, has encountered reliability challenges primarily related to high-bandwidth memory (HBM3). During Meta's training of the Llama 3 model on a cluster comprising 16,384 H100 GPUs, faulty GPUs and HBM3 modules were responsible for roughly half of all hardware failures, occurring at a rate of one every three hours and contributing to training delays. These issues stemmed from manufacturing defects in the memory stacks, which are critical for handling the massive throughput in AI workloads. Additional operational limitations include thermal throttling and overheating under sustained high loads, exacerbated by the H100's 700 W thermal design power (TDP), which demands robust cooling infrastructure in data centers. Power supply instabilities and driver-firmware incompatibilities have also led to GPU detachments from the PCIe bus during intensive computations, as reported in user forums for systems like the ASUS ESC4000A-E12. In the Grace Hopper Superchip, which integrates the Hopper GPU with an Arm-based Grace CPU, cache bandwidth falls short of competitors like AMD's offerings, potentially bottlenecking highly parallel workloads despite NVIDIA's positioning for such tasks.

Economically, the H100's high price, often exceeding $30,000 per unit, has drawn criticism for limiting accessibility, particularly when benchmarked against AMD's MI250X, which offers comparable performance at lower cost. Supply chain constraints, including U.S. export restrictions on advanced Hopper variants to China, have fueled a gray market for used or degraded units, raising concerns over proliferation risks and NVIDIA's compliance with regulations. NVIDIA has refuted persistent claims of manufacturing-induced shortages for H100 and H200 models as of September 2025, attributing availability to scaled production rather than defects. No major architectural controversies have emerged, though the architecture's specialization for AI tensor operations can underperform in legacy HPC kernels without optimization, as noted in microbenchmark analyses.

References
