Pascal (microarchitecture)
from Wikipedia

Pascal
A GTX 1070 Founders Edition graphics card based on the Pascal architecture
Launched: May 27, 2016
Designed by: Nvidia
Manufactured by: TSMC, Samsung
Fabrication process: TSMC 16 nm (FinFET), later Samsung 14 nm (FinFET)
Codename: GP10x
Product series
  • Desktop: GeForce 10 series
  • Professional/workstation: Quadro P series
  • Server/datacenter: Tesla P series
Specifications
L1 cache: 24 KB (per SM)
L2 cache: 256 KB – 4 MB
Memory support: GDDR5, GDDR5X, HBM2
PCIe support: PCIe 3.0
Supported graphics APIs
DirectX: DirectX 12 (feature level 12_1)
Direct3D: Direct3D 12.0
Shader Model: Shader Model 6.7
OpenGL: OpenGL 4.6
CUDA: Compute Capability 6.0–6.1
Vulkan: Vulkan 1.3
Supported compute APIs
OpenCL: OpenCL 3.0
Media engine
Encode codecs: H.264, HEVC (NVENC)
Decode codecs: H.264, HEVC Main10/Main12, VP9 (PureVideo Feature Set H)
Color bit-depth: 8-bit, 10-bit
Display outputs: DisplayPort 1.4, HDMI 2.0b
History
Predecessor: Maxwell
Successor: Volta (datacenter), Turing (consumer)
Support status
Limited support until November 2025
Security updates until October 2028[1]
Painting of Blaise Pascal, eponym of the architecture

Pascal is the codename for a GPU microarchitecture developed by Nvidia, as the successor to the Maxwell architecture. The architecture was first introduced in April 2016 with the release of the Tesla P100 (GP100) on April 5, 2016, and is primarily used in the GeForce 10 series, starting with the GeForce GTX 1080 and GTX 1070 (both using the GP104 GPU), which were released on May 27, 2016, and June 10, 2016, respectively. Pascal was manufactured using TSMC's 16 nm FinFET process,[2] and later Samsung's 14 nm FinFET process.[3]

The architecture is named after the 17th century French mathematician and physicist, Blaise Pascal.

In April 2019, Nvidia enabled a software implementation of DirectX Raytracing on Pascal-based cards, starting with the GTX 1060 6 GB, as well as on the GTX 16 series cards, a feature that until then had been reserved for the Turing-based RTX series.[4][5]

Details

Die shot of the GP100 GPU used in Nvidia Tesla P100 cards
Die shot of the GP102 GPU found inside GeForce GTX 1080 Ti cards
Die shot of the GP106 GPU found inside GTX 1060 cards

In March 2014, Nvidia announced that the successor to Maxwell would be the Pascal microarchitecture. The first GeForce cards based on it were announced on May 6, 2016, and released on May 27 of the same year. The Tesla P100 (GP100 chip) has a different version of the Pascal architecture compared to the GTX GPUs (GP104 chip); the shader units in GP104 have a Maxwell-like design.[6]

Architectural improvements of the GP100 architecture include the following:[7][8][9]

  • In Pascal, an SM (streaming multiprocessor) consists of 64 CUDA cores (on GP100) or 128 CUDA cores (on GP104). Maxwell contained 128 CUDA cores per SM; Kepler had 192, Fermi 32 and Tesla 8. The GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA cores, an instruction buffer, a warp scheduler, 2 texture mapping units and 2 dispatch units.
  • CUDA Compute Capability 6.0.
  • High Bandwidth Memory 2 — some cards feature 16 GiB HBM2 in four stacks with a total bus width of 4096 bits and a memory bandwidth of 720 GB/s.
  • Unified memory — a memory architecture where the CPU and GPU can access both main system memory and memory on the graphics card with the help of a technology called the "Page Migration Engine" (see the sketch after this list).
  • NVLink — a high-bandwidth bus between the CPU and GPU, and between multiple GPUs. Allows much higher transfer speeds than those achievable by using PCI Express; estimated to provide between 80 and 200 GB/s.[10][11]
  • 16-bit (FP16) floating-point operations (colloquially "half precision") can be executed at twice the rate of 32-bit floating-point operations ("single precision"),[12] while 64-bit floating-point operations (colloquially "double precision") execute at half the rate of 32-bit floating-point operations.[13]
  • More registers — twice the amount of registers per CUDA core compared to Maxwell.
  • More shared memory.
  • Dynamic load balancing scheduling system.[14] This allows the scheduler to dynamically adjust the amount of the GPU assigned to multiple tasks, ensuring that the GPU remains saturated with work except when there is no more work that can safely be distributed.[14] This is why Nvidia was able to safely enable asynchronous compute in Pascal's driver.[14]
  • Instruction-level and thread-level preemption.[15]
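As a rough illustration of the unified memory model above, the following minimal CUDA sketch (kernel name and sizes are illustrative, not from the source) allocates managed memory with cudaMallocManaged; on Pascal, the Page Migration Engine migrates pages on demand as the CPU and then the GPU touch them.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: increment every element in place.
__global__ void increment(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // One managed allocation: a single pointer valid on both CPU and GPU.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 0.0f;   // CPU touches the pages first

    increment<<<(n + 255) / 256, 256>>>(data, n); // GPU page-faults them over
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);            // access migrates pages back
    cudaFree(data);
    return 0;
}
```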

Architectural improvements of the GP104 architecture include the following:[6]

  • CUDA Compute Capability 6.1 (see the query sketch after this list).
  • GDDR5X — new memory standard supporting 10 Gbit/s data rates, updated memory controller.[16]
  • Simultaneous Multi-Projection — generating multiple projections of a single geometry stream as it enters the SMP engine from upstream shader stages.[17]
  • DisplayPort 1.4, HDMI 2.0b.
  • Fourth generation Delta Color Compression.
  • Enhanced SLI Interface — SLI interface with higher bandwidth compared to the previous versions.
  • PureVideo Feature Set H hardware video decoding: HEVC Main10 (10-bit) and Main12 (12-bit), as well as VP9 hardware decoding.
  • HDCP 2.2 support for 4K DRM protected content playback and streaming (Maxwell GM200 and GM204 lack HDCP 2.2 support, GM206 supports HDCP 2.2).[18]
  • NVENC HEVC Main10 (10-bit) hardware encoding.
  • GPU Boost 3.0.
  • Instruction-level preemption.[15] In graphics tasks, the driver restricts preemption to the pixel level, because pixel tasks typically finish quickly and the overhead of pixel-level preemption is lower than that of instruction-level preemption (which is expensive).[15] Compute tasks get thread-level or instruction-level preemption,[15] because they can take longer to finish and there is no guarantee of when a compute task finishes, so the driver enables the expensive instruction-level preemption for these tasks.[15]
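The compute capability and per-SM resources listed above can be queried at runtime through the standard CUDA runtime API; a minimal sketch (device index 0 is an assumption):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query device 0
    // GP104-based boards report 6.1; GP100 reports 6.0.
    printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
    printf("SMs: %d, shared memory per SM: %zu KiB\n",
           prop.multiProcessorCount, prop.sharedMemPerMultiprocessor / 1024);
    return 0;
}
```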

Overview


Graphics Processor Cluster


A chip is partitioned into Graphics Processor Clusters (GPCs). For the GP104 chips, a GPC encompasses 5 SMs.

Streaming Multiprocessor "Pascal"


A "Streaming Multiprocessor" is analogous to AMD's Compute Unit. An SM encompasses 128 single-precision ALUs ("CUDA cores") on GP104 chips and 64 single-precision ALUs on GP100 chips. While all CU versions consist of 64 shader processors (i.e. 4 SIMD Vector Units, each 16 lanes wide), Nvidia experimented with very different numbers of CUDA cores:

  • On Tesla, 1 SM combines 8 single-precision (FP32) shader processors
  • On Fermi, 1 SM combines 32 single-precision (FP32) shader processors
  • On Kepler, 1 SM combines 192 single-precision (FP32) shader processors and 64 double-precision (FP64) units (on GK110 GPUs)
  • On Maxwell, 1 SM combines 128 single-precision (FP32) shader processors
  • On Pascal, it depends:
    • On GP100, 1 SM combines 64 single-precision (FP32) shader processors and 32 double-precision (FP64) shader processors, providing a 2:1 ratio of single- to double-precision throughput. The GP100 uses more flexible FP32 cores that are able to process one single-precision number or two half-precision numbers in a two-element vector.[19] This is intended to better serve machine learning tasks.
    • On GP104, 1 SM combines 128 single-precision ALUs, 4 double-precision ALUs (providing a 32:1 ratio), and one half-precision ALU that executes the same instruction on a vector of two half-precision floats, providing a 64:1 ratio if the same instruction is used on both elements (see the sketch below).
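In CUDA, this two-element half-precision path is exposed through the half2 vector type and intrinsics from cuda_fp16.h. A minimal sketch (kernel name and sizes are illustrative; compile for sm_60 or later):

```cuda
#include <cstdio>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

// One __hmul2 applies the same multiply to both halves of a half2 vector:
// the packed FP16 operation that gives GP100 its 2:1 FP16:FP32 rate (and,
// with only one such unit per SM, GP104 its 1:64 rate).
__global__ void scale2(half2 *v, half2 s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] = __hmul2(v[i], s);
}

int main() {
    const int n = 1024;                       // number of half2 elements
    half2 *v = nullptr;
    cudaMalloc(&v, n * sizeof(half2));
    // __floats2half2_rn packs two floats into one half2 register.
    half2 half_scale = __floats2half2_rn(0.5f, 0.5f);
    scale2<<<(n + 255) / 256, 256>>>(v, half_scale, n);
    cudaDeviceSynchronize();
    cudaFree(v);
    return 0;
}
```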

Polymorph-Engine 4.0


The Polymorph Engine version 4.0 is the unit responsible for Tessellation. It corresponds functionally with AMD's Geometric Processor. It has been moved from the shader module to the TPC to allow one Polymorph engine to feed multiple SMs within the TPC.[20]

Chips

GTX 1080 Ti PCB and die
  • GP100: Nvidia's Tesla P100 GPU accelerator is targeted at GPGPU applications such as FP64 double precision compute and deep learning training that uses FP16. It uses HBM2 memory.[21] Quadro GP100 also uses the GP100 GPU.
  • GP102: This GPU is used in the Titan Xp,[22] Titan X Pascal[23] and the GeForce GTX 1080 Ti. It is also used in the Quadro P6000[24] & Tesla P40.[25]
  • GP104: This GPU is used in the GeForce GTX 1070, GTX 1070 Ti, GTX 1080, and some GTX 1060 6 GB's. The GTX 1070 has 15/20 and the GTX 1070 Ti has 19/20 of its SMs enabled; both utilize GDDR5 memory. The GTX 1080 is a fully unlocked chip and uses GDDR5X memory. Some GTX 1060 6 GB's use GP104 with 10/20 SMs enabled and GDDR5X memory.[26] It is also used in the Quadro P5000, Quadro P4000, Quadro P3200 (mobile applications) and Tesla P4.
  • GP106: This GPU is used in the GeForce GTX 1060 with GDDR5[27] memory.[28][29] It is also used in the Quadro P2000.
  • GP107: This GPU is used in the GeForce GTX 1050 and 1050 Ti. It is also used in the Quadro P1000, Quadro P600, Quadro P620 & Quadro P400.
  • GP108: This GPU is used in the GeForce GT 1010 and GeForce GT 1030.
Comparison table of some Kepler, Maxwell, and Pascal chips
| | GK104 | GK110 | GM204 (GTX 970) | GM204 (GTX 980) | GM200 | GP104 | GP100 |
| Dedicated texture cache per SM | 48 KiB | | | | | | |
| Texture (graphics or compute) or read-only data (compute only) cache per SM | | 48 KiB[30] | | | | | |
| Programmer-selectable shared memory/L1 partitions per SM | 48 KiB shared memory + 16 KiB L1 cache (default);[31] 32 KiB shared memory + 32 KiB L1 cache;[31] or 16 KiB shared memory + 48 KiB L1 cache[31] | 48 KiB shared memory + 16 KiB L1 cache (default);[31] 32 KiB shared memory + 32 KiB L1 cache;[31] or 16 KiB shared memory + 48 KiB L1 cache[31] | | | | | |
| Unified L1 cache/texture cache per SM | | | 48 KiB[32] | 48 KiB[32] | 48 KiB[32] | 48 KiB[32] | 24 KiB[32] |
| Dedicated shared memory per SM | | | 96 KiB[32] | 96 KiB[32] | 96 KiB[32] | 96 KiB[32] | 64 KiB[32] |
| L2 cache per chip | 512 KiB[32] | 1536 KiB[32] | 1792 KiB[33] | 2048 KiB[33] | 3072 KiB[32] | 2048 KiB[32] | 4096 KiB[32] |

Performance


The theoretical single-precision processing power of a Pascal GPU in GFLOPS is computed as 2 (operations per FMA instruction per CUDA core per cycle) × number of CUDA cores × core clock speed (in GHz).

The theoretical double-precision processing power of a Pascal GPU is 1/2 of the single-precision performance on the Nvidia GP100, and 1/32 of it on the Nvidia GP102, GP104, GP106, GP107 and GP108.

The theoretical half-precision processing power of a Pascal GPU is 2× the single-precision performance on GP100[13] and 1/64 of it on GP102, GP104, GP106, GP107 and GP108.[19]
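As a worked example of these formulas, the small host-side program below (reference GTX 1080 figures assumed: 2560 CUDA cores on a fully enabled GP104, 1.733 GHz boost clock) reproduces the commonly quoted peak numbers:

```cuda
#include <cstdio>

int main() {
    const int    cuda_cores = 2560;   // fully enabled GP104 (GTX 1080)
    const double boost_ghz  = 1.733;  // reference boost clock in GHz

    // 2 FLOPs per FMA, per CUDA core, per cycle
    double fp32_gflops = 2.0 * cuda_cores * boost_ghz;   // ~8873 GFLOPS
    double fp64_gflops = fp32_gflops / 32.0;             // GP104 ratio; GP100 is 1/2
    double fp16_gflops = fp32_gflops / 64.0;             // GP104 ratio; GP100 is 2x

    printf("FP32: %.0f GFLOPS\nFP64: %.0f GFLOPS\nFP16: %.0f GFLOPS\n",
           fp32_gflops, fp64_gflops, fp16_gflops);
    return 0;
}
```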

Successor


The Pascal architecture was succeeded in 2017 by Volta in the HPC, cloud computing, and self-driving car markets, and in 2018 by Turing in the consumer and business market.[34]

P100 accelerator and DGX-1


Comparison of accelerators used in DGX:[35][36][37]

Model | Architecture | Socket | FP32 CUDA cores | FP64 cores (excl. tensor) | Mixed INT32/FP32 cores | INT32 cores | Boost clock | Memory clock | Memory bus width | Memory bandwidth | VRAM | Single precision (FP32) | Double precision (FP64) | INT8 (non-tensor) | INT8 dense tensor | INT32 | FP4 dense tensor | FP16 | FP16 dense tensor | bfloat16 dense tensor | TensorFloat-32 (TF32) dense tensor | FP64 dense tensor | Interconnect (NVLink) | GPU | L1 cache | L2 cache | TDP | Die size | Transistor count | Process | Launched
P100 Pascal SXM/SXM2 3584 1792 N/A N/A 1480 MHz 1.4 Gbit/s HBM2 4096-bit 720 GB/sec 16 GB HBM2 10.6 TFLOPS 5.3 TFLOPS N/A N/A N/A N/A 21.2 TFLOPS N/A N/A N/A N/A 160 GB/sec GP100 1344 KB (24 KB × 56) 4096 KB 300 W 610 mm2 15.3 B TSMC 16FF+ Q2 2016
V100 16GB Volta SXM2 5120 2560 N/A 5120 1530 MHz 1.75 Gbit/s HBM2 4096-bit 900 GB/sec 16 GB HBM2 15.7 TFLOPS 7.8 TFLOPS 62 TOPS N/A 15.7 TOPS N/A 31.4 TFLOPS 125 TFLOPS N/A N/A N/A 300 GB/sec GV100 10240 KB (128 KB × 80) 6144 KB 300 W 815 mm2 21.1 B TSMC 12FFN Q3 2017
V100 32GB Volta SXM3 5120 2560 N/A 5120 1530 MHz 1.75 Gbit/s HBM2 4096-bit 900 GB/sec 32 GB HBM2 15.7 TFLOPS 7.8 TFLOPS 62 TOPS N/A 15.7 TOPS N/A 31.4 TFLOPS 125 TFLOPS N/A N/A N/A 300 GB/sec GV100 10240 KB (128 KB × 80) 6144 KB 350 W 815 mm2 21.1 B TSMC 12FFN
A100 40GB Ampere SXM4 6912 3456 6912 N/A 1410 MHz 2.4 Gbit/s HBM2 5120-bit 1.52 TB/sec 40 GB HBM2 19.5 TFLOPS 9.7 TFLOPS N/A 624 TOPS 19.5 TOPS N/A 78 TFLOPS 312 TFLOPS 312 TFLOPS 156 TFLOPS 19.5 TFLOPS 600 GB/sec GA100 20736 KB (192 KB × 108) 40960 KB 400 W 826 mm2 54.2 B TSMC N7 Q1 2020
A100 80GB Ampere SXM4 6912 3456 6912 N/A 1410 MHz 3.2 Gbit/s HBM2e 5120-bit 1.52 TB/sec 80 GB HBM2e 19.5 TFLOPS 9.7 TFLOPS N/A 624 TOPS 19.5 TOPS N/A 78 TFLOPS 312 TFLOPS 312 TFLOPS 156 TFLOPS 19.5 TFLOPS 600 GB/sec GA100 20736 KB (192 KB × 108) 40960 KB 400 W 826 mm2 54.2 B TSMC N7
H100 Hopper SXM5 16896 4608 16896 N/A 1980 MHz 5.2 Gbit/s HBM3 5120-bit 3.35 TB/sec 80 GB HBM3 67 TFLOPS 34 TFLOPS N/A 1.98 POPS N/A N/A N/A 990 TFLOPS 990 TFLOPS 495 TFLOPS 67 TFLOPS 900 GB/sec GH100 25344 KB (192 KB × 132) 51200 KB 700 W 814 mm2 80 B TSMC 4N Q3 2022
H200 Hopper SXM5 16896 4608 16896 N/A 1980 MHz 6.3 Gbit/s HBM3e 6144-bit 4.8 TB/sec 141 GB HBM3e 67 TFLOPS 34 TFLOPS N/A 1.98 POPS N/A N/A N/A 990 TFLOPS 990 TFLOPS 495 TFLOPS 67 TFLOPS 900 GB/sec GH100 25344 KB (192 KB × 132) 51200 KB 1000 W 814 mm2 80 B TSMC 4N Q3 2023
B100 Blackwell SXM6 N/A N/A N/A N/A N/A 8 Gbit/s HBM3e 8192-bit 8 TB/sec 192 GB HBM3e N/A N/A N/A 3.5 POPS N/A 7 PFLOPS N/A 1.98 PFLOPS 1.98 PFLOPS 989 TFLOPS 30 TFLOPS 1.8 TB/sec GB100 N/A N/A 700 W N/A 208 B TSMC 4NP Q4 2024
B200 Blackwell SXM6 N/A N/A N/A N/A N/A 8 Gbit/s HBM3e 8192-bit 8 TB/sec 192 GB HBM3e N/A N/A N/A 4.5 POPS N/A 9 PFLOPS N/A 2.25 PFLOPS 2.25 PFLOPS 1.2 PFLOPS 40 TFLOPS 1.8 TB/sec GB100 N/A N/A 1000 W N/A 208 B TSMC 4NP

References

from Grokipedia
Pascal is a graphics processing unit (GPU) microarchitecture developed by Nvidia as the successor to the Maxwell architecture, introduced in 2016 and designed to deliver significant advancements in performance, efficiency, and support for emerging technologies such as virtual reality (VR), deep learning, and high-performance computing (HPC). It powers a range of products including the GeForce 10-series consumer graphics cards, Quadro professional workstation cards, and Tesla datacenter accelerators, featuring up to 15.3 billion transistors fabricated on TSMC's 16 nm FinFET process for enhanced power efficiency and compute throughput.

Announced at Nvidia's GPU Technology Conference (GTC) in April 2016, Pascal marked a major evolution from Maxwell by increasing transistor density and introducing architectural optimizations, achieving up to 70% higher frame rates in select gaming benchmarks over the prior GTX 980 and over 12 times faster deep learning training in DGX-1 systems compared to previous generations. The architecture debuted with the professional Tesla P100 accelerator based on the GP100 chip, followed by the consumer GTX 1080 and GTX 1070 cards in May 2016, which set new benchmarks for 4K gaming and VR immersion.

At its core, Pascal refines the streaming multiprocessor (SM) design inherited from Maxwell, with each SM containing 128 CUDA cores for single-precision floating-point operations in consumer variants (or 64 in the compute-focused GP100 for balanced FP32/FP64 support), alongside improved schedulers for better instruction throughput and native support for FP16 computations to accelerate AI workloads. Key innovations include NVLink for high-bandwidth GPU interconnects in professional systems (up to 160 GB/s bidirectional), HBM2 memory in datacenter GPUs for 720 GB/s of bandwidth, and GDDR5X in consumer cards offering up to 484 GB/s of effective bandwidth, all contributing to 1.5x to 3x improvements in power efficiency over Maxwell. The architecture also introduces compute preemption for fine-grained task switching and unified memory addressing with a 49-bit virtual address space, enabling seamless data sharing between CPU and GPU.

Pascal's product lineup spans multiple GPU dies tailored to different markets: the flagship GP100 for the Tesla P100 with 15.3 billion transistors and 16 GB of HBM2; GP102 for the high-end GTX 1080 Ti and Titan X with roughly 12 billion transistors and 11 GB of GDDR5X; GP104 for the GTX 1080/1070; and the smaller GP106/GP107 for the GTX 1060/1050, all supporting DirectX 12, Vulkan, and Nvidia's software suite for advanced rendering effects such as VRWorks for low-latency VR and Ansel for in-game photography. These GPUs collectively drove widespread adoption in gaming, professional visualization, and AI, with the GeForce 10 series shipping millions of units and enabling features like simultaneous multi-projection for VR at 90+ fps. Overall, Pascal solidified Nvidia's leadership in GPU technology by balancing massive parallelism with energy efficiency, paving the way for subsequent architectures like Volta.

Overview

Introduction

The Pascal microarchitecture is NVIDIA's graphics processing unit (GPU) architecture that succeeded the Maxwell microarchitecture of 2014 and preceded the Volta architecture of 2017, with its initial launch occurring in 2016. Designed to address diverse workloads, Pascal aimed to balance high-performance computing (HPC), artificial intelligence (AI) acceleration, and consumer graphics applications, leveraging TSMC's 16 nm FinFET manufacturing process for improved power efficiency and density. At its core, Pascal introduced compute capability 6.0, which provided enhanced support for advanced computational tasks, including native double-precision floating-point operations and half-precision (FP16) arithmetic optimized for deep learning workloads. These features enabled more efficient handling of scientific simulations in HPC and accelerated training of neural networks, marking a step forward in GPU versatility for both professional and emerging AI applications. Building on Maxwell's foundation, Pascal achieved significant transistor scaling, reaching up to 12 billion transistors in its consumer-oriented variants, which facilitated greater parallelism while maintaining compatibility with existing software ecosystems. The architecture organizes its processing resources into Graphics Processing Clusters (GPCs) and Streaming Multiprocessors (SMs) as fundamental building blocks, with innovations like NVLink enabling high-bandwidth multi-GPU configurations for scalable computing.

Key Innovations

Pascal's adoption of TSMC's 16 nm FinFET manufacturing process marked a significant advancement in transistor density and power efficiency compared to the preceding 28 nm node, enabling the fabrication of larger GPU dies such as the 610 mm² GP100 while staying within practical power constraints. This process technology contributed to a substantial increase in transistor count over Maxwell, facilitating more complex designs suitable for HPC and datacenter environments.

A core innovation was native hardware support for half-precision (FP16) arithmetic, executed at twice the rate of single-precision (FP32) operations on GP100, which delivered up to 21 TFLOPS and accelerated deep learning training and inference workloads by enabling faster matrix computations without sacrificing accuracy in many applications. This capability represented a foundational step toward the mixed-precision paradigms that would later evolve in subsequent architectures.

The architecture introduced an enhanced Unified Memory system, featuring 49-bit virtual addressing to support vast address spaces of up to 512 terabytes, hardware-accelerated page faulting for on-demand memory migration, and comprehensive atomic operations across global, shared, and unified memory spaces, all of which streamlined CPU-GPU data sharing and reduced programming complexity for heterogeneous computing tasks. Compute preemption at the instruction level allowed running kernels to be interrupted and context-switched with minimal overhead, enhancing system responsiveness in multi-user professional environments by preventing long-running tasks from monopolizing GPU resources or triggering timeouts. Building on FP16 support, Pascal enabled mixed-precision computing through CUDA 8, which allows developers to use multiple precisions such as FP16 and FP32 within compute workloads to optimize throughput and memory usage for applications requiring variable numerical precision, such as scientific simulations and early AI models.

Architecture

Processing Units

The Streaming Multiprocessor (SM) serves as the core compute unit in the Pascal microarchitecture, enabling parallel processing through a collection of CUDA cores and supporting functional units. In consumer-oriented implementations such as the GP104 and GP102 chips, each SM features a full configuration of 128 FP32 cores, 32 load/store units, 32 special function units (SFUs), and 8 texture units, organized to handle both general-purpose computing and graphics workloads efficiently. In contrast, the compute-focused GP100 variant employs a more streamlined design with 64 FP32 cores per SM to prioritize double-precision performance, while maintaining compatibility with the SIMT execution model.

Pascal's scheduling architecture enhances concurrency within each SM by incorporating dual warp schedulers in GP100 and four in GP104, allowing up to 64 concurrent warps (2,048 threads) per SM. This setup supports the single-instruction, multiple-thread (SIMT) model, where warps of 32 threads execute in lockstep, with improvements in branch divergence handling to minimize idle cycles during conditional execution paths compared to prior architectures. Load/store operations are facilitated by the 32 units in consumer SMs, enabling overlapped accesses that reduce latency in data-intensive tasks.

Double-precision (FP64) performance in Pascal varies by chip: consumer variants maintain a 1:32 ratio relative to FP32 throughput (with 4 dedicated FP64 units per SM), while GP100 achieves a 1:2 ratio through 32 FP64 units per SM, supporting HPC applications. Additionally, Pascal provides dedicated paths for packed FP16 instructions, doubling the effective FP16 throughput over FP32 in GP100 and providing basic half-precision compute in consumer chips. In consumer designs, each Texture Processing Cluster (TPC) contains one SM with 8 texture units and 32 load/store units to accelerate texture fetch and filtering in graphics pipelines; in GP100, each TPC contains two SMs. Each SM includes a 256 KB register file, comprising 65,536 32-bit registers, which supports dynamic allocation for local variables and thread state. In FP16 mode, this capacity effectively doubles in element terms due to the smaller data size, enabling higher occupancy for mixed-precision workloads without additional hardware overhead.
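To make the occupancy arithmetic above concrete, a short host-side sketch (device index 0 is an assumption) derives the register budget per thread at full occupancy from the CUDA device properties:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);
    // 65,536 32-bit registers and 2,048 resident threads per SM on Pascal
    // mean a kernel may use at most 32 registers per thread at 100% occupancy.
    int regs_at_full = p.regsPerMultiprocessor / p.maxThreadsPerMultiProcessor;
    printf("registers/SM: %d, max threads/SM: %d -> %d registers per thread\n",
           p.regsPerMultiprocessor, p.maxThreadsPerMultiProcessor, regs_at_full);
    return 0;
}
```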

Graphics Pipeline

The graphics pipeline in the Pascal microarchitecture centers on fixed-function hardware optimized for efficient rasterization and geometry processing, enabling high-performance rendering in gaming and visualization applications. At the core of this pipeline are the Graphics Processing Clusters (GPCs), with high-end chips supporting up to 6 GPCs per die. Each GPC integrates a raster engine for scan conversion, Raster Operation Processors (ROPs) for final pixel operations, and partitions of the shared L2 cache to facilitate data flow between processing stages. This allows for balanced distribution of workloads across the chip, contributing to improved fill rates and reduced latency in rendering pipelines.

The Polymorph Engine 4.0 serves as the primary fixed-function unit for geometry processing in Pascal, handling vertex fetching, transformation, tessellation, and primitive setup before passing data to the raster stage. A key enhancement in this version is the addition of a Simultaneous Multi-Projection (SMP) block, which performs efficient viewport transformations and topology load balancing for multi-view rendering, such as in VR environments where multiple projections of the same geometry are generated in a single pass to reduce overhead by up to 2x compared to software-based approaches in prior architectures. This integration minimizes redundant computations for curved displays and head-mounted displays, enhancing performance in immersive applications without burdening the programmable units.

Raster Operation Processors (ROPs) in Pascal handle blending, depth testing, and antialiasing resolve, with high-end configurations featuring 64 ROPs across the chip, typically 16 per GPC in 4-GPC designs like GP104. Each ROP unit supports 4x multisample anti-aliasing (MSAA) and advanced compressed color formats, such as delta color compression, to optimize memory bandwidth usage during raster operations. Pascal's ROPs also provide native hardware support for conservative rasterization at Tier 2 level per DirectX 12 specifications, ensuring all partially covered pixels are processed for accurate overlap detection in techniques such as voxel-based global illumination and contact hardening, without requiring additional geometry shaders. Primitive assembly is managed through dual index engines within the front-end pipeline, which assemble vertices into primitives and reduce setup overhead by approximately 20% over Maxwell through optimized index fetching and caching. This contributes to higher throughput, with rates reaching up to 4 pixels per clock per ROP unit in integer operations, enabling sustained fill rates that scale with clock speeds in demanding rendering scenarios. The overall pipeline interfaces with the Streaming Multiprocessors (SMs) for shader execution but maintains separation for fixed-function efficiency, while memory bandwidth influences ultimate fill rate limits in bandwidth-bound workloads.

Memory and Interconnect

The memory subsystem in Pascal GPUs employs a hierarchical structure to support high-throughput data access for both graphics and compute workloads. Each Streaming Multiprocessor (SM) features a unified L1/texture cache alongside dedicated shared memory, balancing local data reuse with thread-block communication. A unified L2 cache, shared across all SMs, scales up to 4 MB in configurations like the GP100, promoting efficient data coherence and minimizing latency to off-chip memory by caching frequently accessed data. Professional variants, such as those in Tesla and Quadro products, incorporate Error-Correcting Code (ECC) support using Single Error Correction, Double Error Detection (SECDED) mechanisms to ensure data integrity in mission-critical environments.

For data center applications, the GP100 GPU integrates High Bandwidth Memory 2 (HBM2) as its primary memory technology, delivering 16 GB of capacity with an aggregate bandwidth of 720 GB/s via a wide 4096-bit interface and advanced CoWoS packaging that places the GPU die on a silicon interposer alongside stacked HBM2 modules. This setup provides native ECC protection without bandwidth penalties, enabling reliable high-performance computing while consuming less power than traditional DRAM alternatives. In multi-GPU systems, HBM2's low-latency access complements the overall hierarchy by reducing contention in shared data scenarios.

Consumer and professional Pascal implementations, such as the GP102 and GP104 GPUs, utilize GDDR5X memory to achieve high bandwidth suitable for gaming and visualization, reaching up to 484 GB/s on wider buses with effective pin rates of 10-12 Gbps, and supporting larger frame buffers of 11 GB or 12 GB without the power overhead of HBM2. This memory type interfaces directly with the L2 cache, ensuring seamless data flow for texture and framebuffer operations.

Pascal introduces NVLink 1.0 as a high-speed interconnect for GPU-to-GPU and GPU-to-CPU communication, offering 40 GB/s of bidirectional bandwidth per link (20 GB/s per direction) to enable scalable multi-GPU configurations that surpass traditional PCIe Gen3 limits by up to 5 times. In systems like the DGX-1, multiple NVLink connections facilitate fast peer-to-peer transfers between GPUs, reducing overhead in distributed training and simulation tasks. Complementing this, the Page Migration Engine supports Unified Memory by handling hardware-accelerated page faults, automatically migrating data pages between CPU and GPU address spaces across a 49-bit virtual address range without requiring explicit software management, thus simplifying programming for heterogeneous systems.
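The peer-to-peer path described above is driven through the standard CUDA peer-access API; a minimal sketch (a two-GPU system and device indices 0 and 1 are assumptions):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int can_access = 0;
    // Can device 0 directly read and write device 1's memory?
    cudaDeviceCanAccessPeer(&can_access, 0, 1);
    if (can_access) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // second argument (flags) must be 0
        // Subsequent cudaMemcpyPeer calls and direct loads/stores travel over
        // NVLink when the two GPUs are linked, otherwise over PCIe.
        printf("Peer access from device 0 to device 1 enabled\n");
    }
    return 0;
}
```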

Implementations

Chip Designs

The Pascal microarchitecture was realized in a family of GPU dies fabricated on 16 nm-class FinFET processes (TSMC's 16 nm for most dies, with the smallest parts also produced on Samsung's 14 nm node), enabling efficient scaling across performance tiers through variations in streaming multiprocessor (SM) counts and memory interfaces. These processes supported power targets ranging from 250 W in high-end configurations down to tens of watts in lower-power variants, balancing density and thermal constraints.

The GP100 represents the pinnacle of Pascal's implementations, designed primarily for high-end compute workloads. It incorporates 15.3 billion transistors across a 610 mm² die area, with 56 enabled SMs out of a possible 60 for robust parallel processing. GP100 uniquely supports HBM2 memory via a wide 4096-bit interface and includes NVLink interconnect capabilities for high-bandwidth multi-GPU configurations; it launched in 2016 with a 250 W TDP in its PCIe variant.

For enthusiast-level graphics, the GP102 die scales down from GP100 while retaining core architectural features. It features 11.8 billion transistors on a 471 mm² die, supporting up to 30 SMs (typically 28 enabled in production variants) and delivering around 12 TFLOPS of single-precision floating-point performance. Paired with GDDR5X on a 352-bit bus, GP102 targets 250 W power envelopes, emphasizing high-frame-rate rendering without NVLink support.

The mid-range GP104 die further optimizes for cost and efficiency, housing 7.2 billion transistors in a compact 314 mm² area with up to 20 SMs. It employs GDDR5X (or GDDR5) memory on a 256-bit interface, suitable for 150–180 W TDP configurations that prioritize balanced compute and graphics throughput. This design serves as the basis for scaling SM counts in consumer-oriented variants, such as 15 or 20 enabled units depending on binning.

Entry-level implementations include the GP106, GP107, and GP108 dies, which downscale SM counts and memory buses for mainstream and budget segments. For instance, GP106 features approximately 4.4 billion transistors on a 200 mm² die with 10 SMs and a 192-bit GDDR5 bus, targeting 120 W or lower. GP107 and GP108 further reduce to 3.3 billion transistors on a 132 mm² die with up to 6 SMs, and 1.8 billion transistors on a 74 mm² die with up to 3 SMs, respectively, both with 128-bit GDDR5 buses, enabling sub-100 W operation for low-power discrete GPUs. These smaller dies leverage the same SM architecture for modular scaling, ensuring compatibility with Pascal's unified programming model across the lineup.
Chip | Transistor count (billions) | Die size (mm²) | Max SMs | Memory interface | TDP range (W) | Launch year
GP100 | 15.3 | 610 | 60 (56 enabled) | HBM2 (4096-bit) | 250–300 | 2016
GP102 | 11.8 | 471 | 30 | GDDR5X (352-bit) | 250 | 2016
GP104 | 7.2 | 314 | 20 | GDDR5X (256-bit) | 150–180 | 2016
GP106 | 4.4 | 200 | 10 | GDDR5 (192-bit) | 120 | 2016
GP107 | 3.3 | 132 | 6 | GDDR5 (128-bit) | 75 | 2016
GP108 | 1.8 | 74 | 3 | GDDR5 (128-bit) | 30 | 2016

Major Products

The Tesla P100 accelerator, based on the GP100 chip, featured 16 GB of HBM2 and support for NVLink interconnects, targeting HPC and deep learning workloads in data centers. With a TDP of up to 300 W, it represented NVIDIA's first Pascal-based product for professional acceleration, enabling scalable GPU clusters for scientific simulations and AI training. The DGX-1 integrated eight Tesla P100 GPUs within an NVLink mesh topology, delivering 170 TFLOPS of FP16 performance optimized for deep learning training tasks. Announced in April 2016 as a turnkey system, it combined high-bandwidth interconnects with pre-installed software stacks to accelerate AI model development in enterprise environments.

NVIDIA's GeForce 10 series brought the Pascal architecture to consumer gaming, with the flagship GTX 1080 utilizing the GP104 chip and 8 GB of GDDR5X memory, delivering around 9 TFLOPS for immersive 4K gaming and VR experiences. Scaled variants included the GTX 1070 and GTX 1060, based on GP104 and GP106 respectively, which offered mid-range performance for mainstream gamers while maintaining energy-efficiency improvements over prior generations. Launched starting in May 2016, the series emphasized VR readiness and high-frame-rate rendering.

For professional visualization, the Quadro P series included the P6000, built on the GP102 chip with 24 GB of GDDR5X memory, designed for complex CAD, simulation, and rendering workflows in industries like architecture and media. These cards supported certified drivers for stability in multi-GPU setups, bridging creative design needs with computational demands. The Titan X Pascal served as a high-end consumer option, pairing the GP102 with 12 GB of GDDR5X to position it between gaming and professional use cases for enthusiasts tackling advanced rendering and compute tasks. Released in August 2016, it highlighted Pascal's balance of raw power and efficiency for individual creators and researchers.

Performance

Theoretical Metrics

The theoretical peak performance of Pascal GPUs is calculated from the architecture's streaming multiprocessor (SM) design, where each consumer SM contains 128 CUDA cores capable of executing fused multiply-add (FMA) operations, delivering 2 floating-point operations (FLOPs) per clock cycle per core. The single-precision (FP32) throughput is thus: peak FP32 FLOPS = (number of SMs) × 128 cores/SM × 2 FLOPs/clock × clock speed. For instance, the GP104 GPU, with 20 SMs operating at a boost clock of 1.733 GHz, achieves approximately 8.9 TFLOPS of FP32 performance.

Double-precision (FP64) performance in Pascal varies by chip variant to balance compute and graphics workloads. Consumer-oriented chips like GP104 maintain a 32:1 FP32-to-FP64 ratio, with only 4 FP64 units per SM, yielding peak FP64 throughput at 1/32 of FP32; for the GTX 1080 based on GP104, this equates to about 0.3 TFLOPS. In contrast, the compute-focused GP100 achieves a 2:1 ratio with 32 FP64 units per SM, delivering 5.3 TFLOPS of FP64 performance at its 1.48 GHz boost clock.

Half-precision (FP16) operations leverage packed math instructions for higher throughput in suitable workloads. In GP100, FP16 achieves twice the FP32 rate through two-element vector packing, reaching 21.2 TFLOPS. Consumer variants like GP104 offer far lower FP16 throughput, at 1/64 of the FP32 rate.

Memory bandwidth in Pascal is determined by the memory interface width, type, and effective data rate, calculated as (bus width in bits × data rate in Gbit/s) / 8 to convert to GB/s. Consumer GPUs based on GP104 employ a 256-bit GDDR5X bus at 10 Gbit/s per pin, providing 320 GB/s. The GP100 utilizes HBM2 with a 4096-bit effective interface (four 1024-bit stacks) at roughly 1.4 Gbit/s per pin, yielding about 720 GB/s. Texture fill rate, critical for graphics rendering, is computed as the number of texture mapping units (TMUs, 8 per SM in Pascal) multiplied by the clock speed; for GP104 with 160 TMUs at 1.5 GHz, this results in 240 Gtexels/s.
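A short worked example of the bandwidth and fill-rate formulas above, using the figures quoted in the text:

```cuda
#include <cstdio>

int main() {
    // Memory bandwidth: (bus width in bits x data rate in Gbit/s) / 8.
    double gp104_gbps = 256.0 * 10.0 / 8.0;  // 256-bit GDDR5X at 10 Gbit/s = 320 GB/s
    double gp100_gbps = 4096.0 * 1.4 / 8.0;  // 4096-bit HBM2 at ~1.4 Gbit/s ~= 720 GB/s

    // Texture fill rate: TMU count x clock. GP104: 20 SMs x 8 TMUs at 1.5 GHz.
    double fill_gtexels = 20 * 8 * 1.5;      // 240 Gtexels/s

    printf("GP104: %.0f GB/s, GP100: %.0f GB/s, fill rate: %.0f Gtexels/s\n",
           gp104_gbps, gp100_gbps, fill_gtexels);
    return 0;
}
```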

Real-World Benchmarks

In gaming applications, the GTX 1080 demonstrated strong performance at high resolutions, delivering over 60 FPS in 2016 titles such as The Witcher 3 on ultra settings (without HairWorks), providing smooth gameplay in demanding scenes. Compared to the prior-generation Maxwell-based GTX 980, the Pascal GTX 1080 offered approximately twice the frame rate in GPU-bound scenarios like 4K gaming, thanks to its higher core count and improved efficiency. For deep learning and AI workloads, the Tesla P100 excelled in training tasks, training the ResNet-50 model up to 3.4 times faster than the Kepler-based Tesla K80 in multi-GPU configurations using frameworks like Caffe on the ImageNet dataset. Additionally, Pascal's native FP16 support provided up to 12 times the deep-learning performance of previous Kepler-generation GPUs, enabling faster training at reduced precision while maintaining accuracy.

In professional visualization, the Quadro P6000 showed substantial gains in the SPECviewperf benchmark suite, achieving around 30% higher frame rates than the previous-generation Quadro M6000 in the medical-01 viewset, which simulates complex medical imaging and rendering workloads. Pascal GPUs also improved power efficiency, delivering 1.5 to 2 times better performance per watt than Maxwell counterparts in benchmarks; for instance, the GTX 1080 provided 8.9 TFLOPS of FP32 compute at 180 W, compared to the GTX 980's 4.6 TFLOPS at 165 W. Against competitors, the Pascal GTX 1080 outperformed AMD's RX 480 by 20% to 50% in DirectX 12 titles leveraging asynchronous compute, where at 4K it achieved around 40 FPS versus the RX 480's lower rates in equivalent tests.

Development and Legacy

Background and Release

NVIDIA first announced the Pascal microarchitecture at its GPU Technology Conference (GTC) in March 2014 as the successor to the Maxwell architecture, with further roadmap details revealed at GTC 2015, positioning it as a major advancement for high-performance computing (HPC) and artificial intelligence applications amid the burgeoning deep learning revolution. The design emphasized enhanced computational efficiency and scalability to meet the demands of increasingly complex neural networks and scientific simulations, with early roadmaps highlighting features like mixed-precision computing tailored for AI workloads.

Key development milestones included the detailed unveiling of the flagship GP100 GPU at GTC 2016 on April 5, 2016, where NVIDIA demonstrated its capabilities through the Tesla P100 accelerator, marking the architecture's entry into production. The Tesla P100 began shipping in June 2016 for data center and supercomputing applications, providing early access to Pascal's HPC-focused innovations. Consumer-oriented products followed, with the GTX 1080 revealed on May 6, 2016, and made available on May 27, while the high-end Titan X (Pascal) launched on August 2, 2016; professional variants like the Quadro P6000 and P5000 completed the initial rollout in October 2016.

Engineering efforts centered on transitioning to TSMC's 16 nm FinFET process for improved power efficiency and density, a shift from prior nodes that required significant retooling of fabrication techniques. Integration of HBM2 memory in products like the Tesla P100 presented challenges related to die stacking and yields, ultimately leading to the adoption of GDDR5X for consumer variants to ensure timely availability and cost-effectiveness. Pascal's development occurred in a competitive landscape, responding to AMD's Polaris architecture in consumer graphics and Intel's Knights Landing in HPC, with NVIDIA differentiating through NVLink, a high-speed GPU-to-GPU interconnect delivering up to 160 GB/s of bandwidth to enhance scalability. Complementing the hardware, NVIDIA released CUDA 8.0 in September 2016, introducing full Pascal support including advanced Unified Memory APIs that simplified programming by enabling seamless data sharing between CPU and GPU without explicit transfers.

Successor and Impact

The Volta microarchitecture succeeded Pascal as NVIDIA's next-generation GPU architecture, fabricated on a TSMC 12 nm FinFET process node and launched in 2017 with the Tesla V100 accelerator. Building directly on Pascal's introduction of FP16 (half-precision) support for deep learning workloads, Volta introduced the first Tensor Cores, specialized hardware units designed to accelerate the matrix multiply-accumulate operations central to neural network training and inference. These Tensor Cores enabled mixed-precision computing, combining FP16 inputs with FP32 accumulation for improved accuracy and throughput, marking a pivotal evolution in GPU-accelerated AI.

The transition from Pascal to Volta was driven by Pascal's limitations in handling the explosive growth of deep learning demands, particularly in tensor operations where standard FP16 performance proved insufficient for scaling large models. While the Pascal-based Tesla P100 delivered 21 TFLOPS of FP16 performance, Volta's V100 achieved up to 125 TFLOPS in Tensor Core FP16 operations, providing approximately 6x higher peak throughput for inference and 12x for training compared to Pascal's capabilities. This enhancement addressed bottlenecks in matrix computations, enabling faster iteration on complex neural networks and broader adoption of GPU-accelerated AI frameworks.

Pascal's legacy in AI is profound, as it facilitated the widespread adoption of GPU-based deep learning by providing the computational foundation for early frameworks such as TensorFlow and Caffe. The Tesla P100 and the DGX-1 system, powered by eight P100 GPUs interconnected via NVLink, became staples for training large-scale models, offering over 170 TFLOPS of FP16 performance in a single chassis and accelerating research in fields such as computer vision and natural language processing. These systems democratized access to high-performance computing for academia and industry, enabling the shift from CPU-dominated workflows to GPU-optimized pipelines.

By 2025, Pascal architectures persist in legacy applications such as inference tasks and educational environments for teaching AI fundamentals, where cost-effectiveness outweighs the need for cutting-edge performance. However, in data centers, Pascal has been largely phased out in favor of newer generations, including Hopper (introduced in 2022 with the H100) and Blackwell (launched in 2024 with the B100/B200), which offer vastly superior tensor performance and energy efficiency for modern AI workloads. This transition reflects the rapid evolution of AI hardware demands, with Pascal serving as a bridge to the tensor-optimized era.

Pascal's broader industry impact includes standardizing high-bandwidth interconnects like NVLink, which provided up to 160 GB/s of bidirectional GPU-to-GPU communication, five times faster than PCIe Gen 3, and influenced competitors' designs, such as AMD's Infinity Fabric for scalable multi-chip communication. By enabling efficient multi-GPU scaling, NVLink helped solidify NVIDIA's dominance in AI GPUs, contributing to the company's achievement of over 80% market share in AI accelerators by 2018. This leadership position accelerated the integration of GPUs into AI ecosystems and set benchmarks for future architectures.
