Pascal (microarchitecture)
A GTX 1070 Founders Edition graphics card based on the Pascal architecture

| Launched | May 27, 2016 |
|---|---|
| Designed by | Nvidia |
| Manufactured by | TSMC, Samsung |
| Fabrication process | 16 nm FinFET (TSMC), 14 nm FinFET (Samsung) |
| Codename | GP10x |
| L1 cache | 24 KB (per SM) |
| L2 cache | 256 KB – 4 MB |
| PCIe support | PCIe 3.0 |
| DirectX | DirectX 12 (12.1) |
| Direct3D | Direct3D 12.0 |
| Shader Model | Shader Model 6.7 |
| OpenGL | OpenGL 4.6 |
| CUDA | Compute Capability 6.0 (GP100), 6.1 (GP102–GP108) |
| Vulkan | Vulkan 1.3 |
| OpenCL | OpenCL 3.0 |
| Encoder supported | NVENC |
| Predecessor | Maxwell |
| Successor | Volta (datacenter), Turing (consumer) |
| Support status | Limited support until November 2025; security updates until October 2028[1] |
Pascal is the codename for a GPU microarchitecture developed by Nvidia, as the successor to the Maxwell architecture. The architecture was first introduced in April 2016 with the release of the Tesla P100 (GP100) on April 5, 2016, and is primarily used in the GeForce 10 series, starting with the GeForce GTX 1080 and GTX 1070 (both using the GP104 GPU), which were released on May 27, 2016, and June 10, 2016, respectively. Pascal was manufactured using TSMC's 16 nm FinFET process,[2] and later Samsung's 14 nm FinFET process.[3]
The architecture is named after the 17th century French mathematician and physicist, Blaise Pascal.
In April 2019, Nvidia enabled a software implementation of DirectX Raytracing on Pascal-based cards, starting with the GTX 1060 6 GB, and on the GTX 16 series cards, a feature that had been reserved for the Turing-based RTX series up to that point.[4][5]
Details
In March 2014, Nvidia announced that the successor to Maxwell would be the Pascal microarchitecture. The first consumer cards were announced on May 6, 2016, and released on May 27 of the same year. The Tesla P100 (GP100 chip) uses a different version of the Pascal architecture than the GTX GPUs (GP104 chip); the shader units in GP104 have a Maxwell-like design.[6]
Architectural improvements of the GP100 architecture include the following:[7][8][9]
- In Pascal, an SM (streaming multiprocessor) consists of 64 CUDA cores (GP100) or 128 CUDA cores (GP104). Maxwell contained 128 CUDA cores per SM; Kepler had 192, Fermi 32 and Tesla 8. The GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA cores, an instruction buffer, a warp scheduler, 2 texture mapping units and 2 dispatch units.
- CUDA Compute Capability 6.0.
- High Bandwidth Memory 2 — some cards feature 16 GiB HBM2 in four stacks with a total bus width of 4096 bits and a memory bandwidth of 720 GB/s.
- Unified memory — a memory architecture where the CPU and GPU can access both main system memory and memory on the graphics card with the help of a technology called "Page Migration Engine".
- NVLink — a high-bandwidth bus between the CPU and GPU, and between multiple GPUs. Allows much higher transfer speeds than those achievable by using PCI Express; estimated to provide between 80 and 200 GB/s.[10][11]
- 16-bit (FP16) floating-point operations (colloquially "half precision") can be executed at twice the rate of 32-bit floating-point operations ("single precision")[12] and 64-bit floating-point operations (colloquially "double precision") executed at half the rate of 32-bit floating point operations.[13]
- More registers — twice the amount of registers per CUDA core compared to Maxwell.
- More shared memory.
- Dynamic load balancing scheduling system.[14] This allows the scheduler to dynamically adjust the amount of the GPU assigned to multiple tasks, ensuring that the GPU remains saturated with work except when there is no more work that can safely be distributed.[14] Nvidia has therefore safely enabled asynchronous compute in Pascal's driver.[14]
- Instruction-level and thread-level preemption.[15]
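The bandwidth figures above follow directly from bus width and per-pin data rate. A minimal sketch (the 1.4 Gbit/s HBM2 pin rate is taken from the DGX comparison table later in this article, and the 256-bit GDDR5X bus is GP104's, also described later):

```python
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: pins times bits per pin per second, divided by 8."""
    return bus_width_bits * data_rate_gbps / 8

# GP100: 4096-bit HBM2 interface at 1.4 Gbit/s per pin
print(mem_bandwidth_gbs(4096, 1.4))   # 716.8 GB/s, quoted as ~720 GB/s
# GP104 (GTX 1080): 256-bit GDDR5X interface at 10 Gbit/s per pin
print(mem_bandwidth_gbs(256, 10.0))   # 320.0 GB/s
```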
Architectural improvements of the GP104 architecture include the following:[6]
- CUDA Compute Capability 6.1.
- GDDR5X — new memory standard supporting 10 Gbit/s data rates, updated memory controller.[16]
- Simultaneous Multi-Projection — generating multiple projections of a single geometry stream, as it enters the SMP engine from upstream shader stages.[17]
- DisplayPort 1.4, HDMI 2.0b.
- Fourth generation Delta Color Compression.
- Enhanced SLI Interface — SLI interface with higher bandwidth compared to the previous versions.
- PureVideo Feature Set H hardware video decoding: HEVC Main10 (10-bit) and Main12 (12-bit), plus VP9 hardware decoding.
- HDCP 2.2 support for 4K DRM protected content playback and streaming (Maxwell GM200 and GM204 lack HDCP 2.2 support, GM206 supports HDCP 2.2).[18]
- NVENC HEVC Main10 10-bit hardware encoding.
- GPU Boost 3.0.
- Instruction-level preemption.[15] In graphics tasks, the driver restricts preemption to the pixel level, because pixel tasks typically finish quickly and the overhead costs of pixel-level preemption are lower than those of instruction-level preemption (which is expensive).[15] Compute tasks get thread-level or instruction-level preemption,[15] because they can take longer to finish and there are no guarantees on when a compute task finishes. Therefore the driver enables the expensive instruction-level preemption for these tasks.[15]
Overview

Graphics Processor Cluster

A chip is partitioned into Graphics Processor Clusters (GPCs). For the GP104 chips, a GPC encompasses 5 SMs.
Streaming Multiprocessor "Pascal"

A "Streaming Multiprocessor" is analogous to AMD's Compute Unit. An SM encompasses 128 single-precision ALUs ("CUDA cores") on GP104 chips and 64 single-precision ALUs on GP100 chips. While all AMD CU versions consist of 64 shader processors (i.e. 4 SIMD vector units, each 16 lanes wide), Nvidia has experimented with very different numbers of CUDA cores per SM:
- On Tesla, 1 SM combines 8 single-precision (FP32) shader processors
- On Fermi, 1 SM combines 32 single-precision (FP32) shader processors
- On Kepler, 1 SM combines 192 single-precision (FP32) shader processors and 64 double-precision (FP64) units (on GK110 GPUs)
- On Maxwell, 1 SM combines 128 single-precision (FP32) shader processors
- On Pascal, it depends:
- On GP100, 1 SM combines 64 single-precision (FP32) shader processors and 32 double-precision (FP64) units, providing a 2:1 ratio of single- to double-precision throughput. The GP100 uses more flexible FP32 cores that are able to process one single-precision number or two half-precision numbers in a two-element vector.[19] This is intended to better serve machine learning tasks.
- On GP104, 1 SM combines 128 single-precision ALUs, 4 double-precision ALUs (providing a 32:1 ratio), and one half-precision ALU which contains a vector of two half-precision floats which can execute the same instruction on both floats, providing a 64:1 ratio if the same instruction is used on both elements.
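The per-SM throughput ratios listed above can be checked arithmetically. A small sketch using the unit counts as given in the list (FP16 "ops" counts half-precision operations per cycle: GP100's 64 FP32 cores each process a 2-wide FP16 vector, while GP104 has a single 2-wide FP16 ALU):

```python
from fractions import Fraction

# Per-SM unit counts from the list above
sm_units = {
    "GP100": {"fp32": 64, "fp64": 32, "fp16_ops": 128},
    "GP104": {"fp32": 128, "fp64": 4, "fp16_ops": 2},
}

for chip, u in sm_units.items():
    fp64_rate = Fraction(u["fp64"], u["fp32"])      # FP64 throughput relative to FP32
    fp16_rate = Fraction(u["fp16_ops"], u["fp32"])  # FP16 throughput relative to FP32
    print(chip, "FP64:", fp64_rate, "FP16:", fp16_rate)
# GP100 FP64: 1/2 FP16: 2
# GP104 FP64: 1/32 FP16: 1/64
```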
Polymorph Engine 4.0

The Polymorph Engine version 4.0 is the unit responsible for tessellation. It corresponds functionally to AMD's Geometry Processor. It has been moved from the shader module to the TPC so that one Polymorph Engine can feed multiple SMs within the TPC.[20]
Chips
- GP100: Nvidia's Tesla P100 GPU accelerator is targeted at GPGPU applications such as FP64 double precision compute and deep learning training that uses FP16. It uses HBM2 memory.[21] Quadro GP100 also uses the GP100 GPU.
- GP102: This GPU is used in the Titan Xp,[22] Titan X Pascal[23] and the GeForce GTX 1080 Ti. It is also used in the Quadro P6000[24] & Tesla P40.[25]
- GP104: This GPU is used in the GeForce GTX 1070, GTX 1070 Ti, GTX 1080, and some GTX 1060 6 GB cards. The GTX 1070 has 15/20 and the GTX 1070 Ti has 19/20 of its SMs enabled; both use GDDR5 memory. The GTX 1080 is a fully unlocked chip and uses GDDR5X memory. Some GTX 1060 6 GB cards use GP104 with 10/20 SMs enabled and GDDR5X memory.[26] The chip is also used in the Quadro P5000, Quadro P4000, Quadro P3200 (mobile) and Tesla P4.
- GP106: This GPU is used in the GeForce GTX 1060 with GDDR5[27] memory.[28][29] It is also used in the Quadro P2000.
- GP107: This GPU is used in the GeForce GTX 1050 and 1050 Ti. It is also used in the Quadro P1000, Quadro P600, Quadro P620 & Quadro P400.
- GP108: This GPU is used in the GeForce GT 1010 and GeForce GT 1030.
| | GK104 | GK110 | GM204 (GTX 970) | GM204 (GTX 980) | GM200 | GP104 | GP100 |
|---|---|---|---|---|---|---|---|
| Dedicated texture cache per SM | 48 KiB | — | — | — | — | — | — |
| Texture (graphics or compute) or read-only data (compute only) cache per SM | — | 48 KiB[30] | — | — | — | — | — |
| Programmer-selectable shared memory/L1 partitions per SM | 48 KiB shared + 16 KiB L1 (default), 32 KiB + 32 KiB, or 16 KiB + 48 KiB[31] | 48 KiB shared + 16 KiB L1 (default), 32 KiB + 32 KiB, or 16 KiB + 48 KiB[31] | — | — | — | — | — |
| Unified L1 cache/texture cache per SM | — | — | 48 KiB[32] | 48 KiB[32] | 48 KiB[32] | 48 KiB[32] | 24 KiB[32] |
| Dedicated shared memory per SM | — | — | 96 KiB[32] | 96 KiB[32] | 96 KiB[32] | 96 KiB[32] | 64 KiB[32] |
| L2 cache per chip | 512 KiB[32] | 1536 KiB[32] | 1792 KiB[33] | 2048 KiB[33] | 3072 KiB[32] | 2048 KiB[32] | 4096 KiB[32] |
Performance

The theoretical single-precision processing power of a Pascal GPU in GFLOPS is computed as 2 (operations per FMA instruction per CUDA core per cycle) × number of CUDA cores × core clock speed (in GHz).
The theoretical double-precision processing power of a Pascal GPU is 1/2 of the single-precision performance on the Nvidia GP100, and 1/32 on the Nvidia GP102, GP104, GP106, GP107 and GP108.
The theoretical half-precision processing power of a Pascal GPU is 2× the single-precision performance on GP100[13] and 1/64 of it on GP104, GP106, GP107 and GP108.[19]
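As a worked check, these formulas reproduce the Tesla P100 figures quoted later in this article (3584 CUDA cores, 1480 MHz boost clock):

```python
def fp32_gflops(cuda_cores, clock_ghz):
    # 2 operations per FMA instruction, per CUDA core, per cycle
    return 2 * cuda_cores * clock_ghz

fp32 = fp32_gflops(3584, 1.480)   # Tesla P100 (GP100)
fp64 = fp32 / 2                   # GP100 runs FP64 at 1/2 the FP32 rate
fp16 = fp32 * 2                   # GP100 runs FP16 at 2x the FP32 rate

print(round(fp32 / 1000, 1), round(fp64 / 1000, 1), round(fp16 / 1000, 1))
# 10.6 5.3 21.2  (TFLOPS, matching the P100's quoted specifications)
```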
Successor

The Pascal architecture was succeeded in 2017 by Volta in the HPC, cloud computing, and self-driving car markets, and in 2018 by Turing in the consumer and business markets.[34]
P100 accelerator and DGX-1

Comparison of accelerators used in DGX:[35][36][37]
| Model | Architecture | Socket | FP32 CUDA cores | FP64 cores (excl. tensor) | Mixed INT32/FP32 cores | INT32 cores | Boost clock | Memory clock | Memory bus width | Memory bandwidth | VRAM | Single precision (FP32) | Double precision (FP64) | INT8 (non-tensor) | INT8 dense tensor | INT32 | FP4 dense tensor | FP16 | FP16 dense tensor | bfloat16 dense tensor | TensorFloat-32 (TF32) dense tensor | FP64 dense tensor | Interconnect (NVLink) | GPU | L1 Cache | L2 Cache | TDP | Die size | Transistor count | Process | Launched |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| P100 | Pascal | SXM/SXM2 | 3584 | 1792 | N/A | N/A | 1480 MHz | 1.4 Gbit/s HBM2 | 4096-bit | 720 GB/sec | 16 GB HBM2 | 10.6 TFLOPS | 5.3 TFLOPS | N/A | N/A | N/A | N/A | 21.2 TFLOPS | N/A | N/A | N/A | N/A | 160 GB/sec | GP100 | 1344 KB (24 KB × 56) | 4096 KB | 300 W | 610 mm2 | 15.3 B | TSMC 16FF+ | Q2 2016 |
| V100 16GB | Volta | SXM2 | 5120 | 2560 | N/A | 5120 | 1530 MHz | 1.75 Gbit/s HBM2 | 4096-bit | 900 GB/sec | 16 GB HBM2 | 15.7 TFLOPS | 7.8 TFLOPS | 62 TOPS | N/A | 15.7 TOPS | N/A | 31.4 TFLOPS | 125 TFLOPS | N/A | N/A | N/A | 300 GB/sec | GV100 | 10240 KB (128 KB × 80) | 6144 KB | 300 W | 815 mm2 | 21.1 B | TSMC 12FFN | Q3 2017 |
| V100 32GB | Volta | SXM3 | 5120 | 2560 | N/A | 5120 | 1530 MHz | 1.75 Gbit/s HBM2 | 4096-bit | 900 GB/sec | 32 GB HBM2 | 15.7 TFLOPS | 7.8 TFLOPS | 62 TOPS | N/A | 15.7 TOPS | N/A | 31.4 TFLOPS | 125 TFLOPS | N/A | N/A | N/A | 300 GB/sec | GV100 | 10240 KB (128 KB × 80) | 6144 KB | 350 W | 815 mm2 | 21.1 B | TSMC 12FFN | |
| A100 40GB | Ampere | SXM4 | 6912 | 3456 | 6912 | N/A | 1410 MHz | 2.4 Gbit/s HBM2 | 5120-bit | 1.52 TB/sec | 40 GB HBM2 | 19.5 TFLOPS | 9.7 TFLOPS | N/A | 624 TOPS | 19.5 TOPS | N/A | 78 TFLOPS | 312 TFLOPS | 312 TFLOPS | 156 TFLOPS | 19.5 TFLOPS | 600 GB/sec | GA100 | 20736 KB (192 KB × 108) | 40960 KB | 400 W | 826 mm2 | 54.2 B | TSMC N7 | Q1 2020 |
| A100 80GB | Ampere | SXM4 | 6912 | 3456 | 6912 | N/A | 1410 MHz | 3.2 Gbit/s HBM2e | 5120-bit | 1.52 TB/sec | 80 GB HBM2e | 19.5 TFLOPS | 9.7 TFLOPS | N/A | 624 TOPS | 19.5 TOPS | N/A | 78 TFLOPS | 312 TFLOPS | 312 TFLOPS | 156 TFLOPS | 19.5 TFLOPS | 600 GB/sec | GA100 | 20736 KB (192 KB × 108) | 40960 KB | 400 W | 826 mm2 | 54.2 B | TSMC N7 | |
| H100 | Hopper | SXM5 | 16896 | 4608 | 16896 | N/A | 1980 MHz | 5.2 Gbit/s HBM3 | 5120-bit | 3.35 TB/sec | 80 GB HBM3 | 67 TFLOPS | 34 TFLOPS | N/A | 1.98 POPS | N/A | N/A | N/A | 990 TFLOPS | 990 TFLOPS | 495 TFLOPS | 67 TFLOPS | 900 GB/sec | GH100 | 25344 KB (192 KB × 132) | 51200 KB | 700 W | 814 mm2 | 80 B | TSMC 4N | Q3 2022 |
| H200 | Hopper | SXM5 | 16896 | 4608 | 16896 | N/A | 1980 MHz | 6.3 Gbit/s HBM3e | 6144-bit | 4.8 TB/sec | 141 GB HBM3e | 67 TFLOPS | 34 TFLOPS | N/A | 1.98 POPS | N/A | N/A | N/A | 990 TFLOPS | 990 TFLOPS | 495 TFLOPS | 67 TFLOPS | 900 GB/sec | GH100 | 25344 KB (192 KB × 132) | 51200 KB | 1000 W | 814 mm2 | 80 B | TSMC 4N | Q3 2023 |
| B100 | Blackwell | SXM6 | N/A | N/A | N/A | N/A | N/A | 8 Gbit/s HBM3e | 8192-bit | 8 TB/sec | 192 GB HBM3e | N/A | N/A | N/A | 3.5 POPS | N/A | 7 PFLOPS | N/A | 1.98 PFLOPS | 1.98 PFLOPS | 989 TFLOPS | 30 TFLOPS | 1.8 TB/sec | GB100 | N/A | N/A | 700 W | N/A | 208 B | TSMC 4NP | Q4 2024 |
| B200 | Blackwell | SXM6 | N/A | N/A | N/A | N/A | N/A | 8 Gbit/s HBM3e | 8192-bit | 8 TB/sec | 192 GB HBM3e | N/A | N/A | N/A | 4.5 POPS | N/A | 9 PFLOPS | N/A | 2.25 PFLOPS | 2.25 PFLOPS | 1.2 PFLOPS | 40 TFLOPS | 1.8 TB/sec | GB100 | N/A | N/A | 1000 W | N/A | 208 B | TSMC 4NP |
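The interconnect column in the table is the product of per-link bandwidth and link count. A sketch of that arithmetic (the per-link rates and link counts belong to each NVLink generation and are assumptions not stated in the table itself):

```python
# (per-link bidirectional GB/s, links per GPU), by NVLink generation
nvlink = {
    "P100": (40, 4),    # NVLink 1.0
    "V100": (50, 6),    # NVLink 2.0
    "A100": (50, 12),   # NVLink 3.0
    "H100": (50, 18),   # NVLink 4.0
}

for gpu, (per_link, n_links) in nvlink.items():
    print(gpu, per_link * n_links, "GB/s")
# P100 160 GB/s, V100 300 GB/s, A100 600 GB/s, H100 900 GB/s — matching the table
```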
References

- ^ Kampman, Jeffrey (July 31, 2025). "Nvidia confirms end of Game Ready driver support for Maxwell and Pascal GPUs — affected products will get optimized drivers through October 2025". Tom's Hardware. Retrieved August 21, 2025.
- ^ "NVIDIA 7nm Next-Gen-GPUs To Be Built By TSMC". Wccftech. June 24, 2018. Retrieved July 6, 2019.
- ^ "Samsung to Optical-Shrink NVIDIA "Pascal" to 14 nm". Retrieved August 13, 2016.
- ^ "Accelerating The Real-Time Ray Tracing Ecosystem: DXR For GeForce RTX and GeForce GTX". NVIDIA.
- ^ "Ray Tracing Comes to Nvidia GTX GPUs: Here's How to Enable It". April 11, 2019.
- ^ a b "NVIDIA GeForce GTX 1080" (PDF). International.download.nvidia.com. Retrieved September 15, 2016.
- ^ Gupta, Sumit (March 21, 2014). "NVIDIA Updates GPU Roadmap; Announces Pascal". Blogs.nvidia.com. Retrieved March 25, 2014.
- ^ "Parallel Forall". NVIDIA Developer Zone. Devblogs.nvidia.com. Archived from the original on March 26, 2014. Retrieved March 25, 2014.
- ^ "NVIDIA Tesla P100" (PDF). International.download.nvidia.com. Retrieved September 15, 2016.
- ^ "Inside Pascal: NVIDIA's Newest Computing Platform". April 5, 2016.
- ^ Denis Foley (March 25, 2014). "NVLink, Pascal and Stacked Memory: Feeding the Appetite for Big Data". nvidia.com. Retrieved July 7, 2014.
- ^ "NVIDIA's Next-Gen Pascal GPU Architecture to Provide 10X Speedup for Deep Learning Apps". The Official NVIDIA Blog. Retrieved March 23, 2015.
- ^ a b Smith, Ryan (April 5, 2016). "NVIDIA Announces Tesla P100 Accelerator - Pascal GP100 Power for HPC". AnandTech. Archived from the original on April 6, 2016. Retrieved May 27, 2016.
Each of those SMs also contains 32 FP64 CUDA cores - giving us the 1/2 rate for FP64 - and new to the Pascal architecture is the ability to pack 2 FP16 operations inside a single FP32 CUDA core under the right circumstances
- ^ a b c Smith, Ryan (July 20, 2016). "The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. p. 9. Archived from the original on July 23, 2016. Retrieved July 21, 2016.
- ^ a b c d e Smith, Ryan (July 20, 2016). "The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. p. 10. Archived from the original on July 24, 2016. Retrieved July 21, 2016.
- ^ "GTX 1080 Graphics Card". GeForce. Retrieved September 15, 2016.
- ^ Carbotte, Kevin (May 17, 2016). "Nvidia GeForce GTX 1080 Simultaneous Multi-Projection & Async Compute". Tomshardware.com. Retrieved September 15, 2016.
- ^ "Nvidia Pascal HDCP 2.2". Nvidia Hardware Page. Retrieved May 8, 2016.
- ^ a b Smith, Ryan (July 20, 2016). "The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. p. 5. Archived from the original on July 23, 2016. Retrieved July 21, 2016.
- ^ Smith, Ryan (July 20, 2016). "The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. p. 4. Archived from the original on July 23, 2016. Retrieved July 21, 2016.
- ^ Harris, Mark (April 5, 2016). "Inside Pascal: NVIDIA's Newest Computing Platform". Parallel Forall. Nvidia. Retrieved June 3, 2016.
- ^ "NVIDIA TITAN Xp Graphics Card with Pascal Architecture". NVIDIA.
- ^ "NVIDIA TITAN X Graphics Card with Pascal". GeForce. Retrieved September 15, 2016.
- ^ "New Quadro Graphics Built on Pascal Architecture". NVIDIA. Retrieved September 15, 2016.
- ^ "Accelerating Data Center Workloads with GPUs". NVIDIA. Retrieved September 15, 2016.
- ^ Zhiye Liu (October 22, 2018). "Nvidia GeForce GTX 1060 Gets GDDR5X in Fifth Makeover". Tom's Hardware. Retrieved February 2, 2024.
- ^ "NVIDIA GeForce 10 Series Graphics Cards". NVIDIA.
- ^ "NVIDIA GeForce GTX 1060 to be released on July 7th". VideoCardz.com. June 29, 2016. Retrieved September 15, 2016.
- ^ "GTX 1060 Graphics Cards". GeForce. Retrieved September 15, 2016.
- ^ Smith, Ryan (November 12, 2012). "NVIDIA Launches Tesla K20 & K20X: GK110 Arrives At Last". AnandTech. p. 3. Archived from the original on November 14, 2012. Retrieved July 24, 2016.
- ^ a b c d e f Nvidia (September 1, 2015). "CUDA C Programming Guide". Retrieved July 24, 2016.
- ^ a b c d e f g h i j k l m n o Triolet, Damien (May 24, 2016). "Nvidia GeForce GTX 1080, le premier GPU 16nm en test !". Hardware.fr (in French). p. 2. Retrieved July 24, 2016.
- ^ a b Smith, Ryan (January 26, 2015). "GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation". AnandTech. p. 1. Archived from the original on January 28, 2015. Retrieved July 24, 2016.
- ^ "NVIDIA Turing Release Date". Techradar. February 2, 2021.
- ^ Smith, Ryan (March 22, 2022). "NVIDIA Hopper GPU Architecture and H100 Accelerator Announced: Working Smarter and Harder". AnandTech.
- ^ Smith, Ryan (May 14, 2020). "NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator". AnandTech.
- ^ "NVIDIA Tesla V100 tested: near unbelievable GPU power". TweakTown. September 17, 2017.
Pascal (microarchitecture) (Grokipedia version)

Overview

Introduction
The Pascal microarchitecture is NVIDIA's graphics processing unit (GPU) architecture that succeeded the Maxwell microarchitecture of 2014 and preceded the Volta architecture of 2017, with its initial launch occurring in 2016.[1][3] Designed to address diverse workloads, Pascal aimed to balance high-performance computing (HPC), artificial intelligence (AI) acceleration, and consumer graphics applications, leveraging TSMC's 16 nm FinFET manufacturing process for improved power efficiency and density.[3][1]

At its core, Pascal introduced compute capability 6.0, which provided enhanced support for advanced computational tasks, including native double-precision floating-point operations and half-precision (FP16) arithmetic optimized for deep learning workloads.[1] These features enabled more efficient handling of scientific simulations in HPC and accelerated training of neural networks, marking a step forward in GPU versatility for both professional and emerging AI applications.[1]

Building on Maxwell's foundation, Pascal achieved significant transistor scaling, reaching up to 12 billion transistors in its consumer-oriented variants, which facilitated greater parallelism while maintaining compatibility with existing software ecosystems.[5] The architecture organizes its processing resources into Graphics Processing Clusters (GPCs) and Streaming Multiprocessors (SMs) as fundamental building blocks, with innovations like NVLink enabling high-bandwidth multi-GPU configurations for scalable computing.[3][1]

Key Innovations
Pascal's adoption of TSMC's 16 nm FinFET manufacturing process marked a significant advancement in transistor density and power efficiency compared to the preceding 28 nm node, enabling the fabrication of larger GPU dies such as the 610 mm² GP100 while maintaining thermal design power constraints.[3] This process technology contributed to an increase in performance per watt over Maxwell, facilitating more complex architectures suitable for data center and high-performance computing environments.[3]

A core innovation was native hardware support for half-precision (FP16) floating-point arithmetic, executed at twice the rate of single-precision (FP32) operations, which delivered up to 21 TFLOPS on the GP100 and accelerated deep learning training and inference workloads by enabling faster matrix computations without sacrificing accuracy in many neural network applications.[3] This capability represented a foundational step toward the mixed-precision computing paradigms that would evolve in subsequent architectures.
The architecture introduced an enhanced Unified Memory system, featuring 49-bit virtual addressing to support address spaces up to 512 terabytes, hardware-accelerated page faulting for on-demand memory migration, and comprehensive atomic operations across global, shared, and peer-to-peer memory spaces, all of which streamlined CPU-GPU data sharing and reduced programming complexity for heterogeneous computing tasks.[3][6]

Compute preemption at the instruction level allowed running kernels to be interrupted and context-switched with minimal overhead, enhancing system responsiveness in multi-user professional environments by preventing long-running tasks from monopolizing GPU resources or triggering timeouts.[3] Building on FP16 support, Pascal enabled mixed-precision computing through CUDA 8, which allows developers to mix precisions such as FP16 and FP32 within compute workloads to optimize throughput and memory bandwidth for applications requiring variable numerical precision, such as scientific simulations and early AI models.[7]

Architecture
Processing Units
The Streaming Multiprocessor (SM) serves as the core compute unit in the Pascal microarchitecture, enabling parallel processing through a collection of CUDA cores and supporting functional units. In consumer-oriented implementations such as the GP104 and GP102 chips, each SM features a full configuration of 128 FP32 CUDA cores, 32 load/store units, 32 special function units (SFUs), and 8 texture units, organized to handle both general-purpose computing and graphics workloads efficiently.[8] In contrast, the compute-focused GP100 variant employs a more streamlined design with 64 FP32 CUDA cores per SM to prioritize double-precision performance, while maintaining compatibility with the SIMT execution model.[3]

Pascal's scheduling architecture enhances concurrency within each SM by incorporating two warp schedulers in GP100 and four in GP104, allowing up to 64 concurrent warps (2,048 threads) per SM. This setup supports the Single Instruction, Multiple Threads (SIMT) model, where warps of 32 threads execute in lockstep, with improvements in branch divergence handling to minimize idle cycles during conditional execution paths compared to prior architectures.[8] Load/store operations are handled by the 32 units in consumer SMs, enabling overlapped memory accesses that reduce latency in data-intensive tasks.

Double-precision (FP64) performance in Pascal varies by chip: consumer variants maintain a 1:32 ratio relative to FP32 throughput (with 4 dedicated FP64 units per SM), while GP100 achieves a 1:2 ratio through 32 FP64 units per SM, supporting high-performance computing applications.[8] Additionally, Pascal introduces paired FP16 instructions, doubling the effective FP16 throughput over FP32 on GP100, alongside a dedicated low-rate FP16 path in the consumer chips.
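Occupancy on a Pascal SM is bounded by both the 64-warp scheduler limit noted above and by register pressure against the SM's 256 KB register file (65,536 32-bit registers). A minimal sketch, ignoring CUDA's register allocation granularity:

```python
WARP_SIZE = 32
MAX_WARPS_PER_SM = 64          # 2,048 resident threads
REGISTERS_PER_SM = 65_536      # 256 KB register file / 4 bytes per register

def max_resident_warps(regs_per_thread):
    # Warps that fit in the register file, capped by the hardware scheduler limit
    fit_by_registers = REGISTERS_PER_SM // (regs_per_thread * WARP_SIZE)
    return min(MAX_WARPS_PER_SM, fit_by_registers)

print(max_resident_warps(32))    # 64: full occupancy at 32 or fewer registers per thread
print(max_resident_warps(128))   # 16: heavy register use cuts occupancy to a quarter
```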
In consumer designs, each Texture Processing Cluster (TPC) contains one SM with 8 texture units and 32 load/store units to accelerate texture fetch and filtering in graphics pipelines; in GP100, each TPC contains two SMs.[3][9] Each SM includes a 256 KB register file, comprising 65,536 32-bit registers, which supports dynamic allocation for local variables and thread state. In FP16 mode, the number of values this capacity holds effectively doubles due to the smaller data size, enabling higher occupancy for mixed-precision workloads without additional hardware overhead.[8]

Graphics Pipeline
The graphics pipeline in the Pascal microarchitecture centers on fixed-function hardware optimized for efficient rasterization and geometry processing, enabling high-performance rendering in gaming and visualization applications. At the core of this pipeline are the Graphics Processing Clusters (GPCs), with high-end chips supporting up to 6 GPCs per die. Each GPC integrates a raster engine for scan conversion, Raster Operation Processors (ROPs) for final pixel operations, and partitions of the shared L2 cache to facilitate data flow between processing stages. This structure allows for balanced distribution of graphics workloads across the chip, contributing to improved fill rates and reduced latency in rendering pipelines.[3][10]

The Polymorph Engine 4.0 serves as the primary fixed-function unit for geometry processing in Pascal, handling vertex fetching, transformation, tessellation, and primitive setup before passing data to the raster stage. A key enhancement in this version is the addition of a Simultaneous Multi-Projection (SMP) block, which performs efficient viewport transformations and topology load balancing for multi-view rendering, such as in VR environments where multiple projections of the same geometry are generated in a single pass to reduce overhead by up to 2x compared to software-based approaches in prior architectures. This integration minimizes redundant computations for curved displays and head-mounted displays, enhancing performance in immersive applications without burdening the programmable shader units.[11][12]

Raster Operation Processors (ROPs) in Pascal handle blending, depth testing, and anti-aliasing resolution, with high-end configurations featuring 64 ROPs across the chip, typically 16 per GPC in 4-GPC designs like GP104. Each ROP unit supports 4x multisample anti-aliasing (MSAA) and advanced compressed color formats, such as delta color compression, to optimize memory bandwidth usage during framebuffer operations.
Pascal's ROPs also provide native hardware support for conservative rasterization at Tier 2 level per DirectX 12 specifications, ensuring all partially covered pixels are processed for accurate overlap detection in techniques like shadow mapping and contact hardening, without requiring additional geometry shaders.[13][14]

Primitive assembly is managed through dual index engines within the front-end pipeline, which assemble vertices into primitives and reduce setup overhead by approximately 20% over Maxwell through optimized index fetching and culling. This contributes to higher geometry throughput, with polygon rates reaching up to 4 pixels per clock per ROP unit in integer operations, enabling sustained fill rates that scale with clock speeds in demanding rendering scenarios. The overall pipeline interfaces with Streaming Multiprocessors (SMs) for shader execution but maintains separation for fixed-function efficiency, while memory bandwidth sets the ultimate fill-rate limits in bandwidth-bound workloads.[15][10]

Memory and Interconnect
The memory subsystem in Pascal GPUs employs a hierarchical structure to support high-throughput data access for both graphics and compute workloads. Each Streaming Multiprocessor (SM) features a configurable L1 cache of 48 KB, which can be dynamically allocated between L1 caching and shared memory to optimize for specific application needs, such as balancing local data reuse and thread block communication. A unified L2 cache, shared across all SMs, scales up to 4 MB in configurations like the GP100, promoting efficient data coherence and minimizing latency to off-chip memory by caching frequently accessed data. Professional variants, such as those in Tesla and Quadro products, incorporate Error-Correcting Code (ECC) support in both L1 and L2 caches using Single Error Correction, Double Error Detection (SECDED) mechanisms to ensure data integrity in mission-critical environments.[6][3]

For data center applications, the GP100 GPU integrates High Bandwidth Memory 2 (HBM2) as its primary memory technology, delivering 16 GB of capacity with an aggregate bandwidth of 720 GB/s via a wide 4096-bit interface and advanced CoWoS packaging that places the GPU die on a silicon interposer alongside the HBM2 stacks. This setup provides native ECC protection without a bandwidth penalty, enabling reliable high-performance computing while consuming less power than traditional DRAM alternatives. In multi-GPU systems, HBM2's low-latency access complements the overall hierarchy by reducing contention in shared data scenarios.[3]

Consumer and workstation Pascal implementations, such as the GP102 and GP104 GPUs, utilize GDDR5X memory to achieve high bandwidth suitable for gaming and professional visualization, reaching up to 484 GB/s on wider buses with effective pin rates of 10–12 Gbps. GDDR5X employs quad-data-rate signaling with link-level error detection to maintain signal integrity at these speeds, supporting larger frame buffers of 11 GB or 12 GB without the power overhead of HBM2.
This memory type interfaces directly with the L2 cache, ensuring seamless data flow for texture and framebuffer operations.[16][17]

Pascal introduces NVLink 1.0 as a high-speed interconnect for GPU-to-GPU and GPU-to-CPU communication, offering 40 GB/s of bidirectional bandwidth per link (20 GB/s per direction) to enable scalable multi-GPU configurations that surpass traditional PCIe Gen3 limits by up to 5 times. In systems like the DGX-1, multiple NVLink connections facilitate direct memory access between GPUs, reducing overhead in distributed training and simulation tasks. Complementing this, the Page Migration Engine supports Unified Memory by handling hardware-accelerated page faults, automatically migrating data pages between CPU and GPU address spaces across a 49-bit virtual address range without requiring explicit software management, thus simplifying programming for heterogeneous computing.[3]

Implementations
Chip Designs
The Pascal microarchitecture was realized in a family of GPU dies fabricated on TSMC's 16 nm FinFET process node, enabling efficient scaling across performance tiers through variations in streaming multiprocessor (SM) counts and memory interfaces.[18] This process supported power targets ranging from 250 W in high-end configurations to around 120 W in lower-power variants, balancing density and thermal constraints.[19]

The GP100 represents the pinnacle of Pascal's silicon implementations, designed primarily for high-end data center workloads. It incorporates 15.3 billion transistors across a 610 mm² die area, with 56 enabled SMs out of a possible 60 for robust parallel processing.[18][20] GP100 uniquely supports HBM2 memory via a wide 4096-bit interface and includes NVLink interconnect capabilities for high-bandwidth multi-GPU configurations; it launched in 2016 with a 250 W TDP in its PCIe variant.[18][21]

For enthusiast-level graphics, the GP102 die scales down from GP100 while retaining core architectural features. It features 11.8 billion transistors on a 471 mm² die, supporting up to 30 SMs (typically 28 enabled in production variants) and delivering around 12 TFLOPS of single-precision floating-point performance.[22][23] Paired with GDDR5X memory on a 352-bit bus, GP102 targets 250 W power envelopes, emphasizing high-frame-rate rendering without NVLink support.[19]

The mid-range GP104 die further optimizes for cost and efficiency, housing 7.2 billion transistors in a compact 314 mm² area with up to 20 SMs.[24] It employs GDDR5X memory on a 256-bit interface, suitable for 150–180 W TDP configurations that prioritize balanced compute and graphics throughput.[25] This design serves as the basis for scaling SM counts in consumer-oriented variants, such as 15 or 20 enabled units depending on binning.

Entry-level implementations include the GP106, GP107, and GP108 dies, which downscale SM counts and memory buses for mainstream and budget segments.
For instance, GP106 features approximately 4.4 billion transistors on a 200 mm² die with 10 SMs and a 192-bit GDDR5 bus, targeting 120 W or lower.[26] GP107 and GP108 further reduce to 3.3 billion transistors on a 132 mm² die with up to 6 SMs, and 1.8 billion transistors on a 74 mm² die with up to 3 SMs, respectively, both with 128-bit GDDR5 buses, enabling sub-100 W operation for integrated or discrete low-power GPUs. These smaller dies leverage the same SM architecture for modular scaling, ensuring compatibility with Pascal's unified memory model across the lineup.[25][27][28]

| Chip | Transistor Count (billions) | Die Size (mm²) | Max SMs | Memory Interface | TDP Range (W) | Launch Year |
|---|---|---|---|---|---|---|
| GP100 | 15.3 | 610 | 60 (56 enabled) | HBM2 (4096-bit) | 250–300 | 2016 |
| GP102 | 11.8 | 471 | 30 | GDDR5X (352-bit) | 250 | 2016 |
| GP104 | 7.2 | 314 | 20 | GDDR5X (256-bit) | 150–180 | 2016 |
| GP106 | 4.4 | 200 | 10 | GDDR5 (192-bit) | 120 | 2016 |
| GP107 | 3.3 | 132 | 6 | GDDR5 (128-bit) | 75 | 2016 |
| GP108 | 1.8 | 74 | 3 | GDDR5 (128-bit) | 30 | 2016 |
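One way to read the table above is transistor density, which stays roughly constant across the 16 nm lineup. A quick sketch over the table's own figures:

```python
chips = {  # chip: (transistors in billions, die size in mm^2), from the table above
    "GP100": (15.3, 610), "GP102": (11.8, 471), "GP104": (7.2, 314),
    "GP106": (4.4, 200), "GP107": (3.3, 132), "GP108": (1.8, 74),
}

for chip, (billions, mm2) in chips.items():
    density = billions * 1000 / mm2   # millions of transistors per mm^2
    print(chip, round(density, 1), "MTr/mm^2")
# every die lands in the ~22-25 MTr/mm^2 range
```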