GeForce 10 series
Top: logo of the series. Bottom: a GeForce GTX 1080 Ti Founders Edition (released 2017), the series' highest-end non-Titan model.

| Release date | May 27, 2016 |
|---|---|
| Manufactured by | TSMC, Samsung |
| Designed by | Nvidia |
| Marketed by | Nvidia |
| Codename | GP10x |
| Architecture | Pascal |
| Fabrication process | TSMC 16 nm (FinFET), Samsung 14 nm (FinFET) |
| Models | GeForce GTX series |
| Entry-level cards | GT 1010, GT 1030 |
| Mid-range cards | GTX 1050, GTX 1050 Ti, GTX 1060 |
| High-end cards | GTX 1070, GTX 1070 Ti, GTX 1080 |
| Enthusiast cards | GTX 1080 Ti, Titan X, Titan Xp |
| OpenCL | OpenCL 3.0[1][a] |
| OpenGL | OpenGL 4.6[2] |
| Vulkan | Vulkan 1.3[3] |
| DirectX | Direct3D 12.0 (feature level 12_1), Shader Model 6.7 |
| Predecessor | GeForce 900 series |
| Successor | GeForce 16 series, GeForce 20 series |
| Support status | Limited support until October 2025; security updates until October 2028[4] |
The GeForce 10 series is a series of graphics processing units (GPUs) developed by Nvidia, initially based on the Pascal microarchitecture announced in March 2014. This GPU series succeeded the GeForce 900 series, and is succeeded by the GeForce RTX 20 series and GeForce GTX 16 series using the Turing microarchitecture.
Architecture
The Pascal microarchitecture, named after Blaise Pascal, was announced in March 2014 as a successor to the Maxwell microarchitecture.[5] The first graphics cards from the series, the GeForce GTX 1080 and 1070, were announced on May 6, 2016, and were released several weeks later on May 27 and June 10, respectively. The architecture incorporates either 16 nm FinFET (TSMC) or 14 nm FinFET (Samsung) technologies. Initially, chips were only produced in TSMC's 16 nm process, but later chips were made with Samsung's newer 14 nm process (GP107, GP108).[6]
New Features in GP10x:
- CUDA Compute Capability 6.0 (GP100 only), 6.1 (GP102, GP104, GP106, GP107, GP108)
- DisplayPort 1.4 (No DSC)
- HDMI 2.0b
- Fourth generation Delta Color Compression
- PureVideo Feature Set H hardware video decoding: HEVC Main10 (10-bit) and Main12 (12-bit), plus VP9 hardware decoding (Maxwell GM200 & GM204 did not support HEVC Main10/Main12 or VP9 hardware decoding)[7]
- HDCP 2.2 support for 4K DRM protected content playback & streaming (Maxwell GM200 & GM204 lack HDCP 2.2 support, GM206 supports HDCP 2.2)[8]
- NVENC HEVC Main10 10 bit hardware encoding (except GP108 which doesn't support NVENC[9])
- GPU Boost 3.0
- Simultaneous Multi-Projection
- HB SLI Bridge Technology
- New memory controller with GDDR5X & GDDR5 support (GP102, GP104, GP106)[10]
- Dynamic load balancing scheduling system. This allows the scheduler to dynamically adjust the share of the GPU assigned to multiple tasks, keeping the GPU saturated with work except when there is no more work that can safely be distributed. This allowed Nvidia to safely enable asynchronous compute in Pascal's driver.[11]
- Instruction-level preemption. In graphics tasks, the driver restricts preemption to the pixel level, because pixel tasks typically finish quickly and the overhead of pixel-level preemption is much lower than that of instruction-level preemption. Compute tasks get thread-level or instruction-level preemption, because they can take a long time to finish and there are no guarantees on when a compute task will complete; for these tasks the driver enables the more expensive instruction-level preemption.[12]
- Triple buffering implemented at the driver level, which Nvidia calls "Fast Sync". The GPU maintains three frame buffers per monitor and renders continuously; the most recently completed frame is sent to the monitor each time it needs one. This removes the initial delay caused by double buffering with vsync and prevents tearing. The costs are extra memory for the buffers and power spent rendering frames that may be wasted: two or more frames can be drawn between one scanout and the next, in which case the latest frame is picked and any frames drawn in between are discarded.[13] This feature was backported to Maxwell-based GPUs in driver version 372.70.[14]
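The frame-selection policy described for Fast Sync can be sketched with a toy simulation (illustrative only, not Nvidia's implementation; the timing values below are made up):

```python
# Toy simulation of the "Fast Sync" selection policy: the GPU renders
# frames continuously, and at each display refresh the most recently
# completed frame is scanned out; earlier undisplayed frames are wasted.

def fast_sync_scanout(frame_done_times, refresh_times):
    """For each refresh instant, pick the latest frame completed before
    that instant; return (displayed, wasted) frame indices."""
    displayed, wasted = [], []
    last_shown = -1
    for t in refresh_times:
        candidates = [i for i, d in enumerate(frame_done_times)
                      if d <= t and i > last_shown]
        if not candidates:
            continue  # no new frame: the previous one is repeated
        pick = candidates[-1]           # newest completed frame wins
        wasted.extend(candidates[:-1])  # frames rendered but never shown
        displayed.append(pick)
        last_shown = pick
    return displayed, wasted

# A GPU at ~180 fps feeding a 60 Hz monitor: frames finish every ~5.5 ms,
# refreshes arrive every ~16.7 ms, so roughly 2 of every 3 frames are wasted.
done = [i * 5.5 for i in range(1, 10)]       # frame completion times (ms)
refresh = [16.7, 33.4, 50.1]                 # monitor refresh instants (ms)
shown, lost = fast_sync_scanout(done, refresh)
# shown -> [2, 5, 8]; lost -> [0, 1, 3, 4, 6, 7]
```

The simulation makes the stated trade-off concrete: latency stays low because the newest frame always wins, at the price of rendering work that is simply thrown away.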
Nvidia announced that the Pascal GP100 GPU would feature four High Bandwidth Memory stacks, allowing a total of 16 GB of HBM2 on the highest-end models,[15] 16 nm technology,[6] Unified Memory and NVLink.[16]
Starting with Windows 10 version 2004, Windows supports hardware-accelerated GPU scheduling, which reduces latency and improves performance; this requires a WDDM 2.7 driver.
Products
Founders Edition
With the announcement of the GeForce 10 series, Nvidia introduced Founders Edition versions of the GTX 1060, 1070, 1070 Ti, 1080 and 1080 Ti. These are what were previously known as reference cards, i.e. cards designed and built by Nvidia rather than by its authorized board partners, and they serve as the reference against which partner cards' performance is measured. Founders Edition cards have a die-cast, machine-finished aluminum body with a single radial fan, vapor-chamber cooling (1070 Ti, 1080 and 1080 Ti only[17]), an upgraded power supply, and a new low-profile backplate (1070, 1070 Ti, 1080 and 1080 Ti only).[18] Nvidia also released a limited supply of GTX 1060 Founders Edition cards that were available only directly from Nvidia's website.[19] Founders Edition cards (with the exception of the GTX 1070 Ti and 1080 Ti) were priced above the MSRP of partner cards, although some partner cards with complex designs or liquid/hybrid cooling can cost more than the Founders Edition.
GeForce 10 (10xx) series for desktops
- Supported display standards are: DP 1.3/1.4, HDMI 2.0b, dual link DVI[b][20]
- Supported APIs are: Direct3D 12 (feature level 12_1), OpenGL 4.6, OpenCL 3.0 and Vulkan 1.3
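The memory bandwidth figures in the specification table follow directly from transfer rate and bus width. A quick arithmetic check (illustrative sketch; the values are taken from the table):

```python
# Memory bandwidth in GB/s:
#   bandwidth = transfer rate (MT/s) * bus width (bits) / 8 bits-per-byte / 1000

def bandwidth_gbs(mt_per_s, bus_bits):
    return mt_per_s * bus_bits / 8 / 1000

# GTX 1080: 10000 MT/s GDDR5X on a 256-bit bus -> 320 GB/s
assert bandwidth_gbs(10000, 256) == 320.0
# GTX 1080 Ti: 11000 MT/s on a 352-bit bus -> 484 GB/s
assert bandwidth_gbs(11000, 352) == 484.0
# GT 1030 (DDR4): 2100 MT/s on a 64-bit bus -> 16.8 GB/s
assert bandwidth_gbs(2100, 64) == 16.8
```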
| Model | Launch | Code name(s) | Fab (nm) | Transistors (billion) | Die size (mm²) | Bus interface | Core config[c] | SM count[d] | L2 cache (KB) | Core clock, base / boost (MHz)[e] | Memory clock (MT/s)[e] | Fillrate, pixel (GP/s) / texture (GT/s)[f][g] | Memory size, type, bus width | Memory bandwidth (GB/s) | Processing power, single / double / half precision GFLOPS (boost)[h][21] | TDP (W) | SLI HB support[i] | Launch MSRP, standard / Founders Edition (USD) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GeForce GT 1010 (DDR4)[j][22][23][24] | Jun 7, 2022 | GP108-200-A1 | 14 | 1.8 | 74 | PCIe 3.0 ×4 | 256:16:16 | 2 | 256 | 1151 / 1379 | 2100 | 18.42 / 18.42 | 2 GB DDR4, 64-bit | 16.8 | 589.3 (706.1) / 24.56 (29.42) / — | 20 | No | ? / — |
| GeForce GT 1010[j][25][26] | Jan 13, 2021 | GP108-200-A1 | 14 | 1.8 | 74 | PCIe 3.0 ×4 | 256:16:16 | 2 | 256 | 1228 / 1468 | 5000 | 23.49 / 23.49 | 2 GB GDDR5, 64-bit | 40.1 | 629 (752) / 26.2 (31.3) / — | 30 | No | 70[27] / — |
| GeForce GT 1030 (DDR4)[j][28][29] | Mar 12, 2018 | GP108-310-A1 | 14 | 1.8 | 74 | PCIe 3.0 ×4 | 384:24:16 | 3 | 512 | 1151 / 1379 | 2100 | 18.41 / 27.6 | 2 GB DDR4, 64-bit | 16.8 | 883 (1059) / 27 (33) / 13 (16) | 20 | No | 80[30] / — |
| GeForce GT 1030[j][28][31] | May 17, 2017 | GP108-300-A1 | 14 | 1.8 | 74 | PCIe 3.0 ×4 | 384:24:16 | 3 | 512 | 1227 / 1468 | 6000 | 19.6 / 29.4 | 2 GB GDDR5, 64-bit | 48 | 942 (1127) / 29 (35) / 15 (18) | 30 | No | 80[30] / — |
| GeForce GTX 1050 (2 GB)[32][33] | Oct 25, 2016 | GP107-300-A1 | 14 | 3.3 | 132 | PCIe 3.0 ×16 | 640:40:32 | 5 | 1024 | 1354 / 1455 | 7000 | 43.3 / 54.2 | 2 GB GDDR5, 128-bit | 112 | 1733 (1862) / 54 (58) / 27 (29) | 75 | No | 109 / — |
| GeForce GTX 1050 (3 GB)[34] | May 21, 2018 | GP107-301-A1 | 14 | 3.3 | 132 | PCIe 3.0 ×16 | 768:48:24 | 6 | 768 | 1392 / 1518 | 7000 | 33.4 / 66.8 | 3 GB GDDR5, 96-bit | 84 | 2138 (2332) / 66 (72) / 33 (36) | 75 | No | ? / — |
| GeForce GTX 1050 Ti[32][35][36] | Oct 25, 2016 | GP107-400-A1 | 14 | 3.3 | 132 | PCIe 3.0 ×16 | 768:48:32 | 6 | 1024 | 1290 / 1392 | 7000 | 41.3 / 61.9 | 4 GB GDDR5, 128-bit | 112 | 1981 (2138) / 62 (67) / 31 (33) | 75 | No | 139 / — |
| GeForce GTX 1060 (3 GB)[37][38] | Aug 18, 2016 | GP106-300-A1 | 16 | 4.4 | 200 | PCIe 3.0 ×16 | 1152:72:48 | 9 | 1536 | 1506 / 1708 | 8000 | 72.3 / 108.4 | 3 GB GDDR5, 192-bit | 192 | 3470 (3935) / 108 (123) / 54 (61) | 120 | No | 199 / — |
| GeForce GTX 1060 (5 GB)[39][40] | Dec 26, 2017 (China only) | GP106-350-K3-A1 | 16 | 4.4 | 200 | PCIe 3.0 ×16 | 1280:80:40 | 10 | 1280 | 1506 / 1708 | 8000 | 60.2 / 120.5 | 5 GB GDDR5, 160-bit | 160 | 3855 (4372) / 120 (137) / 60 (68) | 120 | No | OEM / — |
| GeForce GTX 1060[37][41][42] | Jul 19, 2016 | GP106-400-A1, GP106-410-A1 | 16 | 4.4 | 200 | PCIe 3.0 ×16 | 1280:80:48 | 10 | 1536 | 1506 / 1708 | 8000, 9000 | 72.3 / 120.5 | 6 GB GDDR5, 192-bit | 192, 216 | 3855 (4372) / 120 (137) / 60 (68) | 120 | No | 249 / 299 |
| GeForce GTX 1060 (GDDR5X)[43] | Oct 18, 2018 | GP104-150-KA-A1 | 16 | 7.2 | 314 | PCIe 3.0 ×16 | 1280:80:48 | 10 | 1536 | 1506 / 1708 | 8000 | 72.3 / 120.5 | 6 GB GDDR5X, 192-bit | 192 | 3855 (4372) / 120 (137) / 60 (68) | 120 | No | — / — |
| GeForce GTX 1070[44][45] | Jun 10, 2016 | GP104-200-A1 | 16 | 7.2 | 314 | PCIe 3.0 ×16 | 1920:120:64 | 15 | 2048 | 1506 / 1683 | 8000 | 96.4[k][46] / 180.7 | 8 GB GDDR5, 256-bit | 256 | 5783 (6463) / 181 (202) / 90 (101) | 150 | 2-way SLI HB[47] or 2/3/4-way SLI[48] | 379 / 449 |
| GeForce GTX 1070 Ti[49] | Nov 2, 2017 | GP104-300-A1 | 16 | 7.2 | 314 | PCIe 3.0 ×16 | 2432:152:64 | 19 | 2048 | 1607 / 1683 | 8000 | 102.8 / 244.3 | 8 GB GDDR5, 256-bit | 256 | 7816 (8186) / 244 (256) / 122 (128) | 180 | 2-way SLI HB or 2/3/4-way SLI | 449 / 449 |
| GeForce GTX 1080[20][50][51] | May 27, 2016 | GP104-400-A1, GP104-410-A1 | 16 | 7.2 | 314 | PCIe 3.0 ×16 | 2560:160:64 | 20 | 2048 | 1607 / 1733 | 10000, 11000 | 102.8 / 257.1 | 8 GB GDDR5X, 256-bit | 320, 352 | 8228 (8873) / 257 (277) / 128 (139) | 180 | 2-way SLI HB or 2/3/4-way SLI | 599 / 699 |
| GeForce GTX 1080 Ti[52] | Mar 10, 2017 | GP102-350-K1-A1 | 16 | 12 | 471 | PCIe 3.0 ×16 | 3584:224:88 | 28 | 2816 | 1480 / 1582 | 11000 | 130.2 / 331.5 | 11 GB GDDR5X, 352-bit | 484 | 10609 (11340) / 332 (354) / 166 (177) | 250 | 2-way SLI HB or 2/3/4-way SLI | 699 / 699 |
| Nvidia Titan X[53][54] | Aug 2, 2016 | GP102-400-A1 | 16 | 12 | 471 | PCIe 3.0 ×16 | 3584:224:96 | 28 | 3072 | 1417 / 1531 | 10000 | 136 / 317.4 | 12 GB GDDR5X, 384-bit | 480 | 10157 (10974) / 317 (343) / 159 (171) | 250 | 2-way SLI HB or 2/3/4-way SLI | — / 1200 |
| Nvidia Titan Xp[55][56] | Apr 6, 2017 | GP102-450-A1 | 16 | 12 | 471 | PCIe 3.0 ×16 | 3840:240:96 | 30 | 3072 | 1405 / 1582 | 11410 | 135 / 337.2 | 12 GB GDDR5X, 384-bit | 547.7 | 10790 (12150) / 337 (380) / 169 (190) | 250 | 2-way SLI HB or 2/3/4-way SLI | — / 1200 |
- ^ In OpenCL 3.0, OpenCL 1.2 functionality has become a mandatory baseline, while all OpenCL 2.x and OpenCL 3.0 features were made optional.
- ^ The Nvidia Titan Xp and the Founders Edition GTX 1080 Ti do not have a dual link DVI port, but a DisplayPort to single link DVI adapter is included in the box.
- ^ Shader Processors: Texture mapping units: Render output units
- ^ The number of streaming multiprocessors on the GPU.
- ^ a b GTX 1060 and GTX 1080 cards shipped after April 2017 feature increased memory speeds, thus increasing memory bandwidth.
- ^ Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.
- ^ Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.
- ^ For calculating the processing power, see the Performance subsection of the Pascal architecture article.
- ^ SLI HB supports a maximum of 2-way SLI when using SLI HB bridges. With traditional SLI bridges, up to 4-way SLI is possible, but performance gains are mostly limited to synthetic benchmarks.
- ^ a b c d Lacks hardware video encoder
- ^ The GTX 1070 has one of the four GPCs disabled in the die. Losing one of the Raster Engines only allows for the use of 48 ROPs per cycle.
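The derived columns in the table follow from the formulas in the footnotes above: texture fillrate is TMUs × clock, and single-precision processing power is 2 FLOPs × shaders × clock (one fused multiply-add per shader per cycle). A small sanity check against the GTX 1080's table entries (illustrative only):

```python
# Sanity-check of derived columns in the desktop table, using the
# formulas from the footnotes above (clocks in MHz, results in GFLOPS
# and GT/s respectively).

def fp32_gflops(shaders, clock_mhz):
    # 2 FLOPs per shader per clock (fused multiply-add)
    return 2 * shaders * clock_mhz / 1000

def texture_fillrate_gts(tmus, clock_mhz):
    return tmus * clock_mhz / 1000

# GeForce GTX 1080: core config 2560:160:64, 1607 MHz base / 1733 MHz boost
assert round(fp32_gflops(2560, 1607)) == 8228          # table: 8228 (base)
assert round(fp32_gflops(2560, 1733)) == 8873          # table: (8873) boost
assert round(texture_fillrate_gts(160, 1607), 1) == 257.1  # table: 257.1 GT/s
```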
GeForce 10 (10xx) series for notebooks
The headline change in this generation of notebook GPUs is that their specifications closely match (GTX 1060–1080) or even exceed (GTX 1050/1050 Ti) those of their desktop counterparts, rather than being cut down as in previous generations. Accordingly, the "M" suffix was dropped from the naming scheme, signalling near-desktop performance, and users can overclock the core frequencies, something not possible with previous generations of notebook GPUs. This was made possible by lower Thermal Design Power (TDP) ratings than the desktop equivalents, which make these desktop-class GPUs thermally feasible in OEM notebook chassis with improved heat-dissipation designs; as such, they are available only through OEMs. In addition, the entire line of GTX notebook GPUs is also offered in lower-TDP, quieter variants called "Max-Q Design", launched on 27 June 2017 and developed with OEM partners specifically for ultra-thin gaming systems with enhanced heat dissipation and lower operating noise; Max-Q parts are also offered as an additional option in conventional gaming notebooks.
In addition, the GT series of notebook GPUs was discontinued with this generation, replaced by the MX series. Only the MX150 is based on Pascal, using the GP108 die of the desktop GT 1030 at higher clock frequencies; the other MX chips are rebrands of previous-generation GPUs (the MX130 is a rebranded 940MX, and the MX110 a rebranded 920MX).[citation needed]
- Supported APIs are: Direct3D 12 (feature level 12_1 or 11_0 on MX110 and MX130), OpenGL 4.6, OpenCL 3.0 and Vulkan 1.3
- Only GTX 1070 and GTX 1080 have SLI support.
| Model | Launch | Code name(s) | Fab (nm) | Transistors (billion) | Die size (mm²) | Bus interface | Core config[a] | SM count[b] | L2 cache (MB) | Core clock, base / boost (MHz) | Memory clock (MT/s) | Fillrate, pixel (GP/s) / texture (GT/s)[c][d] | Memory size, type, bus width | Memory bandwidth (GB/s) | Processing power, single / double / half precision GFLOPS (boost)[e] | TDP (W) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GeForce MX110[f][57][58] | Nov 17, 2017 | GM108 (N16V-GMR1) | 28 | ? | ? | PCIe 3.0 ×4 | 384:24:8 | 3 | 1.0 | 965 / 993 | 1800 (DDR3), 5000 (GDDR5) | 7.944 / 23.83 | 2 GB DDR3 or GDDR5, 64-bit | 14.4 (DDR3), 40.1 (GDDR5) | 741.1 (762.6) / 23.16 (23.83) / — | 30 |
| GeForce MX130[f][59][60] | Nov 17, 2017 | GM108 (N16S-GTR) | 28 | ? | ? | PCIe 3.0 ×4 | 384:24:8 | 3 | 1.0 | 1122 / 1242 | 1800 (DDR3), 5000 (GDDR5) | 9.936 / 29.81 | 2 GB DDR3 or GDDR5, 64-bit | 14.4 (DDR3), 40.1 (GDDR5) | 861.7 (953.9) / 26.93 (29.81) / — | 30 |
| GeForce MX150[f][61][62] | May 17, 2017 | GP108 (N17S-LG) | 14 | 1.8 | 74 | PCIe 3.0 ×4 | 384:24:16 | 3 | 0.5 | 937 / 1038 | 5000 | 14.99 / 22.49 | 2 or 4 GB GDDR5, 64-bit | 40.1 | 719.6 (797.2) / 22.49 (24.91) / 11.24 (12.45) | 10 |
| GeForce MX150[f][61][63] | May 17, 2017 | GP108 (N17S-G1) | 14 | 1.8 | 74 | PCIe 3.0 ×4 | 384:24:16 | 3 | 0.5 | 1468 / 1532 | 6000 | 23.49 / 35.23 | 2 or 4 GB GDDR5, 64-bit | 48 | 1127 (1177) / 35.23 (36.77) / 17.62 (18.38) | 25 |
| GeForce GTX 1050 Max-Q (Notebook)[64][65] | Jan 3, 2018 | GP107 (N17P-G0) | 14 | 3.3 | 132 | PCIe 3.0 ×16 | 640:40:16 | 5 | 1.0 | 999–1189 / 1139–1328 | 7000 | 19.02 / 47.56 | 2 or 4 GB GDDR5, 128-bit | 112 | 1278–1521 (1457–1699) / 39.96–47.56 (45.56–53.12) / 19.98–23.78 (22.78–26.56) | 34–40 |
| GeForce GTX 1050 (Notebook)[64][65] | Jan 3, 2017 | GP107 (N17P-G0) | 14 | 3.3 | 132 | PCIe 3.0 ×16 | 640:40:16 | 5 | 1.0 | 1354 / 1493 | 7000 | 21.66 / 54.16 | 2 or 4 GB GDDR5, 128-bit | 112 | 1733 (1911) / 54.16 (59.72) / 27.08 (29.86) | 53 |
| GeForce GTX 1050 Ti Max-Q (Notebook)[64][66] | Jan 3, 2018 | GP107 (N17P-G1) | 14 | 3.3 | 132 | PCIe 3.0 ×16 | 768:48:32 | 6 | 1.0 | 1151–1290 / 1290–1417 | 7000 | 41.28 / 61.92 | 4 GB GDDR5, 128-bit | 112 | 1767–1981 (1981–2176) / 55.24–61.92 (61.92–68.02) / 27.62–30.96 (30.96–34.01) | 40–46 |
| GeForce GTX 1050 Ti (Notebook)[64][66] | Jan 3, 2017 | GP107 (N17P-G1) | 14 | 3.3 | 132 | PCIe 3.0 ×16 | 768:48:32 | 6 | 1.0 | 1493 / 1620 | 7000 | 47.78 / 71.66 | 4 GB GDDR5, 128-bit | 112 | 2293 (2488) / 71.66 (77.76) / 35.83 (38.88) | 64 |
| GeForce GTX 1060 Max-Q (Notebook)[64][67] | Jun 27, 2017 | GP106 (N17E-G1) | 16 | 4.4 | 200 | PCIe 3.0 ×16 | 1280:80:48 | 10 | 1.5 | 1063–1265 / 1341–1480 | 8000 | 60.72 / 101.2 | 3 or 6 GB GDDR5, 192-bit | 192 | 2721–3238 (3432–3788) / 85.04–101.2 (107.3–118.4) / 42.52–50.60 (53.64–59.20) | 60–70 |
| GeForce GTX 1060 (Notebook)[64][67] | Aug 16, 2016 | GP106 (N17E-G1) | 16 | 4.4 | 200 | PCIe 3.0 ×16 | 1280:80:48 | 10 | 1.5 | 1404 / 1670 | 8000 | 67.39 / 112.3 | 3 or 6 GB GDDR5, 192-bit | 192 | 3594 (4275) / 112.3 (133.6) / 56.16 (66.80) | 80 |
| GeForce GTX 1070 Max-Q (Notebook)[64][68] | Jun 27, 2017 | GP104 (N17E-G2) | 16 | 7.2 | 314 | PCIe 3.0 ×16 | 2048:128:64 | 16 | 2.0 | 1101–1215 / 1265–1379 | 8000 | 77.76 / 155.5 | 8 GB GDDR5, 256-bit | 256 | 4509–4977 (5181–5648) / 140.9–155.5 (161.9–176.5) / 70.46–77.76 (80.96–88.26) | 80–90 |
| GeForce GTX 1070 (Notebook)[64][68] | Aug 16, 2016 | GP104 (N17E-G2) | 16 | 7.2 | 314 | PCIe 3.0 ×16 | 2048:128:64 | 16 | 2.0 | 1442 / 1645 | 8000 | 92.29 / 184.6 | 8 GB GDDR5, 256-bit | 256 | 5906 (6738) / 184.6 (210.6) / 92.29 (105.3) | 115 |
| GeForce GTX 1080 Max-Q (Notebook)[64][69] | Jun 27, 2017 | GP104 (N17E-G3) | 16 | 7.2 | 314 | PCIe 3.0 ×16 | 2560:160:64 | 20 | 2.0 | 1101–1290 / 1278–1458 | 10000 | 82.56 / 206.4 | 8 GB GDDR5X, 256-bit | 320 | 5637–6605 (6543–7465) / 176.2–206.4 (204.5–233.3) / 88.08–103.2 (102.2–116.6) | 90–110 |
| GeForce GTX 1080 (Notebook)[64][69] | Aug 16, 2016 | GP104 (N17E-G3) | 16 | 7.2 | 314 | PCIe 3.0 ×16 | 2560:160:64 | 20 | 2.0 | 1556 / 1733 | 10000 | 99.58 / 249.0 | 8 GB GDDR5X, 256-bit | 320 | 7967 (8873) / 249.0 (277.3) / 124.5 (138.6) | 150 |
- ^ Shader Processors: Texture mapping units: Render output units
- ^ The number of streaming multiprocessors on the GPU.
- ^ Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.
- ^ Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.
- ^ For calculating the processing power, see the Performance subsection of the Pascal architecture article.
- ^ a b c Lacks hardware video encoder and decoder
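For the Max-Q parts, the GFLOPS ranges quoted in the table are the same 2 × shaders × clock formula evaluated at both ends of the allowed clock range (illustrative check against the notebook table):

```python
# The GFLOPS ranges for Max-Q parts come from evaluating the standard
# 2 FLOPs * shaders * clock formula at both ends of the clock range
# (illustrative check; clocks in MHz).

def fp32_gflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz / 1000

# GTX 1060 Max-Q: 1280 shaders, base clock 1063-1265 MHz
low = round(fp32_gflops(1280, 1063))
high = round(fp32_gflops(1280, 1265))
assert (low, high) == (2721, 3238)   # table: 2721-3238 GFLOPS (base)
```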
Reintroduction
Due to production problems surrounding the RTX 30 series cards, a general graphics card shortage caused by the COVID-19 pandemic's disruption of global semiconductor supply, and rising demand from cryptocurrency mining, the GTX 1050 Ti, alongside the RTX 2060 and its Super counterpart,[70] was brought back into production in 2021.[71][72]
In addition, Nvidia quietly released the GeForce GT 1010 in January 2021.[25]
Support
Nvidia stopped releasing 32-bit drivers for 32-bit operating systems after driver 391.35 in March 2018.[73]
Nvidia announced that after the release of the 470 driver branch, it would transition driver support for the Windows 7 and Windows 8.1 operating systems to legacy status and continue to provide critical security updates for these operating systems through September 2024.[74] The GeForce 10 series is the last Nvidia GPU generation to support 32-bit operating systems; beginning with the Turing architecture, newer Nvidia GPUs now require a 64-bit operating system.
In May 2025, Nvidia discontinued developer support for the Maxwell, Pascal, and Volta architectures, which includes the GeForce 10 series.[75] On July 1, 2025, Nvidia announced that driver branch 580 will be the last to support these architectures.[76]
References
[edit]- ^ "OpenCL Driver Support | NVIDIA Developer". Nvidia Developer. April 24, 2013. Retrieved June 10, 2022.
- ^ "OpenGL Driver Support | NVIDIA Developer". developer.nvidia.com. NVIDIA. August 19, 2013. Retrieved June 10, 2022.
- ^ "Vulkan Driver Support | NVIDIA Developer". Nvidia Developer. February 10, 2016. Archived from the original on April 8, 2016. Retrieved April 25, 2018.
- ^ Kampman, Jeffrey (July 31, 2025). "Nvidia confirms end of Game Ready driver support for Maxwell and Pascal GPUs — affected products will get optimized drivers through October 2025". Tom's Hardware. Retrieved August 21, 2025.
- ^ "NVIDIA Updates GPU Roadmap; Announces Pascal". Nvidia. Archived from the original on March 25, 2014. Retrieved July 30, 2014.
- ^ a b btarunr (September 17, 2015). "NVIDIA "Pascal" GPUs to be Built on 16 nm TSMC FinFET Node". TechPowerUp. Archived from the original on September 18, 2015. Retrieved August 25, 2015.
- ^ "How The New Pascal Architecture Supports Next-Generation Video Playback". geforce.com. May 17, 2016. Archived from the original on June 10, 2016. Retrieved June 11, 2016.
- ^ "Nvidia Pascal HDCP 2.2". GeForce. Archived from the original on May 8, 2016. Retrieved May 8, 2016.
- ^ "Whether GT1030 is support nvenc encoder?". forums.developer.nvidia.com. December 6, 2017. Archived from the original on May 27, 2018. Retrieved May 25, 2018.
- ^ Shrout, Ryan (July 14, 2016). "3DMark Time Spy: Looking at DX12 Asynchronous Compute Performance". PC Perspective. Archived from the original on July 15, 2016. Retrieved July 14, 2016.
- ^ Smith, Ryan (July 20, 2016). "The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. p. 9. Archived from the original on July 23, 2016. Retrieved July 21, 2016.
- ^ Smith, Ryan (July 20, 2016). "The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. p. 10. Archived from the original on July 24, 2016. Retrieved July 21, 2016.
- ^ Smith, Ryan; Wilson, Derek (July 20, 2016). "The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. p. 13. Archived from the original on July 23, 2016. Retrieved July 22, 2016.
- ^ "Release 370 Graphics Drivers for Windows, Version 372.70" (PDF). Nvidia. August 30, 2016. Archived (PDF) from the original on September 9, 2016. Retrieved August 30, 2016.
- ^ Harris, Mark (April 5, 2016). "Inside Pascal: NVIDIA's Newest Computing Platform". Parallel Forall. Nvidia. Archived from the original on May 7, 2017. Retrieved June 3, 2016.
- ^ "NVIDIA Pascal GPU Architecture to Provide 10X Speedup for Deep Learning Apps | NVIDIA Blog". blogs.nvidia.com. Archived from the original on April 2, 2015. Retrieved March 17, 2015.
- ^ Oh, Nate (November 2, 2017). "The NVIDIA GeForce GTX 1070 Ti Founders Edition Review: GP104 Comes in Threes". AnandTech. Archived from the original on November 11, 2017. Retrieved November 10, 2017.
- ^ Burnes, Andrew (May 18, 2016). "GeForce GTX 1080 Founders Edition: Premium Construction & Advanced Features". GeForce.com. Archived from the original on May 21, 2016. Retrieved May 19, 2016.
- ^ "A Quantum Leap for Every Gamer: NVIDIA Unveils the GeForce GTX 1060". nvidianews.nvidia.com. July 7, 2016. Archived from the original on October 19, 2016. Retrieved October 18, 2016.
- ^ a b Nvidia. "GTX 1080 Graphics Card". Archived from the original on May 7, 2016. Retrieved May 7, 2016.
- ^ Smith, Ryan (July 20, 2016). "FP16 Throughput on GP104: Good for Compatibility (and Not Much Else) - The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. Archived from the original on July 23, 2016. Retrieved July 22, 2016.
- ^ "NVIDIA GeForce GT 1010 DDR4 Specs". TechPowerUp. Retrieved June 8, 2022.
- ^ Liu, Zhiye (June 8, 2022). "Colorful Unchains GeForce GT 1010 With DDR4 to Rival iGPUs". Tom's Hardware. Retrieved June 8, 2022.
- ^ WhyCry (June 7, 2022). "Colorful launches entry-level GeForce GT 1010 with DDR4 memory and 20W TDP". VideoCardz. Retrieved June 8, 2022.
- ^ a b "NVIDIA quietly introduces the GeForce GT 1010". NotebookCheck. January 16, 2021. Retrieved February 17, 2021.
- ^ "NVIDIA GeForce GT 1010 Specs". TechPowerUp. Retrieved February 17, 2021.
- ^ Subramaniam, Vaidyanathan (February 2, 2022). "Nvidia GT 1010 now available for US$70, GP108 Pascal card 40% slower than GT 1030; entry-level Radeon RX 6500 XT is a cool 1,378% faster". NotebookCheck. Retrieved June 8, 2022.
- ^ a b Nvidia. "GeForce GT 1030 | GeForce". geforce.com. Archived from the original on May 20, 2017. Retrieved May 17, 2017.
- ^ "NVIDIA GeForce GT 1030 DDR4 Specs". TechPowerUp.
- ^ Paul, Ian (May 17, 2017). "Nvidia quietly launches the GeForce GT 1030, a Radeon RX 550 rival with a modest price". PCWorld. Archived from the original on May 17, 2017. Retrieved May 17, 2017.
- ^ Angelini, Chris (July 13, 2017). "Nvidia GeForce GT 1030 2GB Review". Tom's Hardware. Retrieved July 13, 2017.
- ^ a b Nvidia. "GTX 1050 Graphics Card". Archived from the original on December 22, 2016. Retrieved October 18, 2016.
- ^ "NVIDIA GeForce GTX 1050". TechPowerUp. Archived from the original on March 6, 2017. Retrieved February 28, 2017.
- ^ "Nvidia GTX 1050". Archived from the original on May 21, 2018. Retrieved May 22, 2018.
- ^ "NVIDIA GeForce GTX 1050 Ti". TechPowerUp. Archived from the original on March 6, 2017. Retrieved February 28, 2017.
- ^ Hagedoorn, Hilbert. "Palit GeForce GTX 1050 Ti KalmX Review - Introduction". Guru3D. Archived from the original on February 26, 2017. Retrieved February 24, 2017.
- ^ a b Nvidia. "GTX 1060 Graphics Card". Archived from the original on December 22, 2016. Retrieved August 18, 2016.
- ^ "NVIDIA GeForce GTX 1060 3 GB". TechPowerUp. Archived from the original on March 1, 2017. Retrieved February 28, 2017.
- ^ "ASUS GTX1060-5G-SI". asus.com.cn (in Chinese). Archived from the original on July 4, 2018. Retrieved June 10, 2022.
- ^ "NVIDIA GeForce GTX 1060 5GB". videocardz.net. Archived from the original on July 4, 2018. Retrieved May 31, 2018.
- ^ "NVIDIA GeForce GTX 1060 6 GB". TechPowerUp. Archived from the original on March 1, 2017. Retrieved February 28, 2017.
- ^ Angelini, Chris (July 19, 2016). "Nvidia GeForce GTX 1060 6GB Review". Tom's Hardware. Retrieved July 19, 2016.
- ^ "NVIDIA GeForce 10 Series Graphics Cards". NVIDIA. Archived from the original on December 22, 2016. Retrieved December 23, 2016.
- ^ Nvidia. "GTX 1070 Graphics Card". Archived from the original on October 27, 2017. Retrieved May 18, 2016.
- ^ "NVIDIA GeForce GTX 1070". TechPowerUp. Archived from the original on March 1, 2017. Retrieved February 28, 2017.
- ^ Smith, Ryan (July 20, 2016). "Synthetics - The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. Archived from the original on July 23, 2016. Retrieved July 21, 2016.
- ^ W1zzard (May 17, 2016). "Nvidia GeForce GTX 1080 8 GB". TechPowerUp. Archived from the original on May 21, 2016. Retrieved May 17, 2016.
- ^ W1zzard (June 21, 2016). "Nvidia GeForce GTX 1080 SLI". TechPowerUp. Archived from the original on June 24, 2016. Retrieved June 21, 2016.
- ^ Nvidia. "GTX 1070 Ti Graphics Card". Archived from the original on October 27, 2017. Retrieved October 26, 2017.
- ^ "NVIDIA GeForce GTX 1080". TechPowerUp. Archived from the original on March 1, 2017. Retrieved February 28, 2017.
- ^ Shrout, Ryan. "The GeForce GTX 1080 8GB Founders Edition Review – GP104 Brings Pascal to Gamers". PC Perspective. Archived from the original on May 18, 2016. Retrieved May 17, 2016.
- ^ "NVIDIA GeForce 10 Series Graphics Cards". NVIDIA. Archived from the original on March 1, 2017. Retrieved March 1, 2017.
- ^ Nvidia. "NVIDIA TITAN X Graphics Card with Pascal". Archived from the original on December 22, 2016. Retrieved July 21, 2016.
- ^ "NVIDIA TITAN X Pascal". TechPowerUp. Archived from the original on March 1, 2017. Retrieved March 2, 2017.
- ^ Nvidia. "TITAN Xp Graphics Card with Pascal Architecture". Archived from the original on April 6, 2017. Retrieved April 6, 2017.
- ^ "NVIDIA TITAN Xp". TechPowerUp. Archived from the original on August 24, 2017. Retrieved April 21, 2017.
- ^ "GeForce MX110 Specifications GeForce". geforce.com. Archived from the original on December 24, 2017. Retrieved March 29, 2018.
- ^ "NVIDIA GeForce MX110 Specs". TechPowerUp. Retrieved June 17, 2022.
- ^ "GeForce MX130 Specifications GeForce". geforce.com. Archived from the original on December 25, 2017. Retrieved March 29, 2018.
- ^ "NVIDIA GeForce MX130 Specs". TechPowerUp. Retrieved June 17, 2022.
- ^ Burnes, Andrew (May 25, 2017). "Introducing GeForce MX150 Laptops: Supercharged For Work and Play | Geforce News | NVIDIA". Nvidia. Archived from the original on July 6, 2017. Retrieved June 17, 2022.
- ^ "NVIDIA GeForce MX150 Specs (N17S-LG-A1)". TechPowerUp. Retrieved June 17, 2022.
- ^ "NVIDIA GeForce MX150 Specs (N17S-G1-A1)". TechPowerUp. Retrieved June 17, 2022.
- ^ a b c d e f g h i j "GeForce GTX 10-Series Laptops". nvidia.com. Archived from the original on March 16, 2018. Retrieved June 17, 2022.
- ^ a b "NVIDIA GeForce GTX 1050 Max-Q vs NVIDIA GeForce GTX 1050 Mobile". NotebookCheck. Retrieved June 17, 2022.
- ^ a b "NVIDIA GeForce GTX 1050 Ti Max-Q vs NVIDIA GeForce GTX 1050 Ti Mobile". NotebookCheck. Retrieved June 17, 2022.
- ^ a b "NVIDIA GeForce GTX 1060 Max-Q vs NVIDIA GeForce GTX 1060 Mobile". NotebookCheck. Retrieved June 17, 2022.
- ^ a b "NVIDIA GeForce GTX 1070 Max-Q vs NVIDIA GeForce GTX 1070 Mobile". NotebookCheck. Retrieved June 17, 2022.
- ^ a b "NVIDIA GeForce GTX 1080 Max-Q vs NVIDIA GeForce GTX 1080 Mobile". NotebookCheck. Retrieved June 17, 2022.
- ^ WhyCry (January 20, 2021). "NVIDIA to reintroduce GeForce RTX 2060 and RTX 2060 SUPER to the market". VideoCardz.
- ^ Dent, Steve (February 12, 2021). "NVIDIA revives the GTX 1050 Ti in the face of GPU shortages". Engadget.
- ^ Chacos, Brad (February 11, 2021). "Confirmed: Nvidia taps the GTX 1050 Ti to battle graphics card shortages". PCWorld.
- ^ "Support Plan for 32-bit and 64-bit Operating Systems | NVIDIA". nvidia.custhelp.com.
- ^ "Support Plan for Windows 7 and Windows 8/8.1 | NVIDIA". nvidia.custhelp.com.
- ^ "NVIDIA CUDA Toolkit Release Notes". Retrieved June 3, 2025.
- ^ "NVIDIA confirms end of support for GeForce GTX 700, 900 and 10 series after 580 drivers". VideoCardz.com. Retrieved July 23, 2025.
External links
- Official website
- NVIDIA GeForce GTX 1080 whitepaper
Media related to Nvidia GeForce 10 series video cards at Wikimedia Commons
GeForce 10 series
View on GrokipediaBackground and Development
Announcement and Timeline
The Pascal microarchitecture, which would power the GeForce 10 series, was first announced by Nvidia CEO Jensen Huang during his keynote at the GPU Technology Conference (GTC) in March 2014, positioning it as the successor to the Maxwell architecture used in the GeForce 900 series.[8] This reveal highlighted Pascal's focus on delivering enhanced performance for emerging demands like virtual reality (VR) and 4K gaming, building on Maxwell's foundations while promising significant advancements in compute power and efficiency.[9] The architecture's development was part of Nvidia's broader roadmap to address the growing needs of high-resolution displays and immersive computing experiences.[10] The GeForce 10 series officially debuted on May 6, 2016, with the unveiling of the flagship GeForce GTX 1080 and the mid-range GTX 1070 at a dedicated event, marking the consumer launch of Pascal-based GPUs. The GTX 1080 became available for purchase on May 27, 2016, followed by the GTX 1070 on June 10, 2016, introducing features optimized for VR readiness and 4K resolution gaming as a direct evolution from the 900 series.[3] Subsequent releases expanded the lineup, including the GTX 1060 on July 19, 2016, which targeted mainstream gamers seeking improved performance over Maxwell-era cards.[11] Further additions to the series continued through 2017, with the GTX 1050 and GTX 1050 Ti launching on October 25, 2016, to serve budget-conscious users transitioning from older 900 series models.[12] The GTX 1080 Ti arrived on March 10, 2017, as the top-tier offering, and the GTX 1070 Ti on November 2, 2017, each reinforcing the series' emphasis on VR and 4K capabilities.[13][14] Entry-level options rounded out the portfolio, such as the GT 1030 on May 17, 2017, providing accessible upgrades for basic 4K video playback and light VR support over Maxwell's entry points.[15] Later, the GT 1010 emerged in early 2021 as an ultra-budget extension.[16]Design Goals and Innovations
The development of the GeForce 10 series, powered by NVIDIA's Pascal architecture, aimed to deliver a 1.5x improvement in power efficiency over the preceding Maxwell generation while targeting substantial performance gains for emerging demands such as 4K gaming and virtual reality (VR). Engineers focused on achieving up to 70% higher graphics performance at comparable power envelopes, exemplified by the GeForce GTX 1080's 180W TDP enabling 8,873 GFLOPs compared to the GTX 980's 165W and 4,981 GFLOPs. This efficiency emphasis stemmed from the need to support high-resolution displays at 60Hz with HDR, alongside VR requirements for sustained frame rates above 90 FPS and reduced latency to prevent motion sickness.[17][18] The architecture also prioritized simultaneous graphics and compute workloads through asynchronous compute capabilities, allowing dynamic load balancing between rendering and general-purpose computing tasks without stalling the pipeline.[17] A core innovation was the integration of GDDR5X memory, marking the first consumer GPUs to leverage this technology for up to 43% higher bandwidth than Maxwell's GDDR5, reaching 320 GB/s on the GTX 1080 via a 10 Gbps interface and 8 GB capacity. This upgrade facilitated bandwidth-intensive scenarios like 4K texture streaming and multi-monitor setups, while enhanced memory compression techniques provided an effective 1.7x bandwidth multiplier. For VR-specific advancements, Pascal introduced Simultaneous Multi-Projection (SMP), including Single Pass Stereo and Lens Matched Shading, which optimized rendering for asymmetric VR lenses and delivered up to 2x performance uplift by reducing redundant computations. 
Pixel-level preemption further minimized latency in VR by enabling near-immediate task switching, ensuring responsive interaction in immersive environments.[19][17][20]

The transition from Maxwell's 28 nm process to TSMC's 16 nm FinFET node addressed key challenges in scaling transistor density and clock speeds, packing 7.2 billion transistors into a more compact die for improved energy efficiency and thermal management. This shift enabled Pascal to achieve about 1.5x better performance per watt overall, with a unified architecture scaling from power-constrained mobile GPUs to high-end desktops; multi-GPU support on GeForce cards used SLI with a new high-bandwidth (HB) bridge, while the NVLink interconnect was reserved for the data-center-oriented GP100. Despite the complexities of FinFET adoption, such as optimizing gate control at frequencies above 1.7 GHz, the process yielded significant gains in compute throughput for mixed workloads, positioning the GeForce 10 series as a versatile platform for gaming, content creation, and early AI acceleration.[18][17][21]

Architecture
Pascal Microarchitecture
The Pascal microarchitecture, employed in the GeForce 10 series graphics processing units, represents Nvidia's evolution from the preceding Maxwell architecture, emphasizing parallel processing efficiency and support for emerging workloads such as virtual reality and machine learning. At its core, Pascal organizes compute resources into Streaming Multiprocessors (SMs), the fundamental execution units that run threads in groups of 32 known as warps. Each SM in GeForce-oriented Pascal variants, such as the GP104 graphics processor, incorporates 128 CUDA cores for general-purpose computing and graphics shading, paired with 8 texture mapping units (TMUs) to accelerate texture sampling. Each SM also includes dedicated load/store hardware, 96 KB of shared memory, and an L1/texture cache.[17][22][23]

As in Maxwell, each Pascal SM is partitioned into four processing blocks, each with its own warp scheduler and dispatch units, so four warps can issue instructions concurrently. This keeps the 128 cores fed, reducing latency from divergent warps and improving throughput for mixed workloads by up to 1.5x in certain scenarios compared to prior generations.[23][18] The generous per-SM shared memory benefits bandwidth-intensive compute tasks such as physics simulation.[23]

Pascal chips are fabricated on TSMC's 16 nm FinFET process, enabling denser integration and higher operating frequencies while maintaining power efficiency.
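The SM hierarchy described above can be checked arithmetically; this sketch uses the GP104 figures from this section:

```python
# GP104 SM layout as described: 128 CUDA cores per SM, threads grouped
# into warps of 32, and four processing blocks per SM (one warp
# scheduler each), so four warps can issue per SM per cycle.
CORES_PER_SM = 128
WARP_SIZE = 32
BLOCKS_PER_SM = CORES_PER_SM // WARP_SIZE  # 4

def sm_count(total_cuda_cores):
    """Number of active SMs implied by a card's CUDA core count."""
    return total_cuda_cores // CORES_PER_SM

full_gp104 = sm_count(2560)   # GTX 1080: 20 SMs
cut_gp104 = sm_count(1920)    # GTX 1070: 15 SMs enabled
```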
This process node facilitates transistor densities around 23 million per square millimeter, as seen in GP104 with a die size of 314 mm² and approximately 7.2 billion transistors.[24][22] The design includes a high-bandwidth on-chip memory hierarchy and extends unified virtual addressing to 49 bits, allowing seamless access to system memory beyond the GPU's physical limits for large-scale compute applications.[21]

GeForce 10 series GPUs implement CUDA compute capability 6.1, adding features such as hardware atomic addition for double-precision values in global memory and unified-memory page faulting. Pascal's GeForce chips execute FP16 (half-precision) arithmetic at only 1/64 the rate of FP32 (single-precision), enough for functional testing of reduced-precision workloads but far less optimized than the data-center GP100, which runs FP16 at twice the FP32 rate.[25] FP16 is exposed through paired-operation instructions such as half-precision fused multiply-add, which processes two FP16 values per instruction.[26]

The graphics pipeline receives targeted enhancements for modern rendering, notably Simultaneous Multi-Projection (SMP), which allows a single geometry pass to generate up to 16 projections, optimized for VR headsets by cutting redundant shading work in lens-distorted views. This integrates with Lens Matched Shading to minimize overdraw in peripheral vision areas, improving VR frame rates without sacrificing visual fidelity. Anisotropic filtering remains supported at up to 16x, delivered more efficiently thanks to increased TMU throughput and cache coherency.[27]

Manufacturing Process and Features
The GeForce 10 series GPUs, based on the Pascal microarchitecture, were primarily fabricated on TSMC's 16 nm FinFET process for the larger chips (GP102, GP104, GP106), enabling significant improvements in transistor density and power efficiency over the preceding 28 nm Maxwell generation.[18] The smaller GP107 and GP108 chips, used in cards such as the GTX 1050 series and GT 1030, were fabricated on Samsung's 14 nm FinFET process.[28] The move to FinFET underpinned the series' efficiency gains, with the 180 W GTX 1080 outperforming the 250 W previous-generation GTX 980 Ti while drawing substantially less power, thanks to reduced leakage and optimized voltage scaling.[29]

Display connectivity featured DisplayPort 1.4 support (without DSC) and HDMI 2.0b with 4K at 60 Hz and HDR, allowing simultaneous output to up to four displays for multi-monitor setups.[4] These interfaces ensured compatibility with emerging high-resolution standards without external adapters.

Power management advanced with GPU Boost 3.0, which dynamically adjusts clock speeds and voltages against real-time thermal, power, and current limits to maximize performance while avoiding throttling.[30] Unlike the fixed frequency offset of earlier versions, GPU Boost 3.0 allows per-point adjustment of the voltage-frequency curve, letting the GPU sustain higher boost clocks under varying workloads.
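The limit-driven behavior of GPU Boost 3.0 can be illustrated with a deliberately simplified model; the real algorithm and its telemetry are proprietary, and the curve values below are invented for illustration:

```python
# Hypothetical sketch: pick the highest point on a voltage-frequency
# curve whose predicted power and temperature stay within the limits.
def select_boost_clock(vf_curve, power_limit_w=180.0, temp_limit_c=83.0):
    """vf_curve: ascending list of (clock_mhz, predicted_w, predicted_c)."""
    best = vf_curve[0][0]
    for clock_mhz, power_w, temp_c in vf_curve:
        if power_w <= power_limit_w and temp_c <= temp_limit_c:
            best = clock_mhz
    return best

# Invented curve points: beyond 1733 MHz the predicted power draw
# exceeds the 180 W limit, so the boost clock settles there.
curve = [(1607, 140, 65), (1733, 170, 74), (1847, 185, 80), (1911, 205, 86)]
select_boost_clock(curve)  # -> 1733
```

Raising the power limit (as overclocking tools permit) lets the same curve settle at a higher point, which is the essence of the per-point curve tuning described above.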
Reference designs for high-end models, sold as the Founders Edition, used a vapor chamber to spread heat from the die, paired with a blower-style radial fan that exhausts air directly out of the case, sustaining performance in compact builds.[2] The series also carried an updated NVENC hardware encoder supporting H.264 and HEVC, adding 10-bit HEVC encoding and improved quality and throughput for streaming and recording applications.[18]

Product Lineup
Desktop GPUs
The GeForce 10 series desktop GPUs, powered by Nvidia's Pascal microarchitecture, spanned high-end, mid-range, and entry-level segments, offering substantial improvements in performance and efficiency over prior generations. High-end models like the GeForce GTX 1080 and GTX 1080 Ti targeted 4K gaming and virtual reality (VR), while mid-range options such as the GTX 1070, GTX 1070 Ti, and GTX 1060 focused on 1440p and 1080p resolutions, and entry-level cards including the GTX 1050 series and GT 1030 emphasized budget-friendly upgrades for lighter workloads.[31]

High-End Models
The flagship GeForce GTX 1080, based on the GP104 GPU, featured 2560 CUDA cores, 8 GB of GDDR5X memory on a 256-bit interface, a base clock of 1607 MHz, a boost clock of 1733 MHz, and a thermal design power (TDP) of 180 W. The GeForce GTX 1080 Ti, on the larger GP102 GPU, raised this to 3584 CUDA cores and 11 GB of GDDR5X memory on a 352-bit interface, with base/boost clocks of 1481/1582 MHz and a 250 W TDP. These cards delivered strong rasterization performance; the GTX 1080 achieved approximately 70% higher frame rates than the previous-generation GTX 980 in select 4K gaming benchmarks.[31] Both models carried Nvidia's VR Ready certification, supporting the low latency and high frame rates required by headsets such as the Oculus Rift and HTC Vive.[4]

Mid-Range Models
Mid-range desktop GPUs included the GeForce GTX 1070 (GP104) with 1920 CUDA cores, 8 GB of GDDR5 memory on a 256-bit interface, base/boost clocks of 1506/1683 MHz, and a 150 W TDP. The GeForce GTX 1070 Ti (also GP104), released in November 2017, offered 2432 CUDA cores with the same 8 GB, 256-bit memory configuration, base/boost clocks of 1607/1683 MHz, and a 180 W TDP.[32] The GeForce GTX 1060 (GP106) came in 6 GB and 3 GB variants with 1280 and 1152 CUDA cores respectively, both on a 192-bit interface with base/boost clocks of 1506/1708 MHz and a 120 W TDP. These cards excelled at 1440p gaming, with the GTX 1070 providing around a 60% uplift over the GTX 970 in demanding titles at that resolution. Like their high-end counterparts, the GTX 1070, GTX 1070 Ti, and GTX 1060 (6 GB) were certified VR Ready, bringing VR gaming within reach without top-tier hardware.[31][4]

Entry-Level Models
Entry-level options comprised the GeForce GTX 1050 Ti (GP107) with 768 CUDA cores, 4 GB of GDDR5 memory on a 128-bit interface, base/boost clocks of 1290/1392 MHz, and a 75 W TDP, alongside the standard GTX 1050 (also GP107) with 640 CUDA cores and 2 GB of GDDR5 at base/boost clocks of 1354/1455 MHz and the same 75 W TDP; a later 3 GB GTX 1050 variant used 768 cores on a narrower 96-bit bus at 1392/1518 MHz. The GeForce GT 1030 (GP108) served as the most affordable model, with 384 CUDA cores and 2 GB of memory on a 64-bit interface (GDDR5 or DDR4 variants), fitting a PCIe 3.0 x16 slot wired for x4 lanes. It has no external power connector and draws all power from the slot, with a recommended 300 W system power supply; the GDDR5 variant runs at 1228/1468 MHz with a 30 W TDP, while the DDR4 variant uses lower clocks at a 20 W TDP. These GPUs targeted 1080p eSports and multimedia tasks, with the GTX 1050 Ti offering playable performance in contemporary games at medium settings. While not all entry-level models received full VR Ready status, the GTX 1050 Ti supported basic VR compatibility in optimized applications.[31][33][4]

Board partners such as ASUS, MSI, and Gigabyte produced custom variants of these desktop GPUs, often with factory overclocks (e.g., MSI's Gaming X series boosting the GTX 1080 to 1746 MHz) and enhanced cooling such as dual- or triple-fan designs with RGB lighting for better thermals and acoustics under load. These aftermarket cards kept Nvidia's reference specifications while adding overclocking headroom and enthusiast aesthetics.

| Model | GPU Die | CUDA Cores | Memory | Base/Boost Clock (MHz) | TDP (W) |
|---|---|---|---|---|---|
| GTX 1080 Ti | GP102 | 3584 | 11 GB GDDR5X | 1481/1582 | 250 |
| GTX 1080 | GP104 | 2560 | 8 GB GDDR5X | 1607/1733 | 180 |
| GTX 1070 Ti | GP104 | 2432 | 8 GB GDDR5 | 1607/1683 | 180 |
| GTX 1070 | GP104 | 1920 | 8 GB GDDR5 | 1506/1683 | 150 |
| GTX 1060 6 GB | GP106 | 1280 | 6 GB GDDR5 | 1506/1708 | 120 |
| GTX 1060 3 GB | GP106 | 1152 | 3 GB GDDR5 | 1506/1708 | 120 |
| GTX 1050 Ti | GP107 | 768 | 4 GB GDDR5 | 1290/1392 | 75 |
| GTX 1050 | GP107 | 640 | 2/4 GB GDDR5 | 1354/1455 | 75 |
| GT 1030 | GP108 | 384 | 2 GB GDDR5 | 1228/1468 | 30 |
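The memory bandwidth of each card in the table follows from its bus width and per-pin data rate; the sketch below assumes the standard effective rates for these parts (10 and 11 Gbps GDDR5X, 8 Gbps GDDR5, 7 Gbps on the GTX 1050 cards, and 6 Gbps on the GDDR5 GT 1030):

```python
# Bandwidth (GB/s) = bus width in bits / 8 x effective data rate in Gbps.
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

specs = {  # model: (bus width in bits, data rate in Gbps)
    "GTX 1080 Ti": (352, 11.0),
    "GTX 1080":    (256, 10.0),
    "GTX 1070 Ti": (256, 8.0),
    "GTX 1070":    (256, 8.0),
    "GTX 1060":    (192, 8.0),
    "GTX 1050 Ti": (128, 7.0),
    "GT 1030":     (64, 6.0),
}
bandwidths = {model: bandwidth_gb_s(*spec) for model, spec in specs.items()}
# e.g. GTX 1080 Ti: 484 GB/s, GTX 1080: 320 GB/s, GTX 1060: 192 GB/s
```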
Mobile GPUs
The GeForce 10 series mobile GPUs were designed for laptops and, unlike prior generations, dropped the "M" suffix: Nvidia marketed the laptop parts under the same names as their desktop counterparts, reflecting near-desktop specifications. Introduced alongside the desktop variants in 2016, these GPUs used the same Pascal silicon but with adjustments for the thermal constraints and power budgets of notebooks. Nvidia added Max-Q variants in 2017 for slim laptops, which lower voltages and clock speeds to fit high-end chips into smaller power envelopes, exemplified by the GTX 1080 Max-Q operating at around 90 W versus the 150 W TDP of the full laptop GTX 1080.[34]

Key specifications illustrate these adaptations. The laptop GTX 1070, based on GP104, includes 2048 CUDA cores (more than its 1920-core desktop counterpart), 8 GB of GDDR5 on a 256-bit bus, a 1443 MHz base clock, boost clocks up to 1645 MHz, and a typical 115 W TDP. The entry-level laptop GTX 1050 uses GP107 with 640 CUDA cores, 4 GB of GDDR5 on a 128-bit bus, base/boost clocks of 1354/1493 MHz, and a 50 W TDP suited to thinner chassis. These configurations delivered solid 1080p gaming within laptop cooling limits.[35][36][37][38]

Optimizations balanced performance against battery life and heat, including Nvidia's BatteryBoost technology, which caps frame rates at 30 or 60 FPS during unplugged gaming, extending battery life by up to 2x over uncapped play. Many laptops also incorporated MUX switches, letting the discrete GPU drive the display directly and bypass the integrated graphics for lower latency and up to 20% higher frame rates in demanding titles.
Overall, mobile TDPs ranged from about 40 W for low-end models to 120 W for high-end ones, giving manufacturers flexibility across chassis designs.[39][40][41] In benchmarks, these mobile GPUs achieved roughly 80-90% of their desktop equivalents' performance, with the laptop GTX 1070 sustaining high settings at 1080p in games tested in 2016-2017 reviews, though prolonged sessions could be thermally limited. They appeared in gaming laptops from 2016 to 2020, including the Razer Blade series for premium thin-and-light builds and MSI's GT series for high-performance models, powering VR-ready experiences in portable form factors.[42][43][44][45]

| Model | Chip | CUDA Cores | Memory | Base/Boost Clock (MHz) | TDP (W) |
|---|---|---|---|---|---|
| GTX 1070 Mobile | GP104M | 2048 | 8 GB GDDR5 | 1443 / 1645 | 115 |
| GTX 1050 Mobile | GP107M | 640 | 4 GB GDDR5 | 1354 / 1493 | 50 |
| GTX 1080 Max-Q | GP104M | 2560 | 8 GB GDDR5X | 1290 / 1468 | 90 |
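BatteryBoost's frame-rate cap trades unused performance for battery life; a minimal user-space analogue of such a limiter (the actual driver-level mechanism is proprietary) can be sketched as:

```python
import time

def run_capped(render_frame, fps_cap=30, frames=3):
    """Render each frame, then sleep out the rest of the frame interval
    so the GPU idles instead of burning power on frames beyond the cap."""
    frame_time = 1.0 / fps_cap
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()  # the game's actual rendering work
        elapsed = time.perf_counter() - start
        if elapsed < frame_time:
            time.sleep(frame_time - elapsed)

# Three capped frames at 30 FPS take at least ~0.1 s of wall time,
# regardless of how quickly each frame actually renders.
```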