GeForce 500 series
An Nvidia GeForce GTX 590, the series' flagship model, released in 2011; this one from EVGA

| Release date | November 9, 2010 |
|---|---|
| Codename | GF11x |
| Architecture | Fermi |
| Models | GeForce series |
| Transistors | 292M, 40 nm (GF119) |
| Cards | |
| Entry-level | 510, GT 520, GT 530 |
| Mid-range | GT 545, GTX 550 Ti, GTX 555, GTX 560, GTX 560 Ti, GTX 560 SE |
| High-end | GTX 570, GTX 580 |
| Enthusiast | GTX 590 |
| API support | |
| Direct3D | Direct3D 12.0 (feature level 11_0),[1] Shader Model 5.1 |
| OpenCL | OpenCL 1.1 |
| OpenGL | OpenGL 4.6 |
| History | |
| Predecessor | GeForce 400 series |
| Successor | GeForce 600 series |
| Support status | Unsupported |
The GeForce 500 series is a series of graphics processing units developed by Nvidia as a refresh of the Fermi-based GeForce 400 series. It was first released on November 9, 2010, with the GeForce GTX 580.
Its direct competitor was AMD's Radeon HD 6000 series; they were launched approximately a month apart.
Overview
The Nvidia GeForce 500 series graphics cards are significantly modified versions of the GeForce 400 series cards in terms of performance and power management. Like the GeForce 400 series, the GeForce 500 series supports Direct3D 12.0 (feature level 11_0), OpenGL 4.6, and OpenCL 1.1.
The refreshed Fermi chip includes 512 stream processors, grouped into 16 streaming multiprocessor (SM) clusters of 32 CUDA cores each, and is manufactured by TSMC on a 40 nm process.
The Nvidia GeForce GTX 580 was the first card in the series to use a fully enabled chip based on the refreshed Fermi architecture, with all 16 streaming multiprocessor clusters and all six 64-bit memory controllers active. The new GF110 GPU added full-speed FP16 filtering (the previous-generation GF100 could only do half-speed FP16 filtering) and improved z-culling units.
On January 25, 2011, Nvidia launched the GeForce GTX 560 Ti to target the "sweet spot" segment, where the price/performance ratio is paramount. More than 30% faster than the GTX 460, with performance between the Radeon HD 6870 and the HD 6950 1 GB, the GTX 560 Ti directly replaced the GeForce GTX 470.
On February 17, 2011, it was reported that the GeForce GTX 550 Ti would launch on March 15, 2011. Although it uses the mainstream GF116 chip, Nvidia chose to name the new card GTX 550 Ti rather than GTS 550. Performance was shown to be at least comparable to, and up to 12% faster than, the Radeon HD 5770, and the card was priced in the range occupied by the GeForce GTX 460 (768 MB) and the Radeon HD 6790.[2]
On March 24, 2011, the GTX 590 was launched as the flagship graphics card for Nvidia. The GTX 590 is a dual-GPU card, similar to past releases such as the GTX 295, and boasted the potential to handle Nvidia's 3D Vision technology by itself.[3]
On April 13, 2011, the GT 520 was launched as the bottom-end card of the range, with lower performance than the equivalently numbered cards of the two previous generations, the GT 220 and GT 420.[citation needed] However, it supported DirectX 11 and was more powerful than the GeForce 210, the GeForce 310, and the integrated graphics options on Intel CPUs.
On May 17, 2011, Nvidia launched a less expensive (non-Ti) version of the GeForce GTX 560 to strengthen its price-performance position in the $200 range. Like the faster GTX 560 Ti before it, this card outperformed the GeForce GTX 460, which it would eventually replace. Standard versions performed comparably to the AMD Radeon HD 6870; factory-overclocked premium versions were slightly faster than the Radeon HD 6870, approaching basic versions of the Radeon HD 6950 and the GTX 560 Ti.
On November 28, 2011, Nvidia launched the "GTX 560 Ti With 448 Cores".[4] Despite the name, it does not use GTX 560 series silicon: it is a GF110 chip with two shader blocks disabled. The most powerful member of the 560 lineup, it was widely known to be a limited-production card that leveraged the popularity of the GTX 560 brand for the 2011 holiday season. Its performance sits between the regular GTX 560 Ti and the GTX 570.
Products
GeForce 500 (5xx) series

- 1 Unified shaders : texture mapping units : render output units
- 2 Each streaming multiprocessor (SM) in the GF110 contains 32 SPs and 4 SFUs; each SM in the GF114/116/118 architecture contains 48 SPs and 8 SFUs. Each SP can retire up to two single-precision operations (one FMA) per clock, and each SFU up to four SF operations per clock, giving an approximate FMA-to-SF ratio of 4:1. The theoretical single-precision (FMA) shader performance FLOPSsp (GFLOPS) of a card with shader count n and shader frequency f (GHz) is estimated as FLOPSsp ≈ f × n × 2; equivalently, FLOPSsp ≈ f × m × (32 SPs × 2 FMA), where m is the SM count. Including SFU throughput, the total processing power is FLOPSsp ≈ f × m × (32 SPs × 2 FMA + 4 SFUs × 4), or FLOPSsp ≈ f × n × 2.5. (A worked example follows these notes.)
- 3 Each SM in the GF110 contains 4 texture filtering units for every texture address unit; the complete GF110 die contains 64 texture address units and 256 texture filtering units.[5] Each SM in the GF114/116/118 architecture contains 8 texture filtering units per texture address unit, with both addressing and filtering unit counts doubled.
All products are produced using a 40 nm fabrication process. All products support DirectX 12.0, OpenGL 4.6 and OpenCL 1.1.
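As an illustration of the formula in note 2, here is a minimal Python sketch (the function name is ours; the card figures come from the chipset table below):

```python
# Single-precision throughput per note 2: each shader retires up to
# one FMA (2 FLOPs) per shader clock, so FLOPS_sp ≈ f × n × 2.
def sp_gflops(shader_clock_mhz: float, shader_count: int) -> float:
    return shader_clock_mhz / 1000 * shader_count * 2

print(sp_gflops(1544, 512))  # GeForce GTX 580    -> 1581.056 (table: 1581.1)
print(sp_gflops(1645, 384))  # GeForce GTX 560 Ti -> 1263.36  (table: 1263.4)
```

Counting SFU throughput, as in note 2's total-processing-power variant, simply replaces the factor 2 with 2.5.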
GeForce 500M (5xxM) series
The GeForce 500M series comprises the notebook versions of the architecture.
All models support DirectX 12 (feature level 11_0), OpenGL 4.6, and OpenCL 1.1; none support Vulkan. Cells with multiple values separated by slashes denote distinct variants of the same model.

| Model | Launch | Code name | Fab (nm) | Bus interface | Core config1 | Core clock (MHz) | Shader clock (MHz) | Memory clock (MHz) | Pixel fillrate (GP/s) | Texture fillrate (GT/s) | Memory size (MiB) | Bandwidth (GB/s) | Bus type | Bus width (bit) | Processing power2 (GFLOPS) | TDP (watts) | Notes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GeForce GT 520M | January 5, 2011 | GF119 | 40 | PCIe 2.0 x16 | 48:8:4 | 740 | 1480 | 1600 | 2.96 | 5.92 | 1024 | 12.8 | DDR3 | 64 | 142.08 | 12 | |
| GeForce GT 520M | January 5, 2011 | GF108 | 40 | PCIe 2.0 x16 | 96:16:4 | 515 | 1030 | 1600 | 2.06 | 8.24 | 1024 | 12.8 | DDR3 | 64 | 197.76 | 20 | Found in Lenovo laptops |
| GeForce GT 520MX | May 30, 2011 | GF119 | 40 | PCIe 2.0 x16 | 48:8:4 | 900 | 1800 | 1800 | 3.6 | 7.2 | 1024 | 14.4 | DDR3 | 64 | 172.8 | 20 | |
| GeForce GT 525M | January 5, 2011 | GF108 | 40 | PCIe 2.0 x16 | 96:16:4 | 600 | 1200 | 1800 | 2.4 | 9.6 | 1024 | 28.8 | DDR3 | 128 | 230.4 | 20-23 | |
| GeForce GT 540M | January 5, 2011 | GF108 | 40 | PCIe 2.0 x16 | 96:16:4 | 672 | 1344 | 1800 | 2.688 | 10.752 | 1024 | 28.8 | DDR3 | 128 | 258.048 | 32-35 | |
| GeForce GT 550M | January 5, 2011 | GF108 | 40 | PCIe 2.0 x16 | 96:16:4 | 740 | 1480 | 1800 | 2.96 | 11.84 | 1024 | 28.8 | DDR3 | 128 | 284.16 | 32-35 | |
| GeForce GT 555M | January 5, 2011 | GF106 / GF108 | 40 | PCIe 2.0 x16 | 144:24:24 / 144:24:16 / 96:16:4 | 590 / 650 / 753 | 1180 / 1300 / 1506 | 1800 / 1800 / 3138 | 14.6 / 10.4 / 3 | 14.6 / 15.6 / 12 | 1536 / 2048 / 1024 | 43.2 / 28.8 / 50.2 | DDR3 / DDR3 / GDDR5 | 192 / 128 / 128 | 339.84 / 374.4 / 289.15 | 30-35 | Three variants |
| GeForce GTX 560M | May 30, 2011 | GF116 | 40 | PCIe 2.0 x16 | 192:32:16 / 192:32:24 | 775 | 1550 | 2500 | 18.6 | 24.8 | 2048 / 1536, 3072 | 40.0 / 60.0 | GDDR5 | 128 / 192 | 595.2 | 75 | |
| GeForce GTX 570M[6] | June 28, 2011 | GF114 | 40 | PCIe 2.0 x16 | 336:56:24 | 575 | 1150 | 3000 | 13.8 | 32.2 | 1536 | 72.0 | GDDR5 | 192 | 772.8 | 75 | |
| GeForce GTX 580M | June 28, 2011 | GF114 | 40 | PCIe 2.0 x16 | 384:64:32 | 620 | 1240 | 3000 | 19.8 | 39.7 | 2048 | 96.0 | GDDR5 | 256 | 952.3 | 100 | |
Chipset table
GeForce 500 (5xx) series
[edit]| Model | Launch | Code name | Fab (nm) | Transistors (million) | Die size (mm2) | Bus interface | SM count | Core config[a][b] | Clock rate | Fillrate | Memory configuration | Supported API version | Processing power (GFLOPS)[c] | TDP (watts)[d] | Release price (USD) | ||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Core (MHz) | Shader (MHz) | Memory (MHz) | Pixel (GP/s) | Texture (GT/s) | Size (MB) | Bandwidth (GB/s) | DRAM type | Bus width (bit) | Vulkan | Direct3D | OpenGL | OpenCL8 | Single precision | Double precision | |||||||||||
| GeForce 510 | September 29, 2011 | GF119 | TSMC 40 nm | 292 | 79 | PCIe 2.0 x16 | 1 | 48:8:4 | 523 | 1046 | 1800 | 2.1 | 4.5 | 1024 2048 |
14.4 | DDR3 | 64 | n/a[9] | 12 (11_0) | 4.6 | 1.1 | 100.4 | Unknown | 25 | OEM |
| GeForce GT 520 | April 12, 2011 | PCIe 2.0 x16 PCIe 2.0 x1 PCI |
810 | 1620 | 3.25 | 6.5 | 14.4 | 155.5 | Unknown | 29 | 59 | ||||||||||||||
| GeForce GT 530[10] | May 14, 2011 | GF108-220 | 585 | 116 | PCIe 2.0 x16 | 2 | 96:16:4 | 700 | 1400 | 2.8 | 11.2 | 28.8 | 128 | 268.8 | 22.40 | 50 | OEM | ||||||||
| GeForce GT 545 | GF116 | ~1170 | ~238 | 3 | 144:24:16 | 720 | 1440 | 11.52 | 17.28 | 1536 3072 |
43 | 192 | 415.07 | Unknown | 70 | 149 | |||||||||
| 870 | 1740 | 3996 | 13.92 | 20.88 | 1024 | 64 | GDDR5 | 128 | 501.12 | Unknown | 105 | OEM | |||||||||||||
| GeForce GTX 550 Ti | March 15, 2011 | GF116-400 | 4 | 192:32:24 | 900 | 1800 | 4104 | 21.6 | 28.8 | 768+256 1536 |
65.7+32.8 98.5 |
128+64[e] 192 |
691.2 | Unknown | 116 | 149 | |||||||||
| GeForce GTX 555 | May 14, 2011 | GF114 | 1950 | 332 | 6 | 288:48:24 | 736 | 1472 | 3828 | 17.6 | 35.3 | 1024 | 91.9 | 128+64[e] | 847.9 | Unknown | 150 | OEM | |||||||
| GeForce GTX 560 SE | February 20, 2012[11] | GF114-200-KB-A1[f] | Unknown | ||||||||||||||||||||||
| GeForce GTX 560 | May 17, 2011 | GF114-325-A1[f] | 7 | 336:56:32 | 810 | 1620 | 4008 | 25.92 | 45.36 | 1024 2048 | 128.1 | 256 | 1088.6 | Unknown | 199 | ||||||||||
| GeForce GTX 560 Ti | January 25, 2011 | GF114-400-A1[f] | 8 | 384:64:32 | 822 | 1645 | 26.3 | 52.61 | 128.26 | 1263.4 | 110 | 170 | 249 | ||||||||||||
| May 30, 2011 | GF110[g] | 3000[13] | 520[13] | 11 | 352:44:40 | 732 | 1464 | 3800 | 29.28 | 32.21 | 1280 2560 |
152 | 320 | 1030.7 | 128.83 | 210[d] | OEM | ||||||||
| GeForce GTX 560 Ti 448 Cores | November 29, 2011 | GF110-270-A1[g] | 14 | 448:56:40 | 40.99 | 1280 | 1311.7 | 163.97 | 289 | ||||||||||||||||
| GeForce GTX 570 | December 7, 2010 | GF110-275-A1[g] | 15 | 480:60:40 | 43.92 | 1280 2560 | 1405.4 | 175.68 | 219[d] | 349 | |||||||||||||||
| GeForce GTX 580 | November 9, 2010 | GF110-375-A1[g] | 16 | 512:64:48 | 772 | 1544 | 4008 | 37.05 | 49.41 | 1536 3072[h] |
192.384 | 384 | 1581.1 | 197.63 | 244[d][15] | 499 | |||||||||
| GeForce GTX 590 | March 24, 2011 | 2x GF110-351-A1 | 2x 3000 | 2x 520 | 2x16 | 2x 512:64:48 | 607 | 1215 | 3414 | 2x29.14 | 2x38.85 | 2x 1536 | 2x163.87 | 2x384 | 2488.3 | 311.04 | 365 | 699 | |||||||
| Model | Launch | Code name | Fab (nm) | Transistors (million) | Die size (mm2) | Bus interface | SM count | Core config[a][b] | Clock rate | Fillrate | Memory configuration | Supported API version | Processing power (GFLOPS)[c] | TDP (Watts)[d] | Release price (USD) | ||||||||||
| Core (MHz) | Shader (MHz) | Memory (MHz) | Pixel (GP/s) | Texture (GT/s) | Size (MB) | Bandwidth (GB/s) | DRAM type | Bus width (bit) | Vulkan | Direct3D | OpenGL | OpenCL8 | Single precision | Double precision | |||||||||||
- ^ a b Unified shaders: texture mapping units: render output units
- ^ a b Each SM in the GF110 contains 4 texture filtering units for every texture address unit. The complete GF110 die contains 64 texture address units and 256 texture filtering units.[7] Each SM in the GF114/116/118 architecture contains 8 texture filtering units for every texture address unit but has doubled both addressing and filtering units.
- ^ a b To calculate the processing power see Fermi (microarchitecture)#Performance.
- ^ a b c d e As with the previous generation, the GTX 580 (and likely the GTX 570)[needs update], while improved over GF100, carries a rated TDP below its actual peak power consumption; e.g., the GTX 580 (243 W TDP) is only slightly less power-hungry than the GTX 480 (250 W TDP). Nvidia manages this through driver-level clock throttling when a known power-hungry application that could breach the card's TDP is detected; renaming the application executable disables the throttling and allows full power draw, which in some cases approaches that of the GTX 480.[8]
- ^ a b 1024 MB of RAM on a 192-bit bus is assembled from 4 × 128 MB + 2 × 256 MB chips.
- ^ a b c Internally referred to as GF104B[12]
- ^ a b c d Internally referred to as GF100B[12]
- ^ Some companies have announced that they will be offering the GTX 580 with 3GB RAM.[14]
Support
Nvidia announced that, following the Release 390 drivers, it would no longer release 32-bit drivers for 32-bit operating systems.[16]
Nvidia announced in April 2018 that Fermi would transition to legacy driver support status and be maintained until January 2019.[17]
Counterfeit usage
The cards of this generation, particularly the shorter 550 Ti model, are common choices for counterfeit resellers, who illicitly modify the firmware so the cards report as more modern models such as the GTX 1060 and GTX 1050 Ti. The cards are then sold by scammers via eBay, Taobao, AliExpress, and Wish.com. They may retain just enough functionality to appear legitimate at first glance, but defects caused by the fake BIOS, manufacturing shortcuts, and software issues almost always cause crashes in modern games and applications; even when they do not, performance is still extremely poor.[18]
Notes
[edit]- David Kanter (September 30, 2009). "Inside Fermi: Nvidia's HPC Push". realworldtech.com. Retrieved December 16, 2010.
References
- ^ Killian, Zak (July 3, 2017). "Nvidia finally lets Fermi GPU owners enjoy DirectX 12". Tech Report. Retrieved July 4, 2017.
- ^ "GeForce GTX 550 Ti To Launch On 15 March". Archived from the original on February 18, 2011. Retrieved February 18, 2011.
- ^ "ASUS NVIDIA GeForce GTX 590 graphics card reviewed and rated". Hexus. Retrieved May 29, 2018.
- ^ Smith, Ryan (November 9, 2011). "NVIDIA's GeForce GTX 560 Ti w/448 Cores: GTX 570 On A Budget". Archived from the original on May 22, 2013.
- ^ Nvidia's GeForce GTX 580: Fermi Refined
- ^ "Graphics Cards, Gaming, Laptops, and Virtual Reality from NVIDIA GeForce".
- ^ Smith, Ryan (November 9, 2010). "GF110: Fermi Learns Some New Tricks - Nvidia's GeForce GTX 580: Fermi Refined". AnandTech. Archived from the original on January 13, 2016. Retrieved December 11, 2015.
- ^ Smith, Ryan (November 9, 2010). "Power, Temperature, and Noise - Nvidia's GeForce GTX 580: Fermi Refined". AnandTech. Archived from the original on January 13, 2016. Retrieved December 11, 2015.
- ^ "The Khronos Group". May 31, 2022.
- ^ "NVIDIA GeForce GT 530 OEM Specs". TechPowerUp. Retrieved September 25, 2022.
- ^ "NVIDIA GeForce GTX 560 SE Specs". Retrieved August 6, 2018.[dead link]
- ^ a b "...and GF110s real name is: GF100B". GPU-Tech.org. Archived from the original on January 13, 2016. Retrieved December 11, 2015.
- ^ a b Ryan Smith (November 9, 2010). "Nvidia's GeForce GTX 580: Fermi Refined". AnandTech. Archived from the original on November 10, 2010. Retrieved November 9, 2010.
- ^ "Products - Featured Products". EVGA. Archived from the original on March 24, 2012. Retrieved December 11, 2015.
- ^ "Nvidia GeForce GTX 580 1536 MB Review". TechPowerUp. November 9, 2010. Archived from the original on December 22, 2015. Retrieved December 11, 2015.
- ^ "Support Plan for 32-bit and 64-bit Operating Systems | NVIDIA".
- ^ "Support Plan for Fermi series GeForce GPUs | NVIDIA".
- ^ Burke, Steve. "Fake "GTX 1050 1GB" Scam GPU Benchmark & Review". www.gamersnexus.net. Retrieved December 19, 2020.
External links
- GeForce GTX 590
- GeForce GTX 580
- GeForce GTX 570
- GeForce GTX 560 Ti
- GeForce GTX 560 Ti (OEM)
- GeForce GTX 560
- GeForce GTX 560 (OEM)
- GeForce GTX 550 Ti
- GeForce GT 545 GDDR5 (OEM)
- GeForce GT 545 DDR3
- GeForce GT 530 (OEM)
- GeForce GT 520
- GeForce GT 520 (OEM)
- GeForce 510 (OEM)
- GeForce GTX 580M
- GeForce GTX 570M
- GeForce GTX 560M
- GeForce GT 555M
- GeForce GT 550M
- GeForce GT 540M
- GeForce GT 525M
- GeForce GT 520MX
- GeForce GT 520M
- Nvidia Nsight
- techPowerUp! GPU Database
Development and Background
Announcement and Release Timeline
NVIDIA unveiled the Fermi architecture that underpins the GeForce 500 series during a keynote at the Consumer Electronics Show (CES) 2010 on January 7, 2010, introducing a new generation of graphics processing units focused on enhanced parallel computing.[7] The reveal highlighted the architecture's potential for both gaming and general-purpose computing, setting the stage for subsequent product launches.[8]

While the initial Fermi-based consumer products shipped under the GeForce 400 series in 2010, the GeForce 500 series proper arrived in late 2010, beginning with the flagship GeForce GTX 580 on November 9, 2010, followed by the GeForce GTX 570 on December 7, 2010.[1][9] These high-end models addressed efficiency issues from the 400 series.

Subsequent releases included the GeForce GTX 560 Ti on January 25, 2011, positioned as a high-value performance option.[10] Further additions such as the GeForce GTX 550 Ti on March 15, 2011, the GeForce GTX 560 on May 17, 2011, and the entry-level GeForce GT 520 on April 13, 2011, filled out the mainstream and budget segments.[11][12][13] The dual-GPU GeForce GTX 590 launched on March 24, 2011, as the enthusiast flagship.[3]

For mobile platforms, the GeForce 500M series was announced on January 5, 2011, at CES 2011, introducing high-performance models such as the GeForce GTX 485M alongside mainstream options such as the GT 540M, GT 550M, and GT 555M, with availability starting in early 2011.[5][14] Additional 500M variants followed through 2011, powering premium notebooks from major OEMs.

Major releases for the GeForce 500 series concluded by mid-2011, as NVIDIA shifted focus toward the successor Kepler architecture, which debuted with the GeForce 600 series in March 2012.[15]

| Model | Release Date | Segment |
|---|---|---|
| GeForce GTX 580 | November 9, 2010 | High-end Desktop |
| GeForce GTX 570 | December 7, 2010 | High-end Desktop |
| GeForce GTX 560 Ti | January 25, 2011 | Performance Desktop |
| GeForce GT 545 | February 28, 2011 | Entry-level Desktop |
| GeForce GTX 550 Ti | March 15, 2011 | Mainstream Desktop |
| GeForce GTX 590 | March 24, 2011 | Enthusiast Desktop |
| GeForce GT 520 | April 13, 2011 | Entry-level Desktop |
| GeForce GTX 560 | May 17, 2011 | Mainstream Desktop |
| GeForce GTX 485M | January 5, 2011 | High-end Mobile |
Architectural Foundations
The GeForce 500 series is built on NVIDIA's Fermi architecture, fabricated on TSMC's 40 nm process node, which enabled a significant increase in transistor density; the initial GF100 GPU packed 3 billion transistors. The design marked NVIDIA's move toward a more advanced unified shader model, in which graphics and compute workloads run on the same flexible processing units rather than through fixed-function pipelines. Processing power is organized into up to 16 streaming multiprocessors (SMs) per GPU, each containing 32 CUDA cores, for a total of 512 cores in high-end configurations, allowing enhanced parallel execution of shaders and general-purpose computing tasks.[16][17]

A key innovation in Fermi was hardware support for double-precision floating-point operations, aimed at compute-intensive applications. In consumer GeForce variants, however, peak double-precision throughput is capped at one eighth of the single-precision rate (e.g., 197.63 versus 1581.1 GFLOPS on the GTX 580) to differentiate the cards from professional Tesla products; the arithmetic remains IEEE 754-2008 compliant, but the cap prioritizes single-precision efficiency for gaming and graphics rendering. The architecture also supports error-correcting code (ECC) memory, protecting data in DRAM, caches, and registers with single-error correction and double-error detection (SECDED); ECC is disabled in consumer GeForce models to maximize bandwidth and performance, while it remains available in professional variants for reliability in scientific computing.[18][16]

Development of the initial GF100 chip encountered significant challenges, including power draw exceeding the 250 W TDP class and excessive heat generation, which necessitated aggressive cooling solutions and contributed to launch delays for the GeForce 400 series precursors. These issues stemmed from the large 529 mm² die and dense transistor integration, leading NVIDIA to refine the design in the GF110, which trimmed the die slightly to 520 mm² through architectural optimizations and leakage-current reductions, improving efficiency without sacrificing core counts. Compared with the preceding Tesla architecture (e.g., GT200, with up to 240 CUDA cores), Fermi substantially increased parallelism by more than doubling the core count in flagship models and introduced concurrent kernel execution (up to 16 kernels simultaneously) and faster atomic operations for better utilization in parallel computing workloads.[16]
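As a quick sanity check of that cap against the chipset table earlier in the article, a minimal Python sketch (the function name is ours, for illustration):

```python
# GF110-based GeForce cards execute double precision at one eighth
# the single-precision rate: DP ≈ SP / 8.
def dp_gflops(sp_gflops: float) -> float:
    return sp_gflops / 8

print(dp_gflops(1581.1))  # GTX 580 -> 197.64, matching its listed 197.63 GFLOPS
```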
Technical Architecture

Core Design and Processing Units
The GeForce 500 series GPUs, built on NVIDIA's Fermi microarchitecture, center their processing on streaming multiprocessors (SMs), the fundamental execution units for both graphics rendering and general-purpose computing. Each SM incorporates 32 CUDA cores for scalar floating-point and integer arithmetic, 16 load/store units for memory accesses, 4 special function units (SFUs) optimized for complex operations such as reciprocals, square roots, and trigonometric functions, and a dual warp scheduler that simultaneously dispatches instructions to two independent warps of 32 threads each, enhancing instruction-level parallelism and occupancy. This structure suits diverse workloads, from vertex shading to compute kernels.[19][20]

Texture processing units are tightly integrated within the SMs to accelerate texture mapping and filtering, with each SM featuring 4 texture units capable of bilinear, trilinear, and up-to-16x anisotropic filtering, supporting both 32-bit floating-point (FP32) and integer texture fetches. In high-end configurations such as the GeForce GTX 580, with 16 SMs, this yields 64 texture units across the GPU, enabling high-throughput sampling in complex scenes. The raster operation units (ROPs), responsible for final pixel output tasks including depth testing, alpha blending, and multisample anti-aliasing (MSAA), vary by model to balance cost and capability; the GeForce GTX 480, for instance, employs 48 ROPs, each able to process multiple samples per clock cycle to support advanced anti-aliasing modes such as 8x MSAA.[21]

Clock speeds are tailored to each model's positioning: base GPU clocks run at 810 MHz on the entry-level GeForce GT 520 and 772 MHz on the flagship GeForce GTX 580, while shader clocks operate at double the base frequency for higher computational throughput. These configurations reflect the Fermi architecture's emphasis on balanced execution pipelines, in which unified shader cores process instructions through dedicated floating-point and integer paths, achieving greater efficiency in instruction dispatch and resource utilization than preceding generations.[22][13][19]
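The fillrate figures quoted in the tables above follow directly from unit counts and the core clock; a minimal Python sketch, using the GTX 580's 772 MHz core clock, 48 ROPs, and 64 texture units from the chipset table:

```python
# Pixel fillrate = ROPs × core clock; texture fillrate = TMUs × core clock.
def fillrates(core_mhz: float, rops: int, tmus: int) -> tuple[float, float]:
    ghz = core_mhz / 1000
    return rops * ghz, tmus * ghz  # (GP/s, GT/s)

print(fillrates(772, 48, 64))  # GTX 580 -> (37.056, 49.408), listed as 37.05 / 49.41
```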
Memory Systems and Interfaces

The GeForce 500 series predominantly uses GDDR5 memory, providing high-bandwidth access for graphics and compute workloads, with effective data rates up to 4.0 Gbps in flagship models such as the GTX 580.[1] GDDR5 enables efficient handling of large textures and framebuffers, an advance over the DDR3 used in prior lower-tier options. While entry-level variants such as the GT 520 employed DDR3 at effective rates around 1.8 Gbps, the core lineup shifted to GDDR5 for improved performance in DirectX 11-era applications.[13]

Memory bus widths vary by model tier to balance cost and performance: 384-bit interfaces in high-end GPUs such as the GTX 480 and GTX 580 for maximum throughput, 256-bit in mid-range options such as the GTX 560, and narrower 128-bit or 192-bit buses in entry- and mid-tier cards such as the GTX 550 Ti.[21] Peak bandwidth follows directly from these figures: bandwidth (GB/s) = effective data rate (MT/s) × bus width (bits) / 8 / 1000, where the effective rate already includes the double-data-rate multiplier; the GTX 480, for instance, achieves 177.4 GB/s from its 3.7 Gbps effective rate on a 384-bit bus.[21]

All GeForce 500 series GPUs connect to the host over a PCI Express 2.0 x16 interface, delivering up to 8 GB/s of bandwidth in each direction for transfers between GPU and CPU memory.[19] The GPUs are compatible with PCIe 3.0 slots but operate at PCIe 2.0 speeds, with minimal real-world performance impact given the bandwidth available.[23]

The memory hierarchy includes a per-SM L1 cache combined with shared memory, 64 KB in total and configurable as 16 KB L1 plus 48 KB shared memory or vice versa, to accelerate local loads, stores, and texture fetches. A unified L2 cache, sized by model (768 KB in high-end GPUs such as the GTX 580, 512 KB in mid-range models such as the GTX 560), services all global memory requests from the SMs and interfaces with the external GDDR5 DRAM, reducing latency for coherent accesses across processing units. This design improves overall efficiency by keeping frequently accessed data close to the compute cores, integrating seamlessly with the Fermi architecture's execution hardware.[19][12]
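A minimal Python sketch of that bandwidth arithmetic, with memory-clock and bus-width figures taken from the chipset table (function name illustrative):

```python
# Peak DRAM bandwidth = effective data rate (MT/s) × bus width in bytes.
# The effective rate already includes the DDR/GDDR5 multiplier.
def bandwidth_gb_s(effective_mt_s: float, bus_width_bits: int) -> float:
    return effective_mt_s * (bus_width_bits / 8) / 1000

print(bandwidth_gb_s(4008, 384))  # GTX 580    -> 192.384 GB/s
print(bandwidth_gb_s(4008, 256))  # GTX 560 Ti -> 128.256 GB/s
```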
Product Lineup

Desktop GeForce 500 Series GPUs
The desktop GeForce 500 series GPUs formed NVIDIA's second-generation Fermi-based graphics cards for personal computers, targeting gamers and content creators seeking DirectX 11 support and advanced visual effects. The cards emphasized improved efficiency over the initial Fermi lineup while retaining multi-GPU support via Scalable Link Interface (SLI) technology, allowing up to three cards in high-end configurations for enhanced performance.

The high-end segment featured flagship models optimized for demanding applications and extreme resolutions. The GeForce GTX 580, launched on November 9, 2010, included 512 CUDA cores and 1.5 GB of GDDR5 memory on a 384-bit interface, positioning it as the series' top single-GPU performer with full hardware tessellation capabilities.[24] The GeForce GTX 570, released on December 7, 2010, offered 480 CUDA cores and 1.25 GB of GDDR5 memory, a slightly more power-efficient alternative with similar architectural strengths. The dual-GPU GeForce GTX 590, released on March 24, 2011, carried 1,024 CUDA cores and 3 GB of GDDR5 across two GF110 chips, targeting enthusiast-class multi-GPU setups at a 365 W TDP.[3]

Mid-range options balanced cost and capability for mainstream gaming rigs. The GeForce GTX 560 Ti, introduced on January 25, 2011, provided 384 CUDA cores and 1 GB of GDDR5, aimed at 1080p gaming with strong DirectX 11 feature support.[25] The GeForce GTX 560 followed on May 17, 2011, with 336 CUDA cores and 1 GB of GDDR5, targeting budget-conscious enthusiasts.[26] The GeForce GTX 550 Ti, launched March 15, 2011, featured 192 CUDA cores and 1 GB of GDDR5 for entry-to-mid-level DirectX 11 acceleration.[27][11]

Entry-level cards catered to basic multimedia and light gaming needs. The GeForce GT 520, released April 13, 2011, had 48 CUDA cores and 1 GB of DDR3 memory, emphasizing low power draw for upgrades from integrated graphics.[13]

Overall, the lineup positioned NVIDIA competitively against AMD's Radeon HD 6000 series, prioritizing enthusiast features such as SLI scalability for multi-monitor and high-fidelity setups.
Mobile GeForce 500M Series GPUs

The Mobile GeForce 500M series GPUs were designed specifically for laptops, adapting the Fermi architecture to the power and thermal limits inherent to portable devices. Announced on January 5, 2011, at CES, the lineup emphasized battery life and heat dissipation, including integration with NVIDIA Optimus technology, which seamlessly switches between the discrete GPU and integrated graphics to extend runtime during light tasks.[5] These GPUs featured reduced clock speeds compared with their desktop counterparts to manage thermal output in confined laptop chassis.

The high-end models targeted gaming and professional workloads in premium laptops. The GeForce GTX 580M, with 384 CUDA cores, up to 2 GB of GDDR5 memory on a 256-bit bus, and a 100 W TDP, was positioned as the mobile flagship for high-resolution gaming, launching on June 28, 2011.[28] The GeForce GTX 570M offered 336 CUDA cores, 1.5 GB of GDDR5 on a 192-bit bus, and a TDP of up to 100 W; released on August 18, 2011, it provided strong performance for 3D rendering and video editing while supporting dynamic clock throttling to prevent overheating.[29]

Mid-range options focused on mainstream laptops, delivering capable graphics for 1080p gaming without excessive power draw. The GeForce GTX 560M, with 192 CUDA cores, 1.5 GB of GDDR5 on a 192-bit bus, and a 75 W TDP, arrived on May 30, 2011, incorporating improved efficiency for sustained performance under thermal constraints.[30][31] The GeForce GT 550M, featuring 96 CUDA cores, 1-2 GB of DDR3 memory on a 128-bit bus, and a 30 W TDP, was released in January 2011.[5]

Entry-level GPUs in the series catered to ultrabooks and budget portables, prioritizing low power over peak performance. The GeForce GT 555M included 144 CUDA cores, 1-2 GB of DDR3 or GDDR5 memory on a 128-bit bus, and a 35 W TDP, released in 2011.[32][33] The GeForce GT 540M had 96 CUDA cores, 1-2 GB of DDR3 or GDDR5 on a 128-bit bus, and a 32 W TDP, also launching in 2011.[34][35] The GeForce GT 525M featured 96 CUDA cores, 1 GB of DDR3 on a 128-bit bus, and a 23 W TDP, introduced in 2011.[36][37] The GeForce GT 520M had 48 CUDA cores, 1 GB of DDR3 on a 64-bit bus, and a 12 W TDP, released in 2011.[38]

Across the series, thermal and power management was critical, with TDPs ranging from 12 W for low-power models like the GT 520M to 100 W for high-end models like the GTX 580M.[39] Dynamic clock throttling adjusted frequencies in real time based on temperature and battery status, ensuring stability in varied laptop environments, while Optimus delivered up to 2x battery-life gains during non-GPU-intensive use.[5]

| Model | CUDA Cores | Memory | Bus Width | TDP (W) | Release Date |
|---|---|---|---|---|---|
| GTX 580M | 384 | 2 GB GDDR5 | 256-bit | 100 | Jun 28, 2011 |
| GTX 570M | 336 | 1.5 GB GDDR5 | 192-bit | 100 | Aug 18, 2011 |
| GTX 560M | 192 | 1.5 GB GDDR5 | 192-bit | 75 | May 30, 2011 |
| GT 550M | 96 | 1-2 GB DDR3 | 128-bit | 30 | Jan 2011 |
| GT 555M | 144 | 1-2 GB DDR3/GDDR5 | 128-bit | 35 | 2011 |
| GT 540M | 96 | 1-2 GB DDR3/GDDR5 | 128-bit | 32 | 2011 |
| GT 525M | 96 | 1 GB DDR3 | 128-bit | 23 | 2011 |
| GT 520M | 48 | 1 GB DDR3 | 64-bit | 12 | 2011 |