GeForce 6 series
Top: Logo of the series. Bottom: An Nvidia GeForce 6800 Ultra released in 2004, one of the series' highest-end models
| Release date | April 14, 2004 |
|---|---|
| Codename | NV4x |
| Architecture | Curie |
| Models | GeForce nForce series |
| Cards | |
| Entry-level | 6100 6150 6200 6500 |
| Mid-range | 6600 6700 |
| High-end | 6800 |
| Enthusiast | 6800 Ultra / Ultra Extreme |
| API support | |
| Direct3D | Direct3D 9.0c Shader Model 3.0 |
| OpenGL | OpenGL 2.1 |
| History | |
| Predecessor | GeForce 5 series |
| Successor | GeForce 7 series |
| Support status | |
| Unsupported | |
The GeForce 6 series (codename NV40) is the sixth generation of Nvidia's GeForce line of graphics processing units. Launched on April 14, 2004, the GeForce 6 family introduced PureVideo post-processing for video, SLI technology, and Shader Model 3.0 support (compliant with Microsoft DirectX 9.0c specification and OpenGL 2.0).
GeForce 6 series features
SLI
The Scalable Link Interface (SLI) allows two GeForce 6 cards of the same type to be connected in tandem, with the driver software balancing the workload between the cards. SLI capability is limited to select members of the GeForce 6 family (the 6500 and above) and is only available for cards using the PCI Express bus.
Nvidia PureVideo Technology
Nvidia PureVideo technology is the combination of a dedicated video processing core and software which decodes H.264, VC-1, WMV, and MPEG-2 videos with reduced CPU utilization.[1]
Shader Model 3.0
Nvidia was the first to deliver Shader Model 3.0 (SM3) capability in its GPUs. SM3 extends SM2 in a number of ways: standard FP32 (32-bit floating-point) precision, dynamic branching, increased efficiency and longer shader lengths are the main additions.[2] Shader Model 3.0 was quickly adopted by game developers because it was quite simple to convert existing shaders coded with SM 2.0/2.0A/2.0B to version 3.0, and it offered noticeable performance improvements across the entire GeForce 6 line.
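As a rough illustration of why dynamic branching matters, the following Python sketch models a per-pixel lighting pass on the CPU; the grid size, light position, and radius are made-up values, and real SM 3.0 shaders would express the same branch in HLSL or GLSL rather than Python.

```python
# Illustrative model of SM 3.0 dynamic branching: expensive shading is skipped
# for pixels outside a light's radius instead of being computed and discarded.
# All numbers here are hypothetical; this is not shader code.

WIDTH, HEIGHT = 640, 480
LIGHT_POS = (320, 240)
LIGHT_RADIUS = 100

def shade_with_branching():
    expensive_evaluations = 0
    for y in range(HEIGHT):
        for x in range(WIDTH):
            dx, dy = x - LIGHT_POS[0], y - LIGHT_POS[1]
            if dx * dx + dy * dy <= LIGHT_RADIUS ** 2:  # dynamic branch
                expensive_evaluations += 1              # full lighting math would run here
    return expensive_evaluations

def shade_without_branching():
    # SM 2.0-style: every pixel pays for the full lighting computation.
    return WIDTH * HEIGHT

lit = shade_with_branching()
total = shade_without_branching()
print(f"pixels shaded with branch: {lit} of {total} "
      f"({100 * lit / total:.1f}% of the SM 2.0 workload)")
```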
Caveats
PureVideo functionality varies by model, with some models lacking WMV9 and/or H.264 acceleration.[3]
In addition, motherboards with some VIA and SiS chipsets and an AMD Athlon XP processor reportedly have compatibility problems with the GeForce 6600 and 6800 GPUs. Reported problems include freezing, artifacts, reboots, and other issues that make gaming and the use of 3D applications almost impossible. These problems appear to occur only in Direct3D-based applications and do not affect OpenGL.[citation needed]
GeForce 6 series comparison

Here is how the released versions of the GeForce 6 series compare to Nvidia's previous flagship GPU, the GeForce FX 5950 Ultra, as well as to the comparable models of ATI's then-newly released Radeon X800 and X850 series:
| | GeForce FX 5950 Ultra | GeForce 6200 TC-32 | GeForce 6600 GT | GeForce 6800 Ultra | ATI Radeon X800 XT PE | ATI Radeon X850 XT PE |
|---|---|---|---|---|---|---|
| Transistor count | 135 million | 77 million | 146 million | 222 million | 160 million | 160 million |
| Manufacturing process | 0.13 μm | 0.11 μm | 0.11 μm | 0.13 μm | 0.13 μm low-k | 0.13 μm low-k |
| Die Area (mm²) | ~200 | 110 | 156 | 288 | 288 | 297 |
| Core clock speed (MHz) | 475 | 350 | 500 | 400 | 520 | 540 |
| Number of pixel shader processors | 4 | 4 | 8 | 16 | 16 | 16 |
| Number of pixel pipes | 4 | 4 | 8 | 16 | 16 | 16 |
| Number of texturing units | 8(16*) | 4 | 8 | 16 | 16 | 16 |
| Number of vertex pipelines | 3* | 3 | 3 | 6 | 6 | 6 |
| Peak pixel fill rate (theoretical) | 1.9 Gigapixel/s | 700 Megapixel/s | 2.0 Gigapixel/s | 6.4 Gigapixel/s | 8.32 Gigapixel/s | 8.64 Gigapixel/s |
| Peak texture fill rate (theoretical) | 3.8 Gigatexel/s | 1.4 Gigatexel/s | 4.0 Gigatexel/s | 6.4 Gigatexel/s | 8.32 Gigatexel/s | 8.64 Gigatexel/s |
| Memory interface | 256-bit | 64-bit | 128-bit | 256-bit | 256-bit | 256-bit |
| Memory clock speed | 950 MHz DDR | 700 MHz DDR2 | 1.0 GHz / 950 MHz** GDDR3 | 1.1 GHz GDDR3 | 1.12 GHz GDDR3 | 1.18 GHz GDDR3 |
| Peak memory bandwidth (GB/s) | 30.4 | 5.6 | 16.0 / 14.4** | 35.2 | 35.84 | 37.76 |
(*) GeForce FX series has an Array-based Vertex Shader.
(**) AGP 6600 GT variant.
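The theoretical figures in the table above follow from simple products of clock speed, unit counts, and bus width. The short script below recomputes the GeForce 6800 Ultra and 6600 GT entries as a sanity check; the formulas are the standard textbook definitions of fill rate and memory bandwidth, not an Nvidia-published method.

```python
# Recompute a few of the theoretical peaks quoted in the comparison table.
# Pixel fill rate   = core clock x pixel pipelines
# Texture fill rate = core clock x texture units
# Memory bandwidth  = effective memory clock x bus width / 8 (decimal GB/s)

def pixel_fill_gpix(core_mhz, pixel_pipes):
    return core_mhz * pixel_pipes / 1000.0                   # Gigapixels/s

def texture_fill_gtex(core_mhz, texture_units):
    return core_mhz * texture_units / 1000.0                 # Gigatexels/s

def mem_bandwidth_gbs(effective_mhz, bus_width_bits):
    return effective_mhz * 1e6 * bus_width_bits / 8 / 1e9    # GB/s

# GeForce 6800 Ultra: 400 MHz core, 16 pixel pipes, 16 TMUs, 1.1 GHz GDDR3, 256-bit bus
print(pixel_fill_gpix(400, 16), texture_fill_gtex(400, 16))  # 6.4 6.4
print(mem_bandwidth_gbs(1100, 256))                          # 35.2

# GeForce 6600 GT (PCIe): 500 MHz core, 8 pipes, 8 TMUs, 1.0 GHz GDDR3, 128-bit bus
print(pixel_fill_gpix(500, 8), texture_fill_gtex(500, 8))    # 2.0 4.0
print(mem_bandwidth_gbs(1000, 128))                          # 16.0
```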
GeForce 6800 series
The first family in the GeForce 6 product line, the 6800 series catered to the high-performance gaming market. As the very first GeForce 6 model, the 16-pixel-pipeline GeForce 6800 Ultra (NV40) was 2 to 2.5 times faster than Nvidia's previous top-line product (the GeForce FX 5950 Ultra), packed four times the number of pixel pipelines and twice the number of texture units, and added a much improved pixel-shader architecture. Yet the 6800 Ultra was fabricated on the same (IBM) 130 nanometer process node as the FX 5950, and it consumed slightly less power.
Like all of Nvidia's GPUs up until 2004, initial 6800 members were designed for the AGP bus. Nvidia added support for the PCI Express (PCIe) bus in later GeForce 6 products, usually by use of an AGP-PCIe bridge chip. In the case of the 6800 GT and 6800 Ultra, Nvidia developed a variant of the NV40 chip called the NV45. The NV45 shares the same die core as the NV40, but embeds an AGP-PCIe bridge on the chip's package. (Internally, the NV45 is an AGP NV40 with added bus-translation logic, to permit interfacing with a PCIe motherboard. Externally, the NV45 is a single package with two separate silicon dies clearly visible on the top.) NV48 is a version of NV45 which supports 512MiB RAM.
The use of an AGP-PCIe bridge chip initially led to fears that natively-AGP GPUs would not be able to take advantage of the additional bandwidth offered by PCIe and would therefore be at a disadvantage relative to native PCIe chips.[citation needed] However, benchmarking reveals that even AGP 4× is fast enough that most contemporary games do not improve significantly in performance when switched to AGP 8×, rendering the further bandwidth increase provided by PCIe largely superfluous.[citation needed] Additionally, Nvidia's on-board implementations of AGP are clocked at AGP 12× or 16×, providing bandwidth comparable to PCIe for the rare situations when this bandwidth is actually necessary.[citation needed]
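For reference, the rough peak bus bandwidths being compared here can be derived from the AGP base rate of about 266 MB/s per 1x multiplier and PCIe 1.0's roughly 250 MB/s per lane per direction; the sketch below tabulates them (theoretical peaks only, rounded).

```python
# Theoretical peak host-interface bandwidth, in decimal GB/s.
# AGP scales a ~266 MB/s base rate by its multiplier; PCIe 1.0a provides
# ~250 MB/s per lane in each direction.

AGP_BASE_GBS = 0.266          # AGP 1x: 66 MHz x 32-bit
PCIE1_PER_LANE_GBS = 0.25     # PCIe 1.0a, per lane, per direction

buses = {
    "AGP 4x": AGP_BASE_GBS * 4,
    "AGP 8x": AGP_BASE_GBS * 8,
    "AGP 12x (internal)": AGP_BASE_GBS * 12,
    "AGP 16x (internal)": AGP_BASE_GBS * 16,
    "PCIe x16 (per direction)": PCIE1_PER_LANE_GBS * 16,
}

for name, gbs in buses.items():
    print(f"{name:26s} ~{gbs:.1f} GB/s")
```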
The use of a bridge chip allowed Nvidia to release a full complement of PCIe graphics cards without having to redesign them for the PCIe interface. Later, when Nvidia's GPUs were designed to use PCIe natively, the bidirectional bridge chip allowed them to be used in AGP cards. ATI, initially a critic of the bridge chip, eventually designed a similar solution (known as Rialto[4]) for their own cards.[citation needed]
Nvidia's professional Quadro line contains members drawn from the 6800 series: Quadro FX 4000 (AGP) and the Quadro FX 3400, 4400 and 4400g (both PCI Express). The 6800 series was also incorporated into laptops with the GeForce Go 6800 and Go 6800 Ultra GPUs.
PureVideo and the AGP GeForce 6800
PureVideo expanded the level of multimedia-video support from decoding of MPEG-2 video to decoding of more advanced codecs (MPEG-4, WMV9), enhanced post-processing (advanced de-interlacing), and limited acceleration for encoding. Ironically, the first GeForce products to offer PureVideo, the AGP GeForce 6800/GT/Ultra, failed to support all of PureVideo's advertised features.
Media player software (WMP9) with support for WMV acceleration did not become available until several months after the 6800's introduction. User and web reports showed little if any difference between PureVideo-enabled GeForce cards and non-PureVideo cards. Nvidia's prolonged public silence after promising updated drivers, together with test benchmarks gathered by users, led the user community to conclude that the WMV9 decoder component of the AGP 6800's PureVideo unit was either non-functional or intentionally disabled.[citation needed]
In late 2005, an update to Nvidia's website finally confirmed what had long been suspected by the user community: WMV-acceleration is not available on the AGP 6800.
Today's computers are fast enough to play and decode WMV9 video and other sophisticated codecs such as MPEG-4, H.264, or VP8 without hardware acceleration, largely negating the need for features like PureVideo.
GeForce 6 series general features
- 4, 8, 12, or 16 pixel-pipeline GPU architecture
- Up to 8x more shading performance compared to the previous generation
- CineFX 3.0 engine - DirectX 9 Shader Model 3.0 support
- On Chip Video processor (PureVideo)
- Full MPEG-2 encoding and decoding at GPU level (PureVideo)
- Advanced Adaptive De-Interlacing (PureVideo)
- DDR and GDDR-3 memory on a 256-bit wide Memory interface
- UltraShadow II technology - 3x to 4x faster than NV35 (GeForce FX 5900)
- High Precision Dynamic Range (HPDR) technology
- 128-bit studio precision through the entire pipeline - Floating-point 32-bit color precision
- IntelliSample 4.0 Technology - 16x Anisotropic Filtering, Rotating Grid Antialiasing and Transparency Antialiasing
- Maximum display resolution of 2048x1536@85 Hz
- Video Scaling and Filtering - HQ filtering techniques up to HDTV resolutions
- Integrated TV Encoder - TV-output up to 1024x768 resolutions
- OpenGL 2.0 Optimizations and support
- DVC 3.0 (Digital Vibrance Control)
- Dual 400 MHz RAMDACs which support QXGA displays up to 2048x1536 @ 85 Hz
- Dual DVI outputs on select members (implementation depends on the card manufacturer)
6800 chipset table
| Board Name | Core Type | Core (MHz) | Memory (MHz) | Pipeline Config | Vertex Processors | Memory Interface |
|---|---|---|---|---|---|---|
| 6800 Ultra | NV40/NV45/NV48 | 400 | 550 | 16 | 6 | 256-bit |
| 6800 GT | NV40/NV45/NV48 | 350 | 500 | 16 | 6 | 256-bit |
| 6800 XT | NV42 | 450 | 600 | 12 | 5 | 256-bit |
| 6800 GTS | NV42 | 425 | 500 | 12 | 5 | 256-bit |
| 6800 GS | NV40 | 350 | 500 | 12 | 5 | 256-bit |
| 6800 GTO | NV40/NV45 | 350 | 450 | 12 | 5 | 256-bit |
| 6800 | NV40/NV41/NV42 | 325 | 300/350 | 12 | 5 | 256-bit |
| 6800 Go | NV41M | 300 | 300 | 12 | 5 | 256-bit |
| 6800 Go Ultra | NV41M(0.13u)/NV42M(0.11u) | 450 | 600 | 12 | 5 | 256-bit |
| 6800 XE | NV40 | 275/300/325 | 266/350 | 8 | 3 | 128-bit |
| 6800 LE | NV40 | 300 | 350 | 8 | 4 | 256-bit |
Notes
- The limited-supply GeForce 6800 Ultra Extreme Edition[5] was shipped with a 450 MHz core clock and (usually) a 1200 MHz memory clock,[6] but was otherwise identical to a common 6800 Ultra.
- The GeForce 6800 GS is cheaper to manufacture and has a lower MSRP than the GeForce 6800 GT because it has fewer pipelines and is fabricated on a smaller process (110 vs 130 nm), but performance is similar because it has a faster core clock. The AGP version, however, uses the original NV40 chip and 6800 GT circuit board and the inactive pixel and vertex pipelines may potentially be unlockable. However, the PCI Express version lacks them entirely, preventing such modifications.
- The 6800 GTO (which was produced only as an OEM card) contains four masked pixel pipelines and one masked vertex shader, which are potentially unlockable.
- The GeForce 6800 is often unofficially called the "GeForce 6800 Vanilla" or the "GeForce 6800 NU" (for Non-Ultra) to distinguish it from the other models. Later PCIe variants use the NV41 (IBM 130 nm) or NV42 (TSMC 110 nm) cores, which are native PCIe implementations and do not have an integrated AGP bridge chip. The AGP version of the video card contains four masked pixel pipelines and one masked vertex shader, which are potentially unlockable through software mods. PCI-Express 6800 cards are incapable of such modifications, because the masked pixel pipelines and vertex units are nonexistent.
- The 6800 XT varies greatly depending on manufacturer. It was produced using three cores (NV40/NV41/NV42) and five memory configurations (128 MiB DDR, 256 MiB DDR, 128 MiB GDDR3, 256 MiB GDDR3, and 512 MiB GDDR2), with clock speeds ranging from 300 to 425 MHz (core) and 600 to 1000 MHz (memory). 6800 XT cards based on the NV40 core contain eight masked pixel pipelines and two masked vertex shaders, and those based on the NV42 core contain four masked pipelines and one masked shader (NV42-based cards are almost never unlockable; it is speculated that the pipelines are laser-cut).
- The 6800 LE contains eight masked pixel pipelines and two masked vertex shaders, which are potentially unlockable.
- The AGP version of the 6800 series does not support 2D acceleration in Adobe Reader/Acrobat 9.0, even though the AGP GeForce 6600 and PCI-e 6800 versions do.[7]
GeForce 6600 series

The GeForce 6600 (NV43) was officially launched on August 12, 2004, several months after the launch of the 6800 Ultra. With half the pixel pipelines and vertex shaders of the 6800 GT, and a smaller 128-bit memory bus, the lower-performance and lower-cost 6600 is the mainstream product of the GeForce 6 series. The 6600 series retains the core rendering features of the 6800 series, including SLI. Equipped with fewer rendering units, the 6600 series processes pixel data at a slower rate than the more powerful 6800 series. However, the reduction in hardware resources, and migration to TSMC's 110 nm manufacturing process (versus the 6800's 130 nm process), make the 6600 both less expensive for Nvidia to manufacture and less expensive for customers to purchase.
The 6600 series has three main variants: the GeForce 6600 LE, the 6600, and the 6600 GT (in order from slowest to fastest). The 6600 GT performs considerably better than the GeForce FX 5950 Ultra or Radeon 9800 XT, scoring around 8000 in 3DMark03 versus roughly 6000 for the GeForce FX 5950 Ultra, while also being much cheaper. Notably, with drivers prior to December 2004 the 6600 GT offered performance identical to ATI's high-end X800 Pro graphics card in the popular game Doom 3. It was also about as fast as the higher-end GeForce 6800 in most scenarios when running games without anti-aliasing.
At introduction, the 6600 family was only available in PCI Express form. AGP models became available roughly a month later, through the use of Nvidia's AGP-PCIe bridge chip. A majority of the AGP GeForce 6600GTs have their memory clocked at 900 MHz, which is 100 MHz slower than the PCI-e cards, on which the memory operates at 1000 MHz. This can contribute to a performance decline when playing certain games. However, it was often possible to "overclock" the memory to its nominal frequency of 1000 MHz and there are AGP cards (for example from XFX) that use 1000 MHz by default.
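The practical effect of the lower AGP memory clock is a proportional loss of memory bandwidth on the 128-bit bus, as the small calculation below shows; the 16.0 and 14.4 GB/s results match the figures quoted in the comparison table earlier in this article.

```python
# Memory bandwidth of the GeForce 6600 GT at the PCIe and AGP memory clocks.
# Bandwidth = effective memory clock x bus width / 8, in decimal GB/s.

BUS_WIDTH_BITS = 128

def bandwidth_gbs(effective_mhz):
    return effective_mhz * 1e6 * BUS_WIDTH_BITS / 8 / 1e9

pcie = bandwidth_gbs(1000)   # 1000 MHz effective (PCIe cards)
agp = bandwidth_gbs(900)     # 900 MHz effective (most AGP cards)
print(f"PCIe: {pcie:.1f} GB/s, AGP: {agp:.1f} GB/s "
      f"({100 * (pcie - agp) / pcie:.0f}% less on AGP)")
```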
6600 chipset table
| Board Name | Core Type | Core (MHz) | Memory (MHz) | Pipeline Config | Vertex Processors | Memory Interface |
|---|---|---|---|---|---|---|
| 6700 XL | NV43 | 525 | 1100 | 8 | 3 | 128-bit |
| 6600 GT GDDR3 | NV43 | 500 | 900/1000 | 8 | 3 | 128-bit |
| 6600 XL | NV43 | 400 | 800 | 8 | 3 | 128-bit |
| 6600 DDR2 | NV43 | 350 | 800 | 8 | 3 | 128-bit |
| 6600 | NV43 | 300 | 500/550 | 8 | 3 | 128-bit |
| 6600 LE | NV43 | 300 | 500 | 4 | 3 | 128-bit |

Other data for PCI Express based cards:
- Memory Interface: 128-bit
- Memory Bandwidth: 16.0 GB/s
- Fill Rate (pixels/s.): 4.0 billion
- Vertices per Second: 375 million
- Memory Data Rate: 1000 MHz
- Pixels per Clock (peak): 8
- RAMDACs: 400 MHz
Other data for AGP based cards:
- Memory Interface: 128-bit
- Memory Bandwidth: 14.4 GB/s
- Fill Rate (pixels/s.): 4.0 billion
- Vertices per Second: 375 million
- Memory Data Rate: 900 MHz
- Pixels per Clock (peak): 8
- RAMDACs: 400 MHz
GeForce 6500
The GeForce 6500 was released in October 2005 and is based on the same NV44 core as the value/budget (low-end or entry level) GeForce 6200TC, but with a higher GPU clock speed and more memory. The GeForce 6500 also supports SLI.
GeForce 6500
- Core Clock: 450 MHz
- Memory Clock: 700 MHz
- Pixel Pipelines: 4
- Number of ROPs: 2
- Vertex Processors: 3
- Memory: 128/256 MiB DDR on a 64-bit interface
- Fill Rate (pixels/s): 1.6 billion
- Vertices per Second: 300 million
- Effective Memory Bandwidth (GiB/s): 13.44
GeForce 6200
With just 4 pixel pipelines, the 6200 series forms Nvidia's value/budget (low-end or entry level) product. The 6200 omits memory compression and SLI support, but otherwise offers similar rendering features as the 6600s. The later 6200 boards were based on the NV44 core(s), which is the final production silicon for the 6200 series. The 6200 is the only card in the series to feature keying for 3.3V AGP slots (barring some rare exceptions of higher-end cards from vendors like PNY).
However, at introduction, production silicon was not yet ready. Nvidia fulfilled 6200 orders by shipping binned/rejected 6600 series cores (NV43V). The rejects were factory-modified to disable four pixel pipelines, thereby converting the native 6600 product into a 6200 product. Some users were able to "unlock" early 6200 boards through a software utility (effectively converting the 6200 back into a 6600 with the complete set of eight pixel pipelines total) if they owned boards with an NV43 A2 or earlier revision of the core. Thus, not all NV43-based 6200 boards could successfully be unlocked (specifically those with a core revision of A4 or higher), and as soon as NV44 production silicon became available, Nvidia discontinued shipments of downgraded NV43V cores.
GeForce 6200 chip specifications
GeForce 6200
[edit]GeForce 6200 TurboCache / AGP
The GeForce 6200 TurboCache / AGP (NV44/NV44a) is a natively four-pipeline version of the NV43. GeForce 6200 TurboCache cards only have a very small (by modern standards) amount of memory, but attempt to make up for this by using system memory accessed through the PCI-Express bus.
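One way to picture TurboCache is as a small local frame buffer backed by a slice of system RAM reached over PCI Express; the sketch below models the advertised totals versus the local portion for the configurations listed in the chip specifications that follow. The class and field names are illustrative only and are not an Nvidia API.

```python
# Illustrative model of a TurboCache configuration: a small local frame buffer
# plus system memory borrowed over PCIe makes up the advertised total.

from dataclasses import dataclass

@dataclass
class TurboCacheConfig:
    advertised_mib: int   # total graphics memory reported to applications
    local_mib: int        # onboard memory actually on the card

    @property
    def borrowed_mib(self) -> int:
        return self.advertised_mib - self.local_mib

configs = [
    TurboCacheConfig(128, 16),
    TurboCacheConfig(128, 32),
    TurboCacheConfig(256, 64),
    TurboCacheConfig(256, 128),
]

for c in configs:
    print(f"advertised {c.advertised_mib:3d} MiB = {c.local_mib:3d} MiB local "
          f"+ {c.borrowed_mib:3d} MiB system RAM over PCIe")
```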
GeForce 6200 TurboCache / AGP chip specifications
GeForce 6200 PCI-Express (NV44) TurboCache
- Core Clock: 350 MHz
- Memory Clock: 700 MHz
- Pixel Pipelines: 4
- Number of ROPs: 2
- Vertex Processors: 3
- Memory: 16/32/64/128 MiB DDR on a 32-bit/64-bit/128-bit interface
- GeForce 6200 w/ TurboCache supporting 128 MiB, including 16 MiB of local TurboCache (32-bit)
- GeForce 6200 w/ TurboCache supporting 128 MiB, including 32 MiB of local TurboCache (64-bit)
- GeForce 6200 w/ TurboCache supporting 256 MiB, including 64 MiB of local TurboCache (64-bit)
- GeForce 6200 w/ TurboCache supporting 256 MiB, including 128 MiB of local TurboCache (128-bit)
GeForce 6200 AGP (NV44a) without TurboCache
- Core Clock: 350 MHz
- Memory Clock: 500 MHz
- Pixel Pipelines: 4
- Number of ROPs: 2
- Vertex Processors: 3
- Memory: 128/256/512 MiB DDR or DDR2 on a 64-bit interface
GeForce 6200 AGP (NV44a2) without TurboCache
- Core Clock: 350 MHz
- Memory Clock: 540 MHz
- Pixel Pipelines: 4
- Number of ROPs: 2
- Vertex Processors: 3
- Memory: 128/256 MiB DDR2 with a 128-bit interface
- Cooling: Passive heatsink
GeForce 6200 PCI (NV44) without TurboCache
BFG Technologies originally introduced a unique PCI variant of the GeForce 6200 via its namesake B.F.G. and 3D Fuzion product lines. Subsequently, PNY (GeForce 6200 256 MiB PCI), SPARKLE Computer (GeForce 6200 128 MiB PCI and GeForce 6200 256 MiB PCI), and eVGA (e-GeForce 6200 256 MiB PCI and e-GeForce 6200 512 MiB PCI) released their own PCI versions of the GeForce 6200 featuring higher memory clocks and resultant memory bandwidth.
Until the release of the ATI X1300 PCI, these were the only PCI DirectX 9 capable cards not based on previous generation GeForce FX technology or discontinued XGI Technology Volari V3XT chipsets.
Excluding SPARKLE's GeForce 8400 and 8500 series, Zotac GT 610 cards, and Club 3D HD 5450 cards released in late 2012, the enhanced 512 MiB GeForce 6200 PCI variants[9] remain among the most powerful PCI-based graphics cards available, making them sought after by users who lack the option of upgrading to an AGP or PCI Express based discrete video card.
- Core Clock: 350 MHz
- Memory Clock: 400 MHz (BFG Technologies 6200 OC 410 MHz, PNY and EVGA 533 MHz)
- Pixel Pipelines: 4
- Memory: 512 (EVGA e-GeForce 6200 512 MiB PCI) / 256 (BFG Technologies 6200 OC PCI and EVGA e-GeForce 6200 PCI) / 128 (BFG Technologies 3DFuzion GeForce 6200 PCI) MiB DDR on a 64-bit interface
GeForce 6100 and 6150 series
In late 2005 Nvidia introduced a new member to the GeForce family, the 6100 series, also known as C51. The term GeForce 6100/6150 actually refers to an nForce4-based motherboard with an integrated NV44 core, as opposed to a standalone graphics card. Nvidia released this product both to follow up its immensely popular GeForce4 MX based nForce and nForce2 boards and to compete with ATI's RS480/482 and Intel's GMA 900/950 in the integrated graphics space. The 6100 series is very competitive, usually tying with or just edging out the ATI products in most benchmarks.
The motherboards use two different types of southbridges: the nForce 410 and the nForce 430. They are fairly similar in features to the nForce4 Ultra motherboards that preceded them. Both feature PCI Express and PCI support, eight USB 2.0 ports, integrated sound, two Parallel ATA ports, and Serial ATA 3.0 Gbit/s with Native Command Queuing (NCQ) – two SATA ports in the case of the 410, four in the 430. The 430 southbridge also supports Gigabit Ethernet with Nvidia's ActiveArmor hardware firewall, while the 410 supports standard 10/100 Ethernet only.
GeForce 6100 and 6150 series chip specifications
Both the 6100 and 6150 support Shader Model 3.0 and DirectX 9.0c. The 6150 also features support for high-definition video decoding of H.264/VC-1/MPEG-2, PureVideo processing, DVI, and video-out. The 6100 only supports SD decoding of MPEG-2/WMV9.[10] The maximum supported resolution is 1920 × 1440 pixels (@ 75 Hz) for RGB displays and 1600 × 1200 pixels (@ 65 Hz) for DVI-D displays.
GeForce 61XX abnormally high failure rate in notebook computers
In 2008, Nvidia took a $150 to $250 million charge against revenue because the GPUs were failing at "higher than normal rates."[11] HP provided an extension of up to 24 months to the warranty of notebooks affected by this issue. A class action suit was filed against HP and Nvidia by Whatley Drake & Kallas LLC.[citation needed]
GeForce 6100
- Manufacturing process: 90 nm
- Core Clock: 425 MHz
- Vertex Processors: 1
- Pixel Pipelines: 2
- Shader Model: 3
- DirectX support: v9.0c
- Video playback acceleration: SD video acceleration of MPEG2/WMV9 (HD video acceleration not supported)
- Outputs: VGA only
- Memory: Shared DDR/DDR2 (socket 754/939/AM2) system memory (selectable through BIOS - usually 16/32/64/128/256 MiB)
GeForce 6150
- Manufacturing process: 90 nm
- Core clock: 475 MHz[12]
- Vertex processors: 1
- Pixel pipelines: 2
- Shader model: 3
- DirectX support: v9.0c
- Video playback acceleration: HD video acceleration of H.264/VC1/MPEG2
- Outputs: VGA, DVI, RCA (Video)
- Memory: Shared DDR2 (socket 939/AM2) system memory (selectable through BIOS - usually 16/32/64/128/256 MiB)
- HT Bus (Bandwidth) = 2000 MT/s max
GeForce 6150LE
The GeForce 6150LE was primarily featured in the 2006 lineup of the Nvidia Business Platform.[13] The chip is used by Fujitsu-Siemens in its Esprimo green desktop, HP in its Pavilion Media Center a1547c Desktop PC and Compaq Presario SR1915 Desktop, and Dell in its Dimension C521 and E521 desktop PCs.
GeForce 6150SE
GeForce 6150SE (MCP61, also known as C61) is an updated, single-chip version of the Nvidia GeForce 6100. The MCP61 uses less power than the original C51 2-chip version of 6100. Its onboard video outperforms the 6150 in many 3D benchmarks despite its lower core frequency (425 MHz), because of added hardware Z-culling.
MCP61 introduced a bug in the SATA NCQ implementation. As a result, Nvidia employees have contributed code to disable NCQ operations under Linux.[14]
- Manufacturing process: 90 nm
- Core Clock: 425 MHz
- HT Bus = 2000 MT/s max
- Vertex Processors: 1
- Pixel Pipelines: 2
- Shader Model: 3
- DirectX support: v9.0c
- Outputs: VGA only
IntelliSample 4.0 and the GeForce 6 GPUs
Upon launch of the GeForce 7 family of graphics processing units, IntelliSample 4.0 was considered to be an exclusive feature of the GeForce 7 series of GPUs. However, version 91.47 (and subsequent versions) of the Nvidia ForceWare drivers enable the features of IntelliSample 4.0 on the GeForce 6 GPUs. IntelliSample 4.0 introduces two new antialiasing modes, known as Transparency Supersampling Antialiasing and Transparency Multisampling Antialiasing. These new antialiasing modes enhance the image quality of thin-lined objects such as fences, trees, vegetation and grass in various games.
One possible reason for the enabling of IntelliSample 4.0 for GeForce 6 GPUs might be the fact that the GeForce 7100 GS GPUs are based on NV44 chips, the same as the GeForce 6200 models. Because of this, Nvidia had to backport IntelliSample 4.0 features to the NV4x GPUs, and as a result, the entire GeForce 6 family is able to enjoy the benefits of Transparency Antialiasing.
It was already well known across various communities that Transparency Antialiasing could be used on GeForce 6 GPUs by using some third party tweak tools. As of Nvidia ForceWare drivers 175.16, GeForce 6 IntelliSample 4.0 support has been removed.
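Conceptually, transparency antialiasing spends extra samples only where an alpha-tested texel would otherwise leave a hard, aliased cut-out edge. The simplified Python sketch below shows that adaptive decision; the sample counts and fragment list are hypothetical and are not taken from the driver.

```python
# Simplified model of transparency antialiasing: alpha-tested fragments
# (fences, foliage) get extra alpha samples, while opaque fragments keep
# ordinary MSAA coverage. Sample counts are illustrative only.

MSAA_SAMPLES = 4
TRANSPARENCY_SAMPLES = 4

def samples_for_fragment(is_alpha_tested: bool, transparency_aa: bool) -> int:
    if transparency_aa and is_alpha_tested:
        # Supersample the alpha test so the cut-out edge is smoothed.
        return MSAA_SAMPLES * TRANSPARENCY_SAMPLES
    return MSAA_SAMPLES  # geometry edges are already handled by MSAA

fragments = [("wall", False), ("chain-link fence", True), ("leaf texture", True)]
for name, alpha_tested in fragments:
    print(f"{name:18s} -> {samples_for_fragment(alpha_tested, True)} samples")
```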
GeForce 6 model information
- All models support Transparency AA (starting with version 91.47 of the ForceWare drivers) and PureVideo
| Model | Launch | Code name | Fab | Transistors (million) / Die size (mm²) | Bus interface | Core clock (MHz) | Memory clock (MHz) | Core config[a] | Fillrate MOperations/s | Fillrate MPixels/s | Fillrate MTexels/s | Fillrate MVertices/s | Memory size (MB) | Memory bandwidth (GB/s) | Memory bus type | Memory bus width (bit) | Performance (GFLOPS) | TDP (Watts) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GeForce 6100 + nForce 410 | October 20, 2005 | MCP51 | TSMC 90 nm | HyperTransport | 425 | 100–200 (DDR) 200–533 (DDR2) | 2:1:2:1 | 850 | 425 | 850 | 106.25 | Up to 256 system RAM | 1.6–6.4 (DDR) 3.2–17.056 (DDR2) | DDR DDR2 | 64 128 | ? | ? |
| GeForce 6150 SE + nForce 430 | June 2006 | MCP61 | 200 400[citation needed] | 3.2 16.0[citation needed] | DDR2 | ? | ? |
| GeForce 6150 LE + nForce 430 | MCP61 | 100–200 (DDR) 200–533 (DDR2) | 1.6–6.4 (DDR) 3.2–17.056 (DDR2) | DDR DDR2 | ? | ? |
| GeForce 6150 + nForce 430 | October 20, 2005 | MCP51 | 475 | 950 | 475 | 950 | 118.75 | 1.6–6.4 (DDR) 3.2–17.056 (DDR2) | ? | ? |
| GeForce 6200 LE | April 4, 2005 | NV44 | TSMC 110 nm | 75 110[16] | AGP 8x PCIe x16 | 350 | 266 | 700 | 700 | 700 | 87.5 | 128 256 | 4.256 | DDR | 64 | ? | ? |
| GeForce 6200A | April 4, 2005 | NV44A | 75 110[17] | AGP 8x PCI | 300 350[18] | 250 (DDR) 250-333 (DDR2)[18] | 4:3:4:2 | 1,400[18] | 700[18] | 1400[18] | 175 225[19] | 128 256[18] 512[18] | 4 4-5.34 (DDR2)[20] | DDR DDR2[19] | 64[19] | ? | ? |
| GeForce 6200 | October 12, 2004 (PCIe) January 17, 2005 (AGP) | NV43 | 146 154[21] | AGP 8x PCI PCIe x16 | 300 | 275 | 4:3:4:4 | 1,200 | 1,200 | 1,200 | 225 | 128 256 | 8.8 | DDR | 128 | 1.2 | 20 |
| GeForce 6200 TurboCache | December 15, 2004 | NV44 | 75 110[16] | PCIe x16 | 350 | 200 275 350 | 4:3:4:2 | 1,400 | 700 | 1,400 | 262.5 | 128–256 System RAM incl. 16/32–64/128 onboard | 3.2 4.4 5.6 | DDR | 64 | 1.4 | 25 |
| GeForce 6500 | October 1, 2005 | 400 | 333 | 1,600 | 800 | 1,600 | 300 | 128 256 | 5.328 | ? | ? |
| GeForce 6600 LE | 2005 | NV43 | 146 154[21] | AGP 8x PCIe x16 | 300 | 200 | 4:3:4:4 | 1,200 | 1,200 | 1,200 | 225 | 6.4 | 128 | 1.3 | ? |
| GeForce 6600 | August 12, 2004 | 275 400 | 8:3:8:4 | 2,400 | 2,400 | 8.8 12.8 | DDR DDR2 | 2.4 | 26 |
| GeForce 6600 GT | August 12, 2004 (PCIe) November 14, 2004 (AGP) | 500 | 475 (AGP) 500 (PCIe) | 4,000 | 2,000 | 4,000 | 375 | 15.2 (AGP)[22] 16 (PCIe) | GDDR3 | 4.0 | 47 |
| GeForce 6800 LE | July 22, 2004 (AGP) January 16, 2005 (PCIe) | NV40 (AGP) NV41, NV42 (PCIe) | IBM 130 nm | 222 287 (NV40)[23] 222 225 (NV41)[24] 198 222 (NV42)[25] | 320 (AGP) 325 (PCIe) | 350 | 8:4:8:8 | 2,560 (AGP) 2,600 (PCIe) | 2,560 (AGP) 2,600 (PCIe) | 2,560 (AGP) 2,600 (PCIe) | 320 (AGP) 325 (PCIe) | 128 | 22.4 | DDR | 256 | 2.6 | ? |
| GeForce 6800 XT | September 30, 2005 | 300 (64 Bit) 325 | 266 (64 Bit) 350 500 (GDDR3) | 2,400 2,600 | 2,400 2,600 | 2,400 2,600 | 300 325 | 256 | 4.256 11.2 22.4 32 (GDDR3) | DDR DDR2 GDDR3 | 64[26] 128[27] 256 | 2.6 | 36 |
| GeForce 6800 | April 14, 2004 (AGP) November 8, 2004 (PCIe) | 325 | 350 | 12:5:12:12 | 3,900 | 3,900 | 3,900 | 406.25 | 128 256 | 22.4 | DDR | 256 | 3.9 | 40 |
| GeForce 6800 GTO | April 14, 2004 | NV45 | 222 287 (NV45)[28] | PCIe x16 | 450 | 4,200 | 4,200 | 4,200 | 437.5 | 256 | 28.8 | GDDR3 | ? | ? |
| GeForce 6800 GS | December 8, 2005 (AGP) November 7, 2005 (PCIe) | NV40 (AGP) NV42 (PCIe) | TSMC 110 nm | 222 287 (NV40)[23] 198 222 (NV42)[25] | AGP 8x PCIe x16 | 350 (AGP) 425 (PCIe) | 500 | 5,100 | 5,100 | 5,100 | 531.25 | 128 256 | 32 | 4.2 5.1 | 59 |
| GeForce 6800 GT | May 4, 2004 (AGP) June 28, 2004 (PCIe) | NV40 (AGP) NV45 (PCIe) | IBM 130 nm | 222 287 (NV40)[23] 222 287 (NV45)[28] | AGP 8x PCIe x16 | 350 | 16:6:16:16 | 5,600 | 5,600 | 5,600 | 525 | 5.6 | 67 |
| GeForce 6800 Ultra | May 4, 2004 (AGP) June 28, 2004 (PCIe) March 14, 2005 (512 MB) | 400 | 525 (512 MB) 550 (256 MB) | 6,400 | 6,400 | 6,400 | 600 | 256 512 | 33.6 (512 MB) 35.2 (256 MB) | 6.4 | 105 |
| GeForce 6800 Ultra Extreme Edition | May 4, 2004 | NV40 | 222 287 (NV40)[23] | AGP 8x | 450 | 600 | 7,200 | 7,200 | 7,200 | 675 | 256 | 35.2 | ? | ? |
Features
| Model | OpenEXR HDR | Scalable Link Interface (SLI) | TurboCache | PureVideo WMV9 Decoding |
|---|---|---|---|---|
| GeForce 6100 | No | No | No | Limited |
| GeForce 6150 SE | No | No | Driver-Side Only | Limited |
| GeForce 6150 | No | No | No | Yes |
| GeForce 6150 LE | No | No | Driver-Side Only | Yes |
| GeForce 6200 | No | No | Yes (PCIe only) | Yes |
| GeForce 6500 | No | Yes | Yes | Yes |
| GeForce 6600 LE | Yes | Yes (No SLI Connector) | No | Yes |
| GeForce 6600 | Yes | Yes (SLI Connector or PCIe Interface) | No | Yes |
| GeForce 6600 DDR2 | Yes | Yes (SLI Connector or PCIe Interface) | No | Yes |
| GeForce 6600 GT | Yes | Yes | No | Yes |
| GeForce 6800 LE | Yes | No | No | No |
| GeForce 6800 XT | Yes | Yes (PCIe only) | No | Yes (NV42 only) |
| GeForce 6800 | Yes | Yes (PCIe only) | No | Yes (NV41, NV42 only) |
| GeForce 6800 GTO | Yes | Yes | No | No |
| GeForce 6800 GS | Yes | Yes (PCIe only) | No | Yes (NV42 only) |
| GeForce 6800 GT | Yes | Yes (PCIe only) | No | No |
| GeForce 6800 Ultra | Yes | Yes (PCIe only) | No | No |
GeForce Go 6 (Go 6xxx) series
| Model | Launch | Code name | Fab (nm) | Bus interface | Core clock (MHz) | Memory clock (MHz) | Core config | Fillrate MOperations/s | Fillrate MPixels/s | Fillrate MTexels/s | Fillrate MVertices/s | Memory size (MB) | Memory bandwidth (GB/s) | Memory bus type | Memory bus width (bit) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GeForce Go 6100 + nForce Go 430 | Unknown | C51M | 110 | HyperTransport | 425 | System memory | 2:1:2:1 | 850 | 425 | 850 | 106.25 | Up to 128 MB system | System memory | DDR2 | 64/128 |
| GeForce Go 6150 + nForce Go 430 | February 1, 2006 | ||||||||||||||
| GeForce Go 6200 | February 1, 2006 | NV44M | PCIe x16 | 300 | 600 | 4:3:4:2 | 1200 | 600 | 1200 | 225 | 16 | 2.4 | DDR | 32 | |
| GeForce Go 6400 | February 1, 2006 | 400 | 700 | 1600 | 800 | 1600 | 250 | 5.6 | 64 | ||||||
| GeForce Go 6600 | September 29, 2005 | NV43M | 300 | 8:3:8:4 | 3000 | 1500 | 3000 | 281.25 | 128 | 11.2 | 128 | ||||
| GeForce Go 6800 | November 8, 2004 | NV41M | 130 | 700 1100 | 12:5:12:12 | 375 | 22.4 35.2 | DDR, DDR2 DDR3 | 256 |
| GeForce Go 6800 Ultra | February 24, 2005 | 450 | 5400 | 3600 | 5400 | 562.5 | 256 | ||||||||
Support
Nvidia has ceased driver support for the GeForce 6 series. The GeForce 6 series is the last to support the Windows 9x family of operating systems, as well as Windows NT 4.0. The successor GeForce 7 series only supports Windows 2000 and later (the Windows 8 drivers also support Windows 10).
- Windows 95: 66.94, released on December 16, 2004
- Windows NT 4.0: 77.72, released on June 22, 2005
- Windows 98 & Me: 81.98, released on December 21, 2005
- Windows 2000: 94.24, released on May 17, 2006
- Windows XP 32-bit & Media Center Edition: 307.83, released on February 25, 2013
- Windows XP 64-bit: 307.83, released on February 25, 2013
- Windows Vista, 7 & 8 32-bit: 309.08, released on February 24, 2015
- Windows Vista, 7 & 8 64-bit: 309.08, released on February 24, 2015
Notes and references
- ^ "PureVideo: Digital Home Theater Video Quality for Mainstream PCs with GeForce 6 and 7 GPUs" (PDF). NVIDIA. p. 9. Retrieved August 31, 2024.
- ^ "GPU timeline". Timetoast timelines. October 11, 1999. Retrieved April 13, 2023.
- ^ Nvidia PureVideo – Product Comparison
- ^ Smith, Tony (October 6, 2004). "ATI readies 'Rialto' PCIE-to-AGP bridge chip". The Register.com. Retrieved June 20, 2021.
- ^ "GeForce 6800 Ultra Extreme Edition". EVGA.com. Archived from the original on March 29, 2008. Retrieved October 7, 2012.
- ^ Chiappetta, Marco (August 11, 2004). "eVGA GeForce 6800 Ultra Extreme Edition". HotHardware. Retrieved October 7, 2012. The eVGA contest page still says 1100 MHz, but the cards shipped by eVGA were in fact 1200 MHz.
- ^ Adobe Knowledge Base - 2D Graphics Acceleration (GPU) support in Acrobat and Adobe Reader (9.0 on Windows)
- ^ Newegg.com - CHAINTECH SA62A-512 GeForce 6200 512 MiB 64-bit DDR2 SDRAM AGP 4X/8X Video Card - Retail
- ^ eVGA NVIDIA GeForce 6200 512-P1-N402 PCI Archived April 12, 2022, at the Wayback Machine.
- ^ PureVideo Support Table
- ^ "NVIDIA Provides Second Quarter Fiscal 2009 Business Update" (Press release). NVIDIA. July 2, 2008.
- ^ Tech Specs
- ^ "NVIDIA Business Platform Certified Motherboards". NVIDIA.com.
- ^ Zolnierkiewicz, Bartlomiej (October 26, 2007). "Re: [PATCH] ata: sata_nv fix mcp51 timeout with SWNCQ". linux-ide (Mailing list).
- ^ a b "3D accelerator database". Vintage 3D. Archived from the original on October 23, 2018. Retrieved September 1, 2024.
- ^ a b "NVIDIA NV44 GPU Specs". TechPowerUp. Retrieved September 1, 2024.
- ^ "NVIDIA NV44B GPU Specs". TechPowerUp. Retrieved September 1, 2024.
- ^ a b c d e f g "GeForce 6 Series-6200 - XFXforce.com". Archived from the original on September 16, 2008.
- ^ a b c "GeForce 6200:Model PV-T44A-WANG - XFXforce.com". Archived from the original on April 12, 2008.
- ^ "Products GeForce 6200:Features - XFXforce.com". Archived from the original on October 12, 2007.
- ^ a b "NVIDIA NV43 GPU Specs". TechPowerUp. Retrieved September 1, 2024.
- ^ "Nvidia GeForce 6600 GT AGP". Techpowerup.com. Archived from the original on April 16, 2015. Retrieved September 1, 2024.
- ^ a b c d "NVIDIA NV40 GPU Specs". TechPowerUp. Retrieved September 1, 2024.
- ^ "NVIDIA NV41 GPU Specs". TechPowerUp. Retrieved September 1, 2024.
- ^ a b "NVIDIA NV42 GPU Specs". TechPowerUp. Retrieved September 1, 2024.
- ^ "映泰集团 :: V6802XA16 :: 产品规格". www.biostar.com.tw. Archived from the original on October 12, 2022. Retrieved September 1, 2024.
- ^ "VGA/GPU Manufacturer - BIOSTAR Group". www.biostar.com.tw. Archived from the original on October 12, 2022. Retrieved September 1, 2024.
- ^ a b "NVIDIA NV45 GPU Specs". TechPowerUp. Retrieved September 1, 2024.
External links
- Nvidia: GeForce 6 Series Product Overview
- Nvidia PureVideo Technology page
- Inside nVidia NV40
- Transparency Antialiasing on GeForce 6 GPUs
- Nvidia: Integrated GPU Technical Specifications
- techPowerUp! GPU Database
Reviews
- nvNews review of the GeForce 6800
- Beyond3D preview of the NV45 core architecture
- Guru3D comparison of the Nvidia GeForce 6 series and ATI Radeon X series
- X-bit Labs: Second Generation of Integrated Graphics from Nvidia: GeForce 6150 and GeForce 6100
- GeForce 6600 GT review
GeForce 6 series
Overview
Introduction
The GeForce 6 series represents NVIDIA's sixth generation of GeForce graphics processing units (GPUs) for both desktop and mobile platforms, built on the NV4x microarchitecture and succeeding the GeForce FX series (NV3x).[2] The core chips in this lineup carry the codename NV40, introducing Shader Model 3.0 support that enhanced programmability and efficiency over previous generations.[2] This series marked NVIDIA's first implementation of full DirectX 9.0c compliance, enabling advanced rendering techniques for contemporary games and applications. The flagship models, such as the GeForce 6800, launched on April 14, 2004, with subsequent mid-range and entry-level variants like the GeForce 6600 and 6200 following through late 2004 and into 2005 to complete the series.[3][4]

NVIDIA positioned the GeForce 6 series primarily toward gamers and multimedia enthusiasts, emphasizing superior shader performance to address limitations in prior architectures and compete in the evolving PC graphics market.[5] Key specifications across the series include configurations with up to 16 pixel pipelines in high-end models, a 256-bit memory interface for premium variants, and compatibility with both AGP 8x and emerging PCI Express interfaces to support diverse system integrations.[4][2] The architecture's transition to Shader Model 3.0 further bolstered its capabilities for dynamic lighting and complex effects in real-time rendering.
Release Timeline and Market Context
The GeForce 6 series marked NVIDIA's return to leadership in high-end graphics processing following challenges with the prior GeForce FX lineup, with development focusing on the NV40 core to support emerging DirectX 9 standards and compete directly against ATI's Radeon X800 series, which launched around the same period with comparable 16-pixel pipeline architectures.[6] Initial prototypes of the NV40 were demonstrated at CES 2004, showcasing early capabilities in advanced shading and multi-GPU configurations, ahead of full production ramp-up by mid-2004 to align with the growing demand for 3D gaming and high-definition video playback in consumer PCs.[7] The series' release was staged to cover premium and mainstream segments, emphasizing innovations like Shader Model 3.0 to capitalize on the widespread adoption of DirectX 9 games such as Doom 3.

The flagship models, GeForce 6800 Ultra and GeForce 6800, debuted on April 14, 2004, with the Ultra variant priced at $499 USD for its 256 MB GDDR3 configuration, positioning it as a premium option for enthusiasts seeking superior performance in titles optimized for next-generation effects.[8][9] The GeForce 6800 LE followed in July 2004 as a more accessible variant, initially for AGP interfaces, with PCIe support added in early 2005 to broaden compatibility amid the transition to PCI Express motherboards.[10] NVIDIA expanded the lineup with the mid-range GeForce 6600 GT on August 12, 2004, launched at $199 USD to target value-conscious gamers, delivering strong performance in DirectX 9 environments while undercutting higher-end competitors.[11] Lower-end offerings like the GeForce 6200 arrived in October 2004, incorporating TurboCache technology for budget systems focused on integrated HD video decoding and light gaming.[12]

Market positioning emphasized the series' role in revitalizing NVIDIA's share against ATI's aggressive X800 push, with the GeForce 6 lineup gaining traction through features tailored to rising HD content consumption and multi-monitor setups in professional and gaming workflows.[13] Production of the GeForce 6 series wound down by 2006 as NVIDIA shifted to the GeForce 7 and 8 architectures, though legacy driver support continued into the 2010s to maintain compatibility for older systems.[14]
Core Architecture and Technologies
Graphics Pipeline and NV4x Architecture
The NV4x microarchitecture, underpinning the GeForce 6 series graphics processing units (GPUs), marked NVIDIA's transition to a more programmable rendering paradigm, emphasizing enhanced shader capabilities while retaining distinct processing stages for vertices and pixels. Fabricated initially on a 130 nm process node by TSMC, the architecture later saw shrinks to 110 nm for select variants like the NV43 core in the GeForce 6600 series, enabling higher densities and efficiency in mid-range models. The flagship NV40 core, powering the GeForce 6800, integrated 222 million transistors across a 287 mm² die, balancing complexity with manufacturability.[15][15]

At its core, the NV4x pipeline featured a scalable design with 12 to 16 pixel pipelines and 5 to 8 vertex processing units, depending on the model, alongside up to 16 texture mapping units (TMUs) for efficient sampling and filtering operations. This structure represented a shift from fixed-function hardware toward greater programmability, with the pixel pipelines processing data in quads (groups of four pixels) to optimize SIMD efficiency, while vertex units handled transformations and skinning with support for up to 65,536 instructions per shader. The architecture's superscalar pixel shader units, each capable of dual operations per clock cycle, doubled throughput compared to prior generations, facilitating complex effects like dynamic branching and vertex texturing, though full unification of shader types would await later architectures. For instance, the NV40's 16 pixel pipelines and 6 vertex units enabled peak rates of up to 6.4 billion pixels per second in fill operations.[2][15][16]

Clock speeds for NV4x cores varied by model but typically ranged from 300 MHz to 400 MHz for the core, with memory controllers operating at 500–700 MHz for GDDR3 modules; the GeForce 6800 Ultra, for example, ran at 400 MHz core and 550 MHz memory. Interface support included initial compatibility with AGP 8x for legacy systems, alongside native PCI Express x16 for newer platforms, providing up to 4 GB/s bidirectional bandwidth to reduce bottlenecks in data transfer. Memory configurations utilized GDDR3 at 256 MB to 512 MB capacities, with 256-bit interfaces delivering bandwidths up to 35.2 GB/s in high-end implementations.[17][2][17]

The NV4x design also introduced elevated power demands, with flagship models like the GeForce 6800 Ultra exhibiting a thermal design power (TDP) of up to 110 W under load, necessitating auxiliary power connectors (e.g., dual 6-pin) and more robust cooling solutions such as active heatsinks with fans. This increase stemmed from the denser transistor integration and higher clock rates, pushing total board power draw to around 100–110 W during intensive rendering, a notable step up from the 50–70 W of prior series and influencing system-level thermal management.[17]
Shader Model 3.0 Implementation
The GeForce 6 series, utilizing the NV4x architecture, provided full hardware support for DirectX 9.0's Shader Model 3.0 (SM 3.0), marking a pivotal advancement in programmable shading capabilities. SM 3.0 mandates dynamic branching and looping to enable conditional code execution within shaders, implemented in NV4x with minimal performance penalties—typically incurring 2 cycles for vertex shaders and 4-6 cycles for fragment shaders per branch. This allows developers to create more adaptive programs that skip unnecessary operations, contrasting with the static flow of SM 2.0. Additionally, SM 3.0 requires extended instruction counts to handle complex algorithms, with NV4x supporting up to 65,535 instructions for fragment shaders and 65,536 dynamic instructions for vertex shaders, vastly surpassing the 512-instruction limit of prior models. Full 32-bit floating-point (FP32) precision is enforced across both vertex and fragment operations, ensuring accurate computations for lighting and texturing, while optional 16-bit floating-point (FP16) modes offer pathways for higher throughput in bandwidth-sensitive scenarios.[16] NV4x enables these features through specialized enhancements in its shader units, positioning pixel shaders to complement vertex processing within the fixed-function pipeline while introducing vertex texture fetch as a foundational capability. Vertex shaders can access up to four unique 2D textures using nearest-neighbor filtering, facilitating data-driven deformations and serving as an early precursor to geometry shaders in subsequent architectures by allowing texture-based vertex displacement. Geometry instancing is further supported via programmable vertex stream frequency divisors, which replicate vertex data across instances efficiently without full geometry shader primitives. These elements integrate seamlessly with the broader NV4x graphics pipeline, where separate vertex and fragment processors maintain distinct roles, though without the unified shader model that would emerge later. Integer operations remain a relative weakness, as the hardware prioritizes FP32/FP16 arithmetic, with only limited 16-bit integer texture formats available—insufficient for applications requiring broad dynamic range in non-floating-point data.[16][18] In terms of performance, SM 3.0 compliance delivered notable uplifts in shader-intensive workloads over the GeForce FX series, particularly in scenes leveraging dynamic control flow. For instance, soft shadow rendering demos achieved more than double the frame rate on GeForce 6 GPUs by using branching to evaluate only relevant samples, avoiding the overhead of exhaustive computations common in SM 2.0 implementations. Broader benchmarks reflect this efficiency; the GeForce 6800 Ultra scored approximately 7,200 in 3DMark 05 at stock settings, more than quadrupling the GeForce FX 5950 Ultra's typical results around 1,500 under similar conditions.[19][20][21] This support profoundly influenced developers by unlocking effects previously constrained by hardware limits, such as parallax mapping for simulating surface depth via vertex texture displacement and intricate water simulations through multi-pass rendering with multiple render targets (up to four simultaneous outputs). 
In titles like Doom 3, SM 3.0 enabled enhanced mods incorporating parallax occlusion for more realistic bump mapping and dynamic fluid effects, expanding beyond the game's baseline SM 2.0 requirements to leverage NV4x's extended instruction budgets for deferred shading techniques. Multiple render targets proved especially valuable for scalar data processing, allowing up to 16 values per pass to streamline complex simulations.[16][22]
Memory and Bus Interfaces
The GeForce 6 series introduced GDDR3 SDRAM as the standard memory type for its high-end and upper mid-range GPUs, marking a shift from the DDR memory used in prior generations to achieve higher bandwidth efficiency through improved signaling and prefetch mechanisms.[17] This memory operated at clock speeds ranging from 500 to 700 MHz, enabling faster data transfer rates suitable for demanding 3D rendering tasks.[1] Capacities varied across the lineup, starting at 128 MB for mid-range models like the GeForce 6600 GT and scaling up to 512 MB in select high-end variants of the GeForce 6800 series, providing ample framebuffer space for textures and frame buffers in games and applications of the era.[23][17]

Memory bus widths were tailored to performance tiers, with 256-bit interfaces on high-end cards such as the GeForce 6800 Ultra delivering theoretical bandwidths up to 35.2 GB/s, which supported high-resolution textures and complex shaders without significant bottlenecks.[17] In contrast, mid-range options like the GeForce 6600 series employed 128-bit buses, yielding bandwidths around 16 GB/s, sufficient for mainstream gaming while maintaining cost-effectiveness.[23] These configurations balanced memory throughput with power and die area constraints inherent to the NV4x architecture.

The series supported both legacy AGP 8x interfaces, offering up to 2.1 GB/s of unidirectional bandwidth for compatibility with older systems, and the emerging PCI Express x16 standard, which provided 4 GB/s of bidirectional throughput to future-proof adoption in new motherboards.[1] This dual-interface approach allowed broad market penetration, with AGP versions ensuring usability in pre-PCIe setups. Some AGP models incorporated bridge chips to adapt native PCI Express designs, enabling performance close to native implementations without full redesigns.[24]

A notable innovation in the entry-level segment was TurboCache, featured on select GeForce 6200 variants that pair a small amount of onboard memory (16–128 MB) with system RAM accessed over the PCI Express bus, effectively simulating a larger memory pool.[25] This technology reduced the need for large dedicated memory chips on budget cards, improving cost and power efficiency while maintaining playable frame rates in lighter workloads.[1]
Key Features
SLI Multi-GPU Technology
NVIDIA's Scalable Link Interface (SLI) technology, originally developed for professional Quadro graphics cards, was revived for consumer gaming with the GeForce 6 series to enable multi-GPU configurations.[26] This revival shifted SLI from its professional roots to high-performance gaming, allowing two compatible GPUs to work together for improved frame rates in supported applications.[26]

SLI in the GeForce 6 series operates through two primary rendering modes: Alternate Frame Rendering (AFR), where each GPU renders alternating frames for sequential processing, and Split Frame Rendering (SFR), which divides the screen into top and bottom sections for parallel rendering by each GPU.[27] Data synchronization between GPUs occurs via a dedicated SLI bridge connector, providing 1 GB/s of bandwidth, though this introduces some overhead due to inter-GPU communication. In supported games, SLI typically delivers 1.5 to 1.9 times the performance of a single GPU, with scaling varying by title, resolution, and rendering mode; AFR often excels in CPU-bound scenarios, while SFR benefits fill-rate-intensive workloads.[26][27]

Implementation requires two identical SLI-certified GeForce 6 series GPUs from the same manufacturer, installed in a high-end motherboard featuring two PCI Express x16 slots spaced for dual-slot cards.[28][27] For AGP-based variants, an NVBridge adapter converts the interface to PCIe compatibility, while native PCIe cards use a direct SLI connector.[24] NVIDIA certifies specific profiles for optimal load balancing in games, managed through driver settings.[27]

SLI debuted with the high-end GeForce 6800 series in October 2004, coinciding with the launch of the GeForce 6800 Ultra SLI configuration.[29] Driver support arrived via ForceWare release 66.93 in November 2004, enabling SLI functionality alongside GeForce 6 optimizations like 512 MB frame buffer handling. Key limitations include bandwidth constraints from the SLI bridge, which can reduce scaling in bandwidth-sensitive scenarios, and lack of support for entry-level models like the GeForce 6200 series.[28] Dual-GPU configurations roughly double power consumption, necessitating robust power supplies exceeding 500 W with high 12 V rail amperage.[30] SLI was primarily targeted at high-end setups built around GeForce 6800 series cards seeking maximum gaming performance.[28]
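The 1.5 to 1.9 times scaling range quoted above can be related to how much of a frame's work actually splits across the two GPUs. The back-of-the-envelope Amdahl-style estimate below is purely an illustration of that relationship and is not Nvidia's load-balancing model.

```python
# Amdahl-style estimate of two-GPU SLI scaling: only the parallelizable share
# of per-frame work is split across both GPUs; the rest (sync, CPU submission,
# inter-GPU transfers) behaves as if it ran on one GPU. Purely illustrative.

def sli_speedup(parallel_fraction: float, gpus: int = 2) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / gpus)

for p in (0.67, 0.80, 0.90, 0.95):
    print(f"{p:.0%} parallel work -> {sli_speedup(p):.2f}x speedup")
```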
Nvidia PureVideo Video Decoding
Nvidia PureVideo is a video processing technology developed by Nvidia, combining dedicated hardware acceleration within the GeForce 6 series GPUs and accompanying software to enhance video decoding and post-processing for improved playback quality on PCs.[31] Introduced in December 2004 via driver updates and the Nvidia DVD Decoder software, it leverages a programmable video processor embedded in the NV4x architecture of the GeForce 6 series to offload tasks from the CPU, enabling smoother handling of standard and high-definition video content.[31][32]

At its core, PureVideo provides hardware-accelerated decoding for MPEG-2 video streams, including inverse quantization (IQ), inverse discrete cosine transform (IDCT), and motion compensation, which significantly reduces CPU utilization during playback of DVDs and broadcast content.[32] It also supports Windows Media Video (WMV) formats, extending to high-definition resolutions such as 720p and 1080i, facilitated by a 16-way vector processor for efficient handling of compressed video data.[31] This acceleration is compliant with ISO MPEG-1 and MPEG-2 standards, supporting various aspect ratios like full frame, anamorphic widescreen, letterbox, and pan-and-scan, while enabling real-time processing for ATSC high-definition tuners.[32]

Post-processing capabilities form a key aspect of PureVideo, utilizing spatial and temporal adaptive de-interlacing to convert interlaced video into progressive scan for sharper images on modern displays, alongside inverse telecine (3:2 pulldown) correction to eliminate judder from film-to-video transfers and bad edit detection to mitigate playback artifacts.[31][32] Additional enhancements include flicker reduction through multi-stream scaling and display gamma correction for accurate color reproduction, contributing to home-theater-like quality with reduced stuttering and improved clarity in DVD, HDTV, and recorded video scenarios.[31]

Implementation across the GeForce 6 series varies by model, with higher-end variants like the GeForce 6800 featuring a more robust video processor for advanced features, while entry-level options such as the GeForce 6200 provide baseline MPEG-2 acceleration and de-interlacing modes (e.g., adaptive, blend fields, weave).[32] Overall, PureVideo marked a significant advancement in GPU-assisted media playback, lowering system requirements for high-quality video and paving the way for integrated multimedia in graphics hardware.[31]
IntelliSample 4.0 Antialiasing
IntelliSample 4.0 represented NVIDIA's fourth-generation antialiasing technology, integrated into the GeForce 6 series GPUs based on the NV4x architecture, to enhance image quality by reducing jagged edges and improving rendering of complex scenes. This version introduced advanced techniques for handling transparency effects, which were particularly challenging in games featuring foliage, fences, or other alpha-blended textures. By leveraging hardware acceleration, it aimed to deliver smoother visuals without excessively impacting frame rates. Full IntelliSample 4.0 support was enabled starting with ForceWare driver version 91.47.[33]

At its core, IntelliSample 4.0 employed adaptive transparency antialiasing, including transparency adaptive supersampling (TAA) and transparency adaptive multisampling (TSAA), which dynamically adjusted sampling based on alpha channel data to focus higher sample rates on partially transparent pixels while using fewer samples for opaque or fully transparent areas. It also supported rotated grid multisampling, a pattern that rotated sample points to provide more uniform coverage and reduce aliasing artifacts compared to standard grid methods, with capabilities up to 16x coverage for superior detail in high-resolution renders. These methods significantly improved the rendering of alpha-tested textures, such as leaves or wireframes, by minimizing shimmering and moiré patterns that plagued earlier implementations.[33][34][35]

Compared to IntelliSample 3.0 in prior generations, version 4.0 offered better handling of alpha-tested textures through its adaptive modes, which could achieve up to 4x the performance of traditional supersampling while maintaining or exceeding image quality. This efficiency stemmed from selective sampling, reducing the computational overhead in transparency-heavy scenes and allowing for higher antialiasing levels without proportional frame rate drops. Rotated grid improvements further enhanced overall antialiasing performance, providing cleaner edges in general rendering.[33][36]

The technology was hardware-accelerated within the NV4x core, supporting modes such as 4x and 8x multisampling with gamma correction to ensure accurate color representation across varying lighting conditions. Gamma-corrected antialiasing (GCAA) adjusted for nonlinear gamma curves, preventing washed-out or overly dark results in shadowed areas. These features were accessible via NVIDIA's ForceWare drivers starting from version 91.47, enabling full IntelliSample 4.0 functionality on GeForce 6 GPUs. It synergized with Shader Model 3.0 capabilities to refine antialiasing in shader-driven effects.[36][37]

IntelliSample 4.0 was compatible with DirectX 8 through 9 APIs, making it suitable for a wide range of contemporary games and applications. In foliage-heavy scenes, such as those in titles like Far Cry or Doom 3, it delivered noticeable quality uplifts, with adaptive modes reducing visible aliasing on vegetation by selectively applying higher sampling where needed.[33]

As a foundational advancement, IntelliSample 4.0 served as a precursor to later NVIDIA technologies like Coverage Sampling Antialiasing (CSAA) and Sparse Grid Supersampling (SSAA), influencing subsequent generations' approaches to efficient, high-quality antialiasing. It was prominently featured in the GeForce 6 and 7 series before evolving in later architectures.[33][36]
Model Lineup
Model Lineup

High-End: GeForce 6800 Series
The GeForce 6800 series represented NVIDIA's flagship high-end graphics cards in the GeForce 6 lineup, targeting enthusiasts seeking top-tier performance in DirectX 9-era gaming. Launched in April 2004, these cards were built on the NV40 graphics processor, featuring 16 pixel pipelines and a 256-bit memory interface that enabled high bandwidth for complex rendering tasks. The series marked NVIDIA's return to competitiveness at the high end after the challenges of the prior GeForce FX generation, emphasizing Shader Model 3.0 support for advanced effects such as dynamic shadows and complex water simulations.

The model lineup began with the GeForce 6800 Ultra, equipped with a 400 MHz core clock and 256 MB of GDDR3 memory running at 1.1 GHz effective speed (550 MHz clock), delivering robust frame rates at high resolutions.[38] The GeForce 6800 GT arrived in July 2004 with a reduced 350 MHz core clock and 256 MB of GDDR3 memory running at 1.0 GHz effective speed (500 MHz clock), offering near-identical architecture at a more accessible price point while maintaining the full 16 pipelines.[39] Variants like the GeForce 6800 LE further diversified the offerings, operating at a lowered 300 MHz core clock with 128 MB or 256 MB memory options, often produced by add-in-board partners to target slightly less demanding users without sacrificing core features.[40] These models positioned the 6800 series as premium solutions, with the Ultra aimed at maximum performance and the GT providing value for 1600x1200 gaming.

A key distinction of the GeForce 6800 series was its status as the first consumer-grade cards certified for SLI multi-GPU operation, allowing two cards to be combined for up to nearly double the performance in supported games on PCI Express motherboards. This broadened high-end scalability for gamers, with the 16 pipelines and 256-bit bus helping to keep load balancing efficient and bottlenecks minimal in SLI configurations. The series also supported NVIDIA's PureVideo video decoding, though the capability varied by model and board.

For legacy systems, the standard GeForce 6800 was also offered for the AGP 8x interface, running a reduced 325 MHz core clock and plain DDR memory in place of GDDR3 to stay within the platform's power and thermal constraints.[4] This version retained 12 active pixel pipelines (with the remaining units sometimes unlockable) and the 256-bit bus, making it a viable upgrade for pre-PCIe motherboards, albeit without the full performance of its PCI Express counterparts.[4]

In benchmarks, the GeForce 6800 series competed closely with ATI's Radeon X800: in Half-Life 2, for example, the 6800 GT achieved 2-3% higher average frame rates at 1024x768 and 1280x1024 than the X800 Pro, thanks to optimized pixel shader execution.[41] This edge, together with Shader Model 3.0 support that the X800 lacked, highlighted the NV40's efficiency in dynamic lighting and vertex processing and established the series as a benchmark for next-generation gaming.

The GeForce 6800 Ultra was released on April 14, 2004, at a suggested retail price of $499, while the GT followed in July at around $399, later dropping to $349 amid competition.[38][42] These cards solidified NVIDIA's high-end position through 2004, powering early adopters of titles leveraging SM 3.0 until the GeForce 7 transition.
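For orientation, a back-of-the-envelope calculation from the bus width and effective memory data rates quoted above (not a figure taken from the cited reviews) gives the peak memory bandwidth of the 256-bit boards:

\[
\frac{256\,\text{bit}}{8\,\text{bit/B}} \times 1.1\,\text{GT/s} = 35.2\ \text{GB/s (6800 Ultra)}, \qquad 32\,\text{B} \times 1.0\,\text{GT/s} = 32\ \text{GB/s (6800 GT)}
\]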
Upper Mid-Range: GeForce 6600 Series

The GeForce 6600 GT served as the flagship of NVIDIA's upper mid-range offerings in the GeForce 6 series, providing strong performance for gamers seeking high frame rates without the premium cost or power requirements of the 6800 series. Launched on August 12, 2004, at a manufacturer-suggested retail price of $199, the card featured the NV43 GPU with a 500 MHz core clock, 1.0 ns GDDR3 memory running at 1000 MHz effective speed across a 128-bit bus, and 128 MB of frame buffer in most configurations (with some 256 MB boards offered by partners). It included 8 pixel pipelines and full Shader Model 3.0 support for advanced effects in DirectX 9.0 games. The model was initially available exclusively in PCI Express form, marking NVIDIA's push toward the new interface standard.[23][43][44]

Key variants expanded accessibility for legacy systems: the GeForce 6600 GT AGP adapted the same core to the AGP 8x bus with memory slightly reduced to 900 MHz effective, targeting users unable to upgrade to PCI Express motherboards. The non-GT GeForce 6600 retained the eight pipelines but ran at lower clocks—typically 300 MHz core with roughly 500-550 MHz effective memory—while the cut-down GeForce 6600 LE and OEM-oriented derivatives with adjusted clocks served more budget-conscious builds. The 6600 GT's design emphasized efficiency, drawing under 70 W and, in its PCI Express form, requiring no auxiliary power connector, making it suitable for a wide range of PCs. It also supported SLI multi-GPU configurations for enhanced performance in compatible setups.[45][46]

The 6600 series excelled at delivering smooth gaming at 1024x768 resolution, handling titles like Doom 3 and Half-Life 2 at high settings, thanks to its complete Shader Model 3.0 implementation without the power demands of the 6800 Ultra. Market reception highlighted its value, positioning it as a bestseller in late 2004 due to superior price-to-performance ratios; for instance, it consistently outperformed the ATI Radeon X700 Pro by 20-30% in key benchmarks such as Doom 3 at 1024x768. This success stemmed from its balanced specifications, enabling mainstream adoption without compromising on features like programmable shaders or high-precision texture filtering.[47][48]
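As a rough comparison of raw throughput, derived from the pipeline counts and core clocks given above rather than from the cited reviews, the theoretical peak pixel fill rates are:

\[
8 \times 500\,\text{MHz} = 4.0\ \text{Gpixels/s (6600 GT)} \quad\text{versus}\quad 16 \times 400\,\text{MHz} = 6.4\ \text{Gpixels/s (6800 Ultra)}
\]

which helps illustrate why the 6600 GT could approach high-end performance at mainstream resolutions while falling behind at higher ones.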
Mid-Range: GeForce 6500

The GeForce 6500, released by NVIDIA on October 1, 2005, served as a budget-oriented mid-range graphics card within the GeForce 6 series, aimed at casual gamers upgrading from the aging GeForce FX lineup. Priced at an MSRP of $129 to $149, it offered an entry point into DirectX 9 gaming without the premium cost of higher-end models like the GeForce 6600 series.[49][50]

Built on the NV44 graphics core fabricated at 110 nm, the GeForce 6500 featured 4 pixel pipelines and 3 vertex shaders, paired with a 64-bit memory bus supporting up to 256 MB of DDR2 memory. Core clock speeds typically ranged from 300 to 400 MHz, with memory at approximately 533 MHz effective for DDR2 variants, delivering bandwidth of roughly 3.2 to 4.25 GB/s depending on configuration.[50][51]

The card provided full Shader Model 3.0 support, though its reduced pipeline count relative to flagship NV40-based GPUs limited how far those shader capabilities could be exploited in complex scenes. Unlike the GeForce 6200 below it, the 6500 retained SLI multi-GPU support, but it was marketed primarily as an affordable single-card solution. A TurboCache variant used as little as 16-32 MB of on-board VRAM supplemented by system RAM for effective 128-256 MB operation, suitable for memory-constrained builds.[50]

In performance, the GeForce 6500 handled DirectX 9 titles adequately at low to medium settings and 1024x768 resolution, achieving playable frame rates in games like Half-Life 2, but it exhibited weaknesses in bandwidth-heavy scenarios such as high-resolution textures or antialiasing due to its narrow bus.[49][50] Variants included the GeForce 6500 LE, which operated at reduced clocks (around 300 MHz core) for lower power draw and cost, and a PCI interface version for legacy systems lacking PCIe slots, maintaining core specifications with adjusted memory options.[50]
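The quoted bandwidth range follows directly from the 64-bit bus (8 bytes per transfer) and effective memory data rates of roughly 400-533 MT/s; this is a sanity check on the figures above, with the 400 MT/s lower bound inferred rather than stated:

\[
8\,\text{B} \times 400\,\text{MT/s} = 3.2\ \text{GB/s}, \qquad 8\,\text{B} \times 533\,\text{MT/s} \approx 4.26\ \text{GB/s}
\]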
Entry-Level: GeForce 6200 Series

The GeForce 6200 series, based on the NV44 graphics processor fabricated on a 110 nm process, served as NVIDIA's entry-level discrete GPU offering in the GeForce 6 lineup, targeting budget-conscious users seeking basic graphics acceleration. It featured 4 pixel pipelines and typically operated at core clock speeds of 300 to 350 MHz, paired with 64 to 128 MB of DDR2 memory on a 64-bit or 128-bit bus depending on the variant.[12][52] This configuration provided modest performance suitable for everyday computing tasks, with memory bandwidth reaching up to 8.8 GB/s in the better-equipped configurations.[12]

Key variants included the GeForce 6200 TurboCache editions, which carried as little as 16 MB of onboard DDR2 memory supplemented by system RAM via NVIDIA's TurboCache technology on a 64-bit bus, aimed at reducing costs for low-profile or legacy systems; standard models offered fuller 128 MB DDR2 setups on 128-bit buses. Cards were available with PCIe x16, AGP 8x, and PCI interfaces to support a range of PC builds, including older motherboards.[25]

Launched in March 2005 at a manufacturer's suggested retail price of $79 for the AGP model, the GeForce 6200 was positioned for home theater PCs (HTPCs) and office systems requiring reliable 2D/3D acceleration rather than high-end gaming capabilities.[53] It supported DirectX 9.0 with Shader Model 3.0, marking the first low-end GPU to include this advanced programmable shading standard, though its four pixel pipelines constrained performance in complex scenes. Basic NVIDIA PureVideo hardware decoding enabled smooth playback of standard-definition video, but the card struggled with contemporary shader-intensive games even at launch, often failing to maintain playable frame rates beyond 800x600 resolution in titles like Doom 3.[54][55][56]
Integrated: GeForce 6100 and 6150 Series

The GeForce 6100 and 6150 series integrated graphics processors (IGPs) were NVIDIA's entry into onboard GPU solutions for budget desktop systems, built directly into the northbridge of nForce chipsets to provide cost-effective multimedia and basic 3D acceleration without requiring a separate graphics card. Announced in September 2005 and launched shortly thereafter, these IGPs were primarily bundled with AMD-compatible motherboards pairing the graphics northbridge with the MCP51 southbridge, and later extended to Intel platforms via the nForce 400 series for broader compatibility.[57][58] Adding approximately $50-100 to motherboard cost, they targeted entry-level PCs for home users, emphasizing affordability over high performance.[57]

Architecturally, both IGPs were built into the C51 northbridge, fabricated on a 90 nm process as part of NVIDIA's Curie (NV4x) architecture—the same design underlying the rest of the GeForce 6 family, but with severely limited rendering resources compared to discrete counterparts; later single-chip MCP61 (C61) parts carried 6100- and 6150SE-branded graphics. Each featured 2 pixel pipelines, 1 vertex shader unit, 1 texture mapping unit, and 1 render output unit, operating at core clocks of 425 MHz for the 6100 and 475 MHz for the 6150, while relying on shared system RAM (up to 256 MB allocatable) rather than dedicated VRAM for all memory operations.[59][60] This integration into the chipset enabled efficient use of platform resources, including HyperTransport links on AMD systems, but prioritized power efficiency and board space over raw compute power.[57]

Key features included support for Shader Model 3.0, enabling basic DirectX 9.0c compatibility for light gaming titles of the era, alongside a lite version of NVIDIA PureVideo (VP1) for hardware-assisted decoding of MPEG-2, WMV9, and H.264 video formats to enhance multimedia playback with minimal CPU overhead.[58][60] The GeForce 6100 and 6150 were typically paired with the nForce 410 or 430 MCP on standard OEM and retail boards, while derivatives such as the 6150 LE and 6150SE were tailored for OEM integrations, often in single-chip configurations.[57] These IGPs were well suited to budget use cases such as video playback, web browsing, and casual gaming at low resolutions, but lacked SLI multi-GPU support due to their integrated nature and limited bandwidth.[58]
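The scale of the gap to discrete boards is easy to see from the pipeline counts and clocks given above (an illustrative calculation, not a benchmark result):

\[
2 \times 425\,\text{MHz} = 0.85\ \text{Gpixels/s (6100)}, \qquad 2 \times 475\,\text{MHz} = 0.95\ \text{Gpixels/s (6150)}
\]

roughly a quarter of the GeForce 6600 GT's theoretical fill rate, before accounting for the memory bandwidth shared with the CPU.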
Variants and Derivatives

AGP and TurboCache Adaptations
To support legacy systems with AGP interfaces, NVIDIA offered dedicated silicon and board designs across the GeForce 6 series, including native AGP dies such as the NV44a used in entry-level cards like the GeForce 6200 AGP, alongside AGP versions of high-end models such as the GeForce 6800 GS AGP.[61][52] These cards operate on the AGP 8x bus, which runs at a base clock of 66 MHz with effective signaling of 533 MT/s for a data transfer rate of approximately 2.1 GB/s, and their core and memory clocks were sometimes set differently from their PCIe counterparts to manage power and thermal constraints.[61][52] Later AGP adaptations, particularly mid-range models like the GeForce 6600 GT AGP, incorporated NVIDIA's AGP-to-PCIe bridge chip (the BR02 High-Speed Interconnect) to allow native PCIe GPUs to function in AGP slots, enabling broader compatibility without full redesigns but introducing minor latency overhead from the bridging process.[24] These adaptations were released primarily in 2005, targeting pre-2004 motherboards and extending GeForce 6 features such as Shader Model 3.0 to older setups.[62]

Complementing AGP support, NVIDIA introduced TurboCache technology in memory-constrained GeForce 6 models to hold down costs, utilizing 16-32 MB of on-board memory as a cache while dynamically allocating system RAM to present effective capacities of 64-128 MB or higher. This hardware-software solution leverages the PCI Express bus (or AGP in adapted variants) to combine the dedicated video memory's bandwidth with a share of system memory bandwidth, minimizing soldered VRAM on entry-level cards while maintaining reasonable 3D rendering capability. Models using TurboCache included the GeForce 6200 TurboCache (with variants such as 16-TC/128 MB at 350 MHz core and 700 MHz DDR memory on a 64-bit bus) and the GeForce 6500 TurboCache, as well as the integrated GeForce 6150 with optional 32 MB sideport memory for similar caching.[63]

While TurboCache extended GeForce 6 features to budget and legacy upgrades without high-end VRAM, it incurred a typical 20-30% performance penalty from the added latency of system-RAM access and the shared bandwidth, with effective throughput limited to around 4 GB/s over PCIe compared with full local-VRAM configurations. For example, the GeForce 6200 TurboCache 64-TC variant advertised 256 MB in total but showed reduced efficiency in bandwidth-intensive tasks such as antialiasing, making it suitable for basic gaming and multimedia at resolutions up to 1024x768 but less ideal for demanding applications. Overall, these adaptations prioritized market reach over peak performance, enabling users with AGP-based pre-2004 PCs to access DirectX 9 features without a full platform overhaul.[62]
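As a rough way to see where the quoted 20-30% penalty comes from, the sketch below models a TurboCache board as a small pool of fast local memory plus system RAM reached over PCIe. The blending formula, the 5.6 GB/s local figure (64-bit × 700 MT/s from the 16-TC example above), and the uniform-access assumption are illustrative simplifications, not NVIDIA's documented behavior.

```c
/* Illustrative model of how a TurboCache card advertises a larger frame
 * buffer than it carries: the shortfall is mapped into system RAM and
 * reached over PCIe, whose ~4 GB/s ceiling (noted above) bounds it. */
#include <stdio.h>

struct turbocache {
    unsigned local_mb;        /* soldered VRAM, e.g. 16 or 32 MB      */
    unsigned advertised_mb;   /* effective size, e.g. 128 or 256 MB   */
    double   local_bw_gbs;    /* bandwidth of the on-board memory     */
    double   pcie_bw_gbs;     /* usable PCIe bandwidth (~4 GB/s)      */
};

static unsigned borrowed_mb(const struct turbocache *tc) {
    return tc->advertised_mb - tc->local_mb;
}

/* Crude weighted average: accesses that spill past local VRAM pay the
 * PCIe rate, which is why bandwidth-heavy settings suffer most. */
static double blended_bw_gbs(const struct turbocache *tc) {
    double local_share = (double)tc->local_mb / tc->advertised_mb;
    return local_share * tc->local_bw_gbs
         + (1.0 - local_share) * tc->pcie_bw_gbs;
}

int main(void) {
    /* Numbers follow the 6200 TurboCache 16-TC/128 MB example above;
     * 5.6 GB/s local bandwidth is 8 B x 700 MT/s on the 64-bit bus. */
    struct turbocache tc = { 16, 128, 5.6, 4.0 };
    printf("borrowed from system RAM: %u MB\n", borrowed_mb(&tc));
    printf("blended bandwidth (rough): %.1f GB/s\n", blended_bw_gbs(&tc));
    return 0;
}
```

Under these assumptions the blended figure lands around 4.2 GB/s, roughly a quarter below the 5.6 GB/s of the local memory alone, in line with the penalty quoted above; real workloads skew toward whatever data the driver keeps local, so the true cost depends heavily on the scene and settings.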
Mobile GeForce Go 6 Series

The GeForce Go 6 series represented NVIDIA's mobile implementation of the GeForce 6 architecture, tailored for laptop environments with modifications to core clocks, power draw, and thermal profiles to accommodate battery life and heat dissipation constraints in portable systems. Launched starting in late 2004, these GPUs targeted gaming-oriented notebooks, providing desktop-like performance while operating within typical mobile thermal design power (TDP) limits of 45-89 W for high-end models.[64][65] The series shared key architectural features with its desktop counterparts, including support for Shader Model 3.0, which enabled programmable vertex and pixel shading for enhanced visual effects in games.[66]

The lineup consisted of high-end, mid-range, and entry-level variants, all based on the NV40 architecture but using mobile-specific dies such as the NV41M for the top tier and NV43/NV44M for lower models, manufactured on 110-130 nm processes. High-end options included the GeForce Go 6800 and its Ultra variant (often denoted UHP, for Ultra High Performance), which featured 12 pixel pipelines and up to 256 MB of GDDR3 memory. Mid-range models encompassed the GeForce Go 6600 TE (Turbo Edition) and GT, offering balanced performance with 128-bit memory interfaces. The entry-level GeForce Go 6200 rounded out the series, emphasizing efficiency for lighter workloads.[64][65][67][68][69]

| Model | Core Clock (MHz) | Memory (Max) | TDP (W) | Process (nm) | Release Date | Key Die |
|---|---|---|---|---|---|---|
| Go 6800 | 300 | 256 MB GDDR3 | 45 | 130 | Nov 2004 | NV41M |
| Go 6800 Ultra (UHP) | 450 | 256 MB GDDR3 | 89 | 130 | Feb 2005 | NV41M |
| Go 6600 TE/GT | 375 | 256 MB DDR2 | 25 | 110 | Sep 2005 | NV43 |
| Go 6200 | 300 | 64 MB DDR2 | 25 | 110 | Feb 2006 | NV44M |
Performance and Legacy
Chipset Specifications and Comparisons
The GeForce 6 series introduced NVIDIA's NV4x architecture, featuring scalable designs from high-end to integrated solutions, with core dies ranging from the full-featured NV40 to more compact variants like the NV43 and NV44. All models supported DirectX 9.0c with Shader Model 3.0 and IntelliSample 3.0 antialiasing, but differed in pipeline counts, clock speeds, and memory configurations to target various market segments.[2] The following table summarizes key specifications for representative desktop models in the series; integrated options like the GeForce 6150 share similar architectural traits but use system memory.

| Model | Core | Pipelines (Pixel/Vertex) | Memory Bus/Type | Core Clock (MHz) | Memory Clock (MHz effective) | TDP (W) | Launch Date |
|---|---|---|---|---|---|---|---|
| GeForce 6800 Ultra | NV40 | 16/6 | 256-bit GDDR3 | 400 | 1100 | 110 | Apr 2004 |
| GeForce 6800 GT | NV40 | 16/6 | 256-bit GDDR3 | 350 | 1000 | 89 | Jul 2004 |
| GeForce 6800 | NV40 | 12/5 | 256-bit DDR | 325 | 700 | 65 | Oct 2004 |
| GeForce 6600 GT | NV43 | 8/3 | 128-bit GDDR3 | 500 | 1000 | 58 | Aug 2004 |
| GeForce 6600 | NV43 | 8/3 | 128-bit DDR | 300 | 500 | 40 | Apr 2005 |
| GeForce 6500 | NV44 | 4/3 | 64-bit DDR2 | 400 | 666 | 25 | Oct 2005 |
| GeForce 6200 | NV44 | 4/3 | 64-bit DDR2 | 300 | 600 | 20 | Dec 2004 |
| GeForce 6150 (integrated) | C51 | 2/1 | Shared DDR2 | 475 | System-dependent | 20 | Sep 2005 |
