GeForce 3 series
| Release date | February 27, 2001 |
|---|---|
| Codename | NV20 |
| Architecture | Kelvin |
| Cards | |
| Mid-range | Ti 200 |
| High-end | GeForce 3 (original), Ti 500 |
| API support | |
| Direct3D | Direct3D 8.0 (Vertex Shader 1.1, Pixel Shader 1.1) |
| OpenGL | OpenGL 1.3 |
| History | |
| Predecessor | GeForce 2 (NV15) |
| Successor | GeForce 4 Ti (NV25) |
| Support status | Unsupported |
The GeForce 3 series (NV20) is the third generation of Nvidia's GeForce line of graphics processing units (GPUs). Introduced in February 2001,[1] it advanced the GeForce architecture by adding programmable pixel and vertex shaders and multisample anti-aliasing, and by improving the overall efficiency of the rendering process.
The GeForce 3 was unveiled at the Macworld Conference & Expo Tokyo 2001, held at Makuhari Messe, where it powered real-time demos of Pixar's Luxo Jr. lamp and id Software's Doom 3. Apple would later announce launch rights for the card on its new line of computers.
The GeForce 3 family comprises three consumer models: the GeForce 3, the GeForce 3 Ti200, and the GeForce 3 Ti500. A separate professional version, with a feature set tailored for computer-aided design, was sold as the Quadro DCC. A derivative of the GeForce 3, known as the NV2A, is used in the Microsoft Xbox game console.
Architecture
The GeForce 3 was introduced three months after Nvidia acquired the assets of 3dfx. Its programmable shader core was marketed as the nFinite FX Engine, and it was the first Microsoft Direct3D 8.0-compliant 3D card. The programmable shader architecture enabled applications to execute custom visual effects programs written in Microsoft's shader language (Vertex Shader and Pixel Shader 1.1). It is believed that the fixed-function T&L hardware from the GeForce 2 was still included on the chip for use with Direct3D 7.0 applications, as the single vertex shader was not yet fast enough to emulate it.[2] With respect to pure pixel and texel throughput, the GeForce 3 has four pixel pipelines, each of which can sample two textures per clock. This is the same configuration as the GeForce 2, excluding the slower GeForce 2 MX line.
To take better advantage of available memory performance, the GeForce 3 has a memory subsystem dubbed Lightspeed Memory Architecture (LMA). This is composed of several mechanisms that reduce overdraw, conserve memory bandwidth by compressing the z-buffer (depth buffer) and better manage interaction with the DRAM.
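The bandwidth figures quoted in the specifications table later in this article follow directly from the memory clock and the 128-bit DDR bus. A minimal sketch of that arithmetic, in plain Python and not tied to any Nvidia tooling:

```python
# Peak theoretical memory bandwidth = memory clock x 2 (DDR) x bus width in bytes.
# Clocks are taken from the specifications table below; GB here means decimal gigabytes.
def ddr_bandwidth_gb_s(mem_clock_mhz: float, bus_width_bits: int = 128) -> float:
    return mem_clock_mhz * 1e6 * 2 * (bus_width_bits // 8) / 1e9

for model, clock in [("GeForce3 Ti200", 200), ("GeForce3", 230), ("GeForce3 Ti500", 250)]:
    print(f"{model}: {ddr_bandwidth_gb_s(clock):.2f} GB/s")
# GeForce3 Ti200: 6.40 GB/s, GeForce3: 7.36 GB/s, GeForce3 Ti500: 8.00 GB/s
```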
Other architectural changes include EMBM support[3][4] (first introduced by Matrox in 1999) and improvements to anti-aliasing functionality. Previous GeForce chips could perform only super-sampled anti-aliasing (SSAA), a demanding process that renders the image at a larger size internally and then scales it down to the output resolution. The GeForce 3 adds multisample anti-aliasing (MSAA) and Quincunx anti-aliasing, both of which perform significantly better than super-sampling at the expense of image quality. With multi-sampling, the render output units super-sample only the Z and stencil buffers, and use that information to capture the extra geometric detail needed to determine whether a pixel covers more than one polygonal object. This saves the pixel/fragment pipeline from having to shade multiple fragments for pixels where a single object covers all of the sub-pixels. The method fails with texture maps that have varying transparency (e.g. a texture map representing a chain-link fence). Quincunx anti-aliasing is a blur filter that shifts the rendered image half a pixel up and half a pixel left in order to create sub-pixels, which are then averaged together in a diagonal cross pattern; this smooths jagged edges but also destroys some overall image detail. Finally, the GeForce 3's texture sampling units were upgraded to support 8-tap anisotropic filtering, compared to the previous limit of 2-tap on the GeForce 2. With 8-tap anisotropic filtering enabled, distant textures can be noticeably sharper.
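As an illustration of the Quincunx idea, the sketch below averages each pixel's own sample with its four diagonal neighbours. The ½/⅛ weights are the commonly cited ones and are an assumption here, not something documented in this article:

```python
import numpy as np

# Hypothetical quincunx-style resolve: each output pixel is a weighted average of
# its own sample (weight 1/2) and its four diagonal neighbours (weight 1/8 each).
def quincunx_resolve(samples: np.ndarray) -> np.ndarray:
    padded = np.pad(samples, 1, mode="edge")
    centre = padded[1:-1, 1:-1]
    diagonals = (padded[:-2, :-2] + padded[:-2, 2:] +
                 padded[2:, :-2] + padded[2:, 2:])
    return 0.5 * centre + 0.125 * diagonals

frame = np.zeros((4, 6))
frame[:, 3:] = 1.0                 # a hard vertical edge
print(quincunx_resolve(frame))     # the edge is softened, at the cost of some sharpness
```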
A derivative of the GeForce 3, known as the NV2A, is used in the Microsoft Xbox game console. It is clocked the same as the original GeForce 3 but features an additional vertex shader.[5][6][7][8][9]
Performance
The GeForce 3 GPU (NV20) has the same theoretical pixel and texel throughput per clock as the GeForce 2 (NV15). The GeForce 2 Ultra is clocked 25% faster than the original GeForce 3 and 43% faster than the Ti200; this means that in select cases, such as Direct3D 7 T&L benchmarks, the GeForce 2 Ultra and sometimes even the GTS can outperform the GeForce 3 and Ti200, because the newer GPUs use the same fixed-function T&L unit but are clocked lower.[10] The GeForce 2 Ultra also has considerable raw memory bandwidth available to it, matched only by the GeForce 3 Ti500. However, when comparing anti-aliasing performance the GeForce 3 is clearly superior because of its MSAA support and its more efficient use of memory bandwidth and fillrate.
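The quoted percentages follow from the core clocks; a quick check, where the 250 MHz GeForce 2 Ultra clock is an assumption since it is not listed in this article:

```python
# GeForce 2 Ultra core clock (250 MHz) is an assumption; 200 MHz and 175 MHz are
# the GeForce 3 and Ti200 clocks from the specifications table below.
gf2_ultra, gf3, ti200 = 250, 200, 175
print(f"vs GeForce 3: {gf2_ultra / gf3 - 1:.0%}")   # 25%
print(f"vs Ti200:     {gf2_ultra / ti200 - 1:.0%}") # 43%
```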
When comparing shading capabilities with the Radeon 8500, reviewers noted superior precision on the ATI card.[11]
Product positioning
Nvidia refreshed the lineup in October 2001 with the release of the GeForce 3 Ti200 and Ti500. This coincided with ATI's releases of the Radeon 7500 and Radeon 8500. The Ti500 has higher core and memory clocks (240 MHz core/250 MHz RAM) than the original GeForce 3 (200 MHz/230 MHz), and generally matches the Radeon 8500 in performance. The Ti200 is clocked lower (175 MHz/200 MHz), making it the lowest-priced GeForce 3 release; it still surpasses the Radeon 7500 in speed and feature set, although it lacks the Radeon's dual-monitor support.
The original GeForce3 and Ti500 were released only in 64 MiB configurations, while the Ti200 was also available in 128 MiB versions.
The GeForce 4 Ti (NV25), introduced in April 2002, was a revision of the GeForce 3 architecture.[12][13] The GeForce 4 Ti was very similar to the GeForce 3; the main differences were higher core and memory speeds, a revised memory controller, improved vertex and pixel shaders, and improved hardware anti-aliasing and DVD playback. Proper dual-monitor support was also brought over from the GeForce 2 MX. With the GeForce 4 Ti 4600 as the new flagship product, this was the beginning of the end for the GeForce 3 Ti 500, which was already difficult to produce because of poor yields, and it was later completely replaced by the much cheaper but similarly performing GeForce 4 Ti 4200.[14] Also announced at the same time was the GeForce 4 MX (NV17), which despite the name was closer in architecture and feature set to the GeForce 2 (NV11 and NV15).[15]

The GeForce 3 Ti200 was kept in production for a short while, as it occupied a niche between the (delayed) GeForce 4 Ti 4200 and the GeForce 4 MX 460, offering performance equivalent to the DirectX 7.0-compliant MX 460 while also providing full DirectX 8.0 support, though without dual-monitor capability.[16] However, ATI released the Radeon 8500LE (a lower-clocked version of the 8500), which outperformed both the Ti200 and the MX 460. ATI's move in turn compelled Nvidia to roll out the Ti 4200 earlier than planned, at a similar price to the MX 460, and to discontinue the Ti200 by summer 2002, partly because of naming confusion with the GeForce 4 MX and Ti lines. The GeForce 3 Ti200 still outperforms the Radeon 9000 (RV250), which was introduced around the time of the Ti200's discontinuation: unlike the 8500LE, which was simply a slower-clocked 8500, the 9000 was a major redesign intended to reduce production cost and power usage, and its performance was equivalent to that of the GeForce 4 MX 440.[17]
Specifications
- All models are fabricated on TSMC's 150 nm process
- All models support Direct3D 8.0 and OpenGL 1.3
- All models support 3D Textures, Lightspeed Memory Architecture (LMA), nFiniteFX Engine, Shadow Buffers
| Model | Launch | Code name | Transistors (million) | Die size (mm²) | Bus interface | Core clock (MHz) | Memory clock (MHz) | Core config[a] | Fillrate (MOperations/s) | Fillrate (MPixels/s) | Fillrate (MTexels/s) | Fillrate (MVertices/s) | Memory size (MB) | Memory bandwidth (GB/s) | Memory bus type | Memory bus width (bit) | Processing power (GFLOPS) | TDP (Watts) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GeForce3 Ti200 | October 1, 2001 | NV20 | 57 | 128 | AGP 4x, PCI | 175 | 200 | 4:1:8:4 | 700 | 700 | 1400 | 43.75 | 64, 128 | 6.4 | DDR | 128 | 8.75 | ? |
| GeForce3 | February 27, 2001 | NV20 | 57 | 128 | AGP 4x, PCI | 200 | 230 | 4:1:8:4 | 800 | 800 | 1600 | 50 | 64 | 7.36 | DDR | 128 | 10.00 | ? |
| GeForce3 Ti500 | October 1, 2001 | NV20 | 57 | 128 | AGP 4x, PCI | 240 | 250 | 4:1:8:4 | 960 | 960 | 1920 | 60 | 64, 128 | 8.0 | DDR | 128 | 12.00 | 29 |
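The fill-rate columns above follow from the core clock and the core configuration. A minimal sketch of the arithmetic, assuming the usual reading of 4:1:8:4 as pixel pipelines : vertex shaders : texture units : ROPs and roughly four clocks per vertex for the single vertex shader (neither assumption is documented in this article):

```python
# Reproduces the fill-rate columns of the table above from the core clock and the
# 4:1:8:4 core configuration (assumed to mean pixel pipelines : vertex shaders : TMUs : ROPs).
PIXEL_PIPES, TMUS = 4, 8

def fillrates(core_mhz: float) -> dict:
    return {
        "MPixels/s": PIXEL_PIPES * core_mhz,
        "MTexels/s": TMUS * core_mhz,
        "MVertices/s": core_mhz / 4,   # single vertex shader, ~4 clocks per vertex (assumption)
    }

for model, clock in [("GeForce3 Ti200", 175), ("GeForce3", 200), ("GeForce3 Ti500", 240)]:
    print(model, fillrates(clock))
# Ti200: 700 / 1400 / 43.75, GeForce3: 800 / 1600 / 50, Ti500: 960 / 1920 / 60
```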
Support
Nvidia has ceased driver support for the GeForce 3 series.

Final drivers
[edit]- Windows 9x & Windows Me: 81.98 released on December 21, 2005; Download;
- Product Support List Windows 95/98/Me – 81.98.
- Driver version 81.98 for Windows 9x/Me was the last driver version ever released by Nvidia for these systems; no new official releases were later made for these systems.
- Windows 2000, 32-bit Windows XP & Media Center Edition: 93.71 released on November 2, 2006; Download.
- Also available: 93.81 (beta) released on November 28, 2006; Download.
- Linux 32-bit: 96.43.23 released on September 14, 2012; Download.
The drivers for Windows 2000/XP can also be installed on later versions of Windows such as Windows Vista and 7; however, they do not support desktop compositing or the Aero effects of these operating systems.
Note: Despite claims in the documentation that 94.24 (released on May 17, 2006) supports the GeForce 3 series, it does not; 94.24 actually supports only the GeForce 6 and GeForce 7 series.[18]
- Windows 95/98/Me Driver Archive
- Windows XP/2000 Driver Archive
- Unix Driver Archive
References
- ^ Witheiler, Matthew (July 25, 2001). "NVIDIA GeForce3 Roundup - July 2001". Anandtech. Archived from the original on September 14, 2010.
- ^ McKesson, Jason. "Programming at Last".
- ^ Labs, iXBT. "April 2002 3Digest - NVIDIA GeForce3". iXBT Labs.
- ^ "GeForce RTX 20 Series Graphics Cards and Laptops".
- ^ Parker, Sam (May 22, 2001). "Inside the Xbox GPU". GameSpot. Retrieved February 13, 2025.
- ^ "Microsoft clarify NV2A". Eurogamer.net. March 28, 2001. Archived from the original on March 20, 2020. Retrieved October 10, 2024.
- ^ Graphics Processor Specifications, IGN, 2001
- ^ "Anandtech Microsoft's Xbox". Anandtech.com. Archived from the original on November 4, 2010. Retrieved October 10, 2024.
- ^ Smith, Tony (February 14, 2001). "TSMC starts fabbing Nvidia Xbox chips". The Register. Retrieved October 10, 2024.
- ^ "NVIDIA's GeForce3 graphics processor". techreport.com. June 26, 2001. Archived from the original on March 12, 2017. Retrieved June 25, 2017.
- ^ "ATI's Radeon 8500: Off the beaten path". techreport.com. December 31, 2001. Archived from the original on March 12, 2017. Retrieved June 25, 2017.
- ^ "A look at NVIDIA's GeForce4 chips". The Tech Report. February 6, 2002. Archived from the original on March 29, 2019. Retrieved October 1, 2024.
- ^ "ActiveWin.Com: NVIDIA GeForce 4 Ti 4600 - Review". www.activewin.com.
- ^ Thomas Pabst (February 6, 2002). "PC Graphics Beyond XBOX - NVIDIA Introduces GeForce4". Tom's Hardware.
- ^ Lal Shimpi, Anand (February 6, 2002). "Nvidia GeForce4 - NV17 and NV25 Come to Life". AnandTech. Archived from the original on May 3, 2010. Retrieved October 1, 2024.
- ^ Thomas Pabst (February 6, 2002). "PC Graphics Beyond XBOX - NVIDIA Introduces GeForce4". Tom's Hardware.
- ^ Worobyew, Andrew; Medvedev, Alexander. "Nvidia GeForce4 Ti 4400 and GeForce4 Ti 4600 (NV25) Review". Pricenfees. Archived from the original on October 12, 2018. Retrieved October 1, 2024.
- ^ "Driver Details". NVIDIA.
GeForce 3 series

Development and Release
Announcement and Launch
NVIDIA officially announced the GeForce 3 series on February 21, 2001, during a keynote presentation at the Macworld Conference & Expo in Tokyo, where Apple CEO Steve Jobs showcased the GPU powering real-time demos of Pixar's Luxo Jr. lamp and id Software's Doom 3 engine.[5] The event marked the first public reveal of the series, emphasizing its programmable shader capabilities as a breakthrough in consumer graphics processing.[6] This announcement highlighted NVIDIA's collaboration with Apple, positioning the GeForce 3 for early integration into Macintosh systems as a build-to-order option.[7]

The GeForce 3 series launched on February 27, 2001, with initial retail availability beginning in March 2001 through major board partners and OEMs.[2] NVIDIA's production codename for the core GPU was NV20, built on the Kelvin architecture.[2] The initial model carried a manufacturer's suggested retail price (MSRP) of $499, reflecting its positioning as a high-end graphics solution at the time.[2][8]

Early adoption was supported by partnerships with prominent OEMs such as Dell and Gateway, who integrated the GeForce 3 into their PC systems, alongside add-in card manufacturers like ELSA producing reference designs such as the Gladiac 920.[1][9] These collaborations ensured broad market entry, with NVIDIA shipping the GPU to top PC and graphics board OEMs shortly after launch.[1] The series' debut set the stage for subsequent variants, including the Ti 500 model introduced later in October 2001 at an MSRP of $349.[4]

Design Goals
The GeForce 3 series was engineered to achieve full compliance with Microsoft Direct3D 8.0, becoming the first consumer-grade graphics processing unit (GPU) to incorporate programmable vertex and pixel shaders, enabling developers to implement custom shading effects previously limited to high-end workstations. This design choice stemmed from NVIDIA's collaboration with Microsoft on key DirectX 8 technologies, aiming to empower more sophisticated rendering techniques such as procedural textures and dynamic lighting in games.[10][11]

Building on the fixed-function pipeline of the preceding GeForce 2 series, the GeForce 3 sought to deliver a more adaptable and future-proof architecture tailored for the rise of shader-centric applications and games. Engineering motivations included enhancing overall flexibility by distributing complex algorithms between the CPU and GPU, while targeting 2-5 times the performance of the GeForce 2 through optimizations like Z occlusion culling to reduce rendering overhead. Development efforts, which built directly on GeForce 2 advancements, focused on DirectX 8 compatibility alongside full support for prior DirectX 6 and 7 feature sets, positioning the series as a bridge to programmable graphics in mainstream consumer hardware.[12]

A primary engineering challenge was integrating the new programmable shader units while maintaining cost-effective die sizes, culminating in a target of 57 million transistors fabricated on a 150 nm process node to balance enhanced capabilities with power efficiency and manufacturability. This approach allowed NVIDIA to avoid disproportionate increases in chip complexity compared to the GeForce 2's 25 million transistors on a 180 nm process, ensuring viability for both high-end and emerging mainstream variants. Strategically, the GeForce 3 aimed to drive annual performance improvements of 2-3 times and migrate advanced features like multisample anti-aliasing and high-resolution anisotropic filtering to broader markets, solidifying NVIDIA's leadership in visual quality innovations amid intensifying competition from rivals like ATI.[2][12][13]

Architecture
Kelvin Microarchitecture
The Kelvin microarchitecture, codenamed NV20, served as the foundational design for NVIDIA's GeForce 3 series graphics processing units, marking a significant evolution from the preceding Celsius architecture by introducing programmable shading capabilities to consumer GPUs.[14] This microarchitecture was fabricated using a 150 nm process at TSMC, resulting in a die size of 128 mm² and an integration of 57 million transistors, which enabled enhanced computational density for vertex and pixel processing while maintaining power efficiency for its era.[2] The core structure adopted a quad-pipeline configuration with 4 pixel pipelines, each supporting 2 texture mapping units for a total of 8 TMUs, alongside 4 render output units (ROPs) to handle final pixel operations.[14] Complementing this was a single programmable vertex shader unit, capable of executing up to 128 instructions per shader program, facilitating advanced geometry transformations.[12]

Central to the Kelvin design's efficiency was its memory subsystem, embodied in NVIDIA's Lightspeed Memory Architecture (LMA), a suite of optimizations including a crossbar-based controller to mitigate bandwidth bottlenecks in texture fetching and frame buffer access.[15] LMA featured an integrated 128-bit DDR SDRAM memory controller, supporting memory clocks up to 230 MHz (460 MHz effective), which improved data throughput by dynamically allocating channels and reducing latency in multi-texture scenarios without requiring external chips. This architecture allowed for seamless handling of high-resolution textures and complex effects, contributing to the overall balance between rendering performance and memory utilization.[16]

Clock speeds in the Kelvin microarchitecture varied by model, with the base GeForce 3 operating at a core frequency of 200 MHz, while premium variants such as the GeForce 3 Ti 500 accelerated to 240 MHz to boost pixel fill rates up to 960 megapixels per second. The interface supported AGP 4x for high-bandwidth host communication, delivering up to 1.06 GB/s transfer rates, with backward compatibility to PCI for broader system integration.[2] These elements collectively positioned Kelvin as a versatile foundation for DirectX 8.0-compliant shader programming, enabling developers to customize vertex and pixel effects within hardware constraints.[11]
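The quoted AGP 4x figure follows from the bus parameters; a quick sketch, where the 66 MHz base clock and 32-bit bus width are standard AGP values assumed here rather than taken from this article:

```python
# Peak AGP 4x transfer rate: 32-bit bus, 66 MHz base clock, 4 transfers per clock.
# The base clock and bus width are standard AGP parameters, assumed for this sketch.
bus_bytes, base_clock_hz, transfers_per_clock = 4, 66.0e6, 4
print(f"{bus_bytes * base_clock_hz * transfers_per_clock / 1e9:.2f} GB/s")  # ~1.06 GB/s
```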
Rendering Pipeline

The GeForce 3 series, based on NVIDIA's Kelvin microarchitecture, introduced the first consumer-grade graphics processing unit to support Shader Model 1.1 as defined in DirectX 8.0, enabling programmable vertex and pixel shading for enhanced rendering flexibility.[12] This marked a shift from a purely fixed-function pipeline to one incorporating limited programmability, allowing developers to customize vertex transformations and per-pixel operations beyond traditional texture mapping and lighting.[17] Vertex shaders operated on 4-component vectors (position, normals, colors, and texture coordinates), supporting up to 128 arithmetic instructions per shader program, while pixel shaders were more constrained, limited to up to 8 arithmetic instructions and 4 texture address instructions.[18][17]

The rendering pipeline began with a vertex transform and lighting (T&L) unit, which could operate in fixed-function mode for compatibility or invoke programmable vertex shading for custom effects such as procedural deformation or advanced lighting models.[12] Following vertex processing, the pipeline proceeded to primitive setup and rasterization, generating fragments for each pixel covered by triangles or other primitives. These fragments then entered the pixel shading stage, where programmable texture shaders applied operations like dependent texture reads and dot products to compute final pixel colors.[19] The instruction set for both shader types included basic arithmetic operations such as multiply-accumulate (MAD), 3-component dot product (DP3), and multiplies (MUL), alongside texture lookup instructions (TEX) for sampling from up to four textures per rendering pass.[18][19]

To ensure backward compatibility, the pipeline provided fixed-function fallbacks aligned with Direct3D 7 specifications, allowing legacy applications to render without shader support by defaulting to hardware-accelerated T&L and multi-texturing without programmability.[12] However, pixel shaders faced significant limitations, restricted to simple, linear sequences of operations without conditional branching or loops, which confined them to effects like basic per-pixel lighting or texture blending rather than complex procedural generation.[19] Vertex shaders, while more capable, also lacked dynamic flow control, emphasizing deterministic execution for consistent performance across geometry.[17] This design prioritized real-time efficiency on early 2000s hardware, laying foundational concepts for subsequent shader evolution in graphics APIs.[12]
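To make the instruction set above concrete, the sketch below emulates a couple of Shader Model 1.1-style vertex operations (DP4 for the position transform, DP3 for a diffuse term) on 4-component registers. The register names and the tiny program are a hypothetical illustration following common DirectX 8 conventions, not an actual NVIDIA shader:

```python
import numpy as np

# Rough emulation of two vertex-shader operations on 4-component registers.
def dp4(a, b):
    return float(np.dot(a, b))            # 4-component dot product

def dp3(a, b):
    return float(np.dot(a[:3], b[:3]))    # 3-component dot product (e.g. N.L lighting)

v0 = np.array([1.0, 2.0, 3.0, 1.0])       # input position register
v1 = np.array([0.0, 1.0, 0.0, 0.0])       # input normal register
c = np.eye(4)                             # c0..c3: world-view-projection matrix rows (identity here)
light_dir = np.array([0.0, 1.0, 0.0, 0.0])  # c4: light direction

# Equivalent of: dp4 oPos.x, v0, c0  ...  dp4 oPos.w, v0, c3
o_pos = np.array([dp4(v0, c[i]) for i in range(4)])
# Equivalent of a minimal diffuse term: dp3 of normal and light direction, clamped at 0
o_diffuse = max(dp3(v1, light_dir), 0.0)

print(o_pos, o_diffuse)                   # [1. 2. 3. 1.] 1.0
```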
Products and Specifications

Model Lineup
The GeForce 3 series lineup included three primary consumer desktop models based on the NV20 graphics processor, each targeting different performance tiers within NVIDIA's strategy to maintain market leadership in 2001. The base model, GeForce 3 (NV20), served as the high-end flagship upon its release in February 2001, introducing programmable shading capabilities to mainstream graphics cards.[2] This was followed by mid-range and high-end refreshes later in the year to address competitive pressures and extend the series' viability. The mid-range GeForce 3 Ti 200 (NV20 Ti200) launched in October 2001, offering a more accessible entry point into the series' advanced features while maintaining compatibility with AGP 4x interfaces.[20] The high-end GeForce 3 Ti 500 (NV20 Ti500) arrived in October 2001, positioned as the pinnacle of the lineup with optimizations for demanding applications.[4]

OEM variants of the GeForce 3 were integrated directly into pre-built systems by manufacturers such as Dell and Compaq, allowing for customized implementations in consumer PCs without standalone retail availability.[21] The series also included a mobile derivative, the GeForce 3 Go. Additionally, a derivative chip known as the NV2A, customized from the GeForce 3 design, powered the Microsoft Xbox console with a 233 MHz core clock and 200 MHz DDR memory configuration.[22] Production of the GeForce 3 series was phased out by mid-2002, supplanted by the GeForce 4 lineup as NVIDIA shifted to refined architectures for next-generation performance.

| Model | Chip Variant | Release Date | Market Segment |
|---|---|---|---|
| GeForce 3 | NV20 | February 2001 | High-end |
| GeForce 3 Ti 200 | NV20 Ti200 | October 2001 | Mid-range |
| GeForce 3 Ti 500 | NV20 Ti500 | October 2001 | High-end |
Technical Specifications
The GeForce 3 series GPUs were fabricated using a 150 nm TSMC process node as part of the Kelvin microarchitecture.[2] The series includes the base GeForce 3, the entry-level GeForce 3 Ti 200, and the high-end GeForce 3 Ti 500, each with distinct clock speeds and performance characteristics while sharing a 128-bit memory interface and DDR memory. Power draw across the models typically ranged from 30-40 W depending on board implementation, and no auxiliary power connectors were required.[23][4]

| Model | Core Clock | Memory Clock (DDR) | Bus Width | Bandwidth | Standard VRAM | Pixel Fill Rate | Texture Fill Rate |
|---|---|---|---|---|---|---|---|
| GeForce 3 | 200 MHz | 230 MHz | 128-bit | 7.36 GB/s | 64 MB | 800 MPixel/s | 1.60 GTexel/s |
| GeForce 3 Ti 200 | 175 MHz | 200 MHz | 128-bit | 6.40 GB/s | 64 MB | 700 MPixel/s | 1.40 GTexel/s |
| GeForce 3 Ti 500 | 240 MHz | 250 MHz | 128-bit | 8.00 GB/s | 64 MB | 960 MPixel/s | 1.92 GTexel/s |

