Radeon X1000 series
| ATI Radeon X1950 XTX | |
|---|---|
| Release date | October 5, 2005 |
| Codename | Fudo (R520), Rodin (R580) |
| Architecture | Radeon R500 |
| Transistors | 107M 90 nm (RV515) |
| Cards | |
| Entry-level | X1300, X1550 |
| Mid-range | X1600, X1650 |
| High-end | X1800, X1900 |
| Enthusiast | X1950 |
| API support | |
| Direct3D | Direct3D 9.0c, Shader Model 3.0 |
| OpenGL | OpenGL 2.0 |
| History | |
| Predecessor | Radeon X800 series |
| Successor | Radeon HD 2000 series |
| Support status | Unsupported |
The R520 (codenamed Fudo) is a graphics processing unit (GPU) developed by ATI Technologies and produced by TSMC. It was the first GPU produced using a 90 nm photolithography process.
The R520 is the foundation of the X1000 line of DirectX 9.0c and OpenGL 2.0 3D accelerator cards. It was ATI's first major architectural overhaul since the R300 and is heavily optimized for Shader Model 3.0. The Radeon X1000 series based on the core was introduced on October 5, 2005, and competed primarily against Nvidia's GeForce 7 series. ATI released its successor, the R600 series, on May 14, 2007.
ATI does not provide official support for any X1000 series cards on Windows 8 or Windows 10; the last AMD Catalyst release for this generation is 10.2 from 2010, which supports up to Windows 7.[1] AMD stopped providing Windows 7 drivers for this series in 2015.[2]
Open-source Radeon drivers are available for Linux distributions.
The same GPUs are also found in some AMD FireMV products targeting multi-monitor set-ups.
Delay during development
The Radeon X1800 video cards that included an R520 were released several months late because ATI engineers discovered a bug within the GPU at a very late stage of development. The bug, caused by a faulty third-party 90 nm chip design library, greatly hampered clock speed ramping, so the chip had to be "respun" for another revision (a new GDSII had to be sent to TSMC). The problem had been almost random in how it affected the prototype chips, making it difficult to identify.
Architecture
The R520 architecture is referred to by ATI as an "Ultra Threaded Dispatch Processor", reflecting ATI's plan to boost the efficiency of the GPU rather than relying on a brute-force increase in the number of processing units. A central pixel shader "dispatch unit" breaks shaders down into threads (batches) of 16 pixels (4×4) and can track and distribute up to 128 threads per pixel "quad" (4 pipelines each). When a shader quad becomes idle, because it has completed a task or is waiting for other data, the dispatch engine assigns it another task to work on in the meantime. The overall result is, in theory, greater utilization of the shader units. To support the large number of threads per quad, ATI built a very large register array that is capable of multiple concurrent reads and writes and has a high-bandwidth connection to each shader array, providing the temporary storage necessary to keep the pipelines fed with available work as much as possible. With chips such as RV530 and R580, where the number of shader units per pipeline triples, the efficiency of pixel shading drops off slightly because these shaders still have the same threading resources as the smaller RV515 and R520.[3]
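As a rough illustration of the batching scheme described above (not ATI's actual hardware logic), the following sketch tiles a render target into 4×4 pixel threads and caps the number of in-flight threads, mirroring the 16-pixel batches and 128-thread limit mentioned in the text; all function and variable names are illustrative assumptions.

```python
# Illustrative sketch of 4x4 pixel "threads" (batches) queued for a shader
# quad. The 16-pixel batch size and the 128-threads-in-flight limit follow
# the figures in the text above; everything else is hypothetical.

TILE = 4             # a thread covers a 4x4 block = 16 pixels
MAX_IN_FLIGHT = 128  # threads tracked per pixel quad

def make_threads(width, height):
    """Split a width x height render target into 4x4 pixel batches."""
    threads = []
    for y in range(0, height, TILE):
        for x in range(0, width, TILE):
            pixels = [(x + dx, y + dy)
                      for dy in range(TILE) for dx in range(TILE)
                      if x + dx < width and y + dy < height]
            threads.append(pixels)
    return threads

def dispatch(threads):
    """Keep at most MAX_IN_FLIGHT batches active; the rest wait in a queue.
    A real dispatcher would switch to a ready batch whenever another one
    stalls on a texture fetch; here we only show the bookkeeping."""
    active = threads[:MAX_IN_FLIGHT]
    pending = threads[MAX_IN_FLIGHT:]
    return len(active), len(pending)

if __name__ == "__main__":
    t = make_threads(640, 480)
    print(len(t), "threads of up to 16 pixels each")  # 19200 threads
    print(dispatch(t))                                # (128, 19072)
```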
The next major change to the core is its memory bus. R420 and R300 had nearly identical memory controller designs, the former being a bug-fixed release designed for higher clock speeds. R520's memory bus differs in having a central controller (arbiter) that connects to the "memory clients". Around the chip run two 256-bit ring buses at the same speed as the DRAM chips, but in opposite directions to reduce latency. Along these ring buses are four "stop" points where data exits the ring and goes into or out of the memory chips. There is a fifth, significantly less complex stop designed for the PCI Express interface and video input. This design speeds up memory accesses through lower latency, thanks to the smaller distance the signals need to travel through the GPU and an increased number of banks per DRAM, and the chip can spread memory requests more quickly and directly to the RAM chips. ATI claimed a 40% improvement in efficiency over older designs. Smaller cores such as RV515 and RV530 received cutbacks because of their smaller, less costly designs; RV530, for example, has two internal 128-bit buses instead. This generation supports all recent memory types, including GDDR4. In addition to the ring bus, each memory channel has a granularity of 32 bits, which improves memory efficiency when performing small memory requests.[3]
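The latency benefit of two counter-rotating rings can be illustrated with a toy model (an assumption-laden sketch, not the actual memory controller): with four stops, a single one-way ring needs up to three hops to reach a destination, while picking the shorter of two opposite-direction rings needs at most two.

```python
# Toy model of the ring-bus idea described above. Four ring stops sit in
# front of the memory channels; data can take the clockwise or the
# counter-clockwise ring, whichever reaches the target stop in fewer hops.

STOPS = 4

def hops_one_ring(src, dst):
    """Hops if data may only travel clockwise (single ring)."""
    return (dst - src) % STOPS

def hops_two_rings(src, dst):
    """Hops if data may use either direction (two counter-rotating rings)."""
    clockwise = (dst - src) % STOPS
    counter = (src - dst) % STOPS
    return min(clockwise, counter)

if __name__ == "__main__":
    for dst in range(STOPS):
        print(f"stop 0 -> stop {dst}: "
              f"one ring = {hops_one_ring(0, dst)} hops, "
              f"two rings = {hops_two_rings(0, dst)} hops")
```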
The vertex shader engines were already at the required FP32 precision in ATI's older products. Changes necessary for SM3.0 included support for longer instructions, dynamic flow control (branches, loops, and subroutines), and a larger temporary register space. The pixel shader engines are actually quite similar in computational layout to their R420 counterparts, although they were heavily optimized and tweaked to reach high clock speeds on the 90 nm process. ATI had been working for years on a high-performance shader compiler in its driver for older hardware, so staying with a similar, compatible basic design offered obvious cost and time savings.[3]
At the end of the pipeline, the texture addressing processors are decoupled from the pixel shaders, so any unused texturing units can be dynamically allocated to pixels that need more texture layers. Other improvements include support for 4096×4096 textures, and ATI's 3Dc normal map compression gained improved compression ratios for certain situations.[3]
The R5xx family introduced a more advanced onboard motion-video engine. Like Radeon cards since the R100, the R5xx can offload almost the entire MPEG-1/2 video pipeline. The R5xx can also assist in Microsoft WMV9/VC-1 and MPEG-4 H.264/AVC decoding, using a combination of the 3D pipeline's shader units and the motion-video engine. Benchmarks show only a modest decrease in CPU utilization for VC-1 and H.264 playback.
A selection of real-time 3D demonstration programs was released at launch. ATI's development of its "digital superstar", Ruby, continued with a new demo named The Assassin. It showcased a highly complex environment, with high-dynamic-range lighting (HDR) and dynamic soft shadows. Ruby's latest adversary in the demo, Cyn, was composed of 120,000 polygons.[4]
The cards support dual-link DVI output and HDCP. However, using HDCP requires an external ROM, which was not installed on early models of the video cards. The RV515, RV530, and RV535 cores include one single-link and one dual-link DVI output; the R520, RV560, RV570, R580, and R580+ cores include two dual-link DVI outputs.
AMD released the final Radeon R5xx Acceleration document.[5]
Drivers
The last AMD Catalyst version that officially supports the X1000 series is 10.2, display driver version 8.702.
Variants
X1300–X1550 series
This series is the budget solution of the X1000 series and is based on the RV515 core. The chips have four texture units, four ROPs, four pixel shaders, and two vertex shaders, similar to the older X300–X600 cards. These chips use one quad of an R520, whereas the faster boards simply use more of these quads; for example, the X1800 uses four quads. This modular design allows ATI to build a "top to bottom" line-up using identical technology, saving research and development time and money. Because of the smaller design, these cards have lower power demands (30 watts), so they run cooler and can be used in smaller cases.[3] Eventually, ATI created the X1550 and discontinued the X1300. The X1050 was based on the R300 core and was sold as an ultra-low-budget part.
Early Mobility Radeon X1300 to X1450 are based around the RV515 core as well.[6][7][8][9]
Beginning in 2006, Radeon X1300 and X1550 products were shifted to the RV505 core, which had similar capabilities and features as the previous RV515 core, but was manufactured by TSMC using an 80 nm process (reduced from the 90 nm process of the RV515).[10]
X1600 series
The X1600 uses the M56[1] core, which is based on the RV530, a core similar to but distinct from the RV515.
The RV530 has a 3:1 ratio of pixel shaders to texture units. It possesses 12 pixel shaders while retaining RV515's four texture units and four ROPs. It also gains three extra vertex shaders, bringing the total to five units. The chip's single "quad" has three pixel shader processors per pipeline, similar to the design of R580's four quads. This means that RV530 has the same texturing ability as the X1300 at the same clock speed, but with its 12 pixel shaders it is on par with the X1800 in shader computational performance. Given the balance of shader and texture work in games of the time, the X1600 is greatly hampered by its lack of texturing power.[3]
The X1600 was positioned to replace Radeon X600 and Radeon X700 as ATI's mid-range GPU. The Mobility Radeon X1600 and X1700 are also based on the RV530.[11][12]
X1650 series
The X1650 series has two parts: the X1650 Pro uses the RV535 core (an RV530 core manufactured on the newer 80 nm process) and has lower power consumption and heat output than the X1600.[13] The other part, the X1650 XT/X1650 GT, uses the newer RV570 core (also known as the RV560), though with reduced processing power (the fully equipped RV570 core powers the X1950 Pro, a high-performance card) so as to match its main competitor, Nvidia's 7600 GT.[14] There is also a plain Radeon X1650, which technically belongs to the earlier X1600 generation because it uses the old 90 nm RV530 core; it is essentially a renamed Radeon X1600 Pro with DDR2 memory.
X1800 series
Originally the flagship of the X1000 series, the X1800 series received a mild reception, owing to its staggered rollout and the ground gained in the meantime by its competitor, NVIDIA's GeForce 7 series. When the X1800 entered the market in late 2005, it was the first high-end video card with a 90 nm GPU. ATI opted to fit the cards with either 256 MB or 512 MB of on-board memory, foreseeing ever-growing demands on local memory size. The X1800 XT PE was available exclusively with 512 MB of on-board memory. The X1800 replaced the R480-based Radeon X850 as ATI's premier performance GPU.[3]
Because of R520's delayed release, its competition was far more impressive than it would have been had the chip made its originally scheduled spring/summer release. Like its predecessor, the X850, the R520 chip carries 4 "quads", which means it has similar texturing capability at the same clock speed as its ancestor and the NVIDIA 6800 series. Unlike the X850, the R520's shader units are vastly improved: they are Shader Model 3 capable and received advancements in shader threading that can greatly improve the efficiency of the shader units. Unlike the X1900, the X1800 has 16 pixel shader processors and an equal ratio of texturing to pixel shading capability. The chip also increases the vertex shader count from six on the X800 to eight. With the 90 nm low-k fabrication process, these high-transistor-count chips could still be clocked at very high frequencies, which allows the X1800 series to be competitive with GPUs that have more pipelines but lower clock speeds, such as the NVIDIA 7800 and 7900 series with their 24 pipelines.[3]
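A back-of-the-envelope comparison of the trade-off described above, fewer pipelines at a higher clock versus more pipelines at a lower clock: the X1800 XT figures match the chipset table later in this article, while the GeForce 7800 GTX figures (24 pixel pipelines at roughly 430 MHz) are an assumption and only approximate.

```python
# Peak texel throughput ~ pixel pipelines x core clock (one texel per pipe
# per clock). X1800 XT values match this article's chipset table; the
# GeForce 7800 GTX values (24 pipelines, ~430 MHz) are assumed/approximate.

cards = {
    "Radeon X1800 XT":  {"pipes": 16, "clock_mhz": 625},
    "GeForce 7800 GTX": {"pipes": 24, "clock_mhz": 430},  # assumed specs
}

for name, spec in cards.items():
    mtexels_per_s = spec["pipes"] * spec["clock_mhz"]
    print(f"{name}: {mtexels_per_s} MTexels/s")
# Radeon X1800 XT: 10000 MTexels/s
# GeForce 7800 GTX: 10320 MTexels/s
```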
The X1800 was quickly superseded by the X1900 because of the former's delayed release; the X1900 was not behind schedule and had always been planned as the "spring refresh" chip. Because of the large quantity of unused X1800 chips, ATI decided to disable one quad of pixel pipelines and sell the chips as the X1800 GTO.
X1900 and X1950 series
The X1900 and X1950 series fixed several flaws in the X1800 design and added a significant pixel shading performance boost. The R580 core is pin-compatible with the R520 PCBs, which meant a redesign of the X1800 PCB was not needed. The boards carry either 256 MB or 512 MB of onboard GDDR3 memory depending on the variant. The primary change between the R580 and the R520 is that ATI changed the pixel shader processor-to-texture processor ratio. The X1900 cards have three pixel shaders on each pipeline instead of one, giving a total of 48 pixel shader units. ATI took this step with the expectation that future 3D software would be more pixel shader intensive.[15]
In the latter half of 2006, ATI introduced the Radeon X1950 XTX, a graphics board using a revised R580 GPU called R580+. The R580+ is the same as the R580 except that it supports GDDR4 memory, a new graphics DRAM technology offering lower power consumption per clock and a significantly higher clock rate ceiling. The X1950 XTX clocks its RAM at 1 GHz (2 GHz effective), providing 64.0 GB/s of memory bandwidth, a 29% advantage over the X1900 XTX. The card was launched on August 23, 2006.[16]
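The bandwidth figures quoted above follow directly from the bus width and the effective data rate; a minimal check using the memory clocks given in this article (1 GHz GDDR4 on the X1950 XTX, 775 MHz GDDR3 on the X1900 XTX):

```python
# Memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (MT/s) / 1000.
# Clocks and the 256-bit bus width are taken from this article's tables.

def bandwidth_gb_s(bus_bits, data_rate_mt_s):
    return bus_bits / 8 * data_rate_mt_s / 1000

x1950_xtx = bandwidth_gb_s(256, 2000)  # 1 GHz GDDR4, double data rate -> 64.0
x1900_xtx = bandwidth_gb_s(256, 1550)  # 775 MHz GDDR3 -> 49.6
print(x1950_xtx, x1900_xtx)                           # 64.0 49.6
print(f"advantage: {x1950_xtx / x1900_xtx - 1:.0%}")  # ~29%
```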
The X1950 Pro was released on October 17, 2006, and was intended to replace the X1900 GT in the competitive sub-$200 market segment. The X1950 Pro GPU is built on the 80 nm RV570 core with 12 texture units and 36 pixel shaders, and it is the first ATI card with a native CrossFire implementation using a pair of internal CrossFire connectors, which eliminates the need for the unwieldy external dongle found in older CrossFire systems.[17]
Radeon feature matrix
The following table shows features of AMD/ATI's GPUs (see also: List of AMD graphics processing units).
| Name of GPU series | Wonder | Mach | 3D Rage | Rage Pro | Rage 128 | R100 | R200 | R300 | R400 | R500 | R600 | RV670 | R700 | Evergreen | Northern Islands | Southern Islands | Sea Islands | Volcanic Islands | Arctic Islands/Polaris | Vega | Navi 1x | Navi 2x | Navi 3x | Navi 4x |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Released | 1986 | 1991 | Apr 1996 | Mar 1997 | Aug 1998 | Apr 2000 | Aug 2001 | Sep 2002 | May 2004 | Oct 2005 | May 2007 | Nov 2007 | Jun 2008 | Sep 2009 | Oct 2010 | Dec 2010 | Jan 2012 | Sep 2013 | Jun 2015 | Jun 2016, Apr 2017, Aug 2019 | Jun 2017, Feb 2019 | Jul 2019 | Nov 2020 | Dec 2022 | Feb 2025 |
| Marketing Name | Wonder | Mach | 3D Rage | Rage Pro | Rage 128 | Radeon 7000 | Radeon 8000 | Radeon 9000 | Radeon X700/X800 | Radeon X1000 | Radeon HD 2000 | Radeon HD 3000 | Radeon HD 4000 | Radeon HD 5000 | Radeon HD 6000 | Radeon HD 7000 | Radeon 200 | Radeon 300 | Radeon 400/500/600 | Radeon RX Vega, Radeon VII | Radeon RX 5000 | Radeon RX 6000 | Radeon RX 7000 | Radeon RX 9000 |
| AMD support | |
| Kind | 2D | 3D |
| Instruction set architecture | Not publicly known | TeraScale instruction set | GCN instruction set | RDNA instruction set |
| Microarchitecture | Not publicly known | GFX1 | GFX2 | TeraScale 1 (VLIW5) (GFX3) | TeraScale 2 (VLIW5) (GFX4) | TeraScale 2 (VLIW5) up to 68xx (GFX4) | TeraScale 3 (VLIW4) in 69xx [18][19] (GFX5) | GCN 1st gen (GFX6) | GCN 2nd gen (GFX7) | GCN 3rd gen (GFX8) | GCN 4th gen (GFX8) | GCN 5th gen (GFX9) | RDNA (GFX10.1) | RDNA 2 (GFX10.3) | RDNA 3 (GFX11) | RDNA 4 (GFX12) |
| Type | Fixed pipeline[a] | Programmable pixel & vertex pipelines | Unified shader model |
| Direct3D | — | 5.0 | 6.0 | 7.0 | 8.1 | 9.0 11 (9_2) | 9.0b 11 (9_2) | 9.0c 11 (9_3) | 10.0 11 (10_0) | 10.1 11 (10_1) | 11 (11_0) | 11 (11_1) 12 (11_1) | 11 (12_0) 12 (12_0) | 11 (12_1) 12 (12_1) | 11 (12_1) 12 (12_2) |
| Shader model | — | 1.4 | 2.0+ | 2.0b | 3.0 | 4.0 | 4.1 | 5.0 | 5.1 | 5.1 6.5 | 6.7 | 6.8 |
| OpenGL | — | 1.1 | 1.2 | 1.3 | 1.5[b][20] | 3.3 | 4.6[21][c] |
| Vulkan | — | 1.1[c][d] | 1.3[22][e] | 1.4[23] |
| OpenCL | — | Close to Metal | 1.1 (not supported by Mesa) | 1.2+ (on Linux: 1.1+ (no Image support on Clover, with by Rusticl) with Mesa, 1.2+ on GCN 1.Gen) | 2.0+ (Adrenalin driver on Win7+) (on Linux ROCm, Mesa 1.2+ (no Image support in Clover, but in Rusticl with Mesa, 2.0+ and 3.0 with AMD drivers or AMD ROCm), 5th gen: 2.2 win 10+ and Linux RocM 5.0+ | 2.2+ and 3.0 Windows 8.1+ and Linux ROCm 5.0+ (Mesa Rusticl 1.2+ and 3.0 (2.1+ and 2.2+ wip))[24][25][26] |
| HSA / ROCm | — | ? |
| Video decoding ASIC | — | Avivo/UVD | UVD+ | UVD 2 | UVD 2.2 | UVD 3 | UVD 4 | UVD 4.2 | UVD 5.0 or 6.0 | UVD 6.3 | UVD 7 [27][f] | VCN 2.0 [27][f] | VCN 3.0 [28] | VCN 4.0 | VCN 5.0 |
| Video encoding ASIC | — | VCE 1.0 | VCE 2.0 | VCE 3.0 or 3.1 | VCE 3.4 | VCE 4.0 [27][f] |
| Fluid Motion [g] | ? |
| Power saving | ? | PowerPlay | PowerTune | PowerTune & ZeroCore Power | ? |
| TrueAudio | — | Via dedicated DSP | Via shaders |
| FreeSync | — | 1 2 |
| HDCP[h] | — | ? | 1.4 | 2.2 | 2.3 [29] |
| PlayReady[h] | — | 3.0 | 3.0 |
| Supported displays[i] | 1–2 | 2 | 2–6 | ? | 4 |
| Max. resolution | ? | 2–6 × 2560×1600 | 2–6 × 4096×2160 @ 30 Hz | 2–6 × 5120×2880 @ 60 Hz | 3 × 7680×4320 @ 60 Hz [30] | 7680×4320 @ 60 Hz PowerColor | 7680×4320 @ 165 Hz | 7680×4320 |
| /drm/radeon[j] | — |
| /drm/amdgpu[j] | — | Optional [31] |
- ^ The Radeon 100 Series has programmable pixel shaders, but does not fully comply with DirectX 8 or Pixel Shader 1.0. See article on R100's pixel shaders.
- ^ R300, R400 and R500 based cards do not fully comply with OpenGL 2+ as the hardware does not support all types of non-power of two (NPOT) textures.
- ^ a b OpenGL 4+ compliance requires supporting FP64 shaders and these are emulated on some TeraScale chips using 32-bit hardware.
- ^ Vulkan support is theoretically possible but has not been implemented in a stable driver.
- ^ Vulkan support in Linux relies on the amdgpu kernel driver which is incomplete and not enabled by default for GFX6 and GFX7.
- ^ a b c The UVD and VCE were replaced by the Video Core Next (VCN) ASIC in the Raven Ridge APU implementation of Vega.
- ^ Video processing for the video frame rate interpolation technique. On Windows it works as a DirectShow filter in the video player; on Linux there is no support from drivers or the community.
- ^ a b To play protected video content, it also requires card, operating system, driver, and application support. A compatible HDCP display is also needed for this. HDCP is mandatory for the output of certain audio formats, placing additional constraints on the multimedia setup.
- ^ More displays may be supported with native DisplayPort connections, or splitting the maximum resolution between multiple monitors with active converters.
- ^ a b DRM (Direct Rendering Manager) is a component of the Linux kernel. AMDgpu is the Linux kernel module. Support in this table refers to the most current version.
Chipset table
Note that ATI X1000 series cards (e.g. the X1900) do not have Vertex Texture Fetch, and hence do not fully comply with the VS 3.0 model. Instead, they offer a feature called "Render to Vertex Buffer" (R2VB) that provides functionality that is an alternative to Vertex Texture Fetch.
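The fillrate columns in the table below are consistent with simple products of the core clock and the unit counts in the core config (pixel shaders : vertex shaders : texture units : ROPs, per the footnote under the table). The sketch below reproduces the Radeon X1800 XT row under that assumption; the divide-by-four on the vertex rate is inferred from the table values rather than from ATI documentation.

```python
# Reproduce the fillrate columns of the chipset table from core clock x unit
# counts. Core config order: pixel shaders : vertex shaders : texture
# mapping units : ROPs. The /4 on the vertex rate is inferred from the
# table, not an official ATI figure.

def fillrates(core_mhz, config):
    ps, vs, tmu, rop = config
    return {
        "MOperations/s": core_mhz * ps,
        "MPixels/s": core_mhz * rop,
        "MVertices/s": core_mhz * vs / 4,
        "MTexels/s": core_mhz * tmu,
    }

# Radeon X1800 XT: 625 MHz core, 16:8:16:16 config
print(fillrates(625, (16, 8, 16, 16)))
# {'MOperations/s': 10000, 'MPixels/s': 10000, 'MVertices/s': 1250.0, 'MTexels/s': 10000}
```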
| Model | Launch | Code name | Fab (nm) | Transistors (million) | Die size (mm²) | Bus interface | Core clock (MHz) | Memory clock (MHz) | Core config1 | Fillrate (MOperations/s) | Fillrate (MPixels/s) | Fillrate (MVertices/s) | Fillrate (MTexels/s) | Memory size (MB) | Memory bandwidth (GB/s) | Memory bus type | Memory bus width (bit) | TDP (Watts, max.) | Direct3D | OpenGL | Release price (USD) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Radeon X1300 | October 5, 2005 (PCIe) December 1, 2005 (AGP) | RV515 | 90 | 107 | 100 | AGP 8× PCI PCIe ×16 | 450 | 250 | 4:2:4:4 | 1800 | 1800 | 225 | 1800 | 128 256 | 8.0 | DDR DDR2 | 128 | | 9.0c | 2.1 | $99 (128 MB) $129 (256 MB) |
| Radeon X1300 HyperMemory | October 5, 2005 | | | | | PCIe ×16 | | | | | | | | 128 256 512 | 4.0 | DDR2 | 64 | | | | $ |
| Radeon X1300 PRO | October 5, 2005 (PCIe) November 1, 2006 (AGP) | | | | | AGP 8× PCIe ×16 | 600 | 400 | | 2400 | 2400 | 300 | 2400 | 128 256 | 12.8 | | 128 | 31 | | | $149 |
| Radeon X1300 XT | August 12, 2006 | RV530 | | 157 | 150 | | 500 | | 12:5:4:4 | 6000 | 2000 | 625 | 2000 | | | | | 22 | | | $89 |
| Radeon X1550 | January 8, 2007 | RV516 | | 107 | 100 | AGP 8x PCI PCIe x16 | | | 4:2:4:4 | 2200 | 2200 | 275 | 2200 | 128 256 512 | | | | 27 | | | $ |
| Radeon X1600 PRO | October 10, 2005 | RV530 | | 157 | 150 | AGP 8× PCIe ×16 | | 390 390–690 | 12:5:4:4 | 6000 | 2000 | 625 | 2000 | | 12.48 | DDR2 GDDR3 | | 41 | | | $149 (128 MB) $199 (256 MB) |
| Radeon X1600 XT | October 10, 2005 (PCIe) | | | | | | 590 | 690 | | 7080 | 2360 | 737.5 | 2360 | 256 512 | 22.08 | GDDR3 | | 42 | | | $249 |
| Radeon X1650 | February 1, 2007 | | | | | | 500 | 400 | | 6000 | 2000 | 625 | 2000 | | 12.8 | DDR2 | | | | | $ |
| Radeon X1650 SE | | RV516 | | 105 | | PCIe ×16 | 635 | | 4:2:4:4 | | | | | 256 | | DDR2 | | | | | $ |
| Radeon X1650 GT | May 1, 2007 (PCIe) October 1, 2007 (AGP) | RV560 | 80 | 330 | 230 | AGP 8× PCIe x16 | 400 | | 24:8:8:8 | 9600 | 3200 | 800 | 3200 | 256 512 | | GDDR3 | | | | | $ |
| Radeon X1650 PRO | August 23, 2006 (PCIe) October 15, 2006 (AGP) | RV535 | | 131 | | | 600 | 700 | 12:5:4:4 | 7200 | 2400 | 750 | 2400 | | 22.4 | | | 44 | | | $99 |
| Radeon X1650 XT | October 30, 2006 | RV560 | | | 230 | | 525 | | 24:8:8:8 | 12600 | 4200 | 1050 | 4200 | | | | | 55 | | | $149 |
| Radeon X1700 FSC | November 5, 2007 (OEM) | RV535 | | 131 | | PCIe ×16 | 587 | 695 | 12:5:4:4 | 7044 | 2348 | 733 | 2348 | 256 | 22.2 | | | 44 | | | OEM |
| Radeon X1700 SE | November 30, 2007 | RV560 | | | 230 | | 500 | 500 | 24:8:8:8 | 12000 | 4000 | 1000 | 4000 | 512 | 16.0 | | | 50 | | | $ |
| Radeon X1800 CrossFire Edition | December 20, 2005 | R520 | 90 | 321 | 288 | | 600 | 700 | 16:8:16:16 | 9600 | 9600 | 900 | 9600 | 512 | 46.08 | | 256 | 113 | | | $ |
| Radeon X1800 GTO | March 9, 2006 | | | | | | 500 | 495 | 12:8:12:8 | 6000 | 6000 | 1000 | 6000 | 256 512 | 32.0 | | | 48 | | | $249 |
| Radeon X1800 XL | October 5, 2005 | | | | | | 500 | | 16:8:16:16 | 8000 | 8000 | 1000 | 8000 | 256 | | | | 70 | | | $449 |
| Radeon X1800 XT | | | | | | | 625 | 750 | | 10000 | 10000 | 1250 | 10000 | 256 512 | 48.0 | | | 113 | | | $499 (256 MB) $549 (512 MB) |
| Radeon X1900 CrossFire Edition | January 24, 2006 | R580 | | 384 | 352 | | 625 | 725 | 48:8:16:16 | 30000 | | | | 512 | 46.4 | | | 100 | | | $599 |
| Radeon X1900 GT | May 5, 2006 | | | | | | 575 | 600 | 36:8:12:12 | 20700 | 6900 | 1150 | 6900 | 256 | 38.4 | | | 75 | | | $ |
| Radeon X1900 GT Rev. 2 | September 7, 2006 | | | | | | 512 | | | 18432 | 6144 | 1024 | 6144 | | 42.64 | | | | | | |
| Radeon X1900 XT | January 24, 2006 | | | | | | 625 | 725 | 48:8:16:16 | 30000 | 10000 | 1250 | 10000 | 256 512 | 46.4 | | | 100 | | | $549 |
| Radeon X1900 XTX | | R580 | | | | | 650 | 775 | | 31200 | 10400 | 1300 | 10400 | 512 | 49.6 | | | 135 | | | $649 |
| Radeon X1950 CrossFire Edition | August 23, 2006 | R580+ | 80 | | | | | 1000 | | 31200 | 10400 | 1300 | 10400 | | 64 | GDDR4 | | | | | $449 |
| Radeon X1950 GT | January 29, 2007 (PCIe) February 10, 2007 (AGP) | RV570 | | 330 | 230 | AGP 8x PCIe x16 | 500 | 600 | 36:8:12:12 | 18000 | 6000 | 1000 | 6000 | 256 512 | 38.4 | GDDR3 | | 57 | | | $140 |
| Radeon X1950 PRO | October 17, 2006 (PCIe) October 25, 2006 (AGP) | | | | | | 575 | 690 | | 20700 | 6900 | 1150 | 6900 | | 44.16 | | | 66 | | | $199 |
| Radeon X1950 XT | October 17, 2006 (PCIe) February 18, 2007 (AGP) | R580+ | | 384 | 352 | AGP 8x PCIe 1.0 x16 | 625 | 700 (AGP) 900 (PCIe) | 48:8:16:16 | 30000 | 10000 | 1250 | 10000 | | 44.8 (AGP) 57.6 (PCIe) | | | 96 | | | $ |
| Radeon X1950 XTX | October 17, 2006 | | | | | PCIe 1.0 ×16 | 650 | 1000 | | 31200 | 10400 | 1300 | 10400 | 512 | 64 | GDDR4 | | 125 | | | $449 |
1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units
Mobility Radeon X1000 series
| Model | Launch | Model number | Code name | Fab (nm) | Bus interface | Core clock (MHz) | Memory clock (MHz) | Core config1 | Pixel fillrate (GP/s) | Texture fillrate (GT/s) | Memory size (MB) | Memory bandwidth (GB/s) | Memory bus type | Memory bus width (bit) | Direct3D | OpenGL | Notes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mobility Radeon X1300 | January 19, 2006 | M52 | RV515 | 90 | PCIe ×16 | 350 | 250 | 2:4:4:4 | 1.4 | 1.4 | 128 + shared | 8 | DDR DDR2 | 128 | 9.0c | 2.0 | |
| Mobility Radeon X1350 | September 18, 2006 | M62 | | | | 470 | 350 | | 1.88 | 1.88 | | 11.2 | DDR2 GDDR3 | | | | |
| Mobility Radeon X1400 | January 19, 2006 | M54 | | | | 445 | 250 | | 1.78 | 1.78 | | 8 | DDR DDR2 | | | | |
| Mobility Radeon X1450 | September 18, 2006 | M64 | | | | 550 | 450 | | 2.2 | 2.2 | | 14.4 | DDR2 GDDR3 | | | | |
| Mobility Radeon X1600 | February 1, 2006 | M56 | RV530 | | | 425 450 | 375 470 | 5:12:4:4 | 1.7 1.8 | 1.7 1.8 | 256 | 12.0 15.04 | DDR2 GDDR3 | | | | |
| Mobility Radeon X1700 | | M66 | RV535 | | | 475 | 400 550 | | 1.9 | 1.9 | | 11.2 17.6 | | | | | strained silicon |
| Mobility Radeon X1800 | March 1, 2006 | M58 | R520 | | | 450 | 500 | 8:12:12:12 | 5.4 | 5.4 | | 32 | GDDR3 | 256 | | | |
| Mobility Radeon X1800 XT | | M58 | | | | 550 | 650 | 8:16:16:16 | 8.8 | 8.8 | | 41.6 | | | | | |
| Mobility Radeon X1900 | January 11, 2007 | M68 | RV570 | 80 | | 400 | 470 | 8:36:12:12 | 4.8 | 4.8 | | 30.08 | | | | | PowerPlay 6.0 |
1 Vertex shaders : Pixel shaders : Texture mapping units : Render output units.
References
- ^ "Radeon X1K Real-Time Demos". Archived from the original on May 7, 2009.
- ^ "Download AMD Drivers".
- ^ a b c d e f g h Wasson, Scott. ATI's Radeon X1000 series graphics processors, Tech Report, October 5, 2005.
- ^ "AMD Catalyst™ Display Driver".
- ^ Advanced Micro Devices, Inc. Radeon R5xx Acceleration v. 1.5 Archived September 10, 2015, at the Wayback Machine, AMD website, October 2013.
- ^ Mobility Radeon X1300 Archived May 9, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
- ^ Mobility Radeon X1350 Archived March 25, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
- ^ Mobility Radeon X1400 Archived June 15, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
- ^ Mobility Radeon X1450 Archived June 3, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
- ^ The Inquirer, 16 November 2006: AMD samples 80nm RV505CE – finally (cited February 4, 2011)
- ^ Mobility Radeon X1700 Archived May 26, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
- ^ Mobility Radeon X1600 Archived June 22, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
- ^ Hanners. PowerColor Radeon X1650 PRO video card review Archived January 12, 2008, at the Wayback Machine, Elite Bastards, August 27, 2006.
- ^ Wasson, Scott. ATI's Radeon X1650 XT graphics card, Tech Report, October 30, 2006.
- ^ Wasson, Scott. ATI's Radeon X1900 series graphics cards, Tech Report, January 24, 2006.
- ^ Wasson, Scott. ATI's Radeon X1950 XTX and CrossFire Edition graphics cards, Tech Report, August 23, 2006.
- ^ Wilson, Derek. ATI Radeon X1950 Pro: CrossFire Done Right, AnandTech, October 17, 2006.
- ^ "AMD Radeon HD 6900 (AMD Cayman) series graphics cards". HWlab. hw-lab.com. December 19, 2010. Archived from the original on August 23, 2022. Retrieved August 23, 2022.
New VLIW4 architecture of stream processors allowed to save area of each SIMD by 10%, while performing the same compared to previous VLIW5 architecture
- ^ "GPU Specs Database". TechPowerUp. Retrieved August 23, 2022.
- ^ "NPOT Texture (OpenGL Wiki)". Khronos Group. Retrieved February 10, 2021.
- ^ "Mesamatrix". mesamatrix.net. Retrieved July 15, 2025.
- ^ "Conformant Products". Khronos Group. Retrieved December 2, 2024.
- ^ "radv: add Vulkan 1.4 support". Mesa. Retrieved December 2, 2024.
- ^ "AMD Radeon RX 6800 XT Specs". TechPowerUp. Retrieved January 1, 2021.
- ^ "AMD Launches The Radeon PRO W7500/W7600 RDNA3 GPUs". Phoronix. August 3, 2023. Retrieved September 4, 2023.
- ^ "AMD Radeon Pro 5600M Grafikkarte". TopCPU.net (in German). Retrieved September 4, 2023.
- ^ a b c Killian, Zak (March 22, 2017). "AMD publishes patches for Vega support on Linux". Tech Report. Retrieved March 23, 2017.
- ^ Larabel, Michael (September 15, 2020). "AMD Radeon Navi 2 / VCN 3.0 Supports AV1 Video Decoding". Phoronix. Retrieved January 1, 2021.
- ^ Edmonds, Rich (February 4, 2022). "ASUS Dual RX 6600 GPU review: Rock-solid 1080p gaming with impressive thermals". Windows Central. Retrieved November 1, 2022.
- ^ "Radeon's next-generation Vega architecture" (PDF). Radeon Technologies Group (AMD). Archived from the original (PDF) on September 6, 2018. Retrieved June 13, 2017.
- ^ "AMDGPU". Retrieved December 29, 2023.
Development and Release
Development History
Development of the Radeon X1000 series, based on the R500 microarchitecture, began in the early 2000s under ATI Technologies, with a primary focus on achieving full compliance with DirectX 9.0c and implementing Shader Model 3.0 capabilities in both vertex and pixel shaders.[7] This effort built on prior generations like the R300 and R400, aiming to enhance shader complexity and performance for next-generation gaming and graphics applications, including adaptations for the Xbox 360's Xenos GPU, which featured early concepts of unified shader processing.[8] ATI's engineering teams grappled with significant challenges in integrating advanced vertex and pixel shader units, including the need for efficient load balancing and dynamic instruction scheduling to handle the increased computational demands without compromising stability.[7]

Key hurdles emerged during silicon prototyping, particularly around thermal management and clock speed reliability in early samples, necessitating multiple design iterations. The first tape-out for the flagship R520 chip occurred in November 2004, but subsequent respins were required in 2005 due to instability issues that prevented achieving target clock speeds.[9] A successful tape-out was finally achieved in July 2005 after three attempts, allowing progression toward production.[10] These technical setbacks were compounded by manufacturing difficulties, as the R500 series was fabricated on TSMC's 90 nm process, which suffered from low yields and production bottlenecks in mid-2005, contributing to delays in silicon validation.[11] The planned mid-2005 launch was postponed to November 2005 as a result of these yield problems and ongoing redesigns, allowing ATI additional time to resolve fabrication and performance issues.[12]

Internally, ATI faced distractions from acquisition rumors circulating in mid-2005, with speculation about interest from AMD, Intel, and others potentially disrupting team focus and resource allocation during the critical final development phase.[13] These rumors, which culminated in AMD's actual acquisition of ATI in 2006, added pressure but did not directly alter the R500 engineering timeline.[13] Despite the challenges, the resolved prototypes paved the way for the series' eventual market entry, marking a pivotal advancement in ATI's graphics technology.

Release Timeline and Market Impact
The Radeon X1000 series debuted with the high-end Radeon X1800 XT flagship model, launched on October 5, 2005, and widely available by November of that year.[4] This was followed by the introduction of mid-range offerings, including the Radeon X1600 series in January 2006, and the entry-level Radeon X1300 series in March 2006.[14] The lineup expanded further with the premium Radeon X1900 XTX on January 24, 2006, and concluded with the Radeon X1950 XTX on August 23, 2006, and the X1950 Pro in October 2006.[15] ATI's pricing strategy positioned the X1000 series to span budget to premium segments, with the X1800 XT launching at $549 USD and the X1900 XTX at $650 USD, while more affordable models like the X1300 series targeted $99 to $149 USD and the X1600 XT at $199 to $249 USD.[4][14]

The series provided strong competition to NVIDIA's GeForce 7 lineup, particularly in Shader Model 3.0 performance, helping ATI achieve approximately 45% market share in discrete desktop graphics during Q1 2006, declining to 43% by Q3 2006.[16][17] However, the cards faced criticisms for elevated power consumption, with high-end models like the X1800 XT reaching a 110W TDP, contributing to higher heat output compared to predecessors.[4] The AMD-ATI merger, announced in July 2006 and completed in October 2006, facilitated unified branding under AMD for subsequent X1000 releases like the X1950 series and improved supply chain efficiency, enabling better production scaling amid competitive pressures.[18] Production of the X1000 series wound down by 2008 with the shift to the Radeon HD 2000 and later lines, though AMD provided legacy driver support through the Catalyst 10.2 release in 2010.[19]

Architecture
R500 Microarchitecture
The R500 microarchitecture formed the foundation of the Radeon X1000 series graphics processing units, marking ATI's initial foray into a more flexible shader design optimized for DirectX 9-era workloads. Fabricated on TSMC's 90 nm process node, the high-end R520 variant featured a die size of 288 mm² and integrated 321 million transistors, enabling higher performance density compared to the preceding 130 nm R400 architecture.[7][20] Central to the R500 was its advanced shader architecture, which supported Shader Model 3.0 through 16 pixel shader units and 8 vertex shader units, allowing for improved versatility in handling complex effects like dynamic shadows and higher-order surface rendering without fully merging pixel and vertex processing as in later fully unified designs.[4][7] This setup provided 16 rendering pipelines, balancing geometry and fragment processing for enhanced efficiency in Shader Model 3.0-compliant applications.

Memory access was managed via an innovative ring bus controller, featuring a 512-bit internal data path that efficiently distributed bandwidth to 16 render backends, supporting external interfaces like 256-bit GDDR3 for high-bandwidth operations while minimizing latency through a centralized crossbar design.[21] Typical operating parameters included core clock speeds up to 600 MHz on flagship implementations, complemented by GDDR3 memory clocked at 700 MHz (1.4 GHz effective data rate), delivering substantial throughput for texture-heavy scenes.[4] Power efficiency was addressed through early dynamic clocking and voltage scaling mechanisms, which adjusted frequencies based on workload to curb idle consumption, such as dropping to lower states during desktop use, although these lacked the granular, per-shader control of subsequent architectures like TeraScale.[21][22]

Graphics Pipeline and Features
The graphics pipeline of the Radeon X1000 series, based on the R500 microarchitecture, features a decoupled design that separates geometry processing from fragment processing to enhance scalability and efficiency. Geometry setup includes hardware transform and lighting (T&L) capabilities through dedicated vertex shader processors, with the X1800 variant employing 8 such processors capable of handling up to 512 simultaneous threads via Ultra-Threading technology. This setup supports complex vertex operations, including dynamic flow control and up to 512 instructions per shader, enabling efficient handling of 3D models before passing data to the rasterization stage.[23]

The fragment processing stage utilizes 16 pixel pipelines in high-end models like the X1800, each equipped with arithmetic logic units (ALUs) for shader execution and texture mapping units (TMUs) for sampling; for instance, the X1900 extends this to 48 ALUs across 16 pipelines, providing three ALUs per pipeline for vector operations while maintaining one TMU and one render output unit (ROP) per pipeline. Texture processing involves advanced 16 texture units with fully associative caches to minimize bandwidth demands, supporting up to 128-tap filtering in high-quality modes. The series fully complies with DirectX 9.0c (Shader Model 3.0) and OpenGL 2.0, including partial support for high dynamic range (HDR) rendering via 128-bit floating-point precision throughout the pipeline, as well as anisotropic filtering up to 16x for improved texture clarity at oblique angles. Anti-aliasing is handled through Smoothvision 3.1, which incorporates adaptive modes such as multi-sample anti-aliasing (MSAA) up to 6x and supersampling options reaching 14x in CrossFire configurations, balancing quality and performance by dynamically adjusting sample patterns.[23][24][21]

Key innovative features include HyperZ III, an evolution of ATI's hierarchical Z-buffer technology that achieves up to 8:1 lossless Z-compression, reducing memory bandwidth usage by eliminating redundant reads and writes for hidden surfaces and enabling up to 60% more efficient hidden pixel culling compared to prior generations. Complementing this is the Avivo video engine, a dedicated hardware block for high-definition video processing, which accelerates decoding of formats like MPEG-2, MPEG-4, DivX, WMV9, VC-1, and H.264, offloading the CPU and supporting smooth playback of 1080p content with integrated de-interlacing and color space conversion. However, the pipeline's reliance on separate vertex and pixel shaders, without a unified shading architecture, results in inefficiencies for workloads requiring frequent data sharing between stages, such as advanced displacement mapping, as vertex texture fetch is not fully supported.[23][21][25]

Software Support
Proprietary Drivers
The proprietary drivers for the Radeon X1000 series were provided by ATI and later AMD under the Catalyst brand, starting with the launch driver version 5.11 released on November 10, 2005, which introduced initial support for the series including the Radeon X1800 model.[26] This driver package included optimizations tailored to the R500 microarchitecture's requirements, such as enhanced DirectX 9.0c compatibility and initial CrossFire configurations for multi-GPU setups.[26] AMD continued issuing Catalyst updates for the X1000 series through version 10.2, released in February 2010, which provided the final major feature enhancements and stability improvements for Windows XP, Vista, and 7 operating systems.[27] Notable updates included Catalyst 6.11 in November 2006, which addressed stability issues specific to the X1900 series, such as fixing display corruption during resume from sleep states in CrossFire mode and resolving intermittent TV output rotation problems.[28] Another key release, Catalyst 7.1 in January 2007, expanded CrossFire support to the entire X1000 lineup under Windows Vista, enabling Alternate Frame Rendering for improved multi-GPU performance in compatible applications.[29]

The Catalyst drivers offered full compatibility with Windows up to version 7, including certified support for 32-bit and 64-bit editions, but AMD provided no official drivers for Windows 8 or later; users could achieve partial functionality on Windows 10 through legacy installations of Catalyst 10.2 using compatibility modes or third-party wrappers.[30] Key features enabled by the Catalyst Control Center included GPU overclocking via the integrated Overdrive utility, allowing users to adjust core and memory clocks on supported X1000 models, as well as HydraVision for advanced multi-monitor management, which facilitated virtual desktop creation and hotkey-based display switching across up to six monitors.[31] Official support for the Catalyst drivers on the X1000 series ended with the 10.2 release in 2010, after which AMD discontinued new feature development and transitioned the hardware to legacy status without further security patches.[27]

Open-Source Drivers
The open-source drivers for the Radeon X1000 series, based on the R500 microarchitecture, originated with AMD's strategic initiative announced in September 2007 to support community-driven development for R500 hardware, including the emerging radeonhd driver effort that provided initial Linux support, building on and eventually merging into the mainline radeon driver after its discontinuation around 2010.[32] This initiative marked AMD's early commitment to open-source graphics development, focusing initially on basic functionality for the X1000 series while leveraging reverse-engineering by developers.[32]

Key milestones in driver maturity included the achievement of 2D acceleration by early 2008, enabling hardware-accelerated rendering through EXA and XAA backends in the X.Org server, which improved desktop performance over software rendering.[33] By 2010, 3D acceleration advanced significantly with the integration of the Gallium3D framework into the radeon driver, providing OpenGL support via the classic r500 Mesa driver state tracker, though performance lagged behind proprietary options for demanding workloads.[34] Full Kernel Mode Setting (KMS) enablement arrived with Linux kernel 3.0 in July 2011, allowing seamless mode switching and better power management for R500 GPUs without requiring user-space intervention.[35] Following AMD's acquisition of ATI in 2006 and subsequent shifts in strategy, the company increased contributions to open-source efforts starting around 2009, including documentation releases and code upstreaming for the radeon driver, though by this point the X1000 series was treated as legacy hardware compared to newer architectures like R600.[36] These contributions built on community reverse-engineering but prioritized forward-looking support, such as Gallium3D enhancements, leaving R500-specific optimizations to volunteers.[37]

Despite progress, limitations persisted, including limited hardware-accelerated video decoding for the Avivo engine; VA-API support for the radeon driver provides basic MPEG-2 acceleration since around 2011, but more complex formats like VC-1 and H.264 rely on software fallbacks.[38] Additionally, open-source implementations offered incomplete Shader Model 3.0 compliance, particularly in vertex texture fetch and dynamic branching, constraining compatibility with certain DirectX 9.0c applications emulated via Wine or native ports.[39] As of 2025, support for the X1000 series remains in Mesa 25.x through the legacy radeon kernel module and classic r500 driver, providing stable 2D/3D operation and basic Vulkan compatibility via the Zink OpenGL-over-Vulkan layer, which translates older OpenGL calls to Vulkan for applications lacking native legacy support; recent updates in Mesa 25.3 include new OpenGL extensions for R500 GPUs.[40][41] This maintenance ensures usability in modern Linux distributions for lightweight tasks, though advanced features like ray tracing are unavailable due to hardware constraints.[42]

Desktop Variants
Low-End Models (X1300–X1550)
The low-end models of the Radeon X1000 series, the X1300 and X1550, targeted budget users requiring discrete graphics for basic computing, light gaming, and home theater PC (HTPC) applications as replacements for integrated solutions. These cards supported both PCI Express x16 and AGP 8x interfaces to accommodate legacy systems. They emphasized low power consumption and compatibility over high-end capabilities, with typical power draw under 40 W and no need for auxiliary power connectors.[43][44][45]

The Radeon X1300, released in 2005 and based on the RV515 chip fabricated on a 90 nm process with 107 million transistors, featured 4 pixel shader processors, 2 vertex shader processors, 4 texture mapping units, and 4 render output units. It utilized a 128-bit memory bus paired with 256 MB of DDR2 memory at an effective 400 MHz, alongside a 450 MHz core clock. Variants including the Pro, XT, GT, and SE offered adjustments in clock speeds and memory timings for varied performance levels; for instance, the XT model increased the core to 600 MHz and memory to an effective 500 MHz. Performance was adequate for era-specific tasks, delivering 20–30 fps in games like Half-Life 2 at 1024x768 resolution on high settings.[46][47][48]

Introduced in 2007, the Radeon X1550 employed the RV516 chip, also on 90 nm with 105 million transistors, maintaining the same shader and pipeline configuration of 4 pixel shader processors, 2 vertex shader processors, 4 texture mapping units, and 4 render output units. It supported a 128-bit bus with options for DDR2 or GDDR3 memory up to 512 MB in some configurations, clocked at up to 800 MHz effective, and a core clock reaching 550 MHz. Variants such as XT and GT included clock and memory optimizations for enhanced efficiency, with power consumption rated at 27 W. These models provided similar entry-level capabilities, suitable for HTPC media playback and casual gaming at low resolutions.[49][50][44]

Mid-Range Models (X1600–X1650)
The mid-range models in the Radeon X1000 series, including the X1600 and X1650, targeted mainstream gamers seeking a balance of performance and efficiency on desktop systems. The Radeon X1600, launched in 2006 and based on the RV530 GPU fabricated on a 90 nm process with 157 million transistors, featured 12 pixel shaders and 5 vertex shaders, along with 4 texture mapping units (TMUs) and 4 render output units (ROPs). It utilized a 128-bit memory interface with GDDR3 memory, typically configured at 256 MB, and operated at core clock speeds ranging from 500 MHz to 590 MHz depending on the variant, such as the X1600 Pro or XT.[51][52]

Building on this foundation, the Radeon X1650 arrived in late 2006 with the RV560 GPU, which increased processing power through 24 pixel shaders and 8 vertex shaders, along with 8 TMUs and 8 ROPs on an 80 nm process. Available in Pro and XT variants, it supported memory configurations of 256 MB to 512 MB using DDR2 or GDDR3 on a 128-bit bus, with core clocks up to 700 MHz and memory speeds reaching 1.4 Gbps effective in higher-end models. These enhancements allowed the X1650 to handle more demanding rendering tasks compared to its predecessor.[53][54]

Key features of these mid-range models included native CrossFire support for multi-GPU configurations from their initial release, enabling scalable performance in compatible systems without requiring specialized hardware bridges in later implementations. They also offered superior anti-aliasing (AA) capabilities relative to low-end X1000 variants, thanks to higher shader counts that improved edge smoothing and image quality in games. Performance-wise, the X1600 XT delivered 60-70 frames per second (fps) in Doom 3 at 1280x1024 resolution on high-quality settings, demonstrating solid playability for contemporary titles. With a thermal design power (TDP) around 50 W, these cards emphasized energy efficiency suitable for standard ATX power supplies.[55][56][57]

In the market, the X1600 and X1650 series directly competed with NVIDIA's GeForce 7600 lineup, offering competitive rasterization and shader performance for 1024x768 to 1280x1024 gaming resolutions. Their popularity in original equipment manufacturer (OEM) systems stemmed from reliable integration, low power draw, and support for features like Avivo video processing, making them a staple in mid-tier pre-built PCs from vendors such as HP and Dell during the mid-2000s. Driver updates later enhanced CrossFire stability for these models, though primary optimizations were handled in the proprietary Catalyst suite.[58][59]

High-End Models (X1800)
The Radeon X1800 series was powered by the R520 graphics processing unit, introduced in 2005, featuring 16 pixel pipelines, a 256-bit memory interface, and support for up to 512 MB of GDDR3 memory operating at clock speeds ranging from 520 to 625 MHz on the core.[4] This architecture marked a significant advancement in ATI's R500 family, emphasizing ultra-threaded shader processing with 48 simultaneous shader operations to handle complex DirectX 9.0c workloads more efficiently than prior generations.[60] The series targeted high-end desktop gaming, delivering robust performance for the era's demanding titles while introducing key hardware innovations.

Key variants included the flagship Radeon X1800 XT, clocked at 625 MHz with 512 MB of GDDR3 memory clocked at 750 MHz (1500 MHz effective), positioning it as the performance leader; the cut-down Radeon X1800 GTO, limited to 12 pipelines at 500 MHz core and 256 MB of memory for a more affordable option; and the OEM-focused Radeon X1800 XL, running at 500 MHz with 256 MB of memory for system integrator builds.[61][62][63] Among its innovations, the X1800 XT became the first consumer graphics card to ship with 512 MB of VRAM, enhancing texture handling in high-resolution scenarios, and it mandated an external 6-pin power connector to meet its elevated power requirements, diverging from earlier PCIe designs that relied solely on slot power.[64][65]

In performance testing, the X1800 XT achieved over 60 frames per second in games like Doom 3 at 1600x1200 resolution with 4x anti-aliasing enabled, often surpassing NVIDIA's GeForce 7800 GTX in shader-intensive tasks due to its superior parallel processing capabilities.[60] However, the series faced challenges with thermal management, drawing a thermal design power (TDP) of 113 W that necessitated robust cooling solutions, resulting in noticeable fan noise under load.[4] Multi-GPU CrossFire configurations provided scaling benefits of up to 1.8x in select titles, though real-world gains varied by application and averaged around 1.4x to 1.6x depending on scene complexity.[66]

Enthusiast Models (X1900–X1950)
The Radeon X1900 XTX, released in January 2006, represented ATI's flagship enthusiast GPU based on the R580 graphics processor, featuring 48 pixel shader processors and 8 vertex shader processors, a 256-bit memory interface, a 650 MHz core clock, and 512 MB of GDDR3 memory operating at 775 MHz (1.55 GHz effective).[67] This configuration delivered exceptional shader-heavy performance for DirectX 9-era titles, positioning it as a direct competitor to NVIDIA's GeForce 7900 GTX. The card's architecture emphasized large, dedicated pixel shader arrays alongside separate vertex shaders, enabling more flexible handling of shader-heavy rendering compared to prior generations.[68]

The X1950 series, introduced in early 2007 as a refinement and precursor to the R600 architecture, built on the R580+ core shrunk to an 80 nm process for improved yields and thermal characteristics. The X1950 XTX variant maintained the full 48 pixel and 8 vertex shaders but achieved factory overclocks up to 650 MHz core and 1 GHz (2 GHz effective) GDDR4 memory, enhancing bandwidth to 64 GB/s while retaining the 256-bit bus and 512 MB capacity. Lower-tier variants like the X1950 Pro used a cut-down R570 core with 36 pixel shaders, and the X1950 XT mirrored the XTX's shader count but at reduced clocks around 625 MHz core and 800 MHz memory for balanced enthusiast pricing. These models incorporated minor optimizations, such as refined voltage regulation, to support overclocking stability.[69]

Key advancements in the X1900 and X1950 lines included an evolved ring bus memory controller, which provided higher bandwidth efficiency and reduced latency over the X1800 series' implementation, contributing to better overall power efficiency despite increased transistor counts; the R580 achieved approximately 10-15% improved performance per watt in shader-bound scenarios.[70] In performance, these cards led enthusiast benchmarks, such as delivering playable frame rates above 30 FPS in Crysis at 1600x1200 resolution with high settings on dual-core systems, outperforming contemporaries in complex scenes with heavy foliage and dynamic lighting. CrossFire configurations scaled effectively, often achieving near 2x performance uplift in supported titles like Splinter Cell: Chaos Theory, with up to 83% efficiency in alternate frame rendering mode.[71][72] However, these high-end models carried notable drawbacks, including a thermal design power exceeding 175 W, approaching 200 W under load, and requiring dual 8-pin power connectors for stable operation, which strained many power supplies of the era. Launch drivers suffered from bugs, such as texture corruption and instability in multi-GPU setups, requiring several Catalyst updates to mature.[73]

Mobile Variants
Mobility Radeon Lineup
The Mobility Radeon lineup within the X1000 series consisted of discrete graphics processors designed for laptops, leveraging the R500 architecture to deliver DirectX 9.0c compatibility and Shader Model 3.0 support in mobile form factors. These GPUs were tailored for varying performance levels, from entry-level to high-end, and were commonly paired with ATI's mobile chipsets to enable efficient integration in notebook designs.[74]

The Mobility Radeon X1400, launched in January 2006 and codenamed M54, was based on the RV515 GPU core fabricated on a 90 nm process. It featured a 445 MHz core clock, 4 pixel shader units, 2 vertex shader units, and 128 MB of DDR2 memory on a 128-bit bus running at 250 MHz, providing a memory bandwidth of 8 GB/s. This model served as an entry-level option for mainstream laptops, often integrated with ATI chipsets like the RS690 for enhanced system compatibility.[75][76]

The Mobility Radeon X1700, released in February 2006 and codenamed M66, utilized a mobile variant of the RV530 GPU on the same 90 nm process. It offered higher performance with 12 pixel shader units, 5 vertex shader units, a core clock of 475 MHz, and up to 256 MB of GDDR3 memory on a 128-bit bus at 400 MHz, yielding 12.8 GB/s bandwidth. Targeted at mid-range laptops, the X1700 supported HDMI output for external displays.[77][78]

The Mobility Radeon X1800, released in March 2006 and codenamed M58, was a high-end variant based on the R520 GPU core on the 90 nm process. It featured a 450 MHz core clock, 12 pixel shader units, 6 vertex shader units, and 256 MB of GDDR3 memory on a 256-bit bus at 500 MHz, providing 32 GB/s bandwidth. The Mobility Radeon X1800 XT variant reached 550 MHz core clock with memory at 650 MHz. These models targeted premium gaming laptops with support for advanced features like Avivo.[79][80][81]

The Mobility Radeon X2300, introduced in March 2007 and codenamed M64, was a low-power derivative of the RV515 GPU, optimized for thin-and-light notebooks and ultrabooks with a 480 MHz core clock, 4 pixel shader units, 2 vertex shader units, and 64 MB of DDR2 memory on a 64-bit bus at 400 MHz. It emphasized efficiency for basic graphics tasks in compact systems. Chipset pairings, such as the ATI RS600, supported unified memory architecture (UMA) sharing to allocate system RAM dynamically for graphics in thin client configurations, reducing the need for dedicated VRAM.[82][83]

| Model | Codename | Launch Year | Core Clock (MHz) | Shaders (Pixel/Vertex) | Memory | Bus Width | Target Segment |
|---|---|---|---|---|---|---|---|
| X1400 | M54 (RV515) | 2006 | 445 | 4/2 | 128 MB DDR2 | 128-bit | Entry-level |
| X1700 | M66 (RV530) | 2006 | 475 | 12/5 | Up to 256 MB GDDR3 | 128-bit | Mid-range |
| X1800 | M58 (R520) | 2006 | 450 (550 XT) | 12/6 (16/8 XT) | 256 MB GDDR3 | 256-bit | High-end |
| X2300 | M64 (RV515) | 2007 | 480 | 4/2 | 64 MB DDR2 | 64-bit | Low-power/ultrabooks |
Power and Performance Adaptations
The mobile variants of the Radeon X1000 series incorporated key optimizations to balance performance with the demands of portability, focusing on reduced power draw and thermal output for extended battery life and seamless laptop integration. Central to these adaptations was a substantial lowering of thermal design power (TDP) from around 110 W in high-end desktop counterparts, such as the Radeon X1800 XT, to 25–45 W in mobility implementations. This was accomplished primarily through aggressive clock throttling and selective deactivation of processing pipelines, enabling the GPUs to operate within the limited power budgets of mobile platforms while maintaining core architectural features like the R520 core.[4][80]

These power reductions came with inherent performance trade-offs, typically rendering mobile models 20–30% slower than their desktop equivalents due to diminished clock rates and constrained memory bandwidth. For instance, the Mobility Radeon X1800 ran its core at 450 MHz, compared to the desktop X1800 XT's 600 MHz, while the Mobility X1800 XT reached 550 MHz against the desktop XT's 600 MHz, prioritizing efficiency over peak throughput in thermally restricted environments.[84][4][85] Thermal management was enhanced via integrated heat spreaders that distributed heat more evenly across the die and chassis, coupled with dynamic power gating techniques to enter ultra-low-power idle states when graphics load was minimal. The Radeon X1000 family's Dynamic Voltage Control further supported this by adjusting supply voltages in real-time to curb heat generation without sacrificing responsiveness during active use.[2][86]

In practice, these modifications positively influenced battery longevity, supporting up to 2 hours of gaming on typical 2006 laptops under moderate loads, a marked improvement for discrete graphics at the time. The integrated Avivo video processing unit played a crucial role here, offloading HD video decoding to dedicated hardware for smoother playback with minimal CPU involvement, thereby conserving power and extending runtime during media sessions by reducing overall system draw.[80][87]

Comparisons and Legacy
Feature Matrix
The Radeon X1000 series encompasses a range of graphics processing units based on ATI's R500 architecture, featuring dedicated pixel and vertex shader processing compliant with Shader Model 3.0.[88] All variants support DirectX 9.0c and OpenGL 2.0, along with advanced rendering technologies such as HyperZ III for hierarchical Z-buffer compression and Avivo for hardware-accelerated video decode and encode.[21] Differences across models center on shader unit counts, which scale from 4 pixel shaders and 2 vertex shaders in entry-level configurations to 48 pixel shaders and 8 vertex shaders in top-tier models, enabling varied rendering capabilities.[67] The series also introduces adaptive anti-aliasing modes and supports up to 16x anisotropic filtering for improved image quality, with CrossFire multi-GPU scaling available on mid-range and higher desktop variants.[88]

| Model | Shaders/Pipelines | Memory Bus/Size | API Support | AA/AF Levels | CrossFire |
|---|---|---|---|---|---|
| Radeon X1300 (Desktop) | 4 PS / 2 VS | 128-bit / 128 MB DDR2 | DX9.0c / OpenGL 2.0 | 16x / 16x | No |
| Radeon X1550 (Desktop) | 4 PS / 2 VS | 128-bit / 256 MB DDR2 | DX9.0c / OpenGL 2.0 | 16x / 16x | No |
| Radeon X1550 XT (OEM Desktop) | 4 PS / 2 VS | 128-bit / 256 MB DDR2 | DX9.0c / OpenGL 2.0 | 16x / 16x | No |
| Radeon X1600 XT (Desktop) | 12 PS / 5 VS | 128-bit / 256 MB GDDR3 | DX9.0c / OpenGL 2.0 | 16x / 16x | Yes |
| Radeon X1650 XT (Desktop) | 24 PS / 8 VS | 128-bit / 256 MB GDDR3 | DX9.0c / OpenGL 2.0 | 16x / 16x | Yes |
| Radeon X1800 XT (Desktop) | 16 PS / 8 VS | 256-bit / 512 MB GDDR3 | DX9.0c / OpenGL 2.0 | 16x / 16x | Yes |
| Radeon X1900 XTX (Desktop) | 48 PS / 8 VS | 256-bit / 512 MB GDDR3 | DX9.0c / OpenGL 2.0 | 16x / 16x | Yes |
| Radeon X1950 XTX (Desktop) | 48 PS / 8 VS | 256-bit / 512 MB GDDR4 | DX9.0c / OpenGL 2.0 | 16x / 16x | Yes |
| Mobility Radeon X1400 | 4 PS / 2 VS | 128-bit / 128 MB DDR2 | DX9.0c / OpenGL 2.0 | 16x / 16x | No |
| Mobility Radeon X1600 | 12 PS / 5 VS | 128-bit / 128-256 MB GDDR3 | DX9.0c / OpenGL 2.0 | 16x / 16x | No |
| Mobility Radeon X1800 XT | 16 PS / 8 VS | 256-bit / 256 MB GDDR3 | DX9.0c / OpenGL 2.0 | 16x / 16x | No |
Chipset Specifications
The Radeon X1000 series chipsets marked ATI's transition to finer process nodes, predominantly using TSMC's 90 nm fabrication for initial desktop and mobile implementations, which enabled higher transistor densities while balancing power and performance; select enthusiast models later received an 80 nm shrink in 2007 to reduce die sizes and thermal demands without altering core architectures.[89][67] These specifications span low-end to enthusiast desktop variants and corresponding mobile adaptations, with memory support evolving from GDDR2 in entry-level parts to GDDR3 in higher tiers, and bus interfaces primarily PCIe 1.0 x16 (with AGP 8x limited to select low-end desktop models).[90] Bandwidth examples include the X1900's 256-bit GDDR3 interface at 800 MHz (1600 MT/s effective) yielding approximately 51.2 GB/s (calculated as (256 bits × 1600 MT/s) / 8 / 1000).[67]

Desktop Chipset Specifications
| Chip | Process | Transistors | Die Size | TDP | Fab | Memory Types | Interfaces |
|---|---|---|---|---|---|---|---|
| RV515 (X1300) | 90 nm | 105 million | 100 mm² | 30 W | TSMC | GDDR2 | AGP 8x / PCIe 1.0 x16 |
| RV530 (X1600 XT) | 90 nm | 157 million | 150 mm² | 42 W | TSMC | GDDR3 | PCIe 1.0 x16 |
| R520 (X1800 XT) | 90 nm | 312 million | 288 mm² | 113 W | TSMC | GDDR3 | PCIe 1.0 x16 |
| R580 (X1900 XTX) | 90 nm | 384 million | 352 mm² | 135 W | TSMC | GDDR3 | PCIe 1.0 x16 |
| RV570 (X1950 Pro) | 80 nm | 312 million | 230 mm² | 72 W | TSMC | GDDR3 / GDDR4 | PCIe 1.0 x16 |
Mobile Chipset Specifications
| Chip | Process | Transistors | Die Size | TDP | Fab | Memory Types | Interfaces |
|---|---|---|---|---|---|---|---|
| M54 (X1400) | 90 nm | 107 million | 100 mm² | 25 W | TSMC | DDR2 / GDDR2 | PCI Express x16 |
| M56-GL (X1600) | 90 nm | 157 million | 150 mm² | 30 W | TSMC | GDDR3 | PCI Express x16 |
| M58 (X1800) | 90 nm | 312 million | 288 mm² | 45 W | TSMC | GDDR3 | PCI Express x16 |
| M66-300 (X1900) | 80 nm | 312 million | 230 mm² | 35 W | TSMC | GDDR3 | PCI Express x16 |