Radeon X1000 series
from Wikipedia

ATI Radeon X1000 series
ATI Radeon X1950 XTX
Release date: October 5, 2005
Codename: Fudo (R520), Rodin (R580)
Architecture: Radeon R500
Transistors:
  • 107M 90nm (RV505)
  • 107M 90nm (RV515)
  • 105M 90nm (RV516)
  • 157M 90nm (RV530)
  • 312M 90nm (R520)
  • 384M 90nm (R580)
  • 384M 90nm (R580+)
  • 157M 80nm (RV535)
  • 312M 80nm (RV560)
  • 312M 80nm (RV570)
Cards
  Entry-level: X1300, X1550
  Mid-range: X1600, X1650
  High-end: X1800, X1900
  Enthusiast: X1950
API support
  Direct3D: Direct3D 9.0c, Shader Model 3.0
  OpenGL: OpenGL 2.0
History
  Predecessor: Radeon X800 series
  Successor: Radeon HD 2000 series
Support status: Unsupported

The R520 (codenamed Fudo) is a graphics processing unit (GPU) developed by ATI Technologies and produced by TSMC. It was the first GPU produced using a 90 nm photolithography process.

The R520 is the foundation of the X1000 line of DirectX 9.0c and OpenGL 2.0 3D accelerator video cards. It is ATI's first major architectural overhaul since the R300 and is highly optimized for Shader Model 3.0. The Radeon X1000 series built on this core was introduced on October 5, 2005, and competed primarily against Nvidia's GeForce 7 series. ATI released its successor, the R600-based Radeon HD 2000 series, on May 14, 2007.

ATI does not provide official support for any X1000 series cards under Windows 8 or Windows 10; the last AMD Catalyst release for this generation is version 10.2 from 2010, which supports operating systems up to Windows 7.[1] AMD stopped providing Windows 7 drivers for this series in 2015.[2]

Open-source Radeon drivers are available for Linux distributions.

The same GPUs are also found in some AMD FireMV products targeting multi-monitor set-ups.

Delay during development


The Radeon X1800 video cards that included an R520 were released several months late because ATI engineers discovered a bug in the GPU at a very late stage of development. This bug, caused by a faulty third-party 90 nm chip design library, greatly hampered clock speed ramping, so the chip had to be "respun" for another revision (a new GDSII had to be sent to TSMC). The problem was almost random in how it affected the prototype chips, making it difficult to identify.

Architecture


The R520 architecture is referred to by ATI as an "Ultra Threaded Dispatch Processor", which refers to ATI's plan to boost the efficiency of their GPU, instead of going with a brute force increase in the number of processing units. A central pixel shader "dispatch unit" breaks shaders down into threads (batches) of 16 pixels (4×4) and can track and distribute up to 128 threads per pixel "quad" (4 pipelines each). When a shader quad becomes idle due to a completion of a task or waiting for other data, the dispatch engine assigns the quad with another task to do in the meantime. The overall result is theoretically a greater utilization of the shader units. With a large number of threads per quad, ATI created a very large processor register array that is capable of multiple concurrent reads and writes, and has a high-bandwidth connection to each shader array, providing the temporary storage necessary to keep the pipelines fed by having work available as much as possible. With chips such as RV530 and R580, where the number of shader units per pipeline triples, the efficiency of pixel shading drops off slightly because these shaders still have the same level of threading resources as the less endowed RV515 and R520.[3]
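The scheduling idea can be illustrated with a toy simulation. The following Python sketch is purely illustrative: the thread pool size, stall length, and workload mix are made-up parameters rather than R520 hardware values, and it only shows why a deeper pool of tracked threads hides texture-fetch latency.

```python
# Toy model of a latency-hiding dispatcher: a pixel "quad" tracks a pool of
# threads (4x4 pixel batches) and switches to another ready thread whenever
# the current one stalls on a simulated texture fetch.
import random
from collections import deque

STALL_CYCLES = 20        # pretend latency of a texture fetch, in cycles
WORK_PER_THREAD = 8      # ALU operations each thread needs before retiring

def alu_utilization(pool_size, cycles=5000, stall_chance=0.3):
    random.seed(0)
    ready = deque([WORK_PER_THREAD] * pool_size)   # remaining ops per thread
    waiting = []                                   # (cycle ready again, remaining ops)
    busy = 0
    for cycle in range(cycles):
        # Threads whose texture data has "arrived" become ready again.
        still_waiting = []
        for wake, ops in waiting:
            if wake <= cycle:
                ready.append(ops)
            else:
                still_waiting.append((wake, ops))
        waiting = still_waiting
        if not ready:
            continue                 # every thread is stalled: the ALUs idle
        ops = ready.popleft() - 1    # dispatcher issues one op from a ready thread
        busy += 1
        if ops == 0:
            ready.append(WORK_PER_THREAD)                  # recycle as a fresh thread
        elif random.random() < stall_chance:
            waiting.append((cycle + STALL_CYCLES, ops))    # park it on a texture fetch
        else:
            ready.append(ops)
    return busy / cycles

for pool in (4, 16, 128):
    print(f"{pool:3d} tracked threads -> ALU utilization {alu_utilization(pool):.0%}")
```

Running the sketch shows utilization climbing toward 100% as the tracked-thread count grows, which is the effect the dispatch unit and large register array are designed to achieve.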

The next major change to the core is to its memory bus. R420 and R300 had nearly identical memory controller designs, with the former being a bug-fixed release designed for higher clock speeds. R520's memory bus differs with its central controller (arbiter) that connects to the "memory clients". Around the chip are two 256-bit ring buses running at the same speed as the DRAM chips, but in opposite directions to reduce latency. Along these ring buses are four "stop" points where data exits the ring and goes into or out of the memory chips. There is a fifth, significantly less complex stop that is designed for the PCI Express interface and video input. This design makes memory accesses quicker through lower latency, owing to the smaller distance signals need to travel through the GPU, and by increasing the number of banks per DRAM. The chip can spread out memory requests faster and more directly to the RAM chips. ATI claimed a 40% improvement in efficiency over older designs. Smaller cores such as RV515 and RV530 received cutbacks due to their smaller, less costly designs; RV530, for example, has two internal 128-bit buses instead. This generation has support for all recent memory types, including GDDR4. In addition to the ring bus, each memory channel has a granularity of 32 bits, which improves memory efficiency when performing small memory requests.[3]

The vertex shader engines were already at the required FP32 precision in ATI's older products. Changes necessary for SM3.0 included longer instruction lengths; dynamic flow control instructions with branches, loops, and subroutines; and a larger temporary register space. The pixel shader engines are actually quite similar in computational layout to their R420 counterparts, although they were heavily optimized and tweaked to reach high clock speeds on the 90 nm process. ATI had been working for years on a high-performance shader compiler in its driver for older hardware, so staying with a similar, compatible basic design offered obvious cost and time savings.[3]

At the end of the pipeline, the texture addressing processors are decoupled from the pixel shaders, so any unused texturing units can be dynamically allocated to pixels that need more texture layers. Other improvements include 4096×4096 texture support, and ATI's 3Dc normal map compression saw an improved compression ratio in certain situations.[3]

The R5xx family introduced a more advanced onboard motion-video engine. Like Radeon cards since the R100, the R5xx can offload almost the entire MPEG-1/2 video pipeline. The R5xx can also assist in Microsoft WMV9/VC-1 and MPEG H.264/AVC decoding, using a combination of the 3D pipeline's shader units and the motion-video engine. Benchmarks show only a modest decrease in CPU utilization for VC-1 and H.264 playback.

A selection of real-time 3D demonstration programs was released at launch. ATI's development of its "digital superstar", Ruby, continued with a new demo named The Assassin. It showcased a highly complex environment, with high-dynamic-range lighting (HDR) and dynamic soft shadows. Ruby's latest nemesis, Cyn, was composed of 120,000 polygons.[4]

The cards support dual-link DVI output and HDCP. However, using HDCP requires an external ROM, which was not installed on early models of the video cards. The RV515, RV530, and RV535 cores include one single-link and one dual-link DVI output; the R520, RV560, RV570, R580, and R580+ cores include two dual-link DVI outputs.

AMD released the final revision of the Radeon R5xx Acceleration document, a register-level programming guide used for open-source driver development.[5]

Drivers


The last AMD Catalyst version that officially supports the X1000 series is 10.2, display driver version 8.702.

Variants


X1300–X1550 series

X1300 with GPU RV515 (heat sink removed)

This series is the budget solution of the X1000 series and is based on the RV515 core. The chips have four texture units, four ROPs, four pixel shaders, and two vertex shaders, similar to the older X300–X600 cards. These chips use one quad of an R520, whereas faster boards simply use more of these quads; for example, the X1800 uses four quads. This modular design allows ATI to build a "top to bottom" line-up using identical technology, saving research and development time and money. Because of the smaller design, these cards have lower power demands (around 30 watts), so they run cooler and can be used in smaller cases.[3] Eventually, ATI created the X1550 and discontinued the X1300. The X1050 was based on the R300 core and was sold as an ultra-low-budget part.

Early Mobility Radeon X1300 to X1450 parts are also based on the RV515 core.[6][7][8][9]

Beginning in 2006, Radeon X1300 and X1550 products were shifted to the RV505 core, which had similar capabilities and features as the previous RV515 core, but was manufactured by TSMC using an 80 nm process (reduced from the 90 nm process of the RV515).[10]

X1600 series


The X1600 uses the RV530 core (known as the M56[1] in its mobile implementation), which is similar to but distinct from the RV515.

The RV530 has a 3:1 ratio of pixel shaders to texture units. It possesses 12 pixel shaders while retaining the RV515's four texture units and four ROPs. It also gains three extra vertex shaders, bringing the total to five units. The chip's single "quad" has three pixel shader processors per pipeline, similar to the design of the R580's four quads. This means that the RV530 has the same texturing ability as the X1300 at the same clock speed, but with its 12 pixel shaders it is on par with the X1800 in shader computational performance. Given the texture-heavy content of games available at the time, the X1600 was often hampered by its lack of texturing power.[3]
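As a rough cross-check of the ratios described above, the peak rates can be recomputed from unit counts and clock speeds. The figures below are taken from the chipset table later in the article and are theoretical peaks (units multiplied by clock), not measured game performance.

```python
# Back-of-the-envelope peak rates behind the 3:1 shader-to-texture discussion.
cards = {
    #                  (pixel shaders, TMUs, core MHz)
    "Radeon X1300":    (4,  4,  450),   # RV515
    "Radeon X1600 XT": (12, 4,  590),   # RV530
    "Radeon X1800 XL": (16, 16, 500),   # R520
}
for name, (ps, tmus, mhz) in cards.items():
    shader_mops = ps * mhz      # million shader ops/s, assuming one op per unit per clock
    texel_mtps = tmus * mhz     # million texels/s
    print(f"{name:16s} shader ~{shader_mops:5d} Mops/s   texturing ~{texel_mtps:5d} MTexels/s")
```

The output reproduces the pattern in the text: the X1600 XT's texturing rate sits near the X1300's, while its shader throughput approaches that of the X1800.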

The X1600 was positioned to replace Radeon X600 and Radeon X700 as ATI's mid-range GPU. The Mobility Radeon X1600 and X1700 are also based on the RV530.[11][12]

X1650 series

ATI Radeon X1650 Pro

The X1650 series has two parts: the X1650 Pro uses the RV535 core (an RV530 manufactured on the newer 80 nm process) and has both lower power consumption and lower heat output than the X1600.[13] The other parts, the X1650 XT and X1650 GT, use the newer RV560 core, a cut-down variant of the RV570 (the fully equipped RV570 powers the higher-performance X1950 Pro), positioned to match their main competitor, Nvidia's 7600 GT.[14] There is also a plain Radeon X1650, which technically belongs to the X1600 generation because it uses the older 90 nm RV530 core; its specifications are essentially those of a renamed Radeon X1600 Pro with DDR2 memory.

X1800 series


Originally the flagship of the X1000 series, the X1800 series received a lukewarm reception due to its delayed, staggered release and the ground gained in the meantime by its competitor, Nvidia's GeForce 7 series. When the X1800 entered the market in late 2005, it was the first high-end video card with a 90 nm GPU. ATI opted to fit the cards with either 256 MB or 512 MB of on-board memory, foreseeing a future of ever-growing demands on local memory size. The X1800 XT PE was available exclusively with 512 MB of on-board memory. The X1800 replaced the R480-based Radeon X850 as ATI's premier performance GPU.[3]

With R520's delayed release, its competition was far more impressive than if the chip had made its originally scheduled spring/summer release. Like its predecessor, the X850, the R520 chip carries 4 "quads", which means it has similar texturing capability at the same clock speed as its ancestor and the NVIDIA 6800 series. Unlike the X850, the R520's shader units are vastly improved: they are Shader Model 3 capable, and received advancements in shader threading that can greatly improve the efficiency of the shader units. Unlike the X1900, the X1800 has 16 pixel shader processors and an equal ratio of texturing to pixel shading capability. The chip also increases the vertex shader count from six on the X800 to eight. With the 90 nm low-K fabrication process, these high-transistor chips could still be clocked at very high frequencies, which allows the X1800 series to be competitive with GPUs that have more pipelines but lower clock speeds, such as the NVIDIA 7800 and 7900 series with their 24 pipelines.[3]
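A quick texel-rate calculation shows why the clock-speed advantage roughly offsets the pipeline deficit. The X1800 XT figures come from the chipset table below; the GeForce 7800 GTX values (24 texture pipelines at a 430 MHz core clock) are outside reference figures not taken from this article and should be treated as approximate.

```python
# Peak texel rate = texture pipelines x core clock (MHz), giving MTexels/s.
x1800_xt = 16 * 625      # 10,000 MTexels/s
gf7800_gtx = 24 * 430    # ~10,320 MTexels/s (assumed competitor figures)
print(f"X1800 XT: {x1800_xt} MTexels/s vs GeForce 7800 GTX: {gf7800_gtx} MTexels/s "
      f"({x1800_xt / gf7800_gtx:.0%} of the competitor's peak)")
```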

The X1800 was quickly replaced by the X1900 because of its delayed release. The X1900 was not behind schedule and had always been planned as the "spring refresh" chip. However, due to the large quantity of unused X1800 chips, ATI decided to disable one quad of pixel pipelines and sell the chips as the X1800 GTO.

X1900 and X1950 series

Sapphire Radeon X1950 Pro

The X1900 and X1950 series fixed several flaws in the X1800 design and added a significant pixel shading performance boost. The R580 core is pin-compatible with R520 boards, so a redesign of the X1800 PCB was not needed. The boards carry either 256 MB or 512 MB of onboard GDDR3 memory depending on the variant. The primary change between the R580 and the R520 is that ATI changed the pixel shader processor-to-texture processor ratio. The X1900 cards have three pixel shaders on each pipeline instead of one, giving a total of 48 pixel shader units. ATI took this step with the expectation that future 3D software would be more pixel shader intensive.[15]

In the latter half of 2006, ATI introduced the Radeon X1950 XTX, a graphics board using a revised R580 GPU called R580+. R580+ is the same as R580 except that it supports GDDR4 memory, a new graphics DRAM technology that offers lower power consumption per clock and a significantly higher clock rate ceiling. The X1950 XTX clocks its RAM at 1 GHz (2 GHz DDR), providing 64.0 GB/s of memory bandwidth, a 29% advantage over the X1900 XTX. The card was launched on August 23, 2006.[16]
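The quoted bandwidth and the 29% figure can be verified from the bus width and effective memory transfer rates (a 256-bit bus, 2 GT/s GDDR4 on the X1950 XTX versus 1.55 GT/s GDDR3 on the X1900 XTX, per the chipset table). A minimal check:

```python
# Memory bandwidth in GB/s = bus width (bits) x effective transfer rate (GT/s) / 8.
def bandwidth_gb_s(bus_bits, effective_gt_s):
    return bus_bits * effective_gt_s / 8

x1950_xtx = bandwidth_gb_s(256, 2.0)    # 64.0 GB/s
x1900_xtx = bandwidth_gb_s(256, 1.55)   # 49.6 GB/s
print(f"X1950 XTX {x1950_xtx:.1f} GB/s vs X1900 XTX {x1900_xtx:.1f} GB/s: "
      f"{x1950_xtx / x1900_xtx - 1:.0%} advantage")
```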

The X1950 Pro was released on October 17, 2006, and was intended to replace the X1900 GT in the competitive sub-$200 market segment. The X1950 Pro GPU is built on the 80 nm RV570 core with 12 texture units and 36 pixel shaders, and it is the first ATI card to support native CrossFire via a pair of internal CrossFire connectors, eliminating the unwieldy external dongle found in older CrossFire systems.[17]

Radeon feature matrix


The following table shows features of AMD/ATI's GPUs (see also: List of AMD graphics processing units).

Name of GPU series Wonder Mach 3D Rage Rage Pro Rage 128 R100 R200 R300 R400 R500 R600 RV670 R700 Evergreen Northern
Islands
Southern
Islands
Sea
Islands
Volcanic
Islands
Arctic
Islands
/Polaris
Vega Navi 1x Navi 2x Navi 3x Navi 4x
Released 1986 1991 Apr
1996
Mar
1997
Aug
1998
Apr
2000
Aug
2001
Sep
2002
May
2004
Oct
2005
May
2007
Nov
2007
Jun
2008
Sep
2009
Oct
2010
Dec
2010
Jan
2012
Sep
2013
Jun
2015
Jun 2016, Apr 2017, Aug 2019 Jun 2017, Feb 2019 Jul
2019
Nov
2020
Dec
2022
Feb
2025
Marketing Name Wonder Mach 3D
Rage
Rage
Pro
Rage
128
Radeon
7000
Radeon
8000
Radeon
9000
Radeon
X700/X800
Radeon
X1000
Radeon
HD 2000
Radeon
HD 3000
Radeon
HD 4000
Radeon
HD 5000
Radeon
HD 6000
Radeon
HD 7000
Radeon
200
Radeon
300
Radeon
400/500/600
Radeon
RX Vega, Radeon VII
Radeon
RX 5000
Radeon
RX 6000
Radeon
RX 7000
Radeon
RX 9000
AMD support Ended Current
Kind 2D 3D
Instruction set architecture Not publicly known TeraScale instruction set GCN instruction set RDNA instruction set
Microarchitecture Not publicly known GFX1 GFX2 TeraScale 1
(VLIW5)

(GFX3)
TeraScale 2
(VLIW5)

(GFX4)
TeraScale 2
(VLIW5)

up to 68xx
(GFX4)
TeraScale 3
(VLIW4)

in 69xx [18][19]
(GFX5)
GCN 1st
gen

(GFX6)
GCN 2nd
gen

(GFX7)
GCN 3rd
gen

(GFX8)
GCN 4th
gen

(GFX8)
GCN 5th
gen

(GFX9)
RDNA
(GFX10.1)
RDNA 2
(GFX10.3)
RDNA 3
(GFX11)
RDNA 4
(GFX12)
Type Fixed pipeline[a] Programmable pixel & vertex pipelines Unified shader model
Direct3D 5.0 6.0 7.0 8.1 9.0
11 (9_2)
9.0b
11 (9_2)
9.0c
11 (9_3)
10.0
11 (10_0)
10.1
11 (10_1)
11 (11_0) 11 (11_1)
12 (11_1)
11 (12_0)
12 (12_0)
11 (12_1)
12 (12_1)
11 (12_1)
12 (12_2)
Shader model 1.4 2.0+ 2.0b 3.0 4.0 4.1 5.0 5.1 5.1
6.5
6.7 6.8
OpenGL 1.1 1.2 1.3 1.5[b][20] 3.3 4.6[21][c]
Vulkan 1.1[c][d] 1.3[22][e] 1.4[23]
OpenCL Close to Metal 1.1 (not supported by Mesa) 1.2+ (on Linux: 1.1+ (no Image support on Clover, with by Rusticl) with Mesa, 1.2+ on GCN 1.Gen) 2.0+ (Adrenalin driver on Win7+)
(on Linux ROCm, Mesa 1.2+ (no Image support in Clover, but in Rusticl with Mesa, 2.0+ and 3.0 with AMD drivers or AMD ROCm), 5th gen: 2.2 win 10+ and Linux RocM 5.0+
2.2+ and 3.0 Windows 8.1+ and Linux ROCm 5.0+ (Mesa Rusticl 1.2+ and 3.0 (2.1+ and 2.2+ wip))[24][25][26]
HSA / ROCm Yes ?
Video decoding ASIC Avivo/UVD UVD+ UVD 2 UVD 2.2 UVD 3 UVD 4 UVD 4.2 UVD 5.0 or 6.0 UVD 6.3 UVD 7 [27][f] VCN 2.0 [27][f] VCN 3.0 [28] VCN 4.0 VCN 5.0
Video encoding ASIC VCE 1.0 VCE 2.0 VCE 3.0 or 3.1 VCE 3.4 VCE 4.0 [27][f]
Fluid Motion [g] No Yes No ?
Power saving ? PowerPlay PowerTune PowerTune & ZeroCore Power ?
TrueAudio Via dedicated DSP Via shaders
FreeSync 1
2
HDCP[h] ? 1.4 2.2 2.3 [29]
PlayReady[h] 3.0 No 3.0
Supported displays[i] 1–2 2 2–6 ? 4
Max. resolution ? 2–6 ×
2560×1600
2–6 ×
4096×2160 @ 30 Hz
2–6 ×
5120×2880 @ 60 Hz
3 ×
7680×4320 @ 60 Hz [30]

7680×4320 @ 60 Hz PowerColor
7680x4320

@165 Hz

7680x4320
/drm/radeon[j] Yes
/drm/amdgpu[j] Optional [31] Yes
  1. ^ The Radeon 100 Series has programmable pixel shaders, but do not fully comply with DirectX 8 or Pixel Shader 1.0. See article on R100's pixel shaders.
  2. ^ R300, R400 and R500 based cards do not fully comply with OpenGL 2+ as the hardware does not support all types of non-power of two (NPOT) textures.
  3. ^ a b OpenGL 4+ compliance requires supporting FP64 shaders and these are emulated on some TeraScale chips using 32-bit hardware.
  4. ^ Vulkan support is theoretically possible but has not been implemented in a stable driver.
  5. ^ Vulkan support in Linux relies on the amdgpu kernel driver which is incomplete and not enabled by default for GFX6 and GFX7.
  6. ^ a b c The UVD and VCE were replaced by the Video Core Next (VCN) ASIC in the Raven Ridge APU implementation of Vega.
  7. ^ Video processing for video frame rate interpolation technique. In Windows it works as a DirectShow filter in your player. In Linux, there is no support on the part of drivers and / or community.
  8. ^ a b To play protected video content, it also requires card, operating system, driver, and application support. A compatible HDCP display is also needed for this. HDCP is mandatory for the output of certain audio formats, placing additional constraints on the multimedia setup.
  9. ^ More displays may be supported with native DisplayPort connections, or splitting the maximum resolution between multiple monitors with active converters.
  10. ^ a b DRM (Direct Rendering Manager) is a component of the Linux kernel. AMDgpu is the Linux kernel module. Support in this table refers to the most current version.

Chipset table


Note that ATI X1000 series cards (e.g., the X1900) do not have Vertex Texture Fetch and hence do not fully comply with the VS 3.0 model. Instead, they offer a feature called "Render to Vertex Buffer" (R2VB) that provides an alternative to Vertex Texture Fetch.

Model Launch
Code name
Fab (nm)
Transistors (million)
Die size (mm2)
Bus interface
Clock rate
Core config
Fillrate Memory
TDP (Watts)
API support (version)
Release Price (USD)
Core (MHz)
Memory (MHz)
MOperations/s
MPixels/s
MVertices/s
MTexels/s
Size (MB)
Bandwidth (GB/s)
Bus type
Bus width (bit)
Max.
Direct3D
Radeon X1300 October 5, 2005 (PCIe)
December 1, 2005 (AGP)
RV515 90 107 100 AGP 8×
PCI
PCIe ×16
450 250 4:2:4:4 1800 1800 225 1800 128
256
8.0 DDR DDR2 128 9.0c 2.1 $99 (128MB)
$129 (256 MB)
Radeon X1300
HyperMemory
October 5, 2005 PCIe ×16 128
256
512
4.0 DDR2 64 $
Radeon X1300 PRO October 5, 2005 (PCIe)
November 1, 2006 (AGP)
AGP 8×
PCIe ×16
600 400 2400 2400 300 2400 128
256
12.8 128 31 $149
Radeon X1300 XT August 12, 2006 RV530 157 150 500 12:5:4:4 6000 2000 625 2000 22 $89
Radeon X1550 January 8, 2007 RV516 107 100 AGP 8x
PCI
PCIe x16
4:2:4:4 2200 2200 275 2200 128
256
512
27 $
Radeon X1600 PRO October 10, 2005 RV530 157 150 AGP 8×
PCIe ×16
390
390–690
12:5:4:4 6000 2000 625 2000 12.48 DDR2
GDDR3
41 $149 (128MB)
$199 (256 MB)
Radeon X1600 XT October 10, 2005 (PCIe) 590 690 7080 2360 737.5 2360 256
512
22.08 GDDR3 42 $249
Radeon X1650 February 1, 2007 500 400 6000 2000 625 2000 12.8 DDR2 $
Radeon X1650 SE RV516 105 PCIe ×16 635 4:2:4:4 256 DDR2 $
Radeon X1650 GT May 1, 2007 (PCIe)
October 1, 2007 (AGP)
RV560 80 330 230 AGP 8×
PCIe x16
400 24:8:8:8 9600 3200 800 3200 256
512
GDDR3 $
Radeon X1650 PRO August 23, 2006 (PCIe)
October 15, 2006 (AGP)
RV535 131 600 700 12:5:4:4 7200 2400 750 2400 22.4 44 $99
Radeon X1650 XT October 30, 2006 RV560 230 525 24:8:8:8 12600 4200 1050 4200 55 $149
Radeon X1700 FSC November 5, 2007 (OEM) RV535 131 PCIe ×16 587 695 12:5:4:4 7044 2348 733 2348 256 22.2 44 OEM
Radeon X1700 SE November 30, 2007 RV560 230 500 500 24:8:8:8 12000 4000 1000 4000 512 16.0 50 $
Radeon X1800
CrossFire Edition
December 20, 2005 R520 90 321 288 600 700 16:8:16:16 9600 9600 900 9600 512 46.08 256 113 $
Radeon X1800 GTO March 9, 2006 500 495 12:8:12:8 6000 6000 1000 6000 256
512
32.0 48 $249
Radeon X1800 XL October 5, 2005 500 16:8:16:16 8000 8000 1000 8000 256 70 $449
Radeon X1800 XT 625 750 10000 10000 1250 10000 256
512
48.0 113 $499 (256MB)
$549 (512 MB)
Radeon X1900
CrossFire Edition
January 24, 2006 R580 384 352 625 725 48:8:16:16 30000 512 46.4 100 $599
Radeon X1900 GT May 5, 2006 575 600 36:8:12:12 20700 6900 1150 6900 256 38.4 75 $
Radeon X1900 GT Rev. 2 September 7, 2006 512 18432 6144 1024 6144 42.64
Radeon X1900 XT January 24, 2006 625 725 48:8:16:16 30000 10000 1250 10000 256
512
46.4 100 $549
Radeon X1900 XTX R580 650 775 31200 10400 1300 10400 512 49.6 135 $649
Radeon X1950
CrossFire Edition
August 23, 2006 R580+ 80 1000 31200 10400 1300 10400 64 GDDR4 $449
Radeon X1950 GT January 29, 2007 (PCIe)
February 10, 2007 (AGP)
RV570 330 230 AGP 8x
PCIe x16
500 600 36:8:12:12 18000 6000 1000 6000 256
512
38.4 GDDR3 57 $140
Radeon X1950 PRO October 17, 2006 (PCIe)
October 25, 2006 (AGP)
575 690 20700 6900 1150 6900 44.16 66 $199
Radeon X1950 XT October 17, 2006 (PCIe)
February 18, 2007 (AGP)
R580+ 384 352 AGP 8x
PCIe 1.0 x16
625 700 (AGP)
900 (PCIe)
48:8:16:16 30000 10000 1250 10000 44.8 (AGP)
57.6 (PCIe)
96 $
Radeon X1950 XTX October 17, 2006 PCIe 1.0 ×16 650 1000 31200 10400 1300 10400 512 64 GDDR4 125 $449
Model Launch
Code name
Fab (nm)
Transistors (million)
Die size (mm2)
Bus interface
Core (MHz)
Memory (MHz)
Core config
MOperations/s
MPixels/s
MVertices/s
MTexels/s
Size (MB)
Bandwidth (GB/s)
Bus type
Bus width (bit)
Max.
Direct3D
Release price (USD)
Clock rate Fillrate Memory
TDP (Watts)
API support (version)

1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units

Mobility Radeon X1000 series

Model Launch
Model number
Code name
Fab (nm)
Core clock (MHz)
Memory clock (MHz)
Core config1
Fillrate Memory API compliance (version)
Notes
Pixel (GP/s)
Texture (GT/s)
Size (MB)
Bandwidth (GB/s)
Bus type
Bus width (bit)
Mobility Radeon X1300 January 19, 2006 M52 RV515 90 PCIe ×16 350 250 2:4:4:4 1.4 1.4 128 + shared 8 DDR DDR2 128 9.0c 2.0
Mobility Radeon X1350 September 18, 2006 M62 470 350 1.88 1.88 11.2 DDR2 GDDR3
Mobility Radeon X1400 January 19, 2006 M54 445 250 1.78 1.78 8 DDR DDR2
Mobility Radeon X1450 September 18, 2006 M64 550 450 2.2 2.2 14.4 DDR2 GDDR3
Mobility Radeon X1600 February 1, 2006 M56 RV530 425
450
375
470
5:12:4:4 1.7
1.8
1.7
1.8
256 12.0
15.04
DDR2
GDDR3
Mobility Radeon X1700 M66 RV535 475 400
550
1.9 1.9 11.2
17.6
strained silicon
Mobility Radeon X1800 March 1, 2006 M58 R520 450 500 8:12:12:12 5.4 5.4 32 GDDR3 256
Mobility Radeon X1800 XT M58 550 650 8:16:16:16 8.8 8.8 41.6
Mobility Radeon X1900 January 11, 2007 M68 RV570 80 400 470 8:36:12:12 4.8 4.8 30.08 PowerPlay 6.0

1 Vertex shaders : Pixel shaders : Texture mapping units : Render output units.

References

from Grokipedia
The Radeon X1000 series is a family of graphics processing units (GPUs) developed by ATI Technologies, announced on October 5, 2005, and launched later that year as the company's most advanced graphics solution at the time, featuring the new R500 architecture optimized for DirectX 9.0c and Shader Model 3.0 support. Built on TSMC's 90 nm process, the series marked ATI's first major architectural overhaul since the R300 generation, incorporating an Ultra-Threaded Dispatch Processor with up to 512 simultaneous threads for enhanced performance and efficiency. Key innovations included Avivo video and display technology for hardware-assisted H.264 and VC-1 decoding with dual 10-bit pipelines, support for high dynamic range (HDR) rendering at 128-bit floating-point precision, and improved CrossFire multi-GPU scaling for up to 2x frame rates with advanced anti-aliasing modes. The lineup spanned entry-level to enthusiast segments, with high-end models like the X1800 series (e.g., the X1800 XT with a 625 MHz core clock, 256-bit GDDR3 memory interface, and 321 million transistors) targeting gamers and professionals, mid-range X1600 variants for mainstream use, budget X1300 cards featuring flexible HyperMemory technology for shared system memory, and top-tier enthusiast models such as the X1900 and X1950 series. Performance highlights included peak fill rates up to 10.4 Gpixels/s on top models and dynamic power management for reduced heat and energy use, positioning the series as a competitive alternative to NVIDIA's GeForce 7 lineup. Released shortly before AMD's acquisition of ATI was announced in July 2006, the X1000 series represented the final major product line under ATI's independent branding and laid the groundwork for subsequent generations.

Development and Release

Development History

Development of the Radeon X1000 series, based on the R500 microarchitecture, began in the early 2000s under ATI Technologies, with a primary focus on achieving full compliance with DirectX 9.0c and implementing Shader Model 3.0 capabilities in both vertex and pixel shaders. This effort built on prior generations like the R300 and R400, aiming to enhance shader complexity and performance for next-generation gaming and graphics applications, including adaptations for the Xbox 360's Xenos GPU, which featured early concepts of unified shader processing. ATI's engineering teams grappled with significant challenges in integrating advanced vertex and pixel shader units, including the need for efficient load balancing and dynamic scheduling to handle the increased computational demands without compromising stability. Key hurdles emerged during silicon prototyping, particularly around thermal management and clock speed reliability in early samples, necessitating multiple design iterations. The first tape-out for the flagship R520 chip occurred in November 2004, but subsequent respins were required in 2005 due to instability issues that prevented achieving target clock speeds. A successful tape-out was finally achieved in July 2005 after three attempts, allowing progression toward production. These technical setbacks were compounded by manufacturing difficulties, as the R500 series was fabricated on TSMC's 90 nm process, which suffered from low yields and production bottlenecks in mid-2005, contributing to delays in silicon validation. The planned mid-2005 launch was postponed to late 2005 as a result of these yield problems and ongoing redesigns, allowing ATI additional time to resolve fabrication and performance issues. Internally, ATI faced distractions from acquisition rumors circulating in mid-2005, with speculation about interest from several potential buyers potentially disrupting team focus and resource allocation during the critical final development phase. These rumors, which culminated in AMD's actual acquisition of ATI in 2006, added pressure but did not directly alter the R500 timeline. Despite the challenges, the resolved prototypes paved the way for the series' eventual market entry, marking a pivotal advancement in ATI's graphics technology.

Release Timeline and Market Impact

The X1000 series debuted on October 5, 2005, led by the high-end X1800 XT flagship, which was widely available by November of that year; the mid-range X1600 and entry-level X1300 lines were announced alongside it, with additional variants arriving through early 2006. The lineup expanded further with the premium X1900 XTX on January 24, 2006, and concluded with the X1950 XTX on August 23, 2006, and the X1950 Pro in October 2006. ATI's pricing strategy positioned the X1000 series to span budget to premium segments, with the X1800 XT launching at $549 USD and the X1900 XTX at $649 USD, while more affordable models like the X1300 series targeted $99 to $149 USD and the X1600 XT $199 to $249 USD. The series provided strong competition to NVIDIA's GeForce 7 lineup, particularly in Shader Model 3.0 performance, helping ATI hold approximately 45% market share in discrete desktop graphics during Q1 2006, declining to 43% by Q3 2006. However, the cards faced criticism for elevated power consumption, with high-end models like the X1800 XT reaching a TDP of around 113 W, contributing to higher heat output compared to predecessors. The AMD-ATI merger, announced in July 2006 and completed in October 2006, brought subsequent X1000 releases such as the X1950 series under unified AMD branding and enabled better production scaling amid competitive pressures. Production of the X1000 series wound down following the shift to the Radeon HD 2000 and later lines, though AMD provided legacy driver support through the Catalyst 10.2 release in 2010.

Architecture

R500 Microarchitecture

The R500 microarchitecture formed the foundation of the Radeon X1000 series graphics processing units, marking ATI's move to a more flexible shader design optimized for DirectX 9-era workloads. Fabricated on TSMC's 90 nm process node, the high-end R520 variant featured a die size of 288 mm² and integrated 321 million transistors, enabling higher performance density compared to the preceding 130 nm R400 architecture. Central to the R500 was its advanced shader architecture, which supported Shader Model 3.0 through 16 pixel shader units and 8 vertex shader units, allowing for improved versatility in handling complex effects like dynamic shadows and higher-order surface rendering without fully merging pixel and vertex processing as in later fully unified designs. This setup provided 16 rendering pipelines, balancing geometry and fragment processing for enhanced efficiency in Shader Model 3.0-compliant applications. Memory access was managed via an innovative ring bus controller, featuring a 512-bit internal data path (two counter-rotating 256-bit rings) that efficiently distributed bandwidth to the render backends, supporting external interfaces like 256-bit GDDR3 for high-bandwidth operations while minimizing latency through a centralized arbiter. Typical operating parameters included core clock speeds up to 625 MHz on flagship implementations, complemented by GDDR3 memory clocked at 750 MHz (1.5 GHz effective) on the X1800 XT, delivering substantial throughput for texture-heavy scenes. Power efficiency was addressed through early dynamic clocking and voltage scaling mechanisms, which adjusted frequencies based on workload to curb idle consumption, such as dropping to lower states during desktop use, although these lacked the granular, per-shader control of subsequent architectures like TeraScale.

Graphics Pipeline and Features

The graphics pipeline of the Radeon X1000 series, based on the R500 architecture, features a design that separates vertex processing from fragment processing to enhance scalability and efficiency. Geometry setup includes hardware transform and lighting (T&L) capabilities through dedicated vertex shader processors, with the X1800 variant employing 8 such processors capable of handling up to 512 simultaneous threads via the Ultra-Threaded Dispatch Processor. This setup supports complex vertex operations, including dynamic flow control and up to 512 instructions per shader, enabling efficient handling of 3D models before passing to the rasterization stage. The fragment processing stage utilizes 16 pixel pipelines in high-end models like the X1800, each equipped with arithmetic logic units (ALUs) for shader execution and texture mapping units (TMUs) for texture sampling; for instance, the X1900 extends this to 48 ALUs across 16 pipelines, providing three ALUs per pipeline for vector operations while maintaining one TMU and one render output unit (ROP) per pipeline. Texture handling involves 16 texture units with fully associative caches to minimize bandwidth demands, supporting up to 128-tap filtering in high-quality modes. The series fully complies with DirectX 9.0c (Shader Model 3.0) and OpenGL 2.0, including partial support for high-dynamic-range (HDR) rendering via 128-bit floating-point precision throughout the pipeline, as well as up to 16x anisotropic filtering for improved texture clarity at oblique angles. Anti-aliasing is handled through SmoothVision 3.1, which incorporates adaptive modes such as multi-sample anti-aliasing (MSAA) up to 6x and Super AA options reaching 14x in CrossFire configurations, balancing quality and performance by dynamically adjusting sample patterns. Key innovative features include HyperZ III, an evolution of ATI's hierarchical Z-buffer technology that achieves up to 8:1 lossless Z-compression, reducing memory bandwidth usage by eliminating redundant reads and writes for hidden surfaces and enabling up to 60% more efficient hidden surface removal compared to prior generations. Complementing this is the Avivo video engine, a dedicated hardware block for high-definition video processing, which accelerates decoding of formats like MPEG-2, MPEG-4, WMV9, VC-1, and H.264, offloading the CPU and supporting smooth playback of high-definition content with integrated de-interlacing and format conversion. However, the pipeline's reliance on separate vertex and pixel shaders, without unified shader hardware, results in inefficiencies for workloads requiring frequent data sharing between stages, such as displacement mapping, as vertex texture fetch is not fully supported.
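To put the 8:1 figure in perspective, the sketch below estimates per-pass Z/stencil traffic at two common 2005-era resolutions. The 32-bit depth/stencil format is an assumption made here for illustration, and real savings depend on scene content, so the 8:1 number is only a best-case bound.

```python
# Rough upper bound on Z/stencil traffic saved by hierarchical Z compression.
BYTES_PER_Z = 4  # assumed 24-bit depth + 8-bit stencil per pixel
for width, height in ((1280, 1024), (1600, 1200)):
    raw_mib = width * height * BYTES_PER_Z / 2**20
    print(f"{width}x{height}: {raw_mib:.1f} MiB of Z data per full-screen pass, "
          f"~{raw_mib / 8:.2f} MiB at a best-case 8:1 ratio")
```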

Software Support

Proprietary Drivers

The proprietary drivers for the Radeon X1000 series were provided by ATI, and later AMD, under the Catalyst brand, starting with the launch driver version 5.11 released on November 10, 2005, which introduced initial support for the series including the Radeon X1800 model. This driver package included optimizations tailored to the R500 microarchitecture's requirements, such as enhanced DirectX 9.0c compatibility and initial CrossFire multi-GPU configurations. AMD continued issuing Catalyst updates for the X1000 series through version 10.2, released in February 2010, which provided the final major feature enhancements and stability improvements for Windows XP, Vista, and 7 operating systems. Notable updates included Catalyst 6.11 in November 2006, which addressed stability issues specific to the X1900 series, such as display corruption when resuming from sleep and intermittent TV output rotation problems. Another key release, Catalyst 7.1 in January 2007, expanded support to the entire X1000 lineup under Windows Vista, enabling Alternate Frame Rendering for improved multi-GPU performance in compatible applications. The drivers offered full compatibility with Windows up to version 7, including certified support for 32-bit and 64-bit editions, but provided no official drivers for Windows 8 or later; users could achieve partial functionality on newer Windows versions through legacy installations of Catalyst 10.2 using compatibility modes or third-party wrappers. Key features enabled by the Catalyst Control Center included GPU overclocking via the integrated Overdrive utility, allowing users to adjust core and memory clocks on supported X1000 models, as well as HydraVision for advanced desktop management, which facilitated virtual desktop creation and hotkey-based display switching across up to six monitors. Official support for the Catalyst drivers on the X1000 series ended with the 10.2 release in February 2010, after which AMD discontinued new feature development and transitioned the hardware to legacy status without further patches.

Open-Source Drivers

The open-source drivers for the X1000 series, based on the R500 architecture, originated with AMD's strategic initiative announced in September 2007 to support community-driven development for R500 hardware, including the radeonhd driver effort that provided initial support, whose functionality was eventually absorbed by the mainline radeon driver after radeonhd's discontinuation. This initiative marked AMD's early commitment to open-source development, focusing initially on basic functionality for the X1000 series while leveraging reverse engineering by community developers. Key milestones in driver maturity included 2D acceleration by early 2008, enabling hardware-accelerated rendering through the EXA and XAA backends in the X.Org radeon driver, which improved desktop performance over software rendering. By 2010, 3D support advanced significantly with the integration of the Gallium3D framework, providing OpenGL acceleration for R300 through R500 chips via the r300 Gallium driver in Mesa, though performance lagged behind the proprietary option for demanding workloads. Kernel mode setting (KMS) support for R500 GPUs matured in the Linux 2.6.3x kernel series, allowing seamless mode switching and better power management without requiring user-space intervention. Following AMD's acquisition of ATI in 2006 and subsequent shifts in strategy, the company increased contributions to open-source efforts starting around 2009, including documentation releases and code upstreaming for the radeon driver, though by this point the X1000 series was treated as legacy hardware compared to newer architectures like the R600. These contributions built on reverse engineering but prioritized forward-looking support, such as Gallium3D enhancements, leaving R500-specific optimizations to volunteers. Despite progress, limitations persisted, including limited hardware-assisted video decoding for the Avivo engine; basic acceleration is exposed through X-Video and shader-based paths, but more complex formats like VC-1 and H.264 largely rely on software fallbacks. Additionally, open-source implementations offered incomplete Shader Model 3.0 compliance, particularly in vertex texture fetch and dynamic branching, constraining compatibility with certain DirectX 9.0c applications emulated via Wine or native ports. As of 2025, support for the X1000 series remains in Mesa through the legacy radeon kernel module and the r300 Gallium driver, providing stable 2D/3D operation for lightweight tasks in modern distributions and continuing to receive occasional maintenance updates, though advanced features like ray tracing are unavailable due to hardware constraints.

Desktop Variants

Low-End Models (X1300–X1550)

The low-end models of the Radeon X1000 series, the X1300 and X1550, targeted budget users requiring discrete graphics for basic computing, light gaming, and home theater PC (HTPC) applications as replacements for integrated solutions. These cards supported both PCI Express x16 and AGP 8x interfaces to accommodate legacy systems. They emphasized low power consumption and compatibility over high-end capabilities, with typical power draw under 40 W and no need for auxiliary power connectors. The Radeon X1300, released in 2005 and based on the RV515 chip fabricated on a 90 nm process with 107 million transistors, featured 4 pixel shader processors, 2 vertex shader processors, 4 texture mapping units, and 4 render output units. It utilized a 128-bit memory bus paired with 256 MB of DDR2 at an effective 400 MHz, alongside a 450 MHz core clock. Variants including the Pro, XT, GT, and SE offered adjustments in clock speeds and memory configurations for varied performance levels; the Pro model, for instance, increased the core clock to 600 MHz. Performance was adequate for era-specific tasks, delivering roughly 20 to 30 fps in contemporary games at 1024x768 resolution on high settings. Introduced in 2007, the X1550 employed the RV516 chip, also on 90 nm with 105 million transistors, maintaining the same pipeline configuration of 4 pixel shader processors, 2 vertex processors, 4 texture units, and 4 render output units. It supported a 128-bit bus with options for DDR2 or GDDR3 up to 512 MB in some configurations, clocked at up to 800 MHz effective, and a core clock reaching 550 MHz. Variants such as the XT and GT included clock and memory optimizations for enhanced efficiency, with power consumption rated at 27 W. These models provided similar entry-level capabilities, suitable for HTPC media playback and casual gaming at low resolutions.

Mid-Range Models (X1600–X1650)

The mid-range models in the X1000 series, including the X1600 and X1650, targeted mainstream gamers seeking a balance of performance and efficiency on desktop systems. The X1600, launched in October 2005 and based on the RV530 GPU fabricated on a 90 nm process with 157 million transistors, featured 12 pixel shaders and 5 vertex shaders, along with 4 texture mapping units (TMUs) and 4 render output units (ROPs). It utilized a 128-bit memory interface with DDR2 or GDDR3 memory, typically configured at 256 MB, and operated at core clock speeds ranging from 500 MHz to 590 MHz depending on the variant, such as the X1600 Pro or XT. Building on this foundation, the X1650 XT arrived in late 2006 with the RV560 GPU, which increased processing power through 24 pixel shaders and 8 vertex shaders, along with 8 TMUs and 8 ROPs on an 80 nm process. Available in Pro and XT variants, the X1650 line supported configurations of 256 MB to 512 MB using DDR2 or GDDR3 on a 128-bit bus, with core clocks of up to about 600 MHz and memory speeds reaching 1.4 Gbps effective in higher-end models. These enhancements allowed the X1650 to handle more demanding rendering tasks compared to its predecessor. Key features of these mid-range models included native support for CrossFire multi-GPU configurations, with later implementations no longer requiring a dedicated master card or external connector. They also offered better anti-aliasing (AA) capability relative to low-end X1000 variants, thanks to higher shader and ROP counts that improved edge smoothing and image quality in games. Performance-wise, the X1600 XT delivered 60 to 70 frames per second in popular titles of the period at 1280x1024 resolution on high-quality settings, demonstrating solid playability for contemporary games. With a thermal design power (TDP) around 50 W, these cards emphasized energy efficiency suitable for standard power supplies. In the market, the X1600 and X1650 series directly competed with NVIDIA's GeForce 7600 lineup, offering competitive rasterization and shader performance for 1024x768 to 1280x1024 gaming resolutions. Their popularity in original equipment manufacturer (OEM) systems stemmed from reliable integration, low power draw, and support for features like Avivo video acceleration, making them a staple in mid-tier pre-built PCs from HP and other major vendors during the mid-2000s. Driver updates later enhanced stability for these models, though primary optimizations were handled in the proprietary Catalyst suite.

High-End Models (X1800)

The Radeon X1800 series was powered by the R520 GPU, introduced in 2005, featuring 16 pixel pipelines, a 256-bit memory interface, and support for up to 512 MB of GDDR3 memory, with core clock speeds ranging from 500 to 625 MHz across the lineup. The chip marked a significant advancement in ATI's R500 family, emphasizing ultra-threaded processing to handle complex DirectX 9.0c workloads more efficiently than prior generations. The series targeted high-end desktop gaming, delivering robust performance for the era's demanding titles while introducing key hardware innovations. Key variants included the flagship Radeon X1800 XT, clocked at 625 MHz with 512 MB of GDDR3 memory clocked at 750 MHz (1500 MHz effective), positioning it as the performance leader; the cut-down Radeon X1800 GTO, limited to 12 pipelines at 500 MHz core and 256 MB of memory for a more affordable option; and the Radeon X1800 XL, running at 500 MHz with 256 MB of memory. Among its innovations, the X1800 XT was among the first consumer graphics cards to ship with 512 MB of VRAM, enhancing texture handling in high-resolution scenarios, and it required an external 6-pin power connector to meet its elevated power requirements, diverging from earlier PCIe designs that relied solely on slot power. In performance testing, the X1800 XT achieved over 60 frames per second in contemporary games at 1600x1200 resolution with 4x anti-aliasing enabled, often surpassing NVIDIA's GeForce 7800 GTX in shader-intensive tasks due to its superior parallel processing capabilities. However, the series faced challenges with thermal management, drawing a thermal design power (TDP) of 113 W that necessitated robust cooling solutions, resulting in noticeable fan noise under load. CrossFire multi-GPU configurations provided scaling benefits of up to 1.8x in select titles, though real-world gains varied by application and averaged around 1.4x to 1.6x depending on scene complexity.

Enthusiast Models (X1900–X1950)

The Radeon X1900 XTX, released in January 2006, represented ATI's flagship enthusiast GPU based on the R580 graphics processor, featuring 48 pixel shader processors and 8 vertex shader processors, a 256-bit memory interface, a 650 MHz core clock, and 512 MB of GDDR3 memory operating at 775 MHz (1.55 GHz effective). This configuration delivered exceptional shader-heavy performance for DirectX 9-era titles, positioning it as a direct competitor to NVIDIA's GeForce 7900 GTX. The card's architecture emphasized pixel shader throughput, with three shader processors per pipeline enabling more flexible rendering of arithmetic-heavy workloads. The X1950 series, introduced from August 2006 onward as a refinement preceding the R600 architecture, built on the R580+ core, a revision of the R580 on the same 90 nm process with an updated memory controller. The X1950 XTX variant maintained the full 48 pixel and 8 vertex shaders and shipped at 650 MHz core with 1 GHz (2 GHz effective) GDDR4 memory, raising bandwidth to 64 GB/s while retaining the 256-bit bus and 512 MB capacity. Lower-tier variants like the X1950 Pro used a cut-down 80 nm RV570 core with 36 pixel shaders, and the X1950 XT mirrored the XTX's shader count but at reduced clocks of 625 MHz core and 700 to 900 MHz memory for balanced enthusiast pricing. These models incorporated minor optimizations, such as refined voltage regulation, to support overclocking stability. Key advancements in the X1900 and X1950 lines included an evolved ring bus memory controller, which provided higher bandwidth efficiency and reduced latency over the X1800 series' implementation, contributing to better overall efficiency despite increased transistor counts; the R580 achieved roughly 10 to 15% better efficiency in shader-bound scenarios. In performance, these cards led enthusiast benchmarks, delivering playable frame rates above 30 FPS in demanding titles at 1600x1200 resolution with high settings on dual-core systems, and outperforming contemporaries in complex scenes with heavy foliage and dynamic lighting. CrossFire configurations scaled effectively, often achieving near 2x performance uplift in supported titles like Splinter Cell: Chaos Theory, with up to 83% efficiency in alternate frame rendering mode. However, these high-end models carried notable drawbacks, including high power draw under load (the X1900 XTX is rated at a 135 W TDP) and a requirement for an auxiliary PCIe power connector, which strained many power supplies of the era. Launch drivers suffered from bugs, such as texture corruption and instability in multi-GPU setups, requiring several updates to mature.

Mobile Variants

Mobility Radeon Lineup

The Mobility Radeon lineup within the X1000 series consisted of discrete graphics processors designed for laptops, leveraging the R500 architecture to deliver DirectX 9.0c compatibility and Shader Model 3.0 support in mobile form factors. These GPUs were tailored for varying performance levels, from entry-level to high-end, and were commonly paired with ATI's mobile chipsets to enable efficient integration in notebook designs. The Mobility Radeon X1400, launched in January 2006 and codenamed M54, was based on the RV515 GPU core fabricated on a 90 nm process. It featured a 445 MHz core clock, 4 pixel shader units, 2 vertex shader units, and 128 MB of DDR2 memory on a 128-bit bus running at 250 MHz, providing a memory bandwidth of 8 GB/s. This model served as an entry-level option for mainstream laptops, often integrated with ATI chipsets like the RS690 for enhanced system compatibility. The Mobility Radeon X1700, codenamed M66, utilized the RV535 core, a mobile derivative of the RV530, on the same 90 nm node. It offered higher performance with 12 pixel shader units, 5 vertex shader units, a core clock of 475 MHz, and up to 256 MB of GDDR3 memory on a 128-bit bus at 400 MHz, yielding 12.8 GB/s bandwidth. Targeted at mid-range laptops, the X1700 supported video output to external displays. The Mobility Radeon X1800, released in March 2006 and codenamed M58, was a high-end variant based on the R520 GPU core on the 90 nm process. It featured a 450 MHz core clock, 12 pixel shader units, 8 vertex shader units, and 256 MB of GDDR3 on a 256-bit bus at 500 MHz, providing 32 GB/s bandwidth. The Mobility Radeon X1800 XT variant reached a 550 MHz core clock with memory at 650 MHz. These models targeted premium gaming laptops with support for advanced features like Avivo. The Mobility Radeon X2300, introduced in March 2007 and codenamed M64, was a low-power derivative of the RV515 GPU, optimized for thin-and-light notebooks, with a 480 MHz core clock, 4 pixel shader units, 2 vertex shader units, and 64 MB of DDR2 memory on a 64-bit bus at 400 MHz. It emphasized efficiency for basic graphics tasks in compact systems. Chipset pairings, such as the ATI RS600, supported unified memory architecture (UMA) sharing to allocate system RAM dynamically for graphics in HyperMemory configurations, reducing the need for dedicated VRAM.
Model | Codename | Launch year | Core clock (MHz) | Shaders (pixel/vertex) | Memory | Bus width | Target segment
X1400 | M54 (RV515) | 2006 | 445 | 4/2 | 128 MB DDR2 | 128-bit | Entry-level
X1700 | M66 (RV530) | 2006 | 475 | 12/5 | Up to 256 MB GDDR3 | 128-bit | Mid-range
X1800 | M58 (R520) | 2006 | 450 (550 XT) | 12/8 (16/8 XT) | 256 MB GDDR3 | 256-bit | High-end
X2300 | M64 (RV515) | 2007 | 480 | 4/2 | 64 MB DDR2 | 64-bit | Low-power

Power and Performance Adaptations

The mobile variants of the X1000 series incorporated key optimizations to balance performance with the demands of portability, focusing on reduced power draw and thermal output for extended battery life and seamless integration. Central to these adaptations was a substantial lowering of thermal design power (TDP) from around 110 W in high-end desktop counterparts, such as the Radeon X1800 XT, to 25 to 45 W in mobility implementations. This was accomplished primarily through aggressive clock throttling and selective deactivation of processing pipelines, enabling the GPUs to operate within the limited power budgets of mobile platforms while maintaining the core architectural features of chips like the R520. These power reductions came with inherent performance trade-offs, typically rendering mobile models 20 to 30% slower than their desktop equivalents due to diminished clock rates and constrained memory bandwidth. For instance, the Mobility Radeon X1800 ran its core at 450 MHz and the Mobility X1800 XT at 550 MHz, compared to the desktop X1800 XT's 625 MHz, prioritizing efficiency over peak throughput in thermally restricted environments. Thermal management was enhanced via integrated heat spreaders that distributed heat more evenly across the die and chassis, coupled with PowerPlay dynamic power management to enter ultra-low-power idle states when graphics load was minimal. The X1000 family's Dynamic Voltage Control further supported this by adjusting supply voltages in real time to curb heat generation without sacrificing responsiveness during active use. In practice, these modifications positively influenced battery longevity, supporting up to 2 hours of gaming on typical 2006 laptops under moderate loads, a marked improvement for discrete graphics at the time. The integrated Avivo video processing unit played a crucial role here, offloading HD video decoding to dedicated hardware for smoother playback with minimal CPU involvement, thereby conserving power and extending runtime during media sessions by reducing overall system draw.
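A first-order estimate of the clock-related part of that gap, using the clock figures from the surrounding text and assuming (simplistically) that fill-rate-bound throughput scales linearly with core clock:

```python
# Clock-only scaling estimate; memory bandwidth and pipeline differences are ignored.
desktop_x1800_xt_mhz = 625
for name, mhz in {"Mobility Radeon X1800": 450, "Mobility Radeon X1800 XT": 550}.items():
    deficit = 1 - mhz / desktop_x1800_xt_mhz
    print(f"{name}: {mhz} MHz, roughly {deficit:.0%} below the desktop X1800 XT on clock alone")
```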

Comparisons and Legacy

Feature Matrix

The Radeon X1000 series encompasses a range of graphics processing units based on ATI's R500 architecture, featuring separate pixel and vertex shader units compliant with Shader Model 3.0. All variants support DirectX 9.0c and OpenGL 2.0, along with advanced rendering technologies such as HyperZ III for hierarchical Z-buffer compression and Avivo for hardware-assisted video decoding. Differences across models center on shader unit counts, which scale from 4 pixel shaders and 2 vertex shaders in entry-level configurations to 48 pixel shaders and 8 vertex shaders in top-tier models, enabling varied rendering capabilities. The series also introduces adaptive anti-aliasing modes and supports up to 16x anisotropic filtering for improved image quality, with CrossFire multi-GPU scaling available on mid-range and higher desktop variants.
Model | Shaders (PS/VS) | Memory bus / size | API support | AA / AF levels | CrossFire
X1300 (Desktop) | 4 PS / 2 VS | 128-bit / 128 MB DDR2 | DX 9.0c / OpenGL 2.0 | 16x / 16x | No
X1550 (Desktop) | 4 PS / 2 VS | 128-bit / 256 MB DDR2 | DX 9.0c / OpenGL 2.0 | 16x / 16x | No
X1550 XT (OEM Desktop) | 4 PS / 2 VS | 128-bit / 256 MB DDR2 | DX 9.0c / OpenGL 2.0 | 16x / 16x | No
X1600 XT (Desktop) | 12 PS / 5 VS | 128-bit / 256 MB GDDR3 | DX 9.0c / OpenGL 2.0 | 16x / 16x | Yes
X1650 XT (Desktop) | 24 PS / 8 VS | 128-bit / 256 MB GDDR3 | DX 9.0c / OpenGL 2.0 | 16x / 16x | Yes
X1800 XT (Desktop) | 16 PS / 8 VS | 256-bit / 512 MB GDDR3 | DX 9.0c / OpenGL 2.0 | 16x / 16x | Yes
X1900 XTX (Desktop) | 48 PS / 8 VS | 256-bit / 512 MB GDDR3 | DX 9.0c / OpenGL 2.0 | 16x / 16x | Yes
X1950 XTX (Desktop) | 48 PS / 8 VS | 256-bit / 512 MB GDDR4 | DX 9.0c / OpenGL 2.0 | 16x / 16x | Yes
Mobility X1400 | 4 PS / 2 VS | 128-bit / 128 MB DDR2 | DX 9.0c / OpenGL 2.0 | 16x / 16x | No
Mobility X1600 | 12 PS / 5 VS | 128-bit / 128-256 MB GDDR3 | DX 9.0c / OpenGL 2.0 | 16x / 16x | No
Mobility X1800 XT | 16 PS / 8 VS | 256-bit / 256 MB GDDR3 | DX 9.0c / OpenGL 2.0 | 16x / 16x | No

Chipset Specifications

The Radeon X1000 series chipsets marked ATI's transition to finer process nodes, predominantly using TSMC's 90 nm fabrication for initial desktop and mobile implementations, which enabled higher transistor densities while balancing power and yield; select models later received an 80 nm shrink in late 2006 and 2007 to reduce die sizes and thermal demands without altering core architectures. These specifications span low-end to enthusiast desktop variants and corresponding mobile adaptations, with memory support evolving from DDR2/GDDR2 in entry-level parts to GDDR3 in higher tiers and GDDR4 on the X1950 XTX, and bus interfaces primarily PCIe 1.0 x16, with AGP 8x versions of many desktop models also offered. Bandwidth examples include the X1900's 256-bit GDDR3 interface at 800 MHz (1600 MT/s effective) yielding approximately 51.2 GB/s, calculated as (256 bits × 1600 MT/s) / 8 / 1000.
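The formula quoted above can be written out directly; the inputs below are the X1900 example from the text and the X1950 XTX's GDDR4 configuration from the chipset tables.

```python
# Bandwidth (GB/s) = bus width (bits) x transfer rate (MT/s) / 8 / 1000.
def bandwidth_gb_s(bus_bits, mt_per_s):
    return bus_bits * mt_per_s / 8 / 1000

print(bandwidth_gb_s(256, 1600))   # X1900 at 1600 MT/s effective -> 51.2
print(bandwidth_gb_s(256, 2000))   # X1950 XTX GDDR4 at 2000 MT/s -> 64.0
```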

Desktop Chipset Specifications

Chip | Process | Transistors | Die size | TDP | Fab | Memory types | Interfaces
RV515 (X1300) | 90 nm | 105 million | 100 mm² | 30 W | TSMC | GDDR2 | AGP 8x / PCIe 1.0 x16
RV530 (X1600 XT) | 90 nm | 157 million | 150 mm² | 42 W | TSMC | GDDR3 | PCIe 1.0 x16
R520 (X1800 XT) | 90 nm | 312 million | 288 mm² | 113 W | TSMC | GDDR3 | PCIe 1.0 x16
R580 (X1900 XTX) | 90 nm | 384 million | 352 mm² | 135 W | TSMC | GDDR3 | PCIe 1.0 x16
RV570 (X1950 Pro) | 80 nm | 312 million | 230 mm² | 72 W | TSMC | GDDR3 / GDDR4 | PCIe 1.0 x16

Mobile Chipset Specifications

Chip | Process | Transistors | Die size | TDP | Fab | Memory types | Interfaces
M54 (X1400) | 90 nm | 107 million | 100 mm² | 25 W | TSMC | DDR2 / GDDR2 | PCI Express x16
M56-GL (X1600) | 90 nm | 157 million | 150 mm² | 30 W | TSMC | GDDR3 | PCI Express x16
M58 (X1800) | 90 nm | 312 million | 288 mm² | 45 W | TSMC | GDDR3 | PCI Express x16
M66-300 (X1900) | 80 nm | 312 million | 230 mm² | 35 W | TSMC | GDDR3 | PCI Express x16

Legacy

The Radeon X1000 series competed effectively against NVIDIA's GeForce 7 lineup, excelling in Shader Model 3.0 and HDR rendering but facing challenges with driver stability and power efficiency. As ATI's final major independent release before AMD's 2006 acquisition, it influenced subsequent Radeon HD architectures by introducing Avivo and ultra-threaded shader dispatch. Official driver support ended with AMD Catalyst 10.2 (February 2010) for Windows, with legacy open-source support via Mesa continuing as of 2025.

