GDDR3 SDRAM
GDDR3 chips on an AMD Radeon HD 4670

| GDDR3 SDRAM | Type of RAM |
|---|---|
| Developer | JEDEC |
| Type | Synchronous dynamic random-access memory |
| Generation | 3rd generation |
| Predecessor | GDDR2 SDRAM |
| Successor | GDDR4 SDRAM |


GDDR3 SDRAM (Graphics Double Data Rate 3 SDRAM) is a type of DDR SDRAM specialized for graphics processing units (GPUs), offering lower access latency and greater device bandwidth than DDR2 SDRAM of the same generation. Its specification was developed by ATI Technologies in collaboration with DRAM vendors including Elpida Memory, Hynix Semiconductor, Infineon (later Qimonda) and Micron.[1] It was later adopted as a JEDEC standard.
Overview
It has much the same technological base as DDR2, but the power and heat dissipation requirements have been reduced somewhat, allowing for higher-performance memory modules and simplified cooling systems. GDDR3 is not related to the JEDEC DDR3 specification. This memory uses internal terminators, enabling it to better handle certain graphics demands. To improve throughput, GDDR3 memory transfers 4 bits of data per pin over 2 clock cycles.
The GDDR3 interface transfers two 32-bit-wide data words per clock cycle from the I/O pins. Corresponding to the 4n prefetch, a single write or read access consists of a 128-bit-wide, one-clock-cycle data transfer at the internal memory core and four corresponding 32-bit-wide, one-half-clock-cycle data transfers at the I/O pins. Single-ended, unidirectional read and write data strobes are transmitted simultaneously with read and write data, respectively, so that data is captured properly at the receivers of both the graphics SDRAM and the controller. Data strobes are organized per byte of the 32-bit-wide interface.
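As an illustration of the 4n-prefetch arithmetic described above, the following sketch works through one access on a 32-bit interface; the 500 MHz clock is only an example value, not a figure from the specification.

```python
# Illustrative arithmetic for the 4n-prefetch access described above.
# The 500 MHz interface clock is an example value, not a required figure.

INTERFACE_WIDTH_BITS = 32  # width of the I/O data bus
PREFETCH = 4               # 4n prefetch: four beats fetched per core access
BEATS_PER_CLOCK = 2        # double data rate: one beat on each clock edge

clock_hz = 500e6  # example interface clock frequency

core_transfer_bits = INTERFACE_WIDTH_BITS * PREFETCH   # 128-bit internal transfer
bus_clocks_per_access = PREFETCH / BEATS_PER_CLOCK     # spread over 2 clock cycles
peak_bandwidth_bytes_s = clock_hz * BEATS_PER_CLOCK * INTERFACE_WIDTH_BITS / 8

print(f"core transfer: {core_transfer_bits} bits over {bus_clocks_per_access:.0f} bus clock cycles")
print(f"peak bandwidth at {clock_hz / 1e6:.0f} MHz clock: {peak_bandwidth_bytes_s / 1e9:.1f} GB/s")
```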
Commercial implementation
Although the technology was designed by ATI, the first card to use it was nVidia's GeForce FX 5700 Ultra in early 2004, where it replaced the GDDR2 chips used up to that time. The next card to use GDDR3 was nVidia's GeForce 6800 Ultra, where it was key to maintaining reasonable power requirements compared to the card's predecessor, the GeForce FX 5950 Ultra. ATI began using the memory on its Radeon X800 cards. GDDR3 was Sony's choice for the PlayStation 3 game console's graphics memory, although the console's nVidia-based GPU can also access the main system memory, which consists of XDR DRAM designed by Rambus Incorporated (similar technology is marketed by nVidia as TurboCache in PC-platform GPUs). Microsoft's Xbox 360 has 512 MB of GDDR3 memory, and Nintendo's Wii also contains 64 MB of GDDR3 memory.
Advantages of GDDR3 over DDR2
- GDDR3's strobe signal, unlike DDR2 SDRAM's, is unidirectional and single-ended (RDQS, WDQS). This means there are separate read and write data strobes, allowing faster read-to-write turnaround than DDR2.
- GDDR3 has a hardware reset capability, allowing it to flush all data from memory and then start again.
- Lower voltage requirements lead to lower power consumption and lower heat output.
- Higher clock frequencies, made possible by the lower heat output, benefit throughput and allow more precise timings.
References
[edit]- ^ "ATI Technologies Promotes GDDR3". Archived from the original on 2002-12-07. Retrieved 2002-12-07.
- Gregory Agostinelli. "Method and Apparatus for Fine Tuning a Memory Interface". U.S. Patent Office.
GDDR3 SDRAM
Introduction
Definition and Purpose
GDDR3 SDRAM, or Graphics Double Data Rate 3 Synchronous Dynamic Random-Access Memory, is a specialized variant of DDR SDRAM engineered specifically for graphics processing units (GPUs). It emphasizes high bandwidth and reduced access latency to efficiently manage the demanding data flows inherent in visual rendering tasks, distinguishing it from general-purpose DDR SDRAM used in system memory. The "G" prefix highlights its graphics-oriented design, which prioritizes rapid parallel data transfers over the sequential access patterns typical of computing workloads.[6]

The primary purpose of GDDR3 SDRAM is to support the intensive parallel processing required in graphics applications, such as texture mapping, vertex shading, and frame buffer operations. These workloads involve simultaneous access to vast datasets for real-time image synthesis, where high throughput is essential to achieve smooth performance in gaming, 3D modeling, and video processing. By optimizing for GPU architectures, GDDR3 enables more effective handling of pixel and vertex data streams, reducing bottlenecks that could degrade visual quality or frame rates, unlike standard DDR SDRAM, which focuses on broad compatibility for CPU-centric tasks.[7]

This graphics-specific evolution stems from collaborative efforts by industry leaders like ATI Technologies and memory manufacturers, who tailored GDDR3 to meet the escalating demands of immersive virtual environments and high-fidelity graphics. Its architecture facilitates greater device bandwidth, making it ideal for accelerating the rendering of complex scenes without the overhead of general-purpose memory constraints.[6]

Key Characteristics
GDDR3 SDRAM employs a 4n-prefetch architecture, which enables the transfer of four bits of data per pin over two clock cycles during burst operations, facilitating efficient sequential data access optimized for graphics workloads.[8] This design supports programmable burst lengths of 4 or 8 words, emphasizing high-throughput burst transfers rather than the low-latency random access patterns typical in system memory.[8]

A key feature for signal integrity is on-die termination (ODT), implemented on both data lines and command/address buses, which minimizes reflections and improves eye diagram margins in high-speed graphics interfaces.[7] Additionally, GDDR3 includes a dedicated hardware reset pin (RESET), a VDDQ CMOS input that ensures reliable device initialization by placing outputs in a high-impedance state and disabling internal circuits during power-up, preventing undefined states.[9]

GDDR3 achieves effective data rates up to 2 Gbps per pin, translating to bandwidths of approximately 4 GB/s for 16-bit-wide chips and 8 GB/s for 32-bit-wide configurations, prioritizing overall system throughput in bandwidth-intensive applications.[1] Operating at a core voltage of 1.8 V ± 0.1 V, it consumes about half the power of preceding GDDR2 memory (at 2.5 V), resulting in reduced heat generation suitable for densely packed graphics cards.[7][8]
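The bandwidth figures quoted above follow directly from the per-pin data rate and the interface width. The short sketch below reproduces that arithmetic using the 2 Gbps/pin rate and the 16-bit and 32-bit widths mentioned in the text (illustrative values, not taken from a particular datasheet).

```python
# Peak per-chip bandwidth from per-pin data rate and bus width.
# The 2.0 Gbps/pin rate and 16/32-bit widths are the values quoted above.

def gddr3_peak_bandwidth_gb_s(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = per-pin data rate (Gbps) * data pins / 8 bits per byte."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

for width in (16, 32):
    # -> 4 GB/s for a 16-bit chip, 8 GB/s for a 32-bit chip, matching the text
    print(f"{width}-bit chip: {gddr3_peak_bandwidth_gb_s(2.0, width):.0f} GB/s")
```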
History and Development
Origins and Standardization
The development of GDDR3 SDRAM was led by ATI Technologies, which announced the specification in October 2002 in collaboration with major DRAM manufacturers including Elpida Memory, Hynix Semiconductor, Infineon Technologies, and Micron Technology.[3][2] This partnership aimed to create a memory type optimized for graphics processing, building on the foundations of prior DDR technologies while addressing specific needs of high-performance graphics cards. The effort was completed over the summer of 2002, with initial chips targeted for availability in mid-2003.[2]

ATI's initial specification for GDDR3 was proprietary, designed to overcome the bandwidth and speed limitations of GDDR2 SDRAM, which struggled with the increasing demands of advanced graphics rendering. Key focuses included achieving higher clock speeds, starting at 500 MHz and potentially reaching up to 800 MHz, to enable faster data transfer rates for graphics workloads, while also reducing power consumption compared to predecessors to support denser memory configurations up to 128 MB on graphics cards.[2][3][4] This approach leveraged elements from JEDEC's ongoing DDR-II work but tailored them for point-to-point graphics interfaces, marking one of the first instances of a market-specific DRAM specification preceding broader industry adoption.[3]

The GDDR3 specification was subsequently adopted as a formal JEDEC standard in May 2005 under section 3.11.5.7 of JESD21-C, which defined GDDR3-specific functions for synchronous graphics RAM (SGRAM).[10] This standardization process ensured compatibility across manufacturers, facilitating widespread production and integration into graphics hardware, as the collaborative foundation established by ATI and its partners enabled a seamless transition from proprietary to open implementation.[11]

Timeline of Introduction
The development of GDDR3 SDRAM, initially led by ATI Technologies in collaboration with memory manufacturers, culminated in its market debut through NVIDIA's implementation, despite ATI's foundational role in the specification. In early 2004, NVIDIA introduced the GeForce FX 5700 Ultra graphics card, which featured the first commercial use of GDDR3 memory, offering improved bandwidth over prior GDDR2 implementations in select configurations.[12][13] By mid-2004, ATI accelerated GDDR3's adoption with the launch of its Radeon X800 series on May 4, fully integrating the memory type to enhance performance in high-end GPUs and establishing it as a standard for graphics applications.

From 2005 to 2006, GDDR3 saw widespread integration across major GPU lines, including NVIDIA's GeForce 6 and 7 series (such as the GeForce 6800 GT, released in June 2004 with 256 MB of GDDR3) and ATI's Radeon X1000 series, launched on October 5, 2005, which further solidified its prevalence in consumer and professional graphics cards.[14][15] The emergence of GDDR4 in 2006, first appearing in ATI's Radeon X1950 series in August, signaled an initial shift, though GDDR3 remained dominant.[16] Production of GDDR3 tapered off in the late 2000s as GDDR5 gained dominance starting in 2008 with AMD's Radeon HD 4000 series and NVIDIA's GeForce 200 series, with manufacturing largely ceasing around 2010 to prioritize higher-performance successors.[12][17][18]

Technical Specifications
Electrical and Timing Parameters
GDDR3 SDRAM operates with a supply voltage of 1.8 V ± 0.1 V or 1.9 V ± 0.1 V for both the core (VDD) and I/O interface (VDDQ), depending on the speed grade, to ensure stable performance under varying thermal and electrical conditions.[1] This voltage level represents a reduction from prior generations, contributing to lower overall power dissipation while supporting high-speed graphics workloads.[1]

The memory achieves effective data rates from 1.4 GT/s to 2.0 GT/s per pin, driven by clock frequencies ranging from 700 MHz to 1.0 GHz, with the clock running at half the data rate to enable double data rate transfers on both clock edges.[1] At the upper limit, this corresponds to a minimum clock cycle time (tCK) of 1.0 ns, providing access times suitable for demanding rendering applications.[1]

Timing parameters are optimized for graphics throughput, with the row-to-column delay (tRCD) varying by speed grade and operation type to minimize latency in burst accesses. The CAS latency (CL) is programmable across multiple clock cycles to allow flexibility in system design. Representative values for a high-speed variant are summarized below:

| Parameter | Symbol | Value (High-Speed Grade) | Unit | Notes |
|---|---|---|---|---|
| Clock Cycle Time | tCK | 1.0 | ns | Minimum for 2.0 GT/s |
| Row-to-Column Delay (Read) | tRCD | 14 | ns | For 2.0 GT/s grade |
| Row-to-Column Delay (Write) | tRCD | 10 | ns | For 2.0 GT/s grade |
| CAS Latency | CL | 7–13 | cycles | Programmable |
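The nanosecond timings in the table can be related to the clock domain of a given speed grade. The sketch below converts the representative tRCD values to whole clock cycles, assuming the 1.0 ns tCK of a 2.0 GT/s part stated above; it is an illustrative calculation, not datasheet guidance.

```python
import math

# Convert nanosecond timing constraints from the table above into whole
# command clock cycles, assuming tCK = 1.0 ns (the 2.0 GT/s speed grade).

T_CK_NS = 1.0  # minimum clock cycle time at 2.0 GT/s

def to_cycles(t_ns: float, tck_ns: float = T_CK_NS) -> int:
    """A timing constraint in ns is rounded up to the next whole clock cycle."""
    return math.ceil(t_ns / tck_ns)

print("tRCD (read): ", to_cycles(14.0), "cycles")  # 14 ns -> 14 cycles at 1 GHz
print("tRCD (write):", to_cycles(10.0), "cycles")  # 10 ns -> 10 cycles at 1 GHz
```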