from Wikipedia
Two types of DIMMs: a 168-pin SDRAM module (top) and a 184-pin DDR SDRAM module (bottom). The SDRAM module has two notches (rectangular cuts or incisions) on the bottom edge, while the DDR1 SDRAM module has one. Also, each module has eight RAM chips, but the lower one has an unoccupied space for the ninth chip; this space is occupied in ECC DIMMs.
Three SDRAM DIMM slots on an ABIT BP6 computer motherboard.

A DIMM (Dual In-line Memory Module) is a popular type of memory module used in computers. It is a printed circuit board with one or both sides (front and back) holding DRAM chips and pins.[1] The vast majority of DIMMs are manufactured in compliance with JEDEC memory standards, although there are proprietary DIMMs. DIMMs come in a variety of speeds and capacities, and are generally one of two lengths: PC, which are 133.35 mm (5.25 in), and laptop (SO-DIMM), which are about half the length at 67.60 mm (2.66 in).[2]

History


DIMMs (Dual In-line Memory Modules) were a 1990s upgrade for SIMMs (Single In-line Memory Modules)[3][4] as Intel P5-based Pentium processors began to gain market share. The Pentium had a 64-bit bus width, which required SIMMs to be installed in matched pairs to populate the data bus; the processor would then access the two SIMMs in parallel.

DIMMs were introduced to eliminate this disadvantage. The contacts on the two sides of a SIMM are redundant, while DIMMs have separate electrical contacts on each side of the module.[5] This allowed DIMMs to double the SIMM's 32-bit data path to a 64-bit data path.[6]

The name "DIMM" was chosen as an acronym for Dual In-line Memory Module, symbolizing the split of a SIMM's contacts into two independent rows.[6] Many enhancements have been made to the modules in the intervening years, but "DIMM" has remained one of the generic terms for a computer memory module.

Form factors


Widths


DIMMs come in a number of board sizes. In order of descending size: DIMM, SO-DIMM, MiniDIMM, and MicroDIMM.

Regular DIMMs are generally 133.35 mm in length, while SO-DIMMs are generally 67.6 mm in length.[2]

SO-DIMM

Assorted SO-DIMM Modules
A DDR SO-DIMM slot on a computer motherboard.

A SO-DIMM (pronounced "so-dimm" /ˈsoʊdɪm/, also spelled SODIMM), or small outline DIMM, is a smaller alternative to a DIMM, roughly half the physical size of a regular DIMM. The first SO-DIMMs had 72 pins and were introduced by JEDEC in 1997.[7][8][9] Before their introduction, many laptops used proprietary[10] RAM modules, which were expensive and hard to find.[7][11]

SO-DIMMs are often used in computers with limited space, including laptops and notebook computers, small-footprint personal computers such as those based on Nano-ITX motherboards, high-end upgradable office printers, and networking hardware such as routers and NAS devices.[12] They are usually available with the same data path width and speed ratings as regular DIMMs, though normally in smaller capacities.

Connector

A comparison between 200-pin DDR and DDR2 SDRAM SO-DIMMs, and a 204-pin DDR3 SO-DIMM module. They share the same width but differ in pin and notch placement.[13]
16 GiB DDR4-2666 1.2 V unbuffered DIMM (UDIMM).

Different generations of memory are not interchangeable: they are neither forward compatible nor backward compatible. To make this clear and avoid confusion, the DIMM modules of each generation have different pin counts and/or different notch positions. DDR5 SDRAM is the most recent type of DDR memory and has been in use since 2020.

DIMM
  • 168-pin: SDR SDRAM
  • 184-pin: DDR SDRAM
  • 240-pin: DDR2 SDRAM and DDR3 SDRAM; notched differently from each other
  • 288-pin: DDR4 SDRAM and DDR5 SDRAM; notched differently from each other
SO-DIMM
  • 72-pin: FPM DRAM and EDO DRAM;[7] different pin configuration from 72-pin SIMM
  • 144-pin: SDR SDRAM,[7] sometimes used for DDR2 SDRAM
  • 200-pin: DDR SDRAM[7] and DDR2 SDRAM
  • 204-pin: DDR3 SDRAM
  • 260-pin: DDR4 SDRAM
  • 260-pin: UniDIMMs carrying either DDR3 or DDR4 SDRAM; notched differently from DDR4 SO-DIMMs
  • 262-pin: DDR5 SDRAM
MiniDIMM
  • 244-pin: DDR2 SDRAM
MicroDIMM
  • 144-pin: SDRAM[7]
  • 172-pin: DDR SDRAM[7]
  • 214-pin: DDR2 SDRAM

Besides pin count, physical notches differentiate incompatible types of DIMM. For example, the early 168-pin SDR SDRAM DIMM came in different voltage ratings (5.0 V or 3.3 V) and in registered (buffered) and unbuffered variants; it therefore has two notch positions whose placement prevents the insertion of the wrong type of module.

Heights


Several form factors are commonly used in DIMMs. Single Data Rate Synchronous DRAM (SDR SDRAM) DIMMs were primarily manufactured in 1.5-inch (38 mm) and 1.7-inch (43 mm) heights. When 1U rackmount servers started becoming popular, these registered DIMMs had to plug into angled DIMM sockets to fit in the 1.75-inch (44 mm) high box. To alleviate this issue, the next standards of DDR DIMMs were created with a "low profile" (LP) height of around 1.2 inches (30 mm), which fit into vertical DIMM sockets in a 1U platform.

With the advent of blade servers, angled slots have once again become common in order to accommodate LP form factor DIMMs in these space-constrained boxes. This led to the development of the Very Low Profile (VLP) form factor DIMM with a height of around 18 millimetres (0.71 in). These will fit vertically in ATCA systems.

Very similar height levels are used for SO-DIMM, Mini-DIMM and Micro-DIMM.[15]

JEDEC standard heights for DIMMs[16]

Generation   Full-height (1U) nominal   Full-height (1U) maximum   VLP nominal           VLP maximum
DDR2[17]     30.00 mm (1.181 in)        30.50 mm (1.201 in)        —                     —
DDR3[18]     30.00 mm (1.181 in)        30.50 mm (1.201 in)        18.75 mm (0.738 in)   18.90 mm (0.744 in)
DDR4[19]     31.25 mm (1.230 in)        31.40 mm (1.236 in)        18.75 mm (0.738 in)   18.90 mm (0.744 in)
DDR5[20]     31.25 mm (1.230 in)        31.40 mm (1.236 in)        —                     —
  • For DIMMs, there is a new height called 2U DIMM at 56.90 millimetres (2.240 in) nominal and 57.05 millimetres (2.246 in) max.
  • DDR5 and LPDDR5 also use CAMM2 units. These are mounted flush to the motherboard.

Notes:

  • Low profile (LP) is not a JEDEC standard.
  • The full JEDEC standards also regulate factors such as thickness.
  • SO-DIMMs for DDR4 and DDR5 maintain the traditional height of 30.00±0.15 mm; see JEDEC MO-310A and MO-337B. The height increase for "full height" DIMM does not apply to SO-DIMM.
  • It is common for higher-end consumer DDR4 DIMMs to exceed the JEDEC full height due to the use of an added heat sink. Some heat sinks add as little as 1 millimetre (0.039 in) while others add up to 5 millimetres (0.20 in).

Similar connectors


As of Q2 2017, Asus has offered a PCIe-based "DIMM.2" slot, mechanically similar to a DDR3 DIMM socket, which accepts a riser module carrying up to two M.2 NVMe solid-state drives. It cannot accept ordinary DDR memory, and the format has seen little support outside Asus.[21]

Components


Organization


Most DIMMs are built using "×4" ("by four") or "×8" ("by eight") memory chips with up to nine chips per side; "×4" and "×8" refer to the data width of the DRAM chips in bits. High-capacity DIMMs such as 256 GB DIMMs can have up to 19 chips per side.

In the case of "×4" registered DIMMs, the data width per side is 36 bits; therefore, the memory controller (which requires 72 bits) needs to address both sides at the same time to read or write the data it needs. In this case, the two-sided module is single-ranked. For "×8" registered DIMMs, each side is 72 bits wide, so the memory controller only addresses one side at a time (the two-sided module is dual-ranked).

The above example applies to ECC memory that stores 72 bits instead of the more common 64. There would also be one extra chip per group of eight, which is not counted.
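
The arithmetic above can be sketched in a few lines of Python. This is an illustrative toy, not any standard API; the function name and parameters are invented, and it assumes the standard 64-bit data bus (72-bit with ECC).

```python
# Toy model: how DRAM chip width determines the rank organization of a DIMM.
def ranks_per_module(chip_width_bits, chips_per_side, sides=2, bus_bits=64):
    chips_per_rank = bus_bits // chip_width_bits  # chips needed to fill the bus
    total_chips = chips_per_side * sides
    return total_chips // chips_per_rank

print(ranks_per_module(4, 8))   # x4 chips, 8 per side -> 1 (single-rank)
print(ranks_per_module(8, 8))   # x8 chips, 8 per side -> 2 (dual-rank)
```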

Ranking


Sometimes memory modules are designed with two or more independent sets of DRAM chips connected to the same address and data buses; each such set is called a rank. Of the ranks sharing a slot, only one may be accessed at any given time; it is selected by activating the corresponding rank's chip select (CS) signal, while the other ranks on the module are deactivated for the duration of the operation by deasserting their CS signals. DIMMs are commonly manufactured with up to four ranks per module, and consumer DIMM vendors have begun to distinguish between single- and dual-ranked DIMMs.

After a memory word is fetched, the memory is typically inaccessible for an extended period of time while the sense amplifiers are charged for access of the next cell. By interleaving the memory (e.g. cells 0, 4, 8, etc. are stored together in one rank), sequential memory accesses can be performed more rapidly because sense amplifiers have 3 cycles of idle time for recharging, between accesses.
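
A toy sketch of such interleaving, assuming a dual-rank module and a 64-bit (8-byte) bus; the mapping function is hypothetical, and real controllers use more elaborate schemes:

```python
# Toy rank-interleaving map: consecutive 64-bit words alternate between ranks,
# so one rank's sense amplifiers can recharge while the other rank is accessed.
NUM_RANKS = 2     # assumed dual-rank module
WORD_BYTES = 8    # 64-bit data bus

def rank_of(byte_address):
    return (byte_address // WORD_BYTES) % NUM_RANKS

print([rank_of(a) for a in range(0, 64, 8)])  # [0, 1, 0, 1, 0, 1, 0, 1]
```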

DIMMs are often referred to as "single-sided" or "double-sided" to describe whether the DRAM chips are located on one or both sides of the module's printed circuit board (PCB). However, these terms may cause confusion, as the physical layout of the chips does not necessarily relate to how they are logically organized or accessed. Indeed, quad-ranked DIMMs exist.

JEDEC decided that the terms "dual-sided", "double-sided", or "dual-banked" were not correct when applied to registered DIMMs (RDIMMs).

Multiplexed Rank DIMMs (MRDIMMs) allow data from multiple ranks to be transmitted on the same channel. The technology was announced for DDR5 in July 2024 and is expected to be backward compatible with DDR5 RDIMMs.[22]

SPD EEPROM


A DIMM's capacity and other operational parameters may be identified with serial presence detect (SPD), an additional chip which contains information about the module type and timing for the memory controller to be configured correctly. The SPD EEPROM connects to the System Management Bus and may also contain thermal sensors (TS-on-DIMM).[23]
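
On Linux systems that expose the SMBus through i2c-dev, the SPD EEPROM of DDR4 and earlier modules can in principle be read directly. The sketch below uses the third-party smbus2 package; the bus number and device address are assumptions that vary by machine, and DDR5's SPD5 hub uses a different access scheme.

```python
# Minimal SPD read sketch (Linux, i2c-dev enabled, smbus2 package installed).
from smbus2 import SMBus

SPD_ADDR = 0x50  # conventional address of the first DIMM slot's SPD EEPROM

with SMBus(0) as bus:  # bus number is system-specific
    # SPD byte 2 is the JEDEC "DRAM device type" key byte
    # (e.g. 0x0B = DDR3, 0x0C = DDR4, 0x12 = DDR5).
    dram_type = bus.read_byte_data(SPD_ADDR, 2)
    print(f"DRAM device type key: {dram_type:#04x}")
```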

Features


Speeds


For the various technologies, certain bus and device clock frequencies are standardized, and there is a defined nomenclature for each of these speeds for each type.

DIMMs based on Single Data Rate (SDR) DRAM have the same bus frequency for data, address and control lines. DIMMs based on Double Data Rate (DDR) DRAM transfer data at twice the clock rate: data, though not the strobe, is clocked on both the rising and falling edges of the data strobes. Power consumption and voltage gradually became lower with each generation of DDR-based DIMMs.

Another influence is Column Address Strobe (CAS) latency, or CL, which affects memory access speed: it is the delay between the READ command and the moment data becomes available. See the main articles on CAS latency and memory timing.

SDR SDRAM DIMMs

Chip      Module   Effective clock (MHz)   Transfer rate (MT/s)   Voltage (V)
SDR-66    PC-66    66                      66                     3.3
SDR-100   PC-100   100                     100                    3.3
SDR-133   PC-133   133                     133                    3.3

DDR SDRAM (DDR1) DIMMs

Chip      Module    Memory clock (MHz)   I/O bus clock (MHz)   Transfer rate (MT/s)   Voltage (V)
DDR-200   PC-1600   100                  100                   200                    2.5
DDR-266   PC-2100   133                  133                   266                    2.5
DDR-333   PC-2700   166                  166                   333                    2.5
DDR-400   PC-3200   200                  200                   400                    2.6

DDR2 SDRAM DIMMs

Chip        Module     Memory clock (MHz)   I/O bus clock (MHz)   Transfer rate (MT/s)   Voltage (V)
DDR2-400    PC2-3200   100                  200                   400                    1.8
DDR2-533    PC2-4200   133                  266                   533                    1.8
DDR2-667    PC2-5300   166                  333                   667                    1.8
DDR2-800    PC2-6400   200                  400                   800                    1.8
DDR2-1066   PC2-8500   266                  533                   1066                   1.8

DDR3 SDRAM DIMMs

Chip        Module      Memory clock (MHz)   I/O bus clock (MHz)   Transfer rate (MT/s)   Voltage (V)
DDR3-800    PC3-6400    100                  400                   800                    1.5
DDR3-1066   PC3-8500    133                  533                   1066                   1.5
DDR3-1333   PC3-10600   166                  667                   1333                   1.5
DDR3-1600   PC3-12800   200                  800                   1600                   1.5
DDR3-1866   PC3-14900   233                  933                   1866                   1.5
DDR3-2133   PC3-17000   266                  1066                  2133                   1.5
DDR3-2400   PC3-19200   300                  1200                  2400                   1.5

DDR4 SDRAM DIMMs

Chip        Module      Memory clock (MHz)   I/O bus clock (MHz)   Transfer rate (MT/s)   Voltage (V)
DDR4-1600   PC4-12800   200                  800                   1600                   1.2
DDR4-1866   PC4-14900   233                  933                   1866                   1.2
DDR4-2133   PC4-17000   266                  1066                  2133                   1.2
DDR4-2400   PC4-19200   300                  1200                  2400                   1.2
DDR4-2666   PC4-21300   333                  1333                  2666                   1.2
DDR4-3200   PC4-25600   400                  1600                  3200                   1.2

DDR5 SDRAM DIMMs

Chip        Module      Memory clock (MHz)   I/O bus clock (MHz)   Transfer rate (MT/s)   Voltage (V)
DDR5-4000   PC5-32000   2000                 2000                  4000                   1.1
DDR5-4400   PC5-35200   2200                 2200                  4400                   1.1
DDR5-4800   PC5-38400   2400                 2400                  4800                   1.1
DDR5-5200   PC5-41600   2600                 2600                  5200                   1.1
DDR5-5600   PC5-44800   2800                 2800                  5600                   1.1
DDR5-6000   PC5-48000   3000                 3000                  6000                   1.1
DDR5-6200   PC5-49600   3100                 3100                  6200                   1.1
DDR5-6400   PC5-51200   3200                 3200                  6400                   1.1
DDR5-6800   PC5-54400   3400                 3400                  6800                   1.1
DDR5-7200   PC5-57600   3600                 3600                  7200                   1.1
DDR5-7600   PC5-60800   3800                 3800                  7600                   1.1
DDR5-8000   PC5-64000   4000                 4000                  8000                   1.1
DDR5-8400   PC5-67200   4200                 4200                  8400                   1.1
DDR5-8800   PC5-70400   4400                 4400                  8800                   1.1

Error correction


ECC DIMMs are those that have extra data bits which can be used by the system memory controller to detect and correct errors. There are numerous ECC schemes, but perhaps the most common is Single Error Correct, Double Error Detect (SECDED) which uses an extra byte per 64-bit word. ECC modules usually carry a multiple of 9 instead of a multiple of 8 chips as a result.
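
The SECDED idea can be demonstrated with an extended Hamming code over a single byte. This is a toy sketch for illustration only; real DIMM ECC applies the same construction to 64-bit words with 8 check bits.

```python
# Toy SECDED: extended Hamming code over an 8-bit word (13-bit codeword).
DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]  # non-power-of-two codeword positions
PARITY_POS = [1, 2, 4, 8]               # Hamming parity positions

def encode(byte):
    code = [0] * 13                     # index 0 holds the overall parity bit
    for i, pos in enumerate(DATA_POS):
        code[pos] = (byte >> i) & 1
    for p in PARITY_POS:                # parity p covers positions with bit p set
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    code[0] = sum(code) % 2             # overall parity: double-error detection
    return code

def decode(code):
    syndrome = sum(p for p in PARITY_POS
                   if sum(code[i] for i in range(1, 13) if i & p) % 2)
    if syndrome and sum(code) % 2:      # odd overall parity: single-bit error
        code[syndrome] ^= 1             # syndrome equals the error position
    elif syndrome:                      # even overall parity: double-bit error
        raise ValueError("uncorrectable double-bit error")
    return sum(code[pos] << i for i, pos in enumerate(DATA_POS))

word = 0b10110010
cw = encode(word)
cw[6] ^= 1                              # simulate a single-bit soft error
assert decode(cw) == word               # the error is corrected transparently
```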

Register/buffer


It is electrically demanding for a memory controller to drive many DIMMs. Registered DIMMs add a hardware register to the clock, address, and command lines so that these signals are re-driven on the DIMM, reducing the load on the memory controller. Variants include the LRDIMM, with all lines buffered, and the CUDIMM/CSODIMM, with only the clock signal buffered. The register feature often appears together with ECC, but the two do not depend on each other and can occur independently.

from Grokipedia
A Dual In-line Memory Module (DIMM) is a type of computer memory hardware consisting of a small printed circuit board populated with multiple random-access memory (RAM) chips, featuring pins on both sides for connecting to a motherboard and enabling a 64-bit data path for efficient temporary data storage and retrieval in desktops, laptops, workstations, and servers. DIMMs evolved from earlier single in-line memory modules (SIMMs) in the early 1990s to support the 64-bit architecture of processors like Intel's Pentium, addressing the limitations of 32-bit SIMMs by doubling the data bandwidth through independent pin connections on each side of the module. Initially featuring 168-pin connectors, DIMMs were standardized by JEDEC for interoperability and quickly became the dominant form factor for PC memory upgrades by the mid-1990s. Over time, they have incorporated advancements in DRAM technology, progressing from synchronous DRAM (SDRAM) to double data rate (DDR) variants, with modern iterations like DDR5 supporting capacities up to 128 GB per module and clock speeds exceeding 8,000 MT/s. Key types of DIMMs include unbuffered DIMMs (UDIMMs), which are non-ECC modules commonly used in consumer desktops and laptops for cost-effective performance; registered DIMMs (RDIMMs), which incorporate a register to reduce signal load and enhance stability in multi-module server configurations; and load-reduced DIMMs (LRDIMMs), designed for high-capacity environments by using a memory buffer to minimize signal degradation. Consumer-grade DIMMs, typically UDIMMs without ECC, are cheaper and support higher clock speeds for gaming and daily use, but they lack error correction and are suitable for home PCs rather than 24/7 continuous loads. In contrast, server-grade or industrial-grade DIMMs, such as RDIMMs or LRDIMMs with ECC, detect and correct single-bit errors (and detect multi-bit errors), are somewhat more expensive, may operate at lower frequencies for enhanced stability, and are designed for continuous operation and reliability. Smaller outline variants, known as SO-DIMMs, adapt the DIMM design for compact devices like laptops, measuring about half the length of standard DIMMs while maintaining similar pin counts in later generations (e.g., 260 pins for DDR4). DIMMs often include features like error-correcting code (ECC) for data integrity in enterprise applications, heat spreaders for thermal management in high-density setups, and support for multi-channel configurations to boost overall system throughput.

Overview

Definition and Purpose

A Dual In-line Memory Module (DIMM) is a type of random-access memory (RAM) module consisting of multiple dynamic random-access memory (DRAM) chips mounted on a printed circuit board, featuring independent pins on both sides of the board to enable separate electrical connections and addressing. This design allows for a wider data pathway compared to earlier modules, facilitating efficient data transfer within computer systems. The primary purpose of a DIMM is to serve as high-capacity, high-speed main memory that temporarily stores data and instructions for quick access by the processor, supporting the operational needs of various computing devices. It enables users to easily upgrade and expand system memory by installing additional modules into motherboard slots, thereby improving overall performance without requiring complex hardware modifications. DIMMs are commonly used in desktop personal computers, workstations, and servers to handle demanding workloads such as data processing and multitasking. Variants have evolved for use in laptops, adapting the form factor while retaining core functionality. At its core, a DIMM operates by storing data in its integrated DRAM chips, which are organized to provide access via a standard 64-bit wide data bus for transferring information to and from the system's memory controller. This configuration ensures reliable, low-latency retrieval of volatile data essential for running applications and operating systems.

Advantages Over Predecessors

DIMM modules introduced significant improvements over their predecessors, particularly Single In-line Memory Modules (SIMMs), by enabling independent electrical contacts on both sides of the module. This design allows for a native 64-bit data path without the need for interleaving or pairing modules, effectively doubling the bandwidth compared to the 32-bit paths of SIMMs. In terms of scalability, DIMMs supported higher capacities per module, reaching up to 128 MB in their initial implementations during the mid-1990s, compared to the typical 16-32 MB limits of SIMMs at the time. This advancement facilitated easier multi-module configurations, allowing for greater overall memory expansion without the constraints of the paired installations required by SIMMs. The DIMM's architecture was specifically tailored for compatibility with 64-bit processors, such as Intel's Pentium series, which featured a 64-bit external data bus. Unlike SIMMs, which necessitated the use of two modules in tandem to achieve full bus utilization, a single DIMM could populate the entire data bus, streamlining system design and reducing complexity. From a manufacturing and efficiency standpoint, the standardized dual-sided layout of DIMMs simplified production processes and minimized signal interference through independent electrical contacts on each side of the module. This resulted in lower power consumption (3.3 V operation versus SIMMs' 5 V) and enhanced reliability in high-density configurations, making DIMMs more cost-effective to manufacture and deploy.

Historical Development

Origins in the 1990s

The Dual In-Line Memory Module (DIMM) emerged in the early 1990s as a response to the evolving demands of computing architectures requiring wider memory interfaces. The Intel Pentium processor, released in March 1993, featured a 64-bit external data bus, necessitating a shift from the 32-bit Single In-Line Memory Module (SIMM) design, which required pairing two modules to achieve the necessary bandwidth. This transition addressed the limitations of SIMM configurations in supporting higher data throughput without increasing design complexity. JEDEC, the Joint Electron Device Engineering Council, played a pivotal role in formalizing the SDRAM standard in 1993, with the 168-pin DIMM mechanical specification following in 1995 as a standardized successor to SIMMs, specifically tailored for 64-bit systems. The initial DIMM design incorporated Extended Data Out (EDO) Dynamic Random-Access Memory (DRAM) chips, which improved access times over prior Fast Page Mode (FPM) DRAM by allowing data output to begin before the next address was fully latched. JEDEC's standardization efforts focused on establishing interoperability through precise electrical characteristics, such as signal timing and voltage levels, and mechanical features like pin layouts and connector notches to prevent incorrect insertions. Early commercial adoption of DIMMs began in 1994, primarily in personal computers and workstations equipped with Pentium processors, where they simplified memory expansion by providing a single module for 64-bit access. The 168-pin configuration quickly gained prominence as the standard for subsequent Synchronous DRAM (SDRAM) implementations, enabling broader compatibility across vendors. JEDEC's collaborative process involved industry stakeholders in iterative reviews to refine these specifications, ensuring reliable performance in emerging 64-bit environments without proprietary variations.

Key Milestones and Transitions

The transition to Synchronous Dynamic Random-Access Memory (SDRAM) marked a pivotal shift in DIMM technology during the mid-1990s, with widespread adoption of 168-pin SDR DIMMs occurring between 1996 and 1997 as they replaced earlier Fast Page Mode (FPM) and Extended Data Out (EDO) modules. This change synchronized memory operations with the system clock, enabling higher speeds and better performance in personal computers and early servers compared to asynchronous predecessors. The introduction of Double Data Rate (DDR) SDRAM in 2000 represented the next major evolution, launching 184-pin DDR DIMMs that effectively doubled data transfer rates over SDRAM by capturing data on both rising and falling clock edges. This standard, formalized as JESD79-1 in June 2000, quickly gained traction in consumer and enterprise systems. Subsequent generations followed: DDR2 SDRAM in 2003 with 240-pin DIMMs under JESD79-2, offering improved power efficiency and higher bandwidth; and DDR3 SDRAM in 2007, also using 240-pin configurations via JESD79-3, which further reduced operating voltages to 1.5 V while supporting greater module capacities. More recent advancements include DDR4 SDRAM, standardized in September 2012 under JESD79-4 and entering the market in 2014 with 288-pin DIMMs designed for higher densities and speeds up to 3200 MT/s. DDR5 SDRAM followed in July 2020 via JESD79-5, retaining the 288-pin form factor but incorporating an on-module power management integrated circuit (PMIC) to enhance power delivery and efficiency, with initial speeds reaching 4800 MT/s and updates supporting speeds up to 9200 MT/s as of October 2025. These transitions have profoundly influenced industry adoption, particularly in servers, where Registered DIMMs (RDIMMs) became prevalent in the 2000s to handle higher channel populations and ensure signal integrity in multi-socket environments. Capacity per DIMM, driven by advancements aligned with Moore's-law-style exponential density increases, grew from a typical 256 MB in the early DDR era to up to 512 GB per module in DDR5 configurations as of 2025, enabling scalable server memory architectures.

Physical Design

Form Factors and Dimensions

The standard full-size Dual In-line Memory Module (DIMM) measures 133.35 mm in length, 31.25 mm in height, and approximately 4 mm in thickness, adhering to mechanical outline specifications such as MO-309 for DDR4 variants. This form factor features a gold-plated edge connector with 240 pins for DDR3 modules and 288 pins for DDR4 modules, ensuring reliable electrical contact and compatibility with desktop and server motherboards. The dimensions provide a balance between component density and ease of insertion into standard sockets, with tolerances defined by JEDEC to maintain interchangeability across manufacturers. A compact variant, the Small Outline DIMM (SO-DIMM), is designed for laptops and space-constrained systems, measuring 67.6 mm in length while retaining a height of approximately 30 mm and a thickness of 3.8 mm, as outlined in JEDEC standards for SO-DIMMs. SO-DIMMs use 200 pins for DDR2, 204 pins for DDR3, 260 pins for DDR4, and 262 pins for DDR5, depending on the generation, offering a thinner profile that fits into narrower enclosures without compromising performance in mobile applications. Unbuffered DIMMs (UDIMMs) and registered DIMMs (RDIMMs) share the core form factor but differ slightly in height due to the additional register chip on RDIMMs, which can increase the overall module height by up to 1-2 mm in some designs for better thermal dissipation. Both types may include optional heat spreaders (aluminum or copper plates attached to the PCB) for enhanced thermal management under high load, though these add minimal thickness (typically 0.5-1 mm) and are not part of the base JEDEC outline. Notch positions on the edge connector serve as keying mechanisms: the primary notch differentiates unbuffered (right position), registered (middle), and reserved/future-use (left) configurations to prevent incompatible insertions, while a secondary voltage key notch ensures proper voltage alignment. JEDEC specifications also define precise mechanical tolerances, including a PCB thickness of 1.27 mm ±0.1 mm and edge connector lead spacing of 1.0 mm for DDR3 and 0.85 mm for DDR4 DIMMs, ensuring robust mechanical integrity and alignment during socket insertion. These parameters, along with guidelines for hole spacing in manufacturing, support consistent production and prevent issues like warping or misalignment in assembled systems.

Pin Configurations

The pin configurations of Dual In-line Memory Modules (DIMMs) define the electrical interfaces between the module and the system motherboard, encompassing signal lines for data, addresses, commands, clocks, power, and ground, while deliberately ensuring incompatibility between generations through distinct layouts. These configurations evolve with each DDR iteration to support higher densities, faster signaling, and improved signal integrity, standardized by the Joint Electron Device Engineering Council (JEDEC). The 168-pin synchronous DRAM (SDRAM) DIMM, introduced for single data rate operation, features 84 pins per side of the printed wiring board (PWB), operating at 3.3 V. It allocates 12 to 13 address pins for row and column selection (A0–A12), 64 data input/output pins (DQ0–DQ63) for the primary 64-bit wide bus, and dedicated control pins including Row Address Strobe (RAS#), Column Address Strobe (CAS#), and Write Enable (WE#), along with clock (CLK), chip select (CS#), and bank address lines (BA0–BA1). Power (VDD) and ground (VSS) pins are distributed throughout for stable supply, with additional pins for optional error correction (ECC) in 72-bit variants using check bits (CB0–CB7). Succeeding it, the 184-pin Double Data Rate (DDR) SDRAM DIMM maintains a similar structure but increases to 92 pins per side, reducing VDD and VDDQ to 2.5 V to enable higher speeds while preserving the 64-bit data bus (DQ0–DQ63). Key enhancements include differential clock pairs (CK and CK#) for reduced noise, strobe signals (DQS and DQS#) per byte lane for data synchronization, and multiplexed address/command pins (A0–A12, BA0–BA1) that combine row/column and bank addressing. Control signals like RAS#, CAS#, and WE# persist, with power and ground pins similarly interspersed, and an optional ECC extension to 72 bits. The 240-pin configurations for DDR2 and DDR3 SDRAM DIMMs expand to 120 pins per side, supporting 1.8 V operation for DDR2 and 1.5 V for DDR3, with provisions for additional bank addressing (up to BA0–BA2) via extra pins (A13, A14 in higher densities) to handle increased internal banks (up to 16). Both retain the 64-bit DQ bus with per-byte DQS/DQS# pairs and differential clocks, but DDR3 introduces a fly-by topology in which address, command, and clock signals daisy-chain across ranks on the module for improved signal integrity and reduced skew, compared to the T-branch topology in DDR2. Control pins (RAS#, CAS#, WE#, ODT for on-die termination) and power/ground distribution evolve accordingly, with 72-bit ECC support. Modern 288-pin DDR4 and DDR5 DIMMs use 144 pins per side, operating at 1.2 V for DDR4 and introducing further refinements in DDR5, including dual 32-bit sub-channels per module for better efficiency. DDR4 employs a fly-by topology with POD (Pseudo Open Drain) signaling on data lines for lower power and signal swing, featuring 17 row bits (A0–A16), bank groups (BG0–BG1), and banks (BA0–BA1), alongside the 64-bit DQ bus with DQS/DQS# and differential CK/CK#. DDR5 builds on this with on-die ECC integrated into each DRAM device (eliminating module-level ECC pins in base configurations), POD signaling across more lines, and dedicated pins for the Power Management Integrated Circuit (PMIC), which regulates voltages like VDD (1.1 V) and VPP from a 12 V input. Control signals include enhanced CS#, CKE, and parity bits for command/address reliability, with power/ground pins optimized for multi-rank support up to 8 ranks.
To prevent cross-compatibility issues, DIMMs incorporate keying notches at specific positions along the pin edge: for example, the notch for 168-pin SDR is centered differently from the offset position in 184-pin DDR (around pin 92), while 240-pin DDR2/DDR3 notches are further shifted (near pin 120), and 288-pin DDR4/DDR5 notches are positioned even more offset (around pin 144) to ensure physical mismatch with prior sockets.
Generation            Pin count   Voltage (VDD)   Key signals                                            Topology/signaling notes
SDR (168-pin)         168         3.3 V           A0–A12, DQ0–DQ63, RAS#/CAS#/WE#                        Single-ended clock; T-branch
DDR (184-pin)         184         2.5 V           A0–A12, BA0–BA1, DQ0–DQ63, DQS/DQS#                    Differential clock pairs; T-branch
DDR2/DDR3 (240-pin)   240         1.8 V / 1.5 V   A0–A14, BA0–BA2, DQ0–DQ63                              Fly-by (DDR3); increased banks
DDR4/DDR5 (288-pin)   288         1.2 V / 1.1 V   A0–A16/17, BG/BA, DQ0–DQ63 (dual sub-channels, DDR5)   Fly-by; POD signaling; PMIC pins (DDR5)

Memory Architecture

Internal Organization

A Dual In-Line Memory Module (DIMM) internally organizes DRAM chips to provide a standardized 64-bit (or 72-bit for ECC variants) data interface to the system memory controller. The chips are mounted on the module's printed circuit board with their data pins (DQ) connected in parallel to form the module's data width. Typically, unbuffered DIMMs use 8 to 18 DRAM chips to achieve this width, depending on the chip's data organization (x4, 4 bits per chip, requiring 16 chips per rank for 64 bits; x8, 8 bits per chip, requiring 8 chips per rank; or x16, 16 bits per chip, requiring 4 chips per rank) and the presence of error-correcting code (ECC) chips, which add one extra chip per rank. The total capacity of a DIMM is calculated from the number of chips, each chip's density (expressed in gigabits, Gb), and the overall structure, converting total bits to bytes via division by 8. For a single-rank unbuffered non-ECC DIMM using x8 organization, the formula simplifies to total capacity (in GB) = (number of chips × chip density in Gb) / 8; for example, 8 chips each of 8 Gb density yield (8 × 8) / 8 = 8 GB. This scales with higher-density chips or additional ranks, enabling modules from 1 GB to 128 GB or more in modern configurations. Addressing within a DIMM follows the standard DRAM row-and-column multiplexed scheme, where the memory controller sends row addresses followed by column addresses over shared pins to select data locations. In DDR4, each DRAM chip includes 16 banks divided into 4 bank groups (with 4 banks per group), supporting fine-grained parallelism by allowing independent access to different groups while minimizing conflicts; DDR5 extends this to 32 banks organized into 8 bank groups. Row addresses typically span 14 to 18 bits (16K to 256K rows), and column addresses use 9 to 10 bits (512 to 1K columns), varying by density and organization. DIMM rank structure defines how chips are grouped for access: a single-rank module connects all chips to the same chip-select (CS) and control signals, treating them as one accessible unit for simpler, lower-density designs. In contrast, a dual-rank module interleaves two independent sets of chips (often placed on opposite sides of the PCB) with distinct CS signals, enabling the controller to alternate accesses between ranks for higher effective throughput and density, though at the potential cost of slightly increased latency due to rank switching.
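
A minimal sketch of this capacity arithmetic in Python (illustrative; the helper name and parameters are invented, not part of any standard):

```python
# DIMM capacity from chips per rank, per-chip density (gigabits), and ranks.
def dimm_capacity_gb(chips_per_rank, chip_density_gbit, ranks=1):
    return chips_per_rank * chip_density_gbit * ranks / 8  # bits -> bytes

print(dimm_capacity_gb(8, 8))             # 8 x 8 Gb, single-rank ->  8.0 GB
print(dimm_capacity_gb(8, 16, ranks=2))   # 16 x 16 Gb, dual-rank -> 32.0 GB
```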

Channel Ranking

In memory systems, channel ranking refers to the organization of ranks across one or more DIMMs connected to a single memory channel, where a rank constitutes a 64-bit (or 72-bit with ECC) wide set of DRAM chips that can be accessed simultaneously via shared chip-select signals. Single-rank configurations, featuring one such set per DIMM, prioritize simplicity and potentially higher operating speeds due to lower electrical loading on the channel, while multi-rank setups, such as dual-rank or quad-rank DIMMs, enable greater density by allowing multiple independent 64-bit accesses per channel through rank interleaving, though they introduce overhead from rank switching. Common configurations include dual-channel architectures, prevalent in consumer and entry-level server platforms, where two independent 64-bit channels operate in parallel to achieve an effective 128-bit data width and double the bandwidth of a single-channel setup; this typically involves populating one or two DIMMs per channel for balanced performance and capacity. In high-end servers, quad-channel configurations extend this to four 64-bit channels for an effective 256-bit width, quadrupling bandwidth and supporting denser populations, such as multiple multi-rank DIMMs per channel, to maximize system-scale capacity. Increasing the number of ranks per channel enhances overall capacity but can degrade maximum achievable speeds owing to heightened bus loading, which amplifies signal integrity challenges and necessitates timing adjustments such as extended all-bank row active times. Unbuffered DIMMs (UDIMMs) are generally limited to 2-4 total ranks per channel to mitigate excessive loading, restricting them to one or two DIMMs in most setups. To address this, registered DIMMs (RDIMMs) employ a register to buffer command and address signals, reducing the load on those lines and enabling up to three DIMMs per channel without proportional speed penalties. Load-reduced DIMMs (LRDIMMs) further optimize by fully buffering data, command, and address signals via an isolation memory buffer, which supports daisy-chained topologies and allows up to three DIMMs per channel even with higher-rank modules, prioritizing density in large-scale servers.

Performance Specifications

Data Speeds and Transfer Rates

Data speeds for DIMMs are typically measured in megatransfers per second (MT/s), the number of data transfers occurring per second on the memory bus. This metric reflects the effective data rate, accounting for the double data rate (DDR) mechanism in which data is transferred on both the rising and falling edges of the clock signal. For instance, a DDR4-3200 DIMM operates at 3200 MT/s, enabling high-throughput data movement between the memory modules and the system memory controller. The evolution of DIMM speeds has progressed significantly across generations, starting from synchronous DRAM (SDRAM) DIMMs with clock speeds of 66-133 MHz (equivalent to 66-133 MT/s due to single data rate operation). Subsequent DDR generations doubled and then multiplied these rates: DDR1 reached 266-400 MT/s, DDR2 advanced to 533-800 MT/s, DDR3 to 800-2133 MT/s, and DDR4 to 2133-3200 MT/s. DDR5, the current standard as of 2025, begins at 4800 MT/s and extends up to 9200 MT/s per JEDEC specifications, representing a substantial increase in transfer capabilities for modern computing demands.
Generation   Standard MT/s range   Peak bandwidth per DIMM (GB/s)
SDRAM        66-133                0.53-1.06
DDR1         266-400               2.1-3.2
DDR2         533-800               4.3-6.4
DDR3         800-2133              6.4-17.1
DDR4         2133-3200             17.1-25.6
DDR5         4800-9200             38.4-73.6
Bandwidth, the maximum data transfer rate per DIMM, is calculated using the formula

$$\text{Bandwidth (GB/s)} = \frac{\text{MT/s} \times 64\ \text{bits}}{8 \times 1000}$$

which simplifies to MT/s × 8 bytes per transfer, divided by 1000 for gigabytes, assuming a standard 64-bit wide bus. For example, a DDR5-6400 DIMM achieves 51.2 GB/s, doubling the effective bandwidth of comparable DDR4 modules through higher transfer rates and architectural optimizations like dual 32-bit sub-channels per DIMM. To exceed JEDEC-standard speeds, users often employ overclocking via Intel Extreme Memory Profile (XMP) technology, which embeds pre-configured profiles in the DIMM's serial presence detect (SPD) EEPROM. These profiles automatically adjust clock speeds, timings, and voltages (for example, raising the supply from 1.2 V to 1.35 V or higher) for non-standard operation, such as pushing DDR5 beyond 6400 MT/s to 8000 MT/s or more, provided the motherboard and cooling support it. Such overclocking requires stability testing to avoid data corruption.
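
As a quick sanity check of the formula, a sketch in Python (illustrative only):

```python
# Peak per-DIMM bandwidth from the transfer rate, assuming a 64-bit bus.
def bandwidth_gbps(mt_per_s, bus_bits=64):
    return mt_per_s * bus_bits / 8 / 1000  # bytes per transfer, then GB/s

print(bandwidth_gbps(3200))  # DDR4-3200 -> 25.6 GB/s
print(bandwidth_gbps(6400))  # DDR5-6400 -> 51.2 GB/s
```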

Timings and Latency

DIMM performance is significantly influenced by timing parameters that dictate the delays in accessing and refreshing data within the memory modules. These timings, specified in clock cycles, determine how quickly the memory can respond to read or write requests from the processor. The primary timing metrics include CAS latency (CL), which measures the number of clock cycles between a column address strobe (CAS) command and the availability of the first data bit; row-to-column delay (tRCD), representing the cycles needed to activate a row and then select a column within it; and row precharge time (tRP), the cycles required to close the current row and prepare for the next row activation. For instance, a typical DDR4 DIMM might operate at CL22, tRCD 22, and tRP 22, as standardized by JEDEC for modules like DDR4-3200. To convert these cycle-based timings into practical measures of responsiveness, the effective latency in nanoseconds is

$$\text{Effective latency (ns)} = \frac{\text{CL}}{\text{Data rate (MT/s)}\,/\,2000}$$

which accounts for the memory's clock frequency: the data rate in megatransfers per second (MT/s) is divided by 2000 to yield the clock frequency in GHz. For example, a DDR4-3200 module with CL22 yields an effective CAS latency of approximately 14 ns (22 / (3200 / 2000) = 13.75 ns), providing a benchmark for comparing responsiveness across different speeds. Similarly, tRCD and tRP can be converted using the same divisor to assess full access times, often resulting in latencies around 11-14 ns for standard DDR4 configurations. This metric highlights how higher data rates can mitigate the impact of increased cycle counts in faster generations. Trade-offs in these timings balance latency improvements against power consumption and stability. Lower CL, tRCD, or tRP values reduce access delays, enhancing responsiveness for latency-sensitive applications like gaming or real-time processing, but they demand higher voltage or more robust signaling, increasing power draw, typically by 10-20% for aggressive timings. Generational advancements have progressively lowered these latencies; DDR5 DIMMs, for example, achieve tCL values of 32-40 cycles at speeds up to 6400 MT/s, translating to effective latencies of about 10-12.5 ns, thanks to on-die error correction and refined bank architectures that allow tighter timings without excessive power penalties. Timings are formally defined by JEDEC standards to ensure interoperability, with modules tested for compliance at specified voltages and temperatures, but real-world performance often varies due to overclocking profiles or system-specific factors. Memory benchmarking tools measure actual read/write latencies, revealing differences of 1-3 ns between JEDEC-compliant operation and optimized setups on modern platforms, underscoring the gap between standardized specs and practical deployment.
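
The same conversion, expressed as a small Python helper (illustrative only):

```python
# Convert a CAS latency in clock cycles to nanoseconds; dividing the transfer
# rate in MT/s by 2000 gives the DDR clock frequency in GHz.
def cas_latency_ns(cl_cycles, data_rate_mt_s):
    return cl_cycles / (data_rate_mt_s / 2000)

print(cas_latency_ns(22, 3200))  # DDR4-3200 CL22 -> 13.75 ns
print(cas_latency_ns(36, 6400))  # DDR5-6400 CL36 -> 11.25 ns
```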

Integrated Features

Serial Presence Detect (SPD)

Serial Presence Detect (SPD) is a standardized feature on DIMM modules that enables automatic configuration by storing essential module parameters in a non-volatile EEPROM chip. This chip, typically ranging from 256 bytes for older DDR generations to 512 bytes for DDR4, is accessible via the System Management Bus (SMBus) or I²C protocol, allowing the host system to query the module without manual intervention. The primary function of SPD is to provide the BIOS or UEFI firmware with accurate details about the installed memory, ensuring compatibility and optimal operation by preventing configuration mismatches that could lead to instability or failure. The JEDEC-defined SPD format organizes data into structured fields within the EEPROM, including the module's manufacturing information, such as the manufacturer ID (a JEDEC-assigned code) and serial number for unique identification. Key operational parameters stored encompass the module's capacity (e.g., total density in gigabits), supported speeds (e.g., maximum clock frequency), and timings (e.g., CAS latency and row access times). Additional fields cover supported operating voltages (e.g., 1.2 V for DDR4) and optional profiles like Extreme Memory Profile (XMP), which encode overclocking settings for enhanced performance beyond standard JEDEC limits. These fields are encoded in binary or ASCII formats, with the first 128 bytes dedicated to core JEDEC parameters and subsequent bytes for vendor-specific or extended data. During system initialization, the motherboard's firmware reads the SPD data from the EEPROM over the dedicated SMBus lines (typically pins on the DIMM connector) at boot time, using the address assigned to the SPD device (e.g., 0x50 or 0x51). This process allows the system to automatically program the memory controller with the appropriate voltage, frequency, and timing values derived from the SPD, thereby configuring the memory subsystem for reliable operation. If multiple modules are present, the firmware compares their SPD data to determine common settings, avoiding incompatibilities such as differing speeds or ranks. This read-only interaction (with write protection enabled post-manufacture) ensures data integrity and simplifies installation for end users. The SPD specification has evolved to accommodate advancing memory technologies, with DDR5 introducing an expanded 1024-byte EEPROM capacity under the JESD400-5 standard to support more complex configurations. As of the October 2025 update to version 1.4, enhancements include additional fields for power management integrated circuit (PMIC) data, enabling finer control over on-module power delivery, alongside support for higher speeds up to DDR5-9200. These updates reflect the growing demands of DDR5's dual-channel architecture and integrated features, while maintaining backward compatibility with core principles.

Error Correction and Reliability

Error-Correcting Code (ECC) enhances the reliability of DIMMs by incorporating parity bits that enable the detection and correction of data errors arising from transient faults. In standard implementations, ECC appends 8 parity bits to 64 data bits, creating a 72-bit codeword based on an extended Hamming code that corrects single-bit errors and detects double-bit errors. ECC is typically realized through dedicated parity chips mounted on the DIMM or, in newer designs, integrated directly into the DRAM devices. Registered DIMMs (RDIMMs), optimized for server workloads, incorporate ECC as a standard feature to safeguard against errors in high-density configurations. Unbuffered DIMMs (UDIMMs), prevalent in desktop and client systems, provide ECC support optionally, allowing flexibility based on application needs. Consumer-grade RAM typically consists of non-ECC UDIMMs, which are cheaper and can support higher clock speeds for gaming and daily use but lack error detection and correction, making them suitable for home PCs yet unsuitable for 24/7 continuous loads due to potential data integrity issues. In contrast, industrial-grade and server-grade RAM, such as RDIMMs and Load-Reduced DIMMs (LRDIMMs) with ECC, are designed for continuous operation in demanding environments, correcting single-bit errors and detecting multi-bit errors for enhanced reliability, though they may be slightly more expensive and operate at potentially lower frequencies to prioritize stability over maximum speed. By correcting single-bit soft errors (transient bit flips often induced by cosmic rays or alpha particles), ECC significantly reduces the soft error rate (SER), which quantifies error occurrences per unit of memory and time. Complementary memory scrubbing periodically reads data blocks, applies ECC correction if needed, and rewrites the verified content, preempting multi-bit error escalation and bolstering overall system dependability. DDR5 DIMMs introduce on-die ECC, which internally corrects single-bit errors within individual DRAM chips to enhance manufacturing yields and operational stability, independent of external ECC. In server-grade setups, Chipkill ECC advances reliability further by tolerating multi-bit failures, such as those from an entire failing chip, through redundant data distribution and Reed-Solomon coding across modules.

Types and Variants

Standard Full-Size DIMMs

Standard full-size DIMMs represent the baseline form factor for desktop computers and entry-level servers, with unbuffered and basic registered implementations that prioritize compatibility and cost-effectiveness across memory generations. These modules adhere to JEDEC standards, providing a standardized interface for integrating dynamic random-access memory (DRAM) into mainstream computing systems. With a typical physical footprint of 133.35 mm in length and 31.25 mm in height, they support efficient operation in non-buffered configurations suitable for consumer-grade applications. Key characteristics include a 288-pin connector for the DDR4 and DDR5 generations, enabling high-density data paths without additional buffering in unbuffered variants (UDIMMs). These DIMMs support capacities up to 64 GB per module for DDR4 UDIMMs under JEDEC specifications, extending to 128 GB for DDR5 unbuffered modules through advancements in die stacking and error correction integration. Primarily deployed in desktops and entry-level servers, they facilitate reliable performance in environments requiring up to dual-processor support without the overhead of advanced buffering. Consumer-grade UDIMMs, which typically lack error-correcting code (ECC) functionality, are cheaper and often support higher clock speeds optimized for gaming and daily use, but they do not provide error detection or correction, making them less suitable for 24/7 continuous loads where data integrity is critical. Evolutionary generations of standard full-size DIMMs trace from the obsolete SDR variants, which utilized a 168-pin layout for early synchronous DRAM implementations in late-1990s systems. Subsequent DDR3 unbuffered DIMMs, with 240 pins, became the standard for PCs in the mid-2000s, operating at 1.5 V nominally or 1.35 V in low-voltage modes to balance power and performance. DDR4 unbuffered DIMMs, introduced in 2014, maintain the 288-pin form while reducing the operating voltage to 1.2 V, improving energy efficiency by approximately 20% over DDR3 equivalents. DDR5 further refines this with 1.1 V operation, supporting higher capacities and speeds while preserving the 288-pin interface for backward-compatible designs. In applications, standard full-size DIMMs excel in single- or dual-channel configurations, where they maximize bandwidth in desktop and entry-level server setups by populating two modules per channel for optimal interleaving. A keying notch positioned approximately 60 pins from one edge on DDR4 modules ensures proper orientation and prevents insertion into incompatible slots, such as those for DDR3. This design supports seamless upgrades within compatible platforms, though compact alternatives like SO-DIMMs are preferred for space-constrained laptops. Limitations of these DIMMs include support for only a small number of modules per channel (typically two on modern platforms) in unbuffered configurations, constrained by signal loading and electrical tolerances to avoid degradation in timing and stability; beyond this, buffering becomes necessary for higher densities. Heat dissipation is managed through optional heatsinks attached to the module, particularly in high-speed DDR4/DDR5 variants operating above 3200 MT/s, to maintain thermal thresholds under sustained workloads.

Small Outline DIMMs (SO-DIMMs)

Small Outline DIMMs (SO-DIMMs) are a compact variant of dual in-line memory modules optimized for space-limited environments, measuring approximately half the length of standard full-size DIMMs at about 67.6 mm compared to 133.35 mm. This reduced form factor enables their integration into portable and embedded systems without compromising core functionality. SO-DIMMs adhere to the same underlying DDR standards as full-size DIMMs but adapt the pin configuration to suit their smaller footprint: 204 pins for DDR3 implementations, 260 pins for DDR4, and 262 pins for DDR5. These modules are predominantly deployed in laptops, where their small size facilitates slim designs, as well as in printers and routers that require reliable memory in constrained enclosures. DDR5 SO-DIMMs in particular support module capacities up to 64 GB, enabling higher memory densities in modern portable systems as of 2025. Unlike their full-size counterparts, SO-DIMMs often carry lower power specifications, with DDR5 variants operating at 1.1 V to extend runtime in battery-powered devices. In ultra-thin laptops and tablets, memory is instead commonly soldered directly onto the motherboard to further minimize thickness and eliminate removable slots. SO-DIMMs ensure broad compatibility with full-size DIMMs by employing identical serial presence detect (SPD) protocols and timing parameters defined under JEDEC standards, allowing systems to automatically configure memory settings upon detection. SO-DIMM-based motherboards inherently feature shorter trace lengths between the memory controller and the modules, which improves signal integrity by reducing propagation delays and minimizing signal degradation in high-speed DDR environments. This design adaptation supports stable operation in densely packed layouts without necessitating additional buffering for most consumer applications.

Registered and Buffered Variants

Registered DIMMs (RDIMMs) incorporate a register, typically a registering clock driver (RCD), that retimes and buffers command and address signals to reduce the electrical load on the memory controller, enabling support for up to three DIMMs per channel in server systems. This buffering isolates the controller from the capacitive load of multiple modules, improving signal integrity and allowing higher operating frequencies compared to unbuffered DIMMs. However, the register introduces an additional clock cycle of latency, as commands are held and retransmitted on the next cycle. Server-grade and industrial-grade RDIMMs typically include ECC for detecting and correcting single-bit errors while detecting multi-bit errors, making them slightly more expensive, with potentially lower frequencies traded for enhanced stability, and they are designed for reliability in continuous server operation under 24/7 loads. Load-Reduced DIMMs (LRDIMMs) extend this buffering approach by incorporating isolation memory buffers (iMBs) that handle not only command and address signals but also the data input/output (DQ) lines, presenting a single low-load signal to the memory controller for each rank. This load reduction allows three or more DIMMs per channel without significant signal degradation, supporting greater memory density, such as up to 768 GB total capacity in configurations using 32 GB LRDIMMs across multiple channels in enterprise servers. Server-grade and industrial-grade LRDIMMs also incorporate ECC for single-bit error correction and multi-bit detection, prioritizing reliability over maximum speed in high-density, continuous-operation environments like data centers. Other buffered variants include 3D-stacked (3DS) DIMMs, which use through-silicon vias (TSVs) to vertically stack multiple DRAM dies, achieving higher densities like 128 GB or 256 GB per module while maintaining compatibility with RDIMM or LRDIMM architectures. Some systems support an online spare feature on RDIMMs or LRDIMMs, reserving spare memory so that failing modules can be disabled automatically and the spares activated, minimizing downtime in mission-critical environments without hot-swapping hardware. These variants are primarily deployed in data centers and high-performance computing (HPC) applications, where high memory capacity and reliability are essential. Emerging options like Clocked Unbuffered DIMMs (CUDIMMs) for DDR5 integrate a clock driver to enable 4-rank configurations with up to 128 GB per module as of 2025. In DDR5 implementations, LRDIMMs integrate a power management integrated circuit (PMIC) to regulate the 1.1 V operating voltage, enhancing energy efficiency by up to 20% over DDR4 while supporting dense configurations for AI and other data-intensive workloads as of 2025.
