A DIMM (Dual In-line Memory Module) is a popular type of memory module used in computers. It is a printed circuit board with one or both sides (front and back) holding DRAM chips and pins.[1] The vast majority of DIMMs are manufactured in compliance with JEDEC memory standards, although there are proprietary DIMMs. DIMMs come in a variety of speeds and capacities, and are generally one of two lengths: PC, which are 133.35 mm (5.25 in), and laptop (SO-DIMM), which are about half the length at 67.60 mm (2.66 in).[2]
History
DIMMs (Dual In-line Memory Modules) were a 1990s upgrade for SIMMs (Single In-line Memory Modules)[3][4] as Intel P5-based Pentium processors began to gain market share. The Pentium had a 64-bit bus width, which required SIMMs to be installed in matched pairs to populate the data bus. The processor would then access the two SIMMs in parallel.
DIMMs were introduced to eliminate this disadvantage. The contacts on both sides of a SIMM are redundant, while DIMMs have separate electrical contacts on each side of the module.[5] This allowed DIMMs to double the SIMM's 32-bit data path into a 64-bit data path.[6]
The name "DIMM" was chosen as an acronym for Dual In-line Memory Module, reflecting the split of a SIMM's contacts into two independent rows.[6] Many enhancements have been made to the modules in the intervening years, but "DIMM" has remained one of the generic terms for a computer memory module.
Form factors
Widths
DIMMs come in a number of board sizes. In order of descending size: DIMM, SO-DIMM, MiniDIMM, and MicroDIMM.
Regular DIMMs are generally 133.35 mm in length, while SO-DIMMs are generally 67.6 mm in length.[2]
Image: 256 MB MicroDIMM PC133 SDRAM (double sided, 4 chips).
SO-DIMM
A SO-DIMM (pronounced "so dim" /ˈsoʊdɪm/, also spelled SODIMM), or small outline DIMM, is a smaller alternative to a DIMM, roughly half the physical size of a regular DIMM. The first SO-DIMMs had 72 pins and were introduced by JEDEC in 1997.[7][8][9] Before their introduction, many laptops used proprietary[10] RAM modules which were expensive and hard to find.[7][11]
SO-DIMMs are often used in computers that have limited space, including laptops, notebook computers, small-footprint personal computers such as those based on Nano-ITX motherboards, high-end upgradable office printers, and networking hardware such as routers and NAS devices.[12] They are usually available with the same data path width and speed ratings as regular DIMMs, though normally in smaller capacities.
Image: The original 72-pin SO-DIMM.
Image: 144-pin SDR SO-DIMM with 128 MB, made by IBM.
Image: A 200-pin PC2-5300 DDR2 SO-DIMM.
Image: A 204-pin PC3-10600 DDR3 SO-DIMM.
Connector
Different generations of memory are not interchangeable: they are neither forward nor backward compatible. To make this difference clear and avoid confusion, the DIMM modules of each generation have different pin counts and/or different notch positions. DDR5 SDRAM is the most recent type of DDR memory and has been in use since 2020.
- DIMM
  - 100-pin: printer SDRAM and printer ROM (e.g., PostScript)
  - 168-pin: SDR SDRAM, sometimes used for FPM/EDO DRAM in workstations or servers, may be 3.3 or 5 V
  - 184-pin: DDR SDRAM
  - 200-pin: FPM/EDO DRAM in some Sun workstations and servers
  - 240-pin: DDR2 SDRAM, DDR3 SDRAM and FB-DIMM DRAM
  - 278-pin: HP high density SDRAM
  - 288-pin: DDR4 SDRAM and DDR5 SDRAM[14]
- SO-DIMM
  - 72-pin: FPM DRAM and EDO DRAM;[7] different pin configuration from 72-pin SIMM
  - 144-pin: SDR SDRAM,[7] sometimes used for DDR2 SDRAM
  - 200-pin: DDR SDRAM[7] and DDR2 SDRAM
  - 204-pin: DDR3 SDRAM
  - 260-pin: DDR4 SDRAM
  - 260-pin: UniDIMMs carrying either DDR3 or DDR4 SDRAM; differently notched than DDR4 SO-DIMMs
  - 262-pin: DDR5 SDRAM
- MiniDIMM
  - 244-pin: DDR2 SDRAM
Besides pin count, physical notches differentiate incompatible types of DIMM. For example, 168-pin SDR SDRAM modules came in different voltage ratings (5.0 V or 3.3 V) and in registered (buffered) and unbuffered variants, so they use two notch positions to prevent insertion of the wrong type of module.
Image: Notch positions on DDR (top) and DDR2 (bottom) DIMM modules.
Image: 4 GB DDR VLP DIMM from Kingston Technology.
Heights
Several form factors are commonly used in DIMMs. Single Data Rate Synchronous DRAM (SDR SDRAM) DIMMs were primarily manufactured in 1.5-inch (38 mm) and 1.7-inch (43 mm) heights, with a nominal value of 30 millimetres (1.2 in). When 1U rackmount servers started becoming popular, registered DIMMs in these form factors had to plug into angled DIMM sockets to fit in the 1.75-inch (44 mm) high enclosure. To alleviate this issue, the next standards of DDR DIMMs were created with a "low profile" (LP) height of around 1.2 inches (30 mm). These fit into vertical DIMM sockets for a 1U platform.
With the advent of blade servers, angled slots have once again become common in order to accommodate LP form factor DIMMs in these space-constrained boxes. This led to the development of the Very Low Profile (VLP) form factor DIMM with a height of around 18 millimetres (0.71 in). These will fit vertically in ATCA systems.
Very similar height levels are used for SO-DIMM, Mini-DIMM and Micro-DIMM.[15]
| Generation | Full height (1U), nominal | Full height (1U), maximum | Very low profile (VLP), nominal | Very low profile (VLP), maximum | Notes |
|---|---|---|---|---|---|
| DDR2[17] | 30.00 mm (1.181 in) | 30.50 mm (1.201 in) | — | — | |
| DDR3[18] | 30.00 mm (1.181 in) | 30.50 mm (1.201 in) | 18.75 mm (0.738 in) | 18.90 mm (0.744 in) | |
| DDR4[19] | 31.25 mm (1.230 in) | 31.40 mm (1.236 in) | 18.75 mm (0.738 in) | 18.90 mm (0.744 in) | |
| DDR5[20] | 31.25 mm (1.230 in) | 31.40 mm (1.236 in) | — | — | |
Notes:
- Low profile (LP) is not a JEDEC standard.
- The full JEDEC standards also regulate factors such as thickness.
- SO-DIMMs for DDR4 and DDR5 maintain the traditional height of 30.00±0.15 mm; see JEDEC MO-310A and MO-337B. The height increase for "full height" DIMM does not apply to SO-DIMM.
- It is common for higher-end consumer DDR4 DIMMs to exceed the JEDEC full height due to the use of an added heat sink. Some heat sinks add as little as 1 millimetre (0.039 in) while others add up to 5 millimetres (0.20 in).
Similar connectors
As of Q2 2017, Asus has offered a PCIe-based "DIMM.2" connector, which uses a socket similar to that of DDR3 DIMMs and accepts a riser module connecting up to two M.2 NVMe solid-state drives. However, it cannot accept ordinary DDR RAM and has seen little support outside Asus.[21]
Components
Organization
Most DIMMs are built using "×4" ("by four") or "×8" ("by eight") memory chips with up to nine chips per side; "×4" and "×8" refer to the data width of the DRAM chips in bits. High-capacity DIMMs such as 256 GB DIMMs can have up to 19 chips per side.
In the case of "×4" registered DIMMs, the data width per side is 36 bits; therefore, the memory controller (which requires 72 bits) needs to address both sides at the same time to read or write the data it needs. In this case, the two-sided module is single-ranked. For "×8" registered DIMMs, each side is 72 bits wide, so the memory controller only addresses one side at a time (the two-sided module is dual-ranked).
The above example applies to ECC memory, which stores 72 bits instead of the more common 64 and therefore carries one extra chip per group of eight data chips.
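The arithmetic above can be illustrated with a short sketch. The chip widths, chip counts, and the 72-bit registered ECC bus width follow the text; the function name and return values are illustrative only, not part of any standard.

```python
# Illustrative sketch of how chip width and chip count determine the
# per-side data width and rank organization of a 72-bit (ECC) registered DIMM.
# The function and its labels are hypothetical, for illustration only.

def describe_side(chip_width_bits: int, chips_per_side: int, bus_width_bits: int = 72):
    """Return the data width contributed by one side of the module."""
    side_width = chip_width_bits * chips_per_side
    if side_width == bus_width_bits:
        layout = "one side forms a full rank (dual-rank if both sides are populated)"
    elif side_width * 2 == bus_width_bits:
        layout = "both sides together form a single rank (single-rank module)"
    else:
        layout = "non-standard combination"
    return side_width, layout

# "x4" registered ECC DIMM: 9 chips per side x 4 bits = 36 bits per side
print(describe_side(4, 9))   # (36, 'both sides together form a single rank ...')

# "x8" registered ECC DIMM: 9 chips per side x 8 bits = 72 bits per side
print(describe_side(8, 9))   # (72, 'one side forms a full rank ...')
```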
Ranking
Sometimes memory modules are designed with two or more independent sets of DRAM chips connected to the same address and data buses; each such set is called a rank. Of the ranks sharing the same slot, only one may be accessed at any given time; it is selected by activating the corresponding rank's chip select (CS) signal. The other ranks on the module are deactivated for the duration of the operation by having their corresponding CS signals deactivated. DIMMs are commonly manufactured with up to four ranks per module. Consumer DIMM vendors have begun to distinguish between single- and dual-ranked DIMMs.
After a memory word is fetched, the memory is typically inaccessible for an extended period of time while the sense amplifiers are charged for access of the next cell. By interleaving the memory (e.g. cells 0, 4, 8, etc. are stored together in one rank), sequential memory accesses can be performed more rapidly because sense amplifiers have 3 cycles of idle time for recharging between accesses.
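As a rough illustration of the interleaving described above, the following sketch maps sequential cell indices onto ranks so that consecutive accesses rotate across ranks; the simple modulo mapping is an assumption made for illustration, and real memory controllers use more elaborate policies.

```python
# Minimal sketch of rank interleaving: with 4 ranks, cells 0, 4, 8, ... land
# in rank 0, cells 1, 5, 9, ... in rank 1, and so on, so sequential accesses
# rotate across ranks and each rank's sense amplifiers get idle cycles to
# recharge between its own accesses.

NUM_RANKS = 4

def rank_for_cell(cell_index: int) -> int:
    return cell_index % NUM_RANKS

for cell in range(8):
    print(f"cell {cell} -> rank {rank_for_cell(cell)}")
# cell 0 -> rank 0, cell 1 -> rank 1, ..., cell 4 -> rank 0, ...
```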
DIMMs are often referred to as "single-sided" or "double-sided" to describe whether the DRAM chips are located on one or both sides of the module's printed circuit board (PCB). However, these terms may cause confusion, as the physical layout of the chips does not necessarily relate to how they are logically organized or accessed. Indeed, quad-ranked DIMMs exist.
JEDEC decided that the terms "dual-sided", "double-sided", or "dual-banked" were not correct when applied to registered DIMMs (RDIMMs).
Multiplexed Rank DIMMs (MRDIMMs) allow data from multiple ranks to be transmitted on the same channel. The standard was announced for DDR5 in July 2024 and is expected to be backward compatible with DDR5 RDIMMs.[22]
SPD EEPROM
A DIMM's capacity and other operational parameters may be identified with serial presence detect (SPD), an additional chip which contains information about the module type and timing for the memory controller to be configured correctly. The SPD EEPROM connects to the System Management Bus and may also contain thermal sensors (TS-on-DIMM).[23]
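On a typical PC the SPD EEPROM appears as an SMBus/I²C device, so its contents can in principle be dumped from user space. The sketch below is a hedged example using the Python smbus2 package; the bus number, the conventional 0x50–0x57 address range, and the significance of individual bytes are assumptions for illustration, and production systems normally rely on the platform's SPD decoding drivers instead.

```python
# Hedged sketch: dump the first bytes of each DIMM's SPD EEPROM over SMBus.
# Requires the "smbus2" package and appropriate permissions (e.g. root and a
# loaded i2c-dev driver); bus number 0 and the 0x50-0x57 address range are
# conventional assumptions and may differ per platform.
from smbus2 import SMBus

SPD_ADDRESSES = range(0x50, 0x58)  # by convention, one address per DIMM slot

with SMBus(0) as bus:
    for addr in SPD_ADDRESSES:
        try:
            first_bytes = [bus.read_byte_data(addr, off) for off in range(16)]
        except OSError:
            continue  # no DIMM (or no reachable SPD) at this address
        print(f"SPD at 0x{addr:02x}: {[hex(b) for b in first_bytes]}")
```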
Features
Speeds
For various technologies, there are certain bus and device clock frequencies that are standardized; there is also an established nomenclature for each of these speeds for each type.
DIMMs based on Single Data Rate (SDR) DRAM have the same bus frequency for data, address and control lines. DIMMs based on Double Data Rate (DDR) DRAM transfer data, though not the strobe, at double the clock rate; this is achieved by clocking data on both the rising and falling edges of the data strobes. Power consumption and voltage gradually became lower with each generation of DDR-based DIMMs.
Another factor affecting memory access speed is Column Access Strobe (CAS) latency, or CL: the delay between the READ command and the moment data is available. See the main articles on CAS latency and memory timings.
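The relationships behind the speed tables that follow reduce to simple arithmetic: DDR transfers data on both edges of the I/O bus clock, and the module designation (e.g. PC-3200) encodes the peak byte rate of the 64-bit (8-byte) data bus. A minimal sketch, with illustrative function names:

```python
# Illustrative arithmetic for the DDR speed tables below.
def ddr_transfer_rate(io_bus_clock_mhz: float) -> float:
    """MT/s: DDR transfers data on both edges of the I/O bus clock."""
    return io_bus_clock_mhz * 2

def peak_bandwidth_mb_s(transfer_rate_mt_s: float, bus_bytes: int = 8) -> float:
    """Peak MB/s for a 64-bit (8-byte) module data bus."""
    return transfer_rate_mt_s * bus_bytes

rate = ddr_transfer_rate(200)           # DDR-400: 200 MHz I/O clock -> 400 MT/s
print(rate, peak_bandwidth_mb_s(rate))  # 400 MT/s, 3200 MB/s -> module name "PC-3200"
```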
| Chip | Module | Effective clock (MHz) | Transfer rate (MT/s) | Voltage (V) |
|---|---|---|---|---|
| SDR-66 | PC-66 | 66 | 66 | 3.3 |
| SDR-100 | PC-100 | 100 | 100 | 3.3 |
| SDR-133 | PC-133 | 133 | 133 | 3.3 |
| Chip | Module | Memory clock (MHz) | I/O bus clock (MHz) | Transfer rate (MT/s) | Voltage (V) |
|---|---|---|---|---|---|
| DDR-200 | PC-1600 | 100 | 100 | 200 | 2.5 |
| DDR-266 | PC-2100 | 133 | 133 | 266 | 2.5 |
| DDR-333 | PC-2700 | 166 | 166 | 333 | 2.5 |
| DDR-400 | PC-3200 | 200 | 200 | 400 | 2.6 |
| Chip | Module | Memory clock (MHz) | I/O bus clock (MHz) | Transfer rate (MT/s) | Voltage (V) |
|---|---|---|---|---|---|
| DDR2-400 | PC2-3200 | 100 | 200 | 400 | 1.8 |
| DDR2-533 | PC2-4200 | 133 | 266 | 533 | 1.8 |
| DDR2-667 | PC2-5300 | 166 | 333 | 667 | 1.8 |
| DDR2-800 | PC2-6400 | 200 | 400 | 800 | 1.8 |
| DDR2-1066 | PC2-8500 | 266 | 533 | 1066 | 1.8 |
| Chip | Module | Memory clock (MHz) | I/O bus clock (MHz) | Transfer rate (MT/s) | Voltage (V) |
|---|---|---|---|---|---|
| DDR3-800 | PC3-6400 | 100 | 400 | 800 | 1.5 |
| DDR3-1066 | PC3-8500 | 133 | 533 | 1066 | 1.5 |
| DDR3-1333 | PC3-10600 | 166 | 667 | 1333 | 1.5 |
| DDR3-1600 | PC3-12800 | 200 | 800 | 1600 | 1.5 |
| DDR3-1866 | PC3-14900 | 233 | 933 | 1866 | 1.5 |
| DDR3-2133 | PC3-17000 | 266 | 1066 | 2133 | 1.5 |
| DDR3-2400 | PC3-19200 | 300 | 1200 | 2400 | 1.5 |
| Chip | Module | Memory clock (MHz) | I/O bus clock (MHz) | Transfer rate (MT/s) | Voltage (V) |
|---|---|---|---|---|---|
| DDR4-1600 | PC4-12800 | 200 | 800 | 1600 | 1.2 |
| DDR4-1866 | PC4-14900 | 233 | 933 | 1866 | 1.2 |
| DDR4-2133 | PC4-17000 | 266 | 1066 | 2133 | 1.2 |
| DDR4-2400 | PC4-19200 | 300 | 1200 | 2400 | 1.2 |
| DDR4-2666 | PC4-21300 | 333 | 1333 | 2666 | 1.2 |
| DDR4-3200 | PC4-25600 | 400 | 1600 | 3200 | 1.2 |
| Chip | Module | Memory clock (MHz) | I/O bus clock (MHz) | Transfer rate (MT/s) | Voltage (V) |
|---|---|---|---|---|---|
| DDR5-4000 | PC5-32000 | 2000 | 2000 | 4000 | 1.1 |
| DDR5-4400 | PC5-35200 | 2200 | 2200 | 4400 | 1.1 |
| DDR5-4800 | PC5-38400 | 2400 | 2400 | 4800 | 1.1 |
| DDR5-5200 | PC5-41600 | 2600 | 2600 | 5200 | 1.1 |
| DDR5-5600 | PC5-44800 | 2800 | 2800 | 5600 | 1.1 |
| DDR5-6000 | PC5-48000 | 3000 | 3000 | 6000 | 1.1 |
| DDR5-6200 | PC5-49600 | 3100 | 3100 | 6200 | 1.1 |
| DDR5-6400 | PC5-51200 | 3200 | 3200 | 6400 | 1.1 |
| DDR5-6800 | PC5-54400 | 3400 | 3400 | 6800 | 1.1 |
| DDR5-7200 | PC5-57600 | 3600 | 3600 | 7200 | 1.1 |
| DDR5-7600 | PC5-60800 | 3800 | 3800 | 7600 | 1.1 |
| DDR5-8000 | PC5-64000 | 4000 | 4000 | 8000 | 1.1 |
| DDR5-8400 | PC5-67200 | 4200 | 4200 | 8400 | 1.1 |
| DDR5-8800 | PC5-70400 | 4400 | 4400 | 8800 | 1.1 |
Error correction
ECC DIMMs are those that have extra data bits which can be used by the system memory controller to detect and correct errors. There are numerous ECC schemes, but perhaps the most common is Single Error Correct, Double Error Detect (SECDED) which uses an extra byte per 64-bit word. ECC modules usually carry a multiple of 9 instead of a multiple of 8 chips as a result.
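The "extra byte per 64-bit word" follows from the Hamming bound: single-error correction of an m-bit word needs the smallest r check bits with 2^r ≥ m + r + 1, and one additional parity bit provides double-error detection. A brief sketch of that calculation, with an illustrative function name:

```python
# Check-bit count for SECDED over an m-bit data word:
# single-error correction needs the smallest r with 2**r >= m + r + 1,
# and one extra overall parity bit adds double-error detection.
def secded_check_bits(data_bits: int) -> int:
    r = 0
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r + 1  # +1 parity bit for double-error detection

print(secded_check_bits(64))  # 8 -> a 72-bit ECC word, hence chips in multiples of 9
```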
Register/buffer
It is electrically demanding for a memory controller to drive many DIMMs. Registered DIMMs add a hardware register to the clock, address, and command lines so that the signals are refreshed on the DIMM, allowing a reduced load on the memory controller. Variants include LRDIMMs, with all lines buffered, and CUDIMMs/CSODIMMs, with only the clock signal buffered. The register feature often appears together with ECC, but the two do not depend on each other and can occur independently.
See also
- Dual in-line package (DIP)
- Memory scrambling
- Memory geometry – logical configuration of RAM modules (channels, ranks, banks, etc.)
- Motherboard
- NVDIMM – non-volatile DIMM
- Row hammer
- Rambus in-line memory module (RIMM)
- Single in-line memory module (SIMM)
- Single in-line package (SIP)
- Zig-zag in-line package (ZIP)
- Compression Attached Memory Module (CAMM)
References
- ^ "What is DIMM (Dual Inline Memory Module)?". GeeksforGeeks. 2020-04-15. Archived from the original on 2024-04-07. Retrieved 2024-04-07. "In the case of SIMM, the connectors are only present on the single side of the module...DIMM has a row of connectors on both sides (front and back) of the module."
- ^ a b "Common DIMM Memory Form Factor". 2009-10-06. Archived from the original on 2021-05-13. Retrieved 2021-05-13.
- ^ Lyla, Das B. (September 2010). The X86 Microprocessors: Architecture and Programming (8086 to Pentium). Pearson Education India. ISBN 978-81-317-3246-5.
- ^ Mueller, Scott (March 7, 2013). Upgrading and Repairing PCs: Upgrading and Repairing_c21. Que Publishing. ISBN 978-0-13-310536-0. Archived from the original on September 17, 2024. Retrieved December 26, 2023 – via Google Books.
- ^ Jacob, Bruce; Wang, David; Ng, Spencer (28 July 2010). Memory Systems: Cache, DRAM, Disk. Morgan Kaufmann. ISBN 978-0-08-055384-9.
- ^ a b Mueller, Scott (2004). Upgrading and Repairing PCS. Que. ISBN 978-0-7897-2974-3. Archived from the original on 2024-09-17. Retrieved 2023-12-26.
- ^ a b c d e f g Mueller, Scott (2004). Upgrading and Repairing Laptops. Que. ISBN 978-0-7897-2800-5. Archived from the original on 2024-09-17. Retrieved 2023-12-26.
- ^ "72 Pin DRAM SO-DIMM | JEDEC". Archived from the original on 2024-09-17. Retrieved 2023-11-09.
- ^ Fulton, Jennifer (November 9, 2000). "The complete idiot's guide to upgrading and repairing PCs". Indianapolis, IN : Alpha Books – via Internet Archive.
- ^ Jones, Mitt (December 25, 1990). "PC Magazine". Ziff-Davis Publishing Company. Archived from the original on September 17, 2024. Retrieved February 4, 2024 – via Google Books.
- ^ Norton, Peter; Clark, Scott H. (2002). Peter Norton's New Inside the PC. Sams. ISBN 978-0-672-32289-1.
- ^ Synology Inc. "Synology RAM Module". synology.com. Archived from the original on 2016-06-02. Retrieved 2022-03-23.
- ^ "Are DDR, DDR2 and DDR3 SO-DIMM memory modules interchangeable?". acer.custhelp.com. Retrieved 2015-06-26.[permanent dead link]
- ^ Smith, Ryan (2020-07-14). "DDR5 Memory Specification Released: Setting the Stage for DDR5-6400 And Beyond". AnandTech. Archived from the original on 2021-04-05. Retrieved 2020-07-15.
- ^ "CST Inc,DDR5,DDR4,DDR3,DDR2,DDR,Nand,Nor,Flash,MCP,LPDDR,LPDDR2,LPDDR3,LPDDR4,LRDIMM, Memory Tester Automatic DIMM SODIMM Handler Company Provides Memory Solution". www.simmtester.com.
- ^ "MICROELECTRONIC OUTLINES (MO)" (PDF). JEDEC.
- ^ JEDEC documents MO-256, MO-260, MO-274
- ^ JEDEC MO-269J Whitepaper., accessed Aug. 20, 2014.
- ^ JEDEC MO-309E Whitepaper., accessed Aug. 20, 2014.
- ^ DIMM:MO-329J; SO-DIMM: MO-337B.
- ^ ASUS DIMM.2 is a M.2 Riser Card. Archived 2020-06-05 at the Wayback Machine, accessed Jun. 4, 2020.
- ^ "JEDEC Unveils Plans for DDR5 MRDIMM and LPDDR6 CAMM Standards to Propel High-Performance Computing and AI" (Press release). JEDEC. 22 July 2024.
- ^ "Temperature Sensor in DIMM memory modules". Archived from the original on 2016-04-01. Retrieved 2013-03-17.
Overview
Definition and Purpose
A Dual In-line Memory Module (DIMM) is a type of random-access memory (RAM) module consisting of multiple dynamic random-access memory (DRAM) chips mounted on a printed circuit board, featuring independent pins on both sides of the board to enable separate electrical connections and addressing.[2][6] This design allows for a wider data pathway compared to earlier modules, facilitating efficient data transfer within computer systems.[1] The primary purpose of a DIMM is to serve as high-capacity, high-speed volatile memory that temporarily stores data and instructions for quick access by the processor, supporting the operational needs of various computing devices.[7] It enables users to easily upgrade and expand system memory by installing additional modules into motherboard slots, thereby improving overall performance without requiring complex hardware modifications.[8]

DIMMs are commonly used in desktop personal computers, workstations, and servers to handle demanding workloads such as data processing and multitasking.[9] Variants have evolved for use in laptops, adapting the form factor while retaining core functionality.[2] At its core, a DIMM operates by storing data in its integrated DRAM chips, which are organized to provide access via a standard 64-bit wide data bus for transferring information to and from the system's memory controller.[10] This configuration ensures reliable, low-latency retrieval of volatile data essential for running applications and operating systems.[1]

Advantages Over Predecessors
DIMM modules introduced significant improvements over their predecessors, particularly Single In-line Memory Modules (SIMMs), by enabling independent electrical contacts on both sides of the module. This design allows for a native 64-bit data path without the need for interleaving or pairing modules, effectively doubling the bandwidth compared to the 32-bit paths of SIMMs.[2][8] In terms of scalability, DIMMs supported higher memory capacities per module, reaching up to 128 MB in their initial implementations during the mid-1990s, compared to the typical 16-32 MB limits of SIMMs at the time. This advancement facilitated easier multi-module configurations in systems, allowing for greater overall memory expansion without the constraints of paired installations required by SIMMs.[2][11]

DIMM's architecture was specifically tailored for compatibility with 64-bit processors, such as Intel's Pentium series, which featured a 64-bit external data bus. Unlike SIMMs, which necessitated the use of two modules in tandem to achieve full bus utilization, a single DIMM could populate the entire data bus, streamlining system design and reducing complexity.[12][8] From a manufacturing and efficiency standpoint, the standardized dual-sided layout of DIMMs simplified production processes and minimized signal interference through independent electrical contacts on each side of the module. This resulted in lower power consumption (operating at 3.3 V versus SIMMs' 5 V) and enhanced reliability in high-density configurations, making DIMMs more cost-effective for mass production and deployment.[2][13]

Historical Development
Origins in the 1990s
The Dual In-Line Memory Module (DIMM) emerged in the early 1990s as a response to the evolving demands of computing architectures requiring wider memory interfaces. The Intel Pentium processor, released in March 1993, featured a 64-bit external data bus, necessitating a shift from the 32-bit Single In-Line Memory Module (SIMM) design, which required pairing two modules to achieve the necessary bandwidth.[14][15] This transition addressed the limitations of SIMM configurations in supporting higher data throughput without increasing complexity in motherboard design.[16]

JEDEC, the Joint Electron Device Engineering Council, played a pivotal role in formalizing the SDRAM standard in 1993, with the 168-pin DIMM mechanical specification following in 1995 as a standardized successor to SIMM specifically tailored for 64-bit systems.[17] The initial DIMM design incorporated Extended Data Out (EDO) Dynamic Random-Access Memory (DRAM) chips, which improved access times over prior Fast Page Mode (FPM) DRAM by allowing data output to begin before the next address was fully latched.[15][18] JEDEC's standardization efforts focused on establishing interoperability through precise electrical characteristics, such as signal timing and voltage levels, and mechanical features like pin layouts and connector notches to prevent incorrect insertions.[19]

Early commercial adoption of DIMMs began in 1994, primarily in personal computers and workstations equipped with Pentium processors, where they simplified memory expansion by providing a single module for 64-bit access.[20] The 168-pin configuration quickly gained prominence as the de facto standard for subsequent Synchronous DRAM (SDRAM) implementations, enabling broader compatibility across vendors.[21] JEDEC's collaborative process involved industry stakeholders in iterative reviews to refine these specifications, ensuring reliable performance in emerging 64-bit environments without proprietary variations.[22]

Key Milestones and Transitions
The transition to Synchronous Dynamic Random-Access Memory (SDRAM) marked a pivotal shift in DIMM technology during the mid-1990s, with widespread adoption of 168-pin SDR DIMMs occurring between 1996 and 1997 as they replaced earlier Fast Page Mode (FPM) and Extended Data Out (EDO) modules.[23] This change synchronized memory operations with the system clock, enabling higher speeds and better performance in personal computers and early servers compared to asynchronous predecessors.[23]

The introduction of Double Data Rate (DDR) SDRAM in 2000 represented the next major evolution, launching 184-pin DDR DIMMs that effectively doubled data transfer rates over SDRAM by capturing data on both rising and falling clock edges.[24] This standard, formalized as JESD79-1 in June 2000, quickly gained traction in consumer and enterprise systems.[24] Subsequent generations followed: DDR2 SDRAM in 2003 with 240-pin DIMMs under JESD79-2, offering improved power efficiency and higher bandwidth; and DDR3 SDRAM in 2007, also using 240-pin configurations via JESD79-3, which further reduced operating voltages to 1.5 V while supporting greater module capacities.[25][26]

More recent advancements include DDR4 SDRAM, standardized in September 2012 under JESD79-4 and entering the market in 2014 with 288-pin DIMMs designed for higher densities and speeds up to 3200 MT/s.[27] DDR5 SDRAM followed in July 2020 via JESD79-5, retaining the 288-pin form factor but incorporating an on-module Power Management Integrated Circuit (PMIC) to enhance voltage regulation and efficiency, with initial speeds reaching 4800 MT/s and updates supporting speeds up to 9200 MT/s as of October 2025.[28][29][30]

These transitions have profoundly influenced industry adoption, particularly in servers where Registered DIMMs (RDIMMs) became prevalent in the 2000s to handle higher channel populations and ensure signal integrity in multi-socket environments.[23] Capacity growth per DIMM module, driven by advancements aligned with Moore's Law principles of exponential density increases, evolved from typical 256 MB in early DDR eras to up to 512 GB per module in DDR5 configurations as of 2025, enabling scalable data center architectures.[23][31]

Physical Design
Form Factors and Dimensions
The standard full-size Dual In-Line Memory Module (DIMM) measures 133.35 mm in length, 31.25 mm in height, and approximately 4 mm in thickness, adhering to JEDEC mechanical outline specifications such as MO-309 for DDR4 variants.[32] This form factor features a gold-plated edge connector with 240 pins for DDR3 modules and 288 pins for DDR4 modules, ensuring reliable electrical contact and compatibility with desktop and server motherboards.[33][34] The dimensions provide a balance between component density and ease of insertion into standard sockets, with tolerances defined by JEDEC to maintain interchangeability across manufacturers.

A compact variant, the Small Outline DIMM (SO-DIMM), is designed for laptops and space-constrained systems, measuring 67.6 mm in length while retaining a height of approximately 30 mm and a thickness of 3.8 mm, as outlined in JEDEC standards for SO-DIMMs. SO-DIMMs use 200 pins for DDR2, 204 pins for DDR3, 260 pins for DDR4, and 262 pins for DDR5, depending on the generation, offering a thinner profile to fit into narrower chassis without compromising performance in mobile applications.[35]

Unbuffered DIMMs (UDIMMs) and registered DIMMs (RDIMMs) share the core form factor but differ slightly in height due to the additional register chip on RDIMMs, which can increase the overall module height by up to 1-2 mm in some designs for better thermal dissipation.[35] Both types include optional heat spreaders (aluminum or copper plates attached to the PCB) for enhanced thermal management in high-load scenarios, though these add minimal thickness (typically 0.5-1 mm) and are not part of the base JEDEC outline. Notch positions on the edge connector serve as keying mechanisms: the primary notch differentiates unbuffered (right position), registered (middle), and reserved/future use (left) configurations to prevent incompatible insertions, while a secondary voltage key notch ensures proper voltage alignment.[36]

JEDEC specifications also define precise mechanical tolerances, including a PCB thickness of 1.27 mm ±0.1 mm and edge connector lead spacing of 1.0 mm for DDR3 and 0.85 mm for DDR4 DIMMs, ensuring robust mechanical integrity and alignment during socket insertion.[33][34] These parameters, along with guidelines for hole spacing in manufacturing, support consistent production and prevent issues like warping or misalignment in assembled systems.

Pin Configurations
The pin configurations of Dual In-line Memory Modules (DIMMs) define the electrical interfaces between the module and the system motherboard, encompassing signal lines for data, addresses, commands, clocks, power, and ground, while ensuring backward incompatibility through distinct layouts across generations. These configurations evolve with each DDR iteration to support higher densities, faster signaling, and improved integrity, standardized by the Joint Electron Device Engineering Council (JEDEC).

The 168-pin Synchronous Dynamic Random-Access Memory (SDRAM) DIMM, introduced for single data rate operation, features 84 pins per side of the printed wiring board (PWB), operating at 3.3 V. It allocates 12 to 13 address pins for row and column selection (A0–A12), 64 data input/output pins (DQ0–DQ63) for the primary 64-bit wide bus, and dedicated control pins including Row Address Strobe (RAS#), Column Address Strobe (CAS#), and Write Enable (WE#), along with clock (CLK), chip select (CS#), and bank address lines (BA0–BA1). Power (VDD) and ground (VSS) pins are distributed throughout for stable supply, with additional pins for optional error correction (ECC) in 72-bit variants using check bits (CB0–CB7).[37][38]

Succeeding it, the 184-pin Double Data Rate (DDR) SDRAM DIMM maintains a similar structure but increases to 92 pins per side, reducing voltage to 2.5 V for VDD and VDDQ to enable higher speeds while preserving compatibility with the 64-bit data bus (DQ0–DQ63). Key enhancements include differential clock pairs (CK and CK#) for reduced noise, along with strobe signals (DQS and DQS#) per byte lane for data synchronization, and multiplexed address/command pins (A0–A12, BA0–BA1) that combine row/column and bank addressing. Control signals like RAS#, CAS#, and WE# persist, with power and ground pins similarly interspersed, and an optional ECC extension to 72 bits.[39][40]

The 240-pin configurations for DDR2 and DDR3 SDRAM DIMMs expand to 120 pins per side, supporting 1.8 V operation for DDR2 and 1.5 V for DDR3, with provisions for additional bank addressing (up to BA0–BA2) via extra pins (A13, A14 in higher densities) to handle increased internal banks (up to 16). Both retain the 64-bit DQ bus with per-byte DQS/DQS# pairs and differential clocks, but DDR3 introduces a fly-by topology where address, command, and clock signals daisy-chain across ranks on the module for improved signal integrity and reduced skew, compared to the T-branch topology in DDR2. Control pins (RAS#, CAS#, WE#, ODT for on-die termination) and power/ground distribution evolve accordingly, with 72-bit ECC support.[36][41][38]

Modern 288-pin DDR4 and DDR5 DIMMs use 144 pins per side, operating at 1.2 V for DDR4 and introducing further refinements in DDR5 with dual 32-bit sub-channels per module for better efficiency. DDR4 employs a fly-by topology with POD (Pseudo Open Drain) signaling on data lines for lower power and swing, featuring 17 row address bits (A0–A16), bank groups (BG0–BG1), and banks (BA0–BA1), alongside the 64-bit DQ with DQS/DQS# and differential CK/CK#. DDR5 builds on this with on-die ECC integrated into each DRAM device (eliminating module-level ECC pins in base configs), POD signaling across more lines, and dedicated pins for the Power Management Integrated Circuit (PMIC), which regulates voltages like VDD (1.1 V) and VPP from a 12 V input. Control signals include enhanced CS#, CKE, and parity bits for command/address reliability, with power/ground pins optimized for multi-rank support up to 8.[42][43][44]

To prevent cross-compatibility issues, DIMMs incorporate keying notches at specific positions along the pin edge: for example, the notch for 168-pin SDR is centered differently from the offset position in 184-pin DDR (around pin 92), while 240-pin DDR2/DDR3 notches are further shifted (near pin 120), and 288-pin DDR4/DDR5 notches are positioned even more offset (around pin 144) to ensure physical mismatch with prior sockets.[45][38]

| Generation | Pin Count | Voltage (VDD) | Key Signals | Topology/Signaling Notes |
|---|---|---|---|---|
| SDR (168-pin) | 168 | 3.3 V | A0–A12, DQ0–DQ63, RAS#/CAS#/WE# | Single-ended clock; T-branch |
| DDR (184-pin) | 184 | 2.5 V | A0–A12, BA0–BA1, DQ0–DQ63, DQS/DQS# | Differential clock pairs; T-branch |
| DDR2/DDR3 (240-pin) | 240 | 1.8 V / 1.5 V | A0–A14, BA0–BA2, DQ0–DQ63 | Fly-by (DDR3); increased banks |
| DDR4/DDR5 (288-pin) | 288 | 1.2 V / 1.1 V | A0–A16/17, BG/BA, DQ0–DQ63 (dual sub-channels in DDR5) | Fly-by; POD signaling; PMIC pins (DDR5) |
Memory Architecture
Internal Organization
A Dual In-Line Memory Module (DIMM) internally organizes DRAM chips to provide a standardized 64-bit (or 72-bit for ECC variants) data interface to the system memory controller. The chips are arranged along the edges of the printed circuit board, with their data pins (DQ) connected in parallel to form the module's data width. Typically, unbuffered DIMMs use 8 to 18 DRAM chips to achieve this width, depending on the chip's data organization: x4 (4 bits per chip, requiring 16 chips per rank for 64 bits), x8 (8 bits per chip, requiring 8 chips per rank), or x16 (16 bits per chip, requiring 4 chips per rank); and on the presence of error-correcting code (ECC) chips, which add one extra chip per rank.

The total capacity of a DIMM is calculated based on the number of chips, each chip's density (expressed in gigabits, Gb), and the overall structure, converting total bits to bytes via division by 8. For a single-rank unbuffered non-ECC DIMM using x8 organization, the formula simplifies to total capacity (in GB) = (number of chips × chip density in Gb) / 8; for example, 8 chips each of 8 Gb density yield (8 × 8) / 8 = 8 GB. This scales with higher-density chips or additional ranks, enabling modules from 1 GB to 128 GB or more in modern configurations.[46]

Addressing within a DIMM follows the standard DRAM row-and-column multiplexed scheme, where the memory controller sends row addresses followed by column addresses over shared pins to select data locations. In DDR4, each DRAM chip includes 16 banks divided into 4 bank groups (with 4 banks per group), supporting fine-grained parallelism by allowing independent access to different groups while minimizing conflicts; DDR5 extends this to 32 banks organized into 8 bank groups. Row addresses typically span 14 to 18 bits (16K to 256K rows), and column addresses use 9 to 10 bits (512 to 1K columns), varying by density and organization.[47][48][31]

DIMM rank structure defines how chips are grouped for access: a single-rank module connects all chips to the same chip-select (CS) and control signals, treating them as one accessible unit for simpler, lower-density designs. In contrast, a dual-rank module interleaves two independent sets of chips (often placed on opposite sides of the PCB) with distinct CS signals, enabling the controller to alternate accesses between ranks for higher effective throughput and density, though at the potential cost of slightly increased latency due to rank switching.[48]
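The capacity formula above can be restated directly in code. The sketch below simply re-expresses capacity (GB) = chips × per-chip density (Gb) / 8 and the chips-per-rank figures implied by the chip width; the helper names are illustrative only.

```python
# Capacity of a DIMM from chip count and per-chip density, restating
# capacity_GB = (chips * density_Gb) / 8 from the text.
def dimm_capacity_gb(num_chips: int, chip_density_gb: float) -> float:
    return num_chips * chip_density_gb / 8

def chips_per_rank(chip_width_bits: int, rank_width_bits: int = 64) -> int:
    return rank_width_bits // chip_width_bits

print(dimm_capacity_gb(8, 8))        # 8 chips of 8 Gb (x8) -> 8.0 GB single-rank module
print(chips_per_rank(4), chips_per_rank(8), chips_per_rank(16))  # 16, 8, 4
```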
Channel Ranking

In memory systems, channel ranking refers to the organization of ranks across one or more DIMMs connected to a single memory channel, where a rank constitutes a 64-bit (or 72-bit with ECC) wide set of DRAM chips that can be accessed simultaneously via shared chip select signals.[49] Single-rank configurations, featuring one such set per DIMM, prioritize simplicity and potentially higher operating speeds due to lower electrical loading on the channel, while multi-rank setups (such as dual-rank or quad-rank DIMMs) enable greater memory density by allowing multiple independent 64-bit accesses per channel through rank interleaving, though they introduce overhead from rank switching.[50]

Common configurations include dual-channel architectures, prevalent in consumer and entry-level server platforms, where two independent 64-bit channels operate in parallel to achieve an effective 128-bit data width and double the bandwidth of a single-channel setup; this typically involves populating one or two DIMMs per channel for balanced performance and capacity. In high-end servers, quad-channel configurations extend this to four 64-bit channels for 256-bit effective width, quadrupling bandwidth and supporting denser populations, such as multiple multi-rank DIMMs per channel to maximize system-scale capacity. Increasing ranks per channel enhances overall capacity but can degrade maximum achievable speeds owing to heightened bus loading, which amplifies signal integrity challenges and necessitates timing adjustments like extended all-bank row active times.[49]

Unbuffered DIMMs (UDIMMs) are generally limited to 2-4 total ranks per channel to mitigate excessive loading, restricting them to one or two DIMMs in most setups.[50] To address this, registered DIMMs (RDIMMs) employ a register to buffer command and address signals, reducing the electrical load on those lines and enabling up to three DIMMs per channel without proportional speed penalties.[51] Load-reduced DIMMs (LRDIMMs) further optimize by fully buffering data, command, and address signals via an isolation memory buffer, which supports daisy-chained topologies and allows up to three DIMMs per channel even with higher-rank modules, prioritizing density in large-scale servers.[51]

Performance Specifications
Data Speeds and Transfer Rates
Data speeds for DIMMs are typically measured in mega-transfers per second (MT/s), which indicates the number of data transfers occurring per second on the memory bus. This metric reflects the effective clock rate, accounting for the double data rate (DDR) mechanism where data is transferred on both the rising and falling edges of the clock signal. For instance, a DDR4-3200 DIMM operates at 3200 MT/s, enabling high-throughput data movement between the memory modules and the system controller.[52]

The evolution of DIMM speeds has progressed significantly across generations, starting from synchronous dynamic random-access memory (SDRAM) DIMMs with clock speeds of 66-133 MHz (equivalent to 66-133 MT/s due to single data rate operation). Subsequent DDR generations doubled and then multiplied these rates: DDR1 reached 266-400 MT/s, DDR2 advanced to 533-800 MT/s, DDR3 to 800-2133 MT/s, and DDR4 to 2133-3200 MT/s. DDR5, the current standard as of 2025, begins at 4800 MT/s and extends up to 9200 MT/s per JEDEC specifications, representing a substantial increase in transfer capabilities for modern computing demands.[52][53]

| Generation | Standard MT/s Range | Peak Bandwidth per DIMM (GB/s) |
|---|---|---|
| SDRAM | 66-133 | 0.53-1.06 |
| DDR1 | 266-400 | 2.1-3.2 |
| DDR2 | 533-800 | 4.3-6.4 |
| DDR3 | 800-2133 | 6.4-17.1 |
| DDR4 | 2133-3200 | 17.1-25.6 |
| DDR5 | 4800-9200 | 38.4-73.6 |
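The peak-bandwidth column above is the transfer rate multiplied by the 8-byte width of the module's data bus; a minimal sketch, assuming the standard 64-bit data path:

```python
# Peak bandwidth per DIMM (GB/s) = MT/s x 8 bytes / 1000, assuming the
# standard 64-bit (8-byte) data path.
def peak_bandwidth_gb_s(transfer_rate_mt_s: float) -> float:
    return transfer_rate_mt_s * 8 / 1000

for mts in (3200, 4800, 9200):
    print(mts, "MT/s ->", peak_bandwidth_gb_s(mts), "GB/s")
# 3200 -> 25.6, 4800 -> 38.4, 9200 -> 73.6, matching the table above
```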