TOP500
The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing, and bases its rankings on HPL,[1] a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
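To make the metric concrete, the following is a minimal single-node sketch of an HPL-style measurement in Python (illustrative only, not the actual distributed HPL code): it solves a dense system Ax = b and divides the nominal LU operation count, 2/3·n³ + 2n², by the wall-clock time.

```python
import time
import numpy as np

def linpack_style_gflops(n: int = 4096) -> float:
    """Illustrative single-node LINPACK-style run (not the real distributed HPL)."""
    rng = np.random.default_rng(seed=0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(a, b)  # LU factorization with partial pivoting + solves
    elapsed = time.perf_counter() - start

    # Nominal HPL operation count for solving a dense n x n system.
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    assert np.linalg.norm(a @ x - b) < 1e-6 * n  # sanity check on the solution
    return flops / elapsed / 1e9

print(f"{linpack_style_gflops():.1f} GFLOPS")
```

Real HPL runs instead distribute the factorization across many nodes and tune the problem size Nmax to fill available memory.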
Key Information
The most recent edition of TOP500 was published in June 2025 as the 65th edition, while the next will be published in November 2025 as the 66th. As of June 2025, the United States' El Capitan is the most powerful supercomputer in the TOP500, reaching 1,742 petaflops (1.742 exaflops) on the LINPACK benchmarks.[2] Based on data submitted through June 2025, the United States has the highest number of systems, with 175 supercomputers; China is second with 47, and Germany third with 41. The United States also has by far the highest share of total computing power on the list (48.4%).[3] Owing to the secrecy of its latest programs, China's publicly known supercomputers represent only about 2% of global performance share as of June 2025.[3][4][5]
The TOP500 list is compiled by Jack Dongarra of the University of Tennessee, Knoxville, Erich Strohmaier and Horst Simon of the National Energy Research Scientific Computing Center (NERSC) and Lawrence Berkeley National Laboratory (LBNL), and, until his death in 2014, Hans Meuer of the University of Mannheim, Germany.[citation needed] The TOP500 project also maintains related lists such as the Green500 (ranking systems by energy efficiency) and HPCG (ranking systems on a more memory-bandwidth-bound benchmark).[6]
History
In the early 1990s, a new definition of supercomputer was needed to produce meaningful statistics. After experimenting with metrics based on processor count in 1992, the idea arose at the University of Mannheim to use a detailed listing of installed systems as the basis. In early 1993, Jack Dongarra was persuaded to join the project with his LINPACK benchmarks. A first test version was produced in May 1993, partly based on data available on the Internet, including the following sources:[7][8]
- "List of the World's Most Powerful Computing Sites" maintained by Gunter Ahrendt[9]
- David Kahaner, the director of the Asian Technology Information Program (ATIP),[10] published a report in 1992, titled "Kahaner Report on Supercomputer in Japan",[8] which contained a large amount of data.[citation needed]
The information from those sources was used for the first two lists. Since June 1993, the TOP500 has been produced biannually based on site and vendor submissions only. Since 1993, performance of the No. 1 ranked position has grown steadily in accordance with Moore's law, doubling roughly every 14 months. In June 2018, Summit was fastest, with an Rpeak[11] of 187.6593 PFLOPS. For comparison, this is over 1,432,513 times faster than the Connection Machine CM-5/1024 (1,024 cores), which was the fastest system in November 1993 (twenty-five years prior) with an Rpeak of 131.0 GFLOPS.[12]
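These figures can be verified with a short calculation: the Rpeak ratio implies roughly 20 doublings over the intervening years, which works out to the quoted ~14-month doubling period. A sketch of the arithmetic:

```python
import math

cm5_rpeak = 131.0e9         # Rpeak of the CM-5/1024, November 1993 (FLOPS)
summit_rpeak = 187.6593e15  # Rpeak of Summit, June 2018 (FLOPS)
years = 24.5                # November 1993 to June 2018

ratio = summit_rpeak / cm5_rpeak  # ~1,432,514x
doublings = math.log2(ratio)      # ~20.4 doublings
months = years * 12 / doublings   # ~14.4 months per doubling

print(f"ratio {ratio:,.0f}x, doubling every {months:.1f} months")
```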
Architecture and operating systems
While Intel's x86-64 CPU architecture previously dominated the supercomputer list, AMD now has more top-10 systems using that same architecture, including those in 1st and 2nd place. Microsoft Azure has eight systems in the top 100, only two of them with Intel CPUs, though these include its by-far most performant system, in 4th place (previously 3rd). AMD CPUs are usually coupled with AMD GPU accelerators, while Intel CPUs have historically very often been coupled with Nvidia GPUs; notably, however, the Intel-based system currently in 3rd place (previously 2nd) uses the Intel Data Center GPU Max. Arm-based systems are also notable on the list, in 4th, 7th (Fugaku, previously No. 1) and 8th place, and number at least 23 in total. They come not only from Fujitsu, which first brought an Arm-based system to the top spot; Nvidia has others with its "Superchip" CPUs, not just GPUs.
As of June 2022[update], all supercomputers on the TOP500 are 64-bit, mostly based on CPUs with the x86-64 instruction set architecture: 384 are Intel EM64T-based and 101 are AMD AMD64-based, with the latter including the top eight supercomputers. The 15 other supercomputers are all based on RISC architectures, including six based on ARM64 and seven based on the Power ISA used by IBM Power microprocessors.[citation needed]
In recent years, heterogeneous computing has dominated the TOP500, mostly using Nvidia's graphics processing units (GPUs) or Intel's x86-based Xeon Phi as coprocessors, because of better performance per watt and higher absolute performance. AMD GPUs have since taken the No. 1 spot and displaced Nvidia in part of the top 10. Recent exceptions include the aforementioned Fugaku, Sunway TaihuLight, and K computer. Tianhe-2A is also an interesting exception: US sanctions prevented the use of Xeon Phi, so it was instead upgraded to use the Chinese-designed Matrix-2000[13] accelerators.[citation needed]
Two computers which first appeared on the list in 2018 were based on architectures new to the TOP500. One was a new x86-64 microarchitecture from Chinese manufacturer Sugon, using Hygon Dhyana CPUs (these resulted from a collaboration with AMD, and are a minor variant of Zen-based AMD EPYC) and was ranked 38th, now 117th,[14] and the other was the first ARM-based computer on the list – using Cavium ThunderX2 CPUs.[15] Before the ascendancy of 32-bit x86 and later 64-bit x86-64 in the early 2000s, a variety of RISC processor families made up most TOP500 supercomputers, including SPARC, MIPS, PA-RISC, and Alpha.

All the fastest supercomputers since the Earth Simulator (which gained the top spot in 2002, kept it for two and a half years until November 2004, and was decommissioned in 2009) have used operating systems based on Linux, though other non-Linux systems remained on the list for longer. Since November 2017[update], all the listed supercomputers use an operating system based on the Linux kernel.[16][17]
Since November 2015, no computer on the list has run Windows (though Microsoft reappeared on the list in 2021 with Linux-based Ubuntu systems). The Windows Azure[18] cloud computer dropped off the list of fastest supercomputers in November 2014 (its best rank was 165th in 2012), leaving the Shanghai Supercomputer Center's Magic Cube as the only Windows-based supercomputer on the list until it, too, dropped off; it was ranked 436th in its final appearance, on the list released in June 2015, and its best rank was 11th in 2008.[19] There are also no longer any Mac OS computers on the list. There were at most five such systems at a time, one more than the maximum for the Windows systems that came later, although the total performance share for Windows was higher; the relative performance share of either, measured against the whole list, was similar and never high. In 2004, the System X supercomputer based on Mac OS X (Xserve, with 2,200 PowerPC 970 processors) ranked 7th.[20]
MIPS systems dropped entirely off the list well over a decade ago,[21] though the Gyoukou supercomputer that jumped to 4th place[22] in November 2017 used a MIPS-based design for a small part of its coprocessors. The use of 2,048-core coprocessors (plus 8× 6-core MIPS for each, so that they "no longer require to rely on an external Intel Xeon E5 host processor"[23]) made the supercomputer much more energy efficient than the rest of the top 10 (it was 5th on the Green500, and other such ZettaScaler-2.2-based systems took the first three spots).[24] At 19.86 million cores, it was by far the largest system by core count, with almost double that of the then-best manycore system, the Chinese Sunway TaihuLight.
TOP500
As of June 2025[update], the number one supercomputer is El Capitan, while the leader on the Green500 is JEDI, a Bull Sequana XH3000 system using the Nvidia Grace Hopper GH200 Superchip. In June 2022, the top four systems on the Graph500 used both AMD CPUs and AMD accelerators. After an upgrade for the 56th TOP500 in November 2020,
Fugaku grew its HPL performance to 442 petaflops, a modest increase from the 416 petaflops the system achieved when it debuted in June 2020. More significantly, the ARMv8.2 based Fugaku increased its performance on the new mixed precision HPC-AI benchmark to 2.0 exaflops, besting its 1.4 exaflops mark recorded six months ago. These represent the first benchmark measurements above one exaflop for any precision on any type of hardware.[25]
Summit, previously the fastest supercomputer, is currently the highest-ranked IBM-made supercomputer, with IBM POWER9 CPUs. Sequoia became the last IBM Blue Gene/Q model to drop completely off the list; it had been ranked 10th on the 52nd list (and 1st on the June 2012, 39th list, after an upgrade).
"For the first time, all 500 systems deliver a petaflop or more on the High Performance Linpack (HPL) benchmark, with the entry level to the list now at 1.022 petaflops." However, for a different benchmark, "Summit and Sierra remain the only two systems to exceed a petaflop on the HPCG benchmark, delivering 2.9 petaflops and 1.8 petaflops, respectively. The average HPCG result on the current list is 213.3 teraflops, a marginal increase from 211.2 six months ago."[26]
Microsoft is back on the TOP500 list with six Microsoft Azure instances (which are benchmarked with Ubuntu, so all the supercomputers are still Linux-based), with CPUs and GPUs from the same vendors; the fastest is currently 11th,[27] and an older, slower one previously reached 10th.[28] Amazon has one AWS instance, currently ranked 64th (previously 40th). The number of Arm-based supercomputers is six; currently all of them use the same Fujitsu CPU as the number 2 system, with the next-highest previously ranked 13th, now 25th.[29]
| Rank (previous) | Rmax / Rpeak (PetaFLOPS) | Name | Model | CPU cores | Accelerator (e.g. GPU) cores | Total cores (CPUs + accelerators) | Interconnect | Manufacturer | Site | Year | Operating system |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1,742.00 / 2,746.38 | El Capitan | HPE Cray EX255a | 1,051,392 (43,808 × 24-core Optimized 4th Generation EPYC 24C @1.8 GHz) | 9,988,224 (43,808 × 228 AMD Instinct MI300A) | 11,039,616 | Slingshot-11 | HPE | Lawrence Livermore National Laboratory | 2024 | Linux (TOSS) |
| 2 | 1,353.00 / 2,055.72 | Frontier | HPE Cray EX235a | 614,656 (9,604 × 64-core Optimized 3rd Generation EPYC 64C @2.0 GHz) | 8,451,520 (38,416 × 220 AMD Instinct MI250X) | 9,066,176 | Slingshot-11 | HPE | Oak Ridge National Laboratory | 2022 | Linux (HPE Cray OS) |
| 3 | 1,012.00 / 1,980.01 | Aurora | HPE Cray EX | 1,104,896 (21,248 × 52-core Intel Xeon Max 9470 @2.4 GHz) | 8,159,232 (63,744 × 128 Intel Max 1550) | 9,264,128 | Slingshot-11 | HPE | Argonne National Laboratory | 2023 | Linux (SUSE Linux Enterprise Server 15 SP4) |
| 4 | 793.40 / 930.00 | JUPITER | BullSequana XH3000 | 1,694,592 (23,536 × 72-core Arm Neoverse V2 Nvidia Grace @3 GHz) | 3,106,752 (23,536 × 132 Nvidia Hopper H100) | 4,801,344 | Quad-rail NVIDIA NDR200 InfiniBand | Atos | EuroHPC JU | 2025 | Linux (RHEL) |
| 5 | 561.20 / 846.84 | Eagle | Microsoft NDv5 | 172,800 (3,600 × 48-core Intel Xeon Platinum 8480C @2.0 GHz) | 1,900,800 (14,400 × 132 Nvidia Hopper H100) | 2,073,600 | NVIDIA InfiniBand NDR | Microsoft | Microsoft | 2023 | Linux (Ubuntu 22.04 LTS) |
| 6 | 477.90 / 606.97 | HPC6 | HPE Cray EX235a | 213,120 (3,330 × 64-core Optimized 3rd Generation EPYC 64C @2.0 GHz) | 2,930,400 (13,320 × 220 AMD Instinct MI250X) | 3,143,520 | Slingshot-11 | HPE | Eni S.p.A | 2024 | Linux (RHEL 8.9) |
| 7 | 442.01 / 537.21 | Fugaku | Supercomputer Fugaku | 7,630,848 (158,976 × 48-core Fujitsu A64FX @2.2 GHz) | – | 7,630,848 | Tofu interconnect D | Fujitsu | Riken Center for Computational Science | 2020 | Linux (RHEL) |
| 8 | 434.90 / 574.84 | Alps | HPE Cray EX254n | 748,800 (10,400 × 72-core Arm Neoverse V2 Nvidia Grace @3.1 GHz) | 1,372,800 (10,400 × 132 Nvidia Hopper H100) | 2,121,600 | Slingshot-11 | HPE | CSCS Swiss National Supercomputing Centre | 2024 | Linux (HPE Cray OS) |
| 9 | 379.70 / 531.51 | LUMI | HPE Cray EX235a | 186,624 (2,916 × 64-core Optimized 3rd Generation EPYC 64C @2.0 GHz) | 2,566,080 (11,664 × 220 AMD Instinct MI250X) | 2,752,704 | Slingshot-11 | HPE | EuroHPC JU | 2022 | Linux (HPE Cray OS) |
| 10 | 241.20 / 306.31 | Leonardo | BullSequana XH2000 | 110,592 (3,456 × 32-core Xeon Platinum 8358 @2.6 GHz) | 1,714,176 (15,872 × 108 Nvidia Ampere A100) | 1,824,768 | Quad-rail NVIDIA HDR100 InfiniBand | Atos | EuroHPC JU | 2023 | Linux (RHEL 8)[31] |
Legend:[32]
- Rank – Position within the TOP500 ranking. In the TOP500 list table, the computers are ordered first by their Rmax value. In the case of equal performance (Rmax value) for different computers, the order is by Rpeak. For sites that have the same computer, the order is by memory size and then alphabetically, as sketched in the code after this list.
- Rmax – The highest score measured using the LINPACK benchmarks suite. This is the number that is used to rank the computers. Measured in quadrillions of 64-bit floating point operations per second, i.e., petaFLOPS.[33]
- Rpeak – This is the theoretical peak performance of the system. Computed in petaFLOPS.
- Name – Some supercomputers are unique, at least at their location, and are thus named by their owners.
- Model – The computing platform as it is marketed.
- Processor – The instruction set architecture or processor microarchitecture, alongside GPU and accelerators when available.
- Interconnect – The interconnect between computing nodes. InfiniBand is most used (38%) by performance share, while Gigabit Ethernet is most used (54%) by number of computers.
- Manufacturer – The manufacturer of the platform and hardware.
- Site – The name of the facility operating the supercomputer.
- Country – The country in which the computer is located.
- Year – The year of installation or last major update.
- Operating system – The operating system that the computer uses.
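The tie-breaking rules described under Rank above amount to a composite sort key. Below is a minimal sketch with invented example systems (names and memory sizes are hypothetical, for illustration only):

```python
# Hypothetical entries: (name, rmax_pflops, rpeak_pflops, memory_tb)
systems = [
    ("Beta",  1353.00, 2055.72, 9600),
    ("Alpha", 1353.00, 2055.72, 9600),  # full tie with Beta: alphabetical order decides
    ("Gamma", 1353.00, 2746.38, 4600),  # ties on Rmax, ranks higher on Rpeak
    ("Delta", 1742.00, 2746.38, 5400),  # highest Rmax ranks first
]

# TOP500 ordering: Rmax desc, then Rpeak desc, then memory desc, then name asc.
ranked = sorted(systems, key=lambda s: (-s[1], -s[2], -s[3], s[0]))

for rank, (name, rmax, _, _) in enumerate(ranked, start=1):
    print(f"{rank}. {name} ({rmax} PFLOPS)")
# 1. Delta, 2. Gamma, 3. Alpha, 4. Beta
```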
Top countries
Numbers below represent the number of computers in the TOP500 that are in each of the listed countries or territories. As of June 2025, the United States has the most supercomputers on the list, with 175 machines. The United States also has the highest aggregate computational power, at 6,696 petaflops Rmax, with Japan second (1,229 Pflop/s) and Germany third (1,201 Pflop/s).
| Country or territory | Number of systems |
|---|---|
| | 175 |
| | 137 |
| | 47 |
| | 41 |
| | 39 |
| | 25 |
| | 17 |
| | 15 |
| | 13 |
| | 13 |
| | 9 |
| | 9 |
| | 9 |
| | 8 |
| | 7 |
| | 7 |
| | 6 |
| | 6 |
| | 6 |
| | 5 |
| | 4 |
| | 4 |
| | 4 |
| | 3 |
| | 3 |
| | 3 |
| | 2 |
| | 2 |
| | 2 |
| | 2 |
| | 2 |
| | 2 |
| | 1 |
| | 1 |
| | 1 |
| | 1 |
| | 1 |
| | 1 |
| | 1 |
| | 1 |
| | 1 |
| | 1 |
Other rankings
| Jun 2025[3] | Nov 2024[3] | Jun 2024[3] | Nov 2023[3] | Jun 2023[3] | Nov 2022[3] | Jun 2022[3] | Nov 2021[3] | Jun 2021[3] | Nov 2020[3] | Jun 2020[3] | Nov 2019[3] | Jun 2019[3] | Nov 2018[3] | Jun 2018[3] | Nov 2017[3] | Jun 2017[3] | Nov 2016[3] | Jun 2016[3] | Nov 2015[3] | Jun 2015[3] | Nov 2014[3] | Jun 2014[3] | Nov 2013[3] | Jun 2013[3] | Nov 2012[3] | Jun 2012[3] | Nov 2011[3] | Jun 2011[3] | Nov 2010[3] | Jun 2010[3] | Nov 2009[3] | Jun 2009[3] | Nov 2008[3] | Jun 2008[3] | Nov 2007[3] | Jun 2007[3] | Nov 2006[3] | Country / Region |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 175 | 173 | 171 | 161 | 150 | 127 | 128 | 149 | 122 | 113 | 114 | 117 | 116 | 109 | 124 | 143 | 168 | 171 | 165 | 199 | 233 | 231 | 232 | 264 | 252 | 251 | 252 | 263 | 255 | 274 | 282 | 277 | 291 | 290 | 257 | 283 | 281 | 309 | |
| 128 | 129 | 123 | 112 | 103 | 101 | 92 | 83 | 93 | 79 | 79 | 87 | 92 | 91 | 93 | 86 | 99 | 95 | 93 | 94 | 122 | 110 | 103 | 89 | 97 | 89 | 96 | 95 | 109 | 108 | 126 | 137 | 134 | 140 | 169 | 133 | 115 | 82 | |
| 47 | 63 | 80 | 104 | 134 | 162 | 173 | 173 | 188 | 214 | 226 | 228 | 220 | 227 | 206 | 202 | 160 | 171 | 168 | 109 | 37 | 61 | 76 | 63 | 66 | 72 | 68 | 74 | 61 | 41 | 24 | 21 | 21 | 15 | 12 | 10 | 13 | 18 | |
| 41 | 40 | 40 | 36 | 36 | 34 | 31 | 26 | 23 | 17 | 16 | 16 | 13 | 17 | 21 | 21 | 28 | 31 | 26 | 33 | 37 | 26 | 22 | 20 | 19 | 19 | 20 | 20 | 30 | 26 | 24 | 27 | 29 | 25 | 46 | 31 | 24 | 18 | |
| 39 | 34 | 29 | 32 | 33 | 31 | 33 | 32 | 34 | 34 | 29 | 29 | 28 | 31 | 36 | 35 | 33 | 27 | 29 | 37 | 40 | 32 | 30 | 28 | 30 | 32 | 35 | 30 | 26 | 26 | 18 | 16 | 15 | 17 | 22 | 20 | 23 | 30 | |
| 25 | 24 | 24 | 23 | 24 | 24 | 22 | 19 | 16 | 18 | 19 | 18 | 20 | 18 | 18 | 18 | 18 | 20 | 18 | 18 | 27 | 30 | 27 | 22 | 23 | 21 | 22 | 23 | 25 | 26 | 27 | 26 | 23 | 26 | 34 | 17 | 13 | 12 | |
| 17 | 14 | 11 | 12 | 7 | 7 | 6 | 6 | 6 | 6 | 7 | 5 | 5 | 6 | 5 | 6 | 8 | 6 | 5 | 4 | 4 | 3 | 5 | 5 | 6 | 7 | 8 | 4 | 5 | 6 | 7 | 6 | 6 | 11 | 6 | 6 | 5 | 8 | |
| 15 | 13 | 13 | 12 | 8 | 8 | 6 | 7 | 5 | 3 | 3 | 3 | 5 | 6 | 7 | 5 | 8 | 4 | 7 | 10 | 9 | 9 | 8 | 5 | 4 | 4 | 3 | 3 | 4 | 3 | 1 | 2 | 0 | 1 | 1 | 1 | 5 | 6 | |
| 13 | 14 | 16 | 15 | 14 | 15 | 12 | 11 | 11 | 12 | 10 | 11 | 18 | 20 | 22 | 15 | 17 | 13 | 11 | 18 | 29 | 30 | 30 | 23 | 29 | 24 | 25 | 27 | 27 | 25 | 38 | 45 | 44 | 46 | 53 | 48 | 42 | 30 | |
| 13 | 9 | 10 | 10 | 10 | 10 | 14 | 11 | 11 | 12 | 12 | 9 | 8 | 9 | 6 | 5 | 6 | 1 | 1 | 6 | 6 | 6 | 9 | 10 | 9 | 11 | 10 | 9 | 8 | 6 | 7 | 9 | 8 | 2 | 2 | 5 | 10 | 8 | |
| 9 | 9 | 8 | 9 | 9 | 8 | 6 | 5 | 6 | 4 | 4 | 3 | 3 | 1 | 1 | 0 | 2 | 3 | 4 | 6 | 6 | 4 | 4 | 3 | 3 | 2 | 3 | 2 | 2 | 2 | 1 | 1 | 0 | 2 | 1 | 1 | 2 | 4 | |
| 9 | 8 | 7 | 6 | 6 | 6 | 5 | 4 | 3 | 2 | 2 | 2 | 2 | 4 | 3 | 5 | 5 | 4 | 5 | 3 | 5 | 5 | 3 | 5 | 7 | 6 | 4 | 3 | 5 | 6 | 8 | 7 | 10 | 8 | 9 | 7 | 10 | 1 | |
| 9 | 6 | 5 | 5 | 4 | 3 | 2 | 1 | 3 | 3 | 3 | 2 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 0 | 1 | 3 | 2 | 2 | 2 | 2 | 2 | 3 | 2 | 3 | |
| 8 | 7 | 6 | 5 | 2 | 2 | 2 | 2 | 2 | 3 | 2 | 2 | 2 | 2 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 3 | 3 | 2 | 2 | 0 | 0 | 0 | 1 | 2 | 3 | 11 | 10 | 2 | |
| 7 | 10 | 9 | 10 | 8 | 8 | 6 | 11 | 16 | 15 | 15 | 15 | 13 | 6 | 9 | 6 | 4 | 3 | 3 | 2 | 3 | 5 | 5 | 3 | 2 | 0 | 0 | 0 | 1 | 2 | 4 | 3 | 3 | 3 | 5 | 6 | 8 | 2 | |
| 7 | 8 | 8 | 4 | 3 | 3 | 5 | 4 | 4 | 2 | 1 | 1 | 1 | 4 | 4 | 5 | 6 | 7 | 6 | 6 | 7 | 2 | 2 | 2 | 3 | 4 | 5 | 6 | 5 | 6 | 5 | 3 | 4 | 6 | 3 | 1 | 0 | 0 | |
| 6 | 7 | 8 | 7 | 6 | 6 | 6 | 6 | 6 | 5 | 3 | 3 | 3 | 3 | 4 | 4 | 6 | 5 | 5 | 6 | 7 | 4 | 4 | 3 | 4 | 3 | 3 | 3 | 4 | 6 | 4 | 4 | 2 | 0 | 0 | 0 | 2 | 4 | |
| 6 | 6 | 4 | 4 | 4 | 3 | 3 | 3 | 3 | 3 | 2 | 2 | 3 | 4 | 5 | 4 | 4 | 5 | 9 | 11 | 11 | 9 | 9 | 12 | 11 | 8 | 5 | 2 | 2 | 4 | 5 | 3 | 6 | 8 | 6 | 9 | 8 | 10 | |
| 6 | 6 | 7 | 7 | 7 | 7 | 7 | 7 | 3 | 2 | 2 | 3 | 2 | 3 | 4 | 3 | 3 | 5 | 7 | 7 | 8 | 9 | 5 | 5 | 8 | 8 | 5 | 5 | 12 | 11 | 11 | 8 | 5 | 8 | 9 | 7 | 5 | 2 | |
| 5 | 4 | 3 | 3 | 3 | 3 | 3 | 1 | 4 | 4 | 4 | 4 | 5 | 3 | 2 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 0 | 0 | 1 | 2 | 2 | |
| 4 | 5 | 5 | 3 | 4 | 4 | 4 | 3 | 3 | 3 | 2 | 2 | 4 | 2 | 3 | 3 | 3 | 4 | 3 | 6 | 6 | 7 | 6 | 5 | 4 | 4 | 1 | 3 | 4 | 4 | 5 | 5 | 4 | 4 | 6 | 7 | 5 | 5 | |
| 4 | 3 | 2 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | |
| 4 | 4 | 5 | 6 | 5 | 5 | 5 | 3 | 2 | 2 | 2 | 3 | 5 | 5 | 5 | 4 | 4 | 3 | 5 | 4 | 6 | 9 | 6 | 5 | 5 | 7 | 6 | 4 | 6 | 4 | 1 | 1 | 1 | 1 | 1 | 1 | 4 | 4 | |
| 3 | 3 | 3 | 3 | 3 | 3 | 4 | 3 | 2 | 2 | 2 | 1 | 2 | 1 | 1 | 2 | 3 | 2 | 5 | 2 | 2 | 3 | 2 | 2 | 2 | 3 | 1 | 1 | 2 | 1 | 3 | 2 | 1 | 1 | 1 | 5 | 3 | 1 | |
| 3 | 3 | 3 | 3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 3 | 2 | 4 | 3 | 2 | 3 | 3 | 6 | 5 | 6 | 7 | 9 | 6 | 7 | |
| 3 | 3 | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 2 | 3 | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 2 | 3 | 3 | 5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 1 | 2 | 8 | 5 | 0 | 0 | 0 | 0 | 0 | |
| 2 | 2 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | |
| 2 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 | 1 | 1 | |
| 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | |
| 2 | 4 | 4 | 4 | 5 | 5 | 3 | 1 | 14 | 14 | 14 | 14 | 13 | 12 | 7 | 4 | 2 | 1 | 3 | 0 | 0 | 0 | 1 | 2 | 0 | 0 | 3 | 3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | |
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 3 | 0 | 0 | 3 | 0 | 1 | 0 | 1 | |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | |
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 2 | 2 | 2 | 2 | 1 | 3 | 3 | 2 | 0 | 2 | 2 | 1 | 1 | 0 | 0 | 0 | 2 | |
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | |
| 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 2 | 0 | 1 | 1 | 2 | 1 | 1 | 1 | 2 | 1 | 2 | 2 | 0 | 1 | 1 | 2 | 2 | 1 | 4 | 1 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 2 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 2 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 2 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 3 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 7 | 8 | 5 | 4 | 6 | 1 | 1 | 1 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 | 1 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 2 | 3 | 4 | 3 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
Fastest supercomputer in TOP500 by country
(As of November 2023[34])
| Country/Territory | Fastest supercomputer of country/territory (name) | Rank in TOP500 | Rmax / Rpeak (TFlop/s) | Site |
|---|---|---|---|---|
| | El Capitan | 1 | 1,742,000.0 / 2,746,380.0 | Lawrence Livermore National Laboratory |
| | Fugaku | 4 | 442,010.0 / 537,210.0 | RIKEN |
| | LUMI | 5 | 379,700.0 / 531,510.0 | Center for Scientific Computing |
| | Leonardo | 6 | 238,700.0 / 304,470.0 | CINECA |
| | MareNostrum | 8 | 138,200.0 / 265,570.0 | Barcelona Supercomputing Center |
| | Sunway TaihuLight | 11 | 93,010.0 / 125,440.0 | National Supercomputing Center, Wuxi |
| | ISEG | 16 | 46,540.0 / 86,790.0 | Nebius |
| | Adastra | 17 | 46,100.0 / 61,610.0 | GENCI-CINES |
| | JUWELS (booster module) | 18 | 44,120.0 / 70,980.0 | Forschungszentrum Jülich |
| | Shaheen III | 20 | 35,660.0 / 39,610.0 | King Abdullah University of Science and Technology |
| | Sejong | 22 | 32,970.0 / 40,770.0 | Naver Corporation |
| | Setonix | 25 | 27,160.0 / 35,000.0 | Pawsey Supercomputing Centre |
| | DeepL Mercury | 34 | 21,850.0 / 33,850.0 | DeepL SE |
| | Chervonenkis | 36 | 21,530.0 / 29,420.0 | Yandex |
| | Piz Daint | 37 | 21,230.0 / 27,150.0 | Swiss National Supercomputing Centre |
| | ARCHER2 | 39 | 19,540.0 / 25,800.0 | EPSRC/University of Edinburgh |
| | Pégaso | 45 | 19,070.0 / 42,000.0 | Petróleo Brasileiro S.A |
| | PRIMEHPC FX1000 | 69 | 11,160.0 / 12,980.0 | Central Weather Administration |
| | MeluXina - Accelerator Module | 71 | 10,520.0 / 15,290.0 | LuxProvide |
| | Airawat | 90 | 8,500.0 / 13,170.0 | Centre for Development of Advanced Computing |
| | Lanta | 94 | 8,150.0 / 13,770.0 | NECTEC |
| | Underhill | 102 | 7,760.0 / 10,920.0 | Shared Services Canada |
| | Artemis | 107 | 7,260.0 / 9,490.0 | Group 42 |
| | Karolina, GPU partition | 113 | 6,750.0 / 9,080.0 | IT4Innovations National Supercomputing Center, VSB-Technical University of Ostrava |
| | Athena | 155 | 5,050.0 / 7,710.0 | AGH University of Science and Technology |
| | Betzy | 161 | 4,720.0 / 6,190.0 | UNINETT Sigma2 AS |
| | Discoverer | 166 | 4,520.0 / 5,940.0 | Consortium Petascale Supercomputer Bulgaria |
| | Clementina XXI | 196 | 3,880.0 / 5,990.0 | Servicio Meteorológico Nacional |
| | VEGA HPC CPU | 198 | 3,820.0 / 5,370.0 | IZUM |
| | AIC1 | 218 | 3,550.0 / 6,970.0 | Software Company MIR |
| | Aspire 2A | 233 | 3,330.0 / 6,480.0 | National Supercomputing Centre Singapore |
| | Toubkal | 246 | 3,160.0 / 5,010.0 | Mohammed VI Polytechnic University - African Supercomputing Centre |
| | Komondor | 266 | 3,100.0 / 4,510.0 | Governmental Information Technology Development Agency (KIFÜ) |
| | VSC-4 | 319 | 2,730.0 / 3,760.0 | Vienna Scientific Cluster |
| | Lucia | 322 | 2,720.0 / 5,310.0 | Cenaero |
Systems ranked No. 1
- HPE Cray El Capitan (Lawrence Livermore National Laboratory, United States, November 2024 – present)[35]
- HPE Cray Frontier (Oak Ridge National Laboratory, United States, June 2022 – November 2024)[36]
- Supercomputer Fugaku (Riken Center for Computational Science, Japan, June 2020 – June 2022)[37]
- IBM Summit (Oak Ridge National Laboratory, United States, June 2018 – June 2020)[38]
- NRCPC Sunway TaihuLight (National Supercomputing Center in Wuxi, China, June 2016 – June 2018)
- NUDT Tianhe-2A (National Supercomputing Center of Guangzhou, China, June 2013 – June 2016)
- Cray Titan (Oak Ridge National Laboratory, United States, November 2012 – June 2013)[39]
- IBM Sequoia Blue Gene/Q (Lawrence Livermore National Laboratory, United States, June 2012 – November 2012)[40]
- Fujitsu K computer (Riken Advanced Institute for Computational Science, Japan, June 2011 – June 2012)
- NUDT Tianhe-1A (National Supercomputing Center of Tianjin, China, November 2010 – June 2011)
- Cray Jaguar (Oak Ridge National Laboratory, United States, November 2009 – November 2010)
- IBM Roadrunner (Los Alamos National Laboratory, United States, June 2008 – November 2009)
- IBM Blue Gene/L (Lawrence Livermore National Laboratory, United States, November 2004 – June 2008)[41]
- NEC Earth Simulator (Earth Simulator Center, Japan, June 2002 – November 2004)
- IBM ASCI White (Lawrence Livermore National Laboratory, United States, November 2000 – June 2002)
- Intel ASCI Red (Sandia National Laboratories, United States, June 1997 – November 2000)[42]
- Hitachi CP-PACS (University of Tsukuba, Japan, November 1996 – June 1997)
- Hitachi SR2201 (University of Tokyo, Japan, June 1996 – November 1996)
- Fujitsu Numerical Wind Tunnel (National Aerospace Laboratory of Japan, Japan, November 1994 – June 1996)
- Intel Paragon XP/S140 (Sandia National Laboratories, United States, June 1994 – November 1994)
- Fujitsu Numerical Wind Tunnel (National Aerospace Laboratory of Japan, Japan, November 1993 – June 1994)
- TMC CM-5 (Los Alamos National Laboratory, United States, June 1993 – November 1993)
Additional statistics
By number of systems as of June 2025[update]:[43]
| Accelerator | Systems |
|---|---|
| NVIDIA AMPERE A100 (Launched: 2020) | |
| NVIDIA HOPPER H100 SXM5 80 GB (Launched: 2022) | |
| NVIDIA HOPPER H100 (Launched: 2022) | |
| NVIDIA AMPERE A100 SXM4 40 GB (Launched: 2020) | |
| NVIDIA TESLA V100 (Launched: 2017) |
| Manufacturer | Systems |
|---|---|
| Lenovo | |
| Hewlett Packard Enterprise | |
| EVIDEN | |
| DELL | |
| Nvidia |
| Operating System | Systems |
|---|---|
| Linux | |
| CentOS | |
| HPE Cray OS | |
| Red Hat Enterprise Linux | |
| Cray Linux Environment |
Note: All operating systems of the TOP500 systems are Linux-family based; "Linux" above denotes generic, otherwise unspecified Linux.
Sunway TaihuLight is the system with the most CPU cores (10,649,600), while Tianhe-2 has the most GPU/accelerator cores (4,554,752). Aurora is the system with the greatest power consumption, drawing 38,698 kilowatts.
New developments in supercomputing
In November 2014, it was announced that the United States was developing two new supercomputers to exceed China's Tianhe-2 and take its place as the world's fastest supercomputer. The two computers, Sierra and Summit, would each exceed Tianhe-2's 55 peak petaflops; Summit, the more powerful of the two, would deliver 150–300 peak petaflops.[44] On 10 April 2015, US government agencies banned the sale of chips from Nvidia to supercomputing centers in China as "acting contrary to the national security ... interests of the United States",[45] and banned Intel Corporation from providing Xeon chips to China due to their use, according to the US, in researching nuclear weapons – research to which US export control law bans US companies from contributing: "The Department of Commerce refused, saying it was concerned about nuclear research being done with the machine."[46]
On 29 July 2015, President Obama signed an executive order creating a National Strategic Computing Initiative calling for the accelerated development of an exascale (1000 petaflop) system and funding research into post-semiconductor computing.[47]
In June 2016, the Japanese firm Fujitsu announced at the International Supercomputing Conference that its future exascale supercomputer would feature processors of its own design implementing the ARMv8 architecture. The Flagship2020 program, by Fujitsu for RIKEN, planned to break the exaflops barrier by 2020 through the Fugaku supercomputer ("it looks like China and France have a chance to do so and that the United States is content – for the moment at least – to wait until 2023 to break through the exaflops barrier."[48]). These processors also implement extensions to the ARMv8 architecture, equivalent to HPC-ACE2, that Fujitsu is developing with Arm.[48]
In June 2016, Sunway TaihuLight became the No. 1 system with 93 petaflop/s (PFLOP/s) on the Linpack benchmark.[49]
In November 2016, Piz Daint was upgraded, moving it from 8th to 3rd and leaving the US with no systems in the top 3 for the second time.[50][51]
Inspur, based in Jinan, China, is one of the largest HPC system manufacturers. As of May 2017[update], Inspur had become the third manufacturer to have manufactured a 64-way system – a record previously held by IBM and HP. The company has registered over $10B in revenue and has provided systems to countries such as Sudan, Zimbabwe, Saudi Arabia and Venezuela. Inspur was also a major technology partner behind both the Tianhe-2 and Taihu supercomputers, which occupied the top two positions of the TOP500 list until November 2017. In May 2017, Inspur and Supermicro released several platforms aimed at GPU-based HPC, such as SR-AI and AGX-2.[52]
In June 2018, Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, US, took the No. 1 spot with a performance of 122.3 petaflop/s (PFLOP/s), and Sierra, a very similar system at the Lawrence Livermore National Laboratory in California, took No. 3. These systems also took the first two spots on the HPCG benchmark. Thanks to Summit and Sierra, the US took back the lead as consumer of HPC performance, with 38.2% of the overall installed performance, while China was second with 29.1%. For the first time ever, the leading HPC manufacturer was not a US company: Lenovo took the lead with 23.8% of systems installed, followed by HPE with 15.8%, Inspur with 13.6%, Cray with 11.2%, and Sugon with 11%.[53]
On 18 March 2019, the United States Department of Energy and Intel announced the first exaFLOP supercomputer would be operational at Argonne National Laboratory by the end of 2021. The computer, named Aurora, was delivered to Argonne by Intel and Cray.[54][55]
On 7 May 2019, the U.S. Department of Energy announced a contract with Cray to build the "Frontier" supercomputer at Oak Ridge National Laboratory. Frontier, originally anticipated to be operational in 2021, was projected to be the world's most powerful computer, with a peak performance of greater than 1.5 exaflops.[56]
Since June 2019, all TOP500 systems deliver a petaflop or more on the High Performance Linpack (HPL) benchmark, with the entry level to the list now at 1.022 petaflops.[57]
In May 2022, the Frontier supercomputer broke the exascale barrier, completing more than a quintillion 64-bit floating point arithmetic calculations per second. Frontier clocked in at approximately 1.1 exaflops, beating out the previous record-holder, Fugaku.[58][59] In June 2024, Aurora was the second computer on the TOP500 to post an exascale Rmax value, at 1.012 exaflops.[60]
Since then, Frontier has been dethroned by El Capitan, hosted at Lawrence Livermore National Laboratory, with an HPL score of 1.742 exaflops.[61]
Large machines not on the list
Some major systems are not on the list. A prominent example is NCSA's Blue Waters, whose operators publicly announced the decision not to participate in the list[62] because they do not feel it accurately indicates the ability of any system to do useful work.[63]
Other organizations decide not to list systems for security and/or commercial competitiveness reasons. One example is the OceanLight supercomputer at the National Supercomputing Center in Qingdao, completed in March 2021, which was submitted for, and won, the Gordon Bell Prize. The computer is an exaflop machine, but it was not submitted to the TOP500 list; the first exaflop machine submitted to the list was Frontier. Analysts suspected that the NSCQ declined to submit what would otherwise have been the world's first exascale supercomputer in order to avoid inflaming political sentiments and fears within the United States, in the context of the United States–China trade war.[64] Similarly, government agencies such as the National Security Agency formerly submitted their devices to the TOP500, only to stop after 1998.[65]
Additional purpose-built machines that cannot run, or do not run, the benchmark are not included, such as the RIKEN MDGRAPE-3 and MDGRAPE-4.
A Google Tensor Processing Unit v4 pod is capable of 1.1 exaflops of peak performance,[66] while the TPU v5p claims over 4 exaflops in Bfloat16 floating-point format;[67] however, these units are highly specialized to run machine learning workloads, whereas the TOP500 measures a specific benchmark algorithm using a specific numeric precision.
Tesla Dojo's primary unnamed cluster using 5,760 Nvidia A100 graphics processing units (GPUs) was touted by Andrej Karpathy in 2021 at the fourth International Joint Conference on Computer Vision and Pattern Recognition (CCVPR 2021) to be "roughly the number five supercomputer in the world"[68] at approximately 81.6 petaflops, based on scaling the performance of the Nvidia Selene supercomputer, which uses similar components.[69]
In March 2024, Meta AI disclosed the operation of two data centers with 24,576 H100 GPUs each,[70] almost twice as many as in the Microsoft Azure Eagle system (No. 3 as of September 2024), which could have placed them 3rd and 4th in the TOP500, but neither has been benchmarked. During the company's Q3 2024 earnings call in October, Mark Zuckerberg disclosed the use of a cluster with over 100,000 H100s.[71]
The xAI Memphis Supercluster (also known as "Colossus") allegedly features 100,000 of the same H100 GPUs, which could have put it in first place, but it is reportedly not in full operation due to power shortages.[72]
Since the onset of the US–China trade war, China has largely shrouded its newly commissioned supercomputers and data centers in secrecy, opting out of reporting to the TOP500 list.[4] This is partly driven by fears of being targeted by US sanctions placed on Chinese domestic suppliers.[73][5]
Computers and architectures that have dropped off the list
IBM Roadrunner[74] is no longer on the list (nor is any other system using the Cell coprocessor or PowerXCell).
Although Itanium-based systems reached second rank in 2004,[75][76] none now remain.
Similarly, (non-SIMD-style) vector processors (NEC-based, such as the Earth Simulator, which was fastest in 2002[77]) have also fallen off the list, as have the Sun Starfire computers that occupied many spots in the past.
The last non-Linux computers on the list – two AIX systems running on POWER7 (ranked 494th and 495th in July 2017,[78] originally 86th and 85th) – dropped off the list in November 2017.
Notes
- The first edition of TOP500 to feature only 64-bit supercomputers was the 59th edition of TOP500, which was published in June 2022.
See also
References
- ^ A. Petitet; R. C. Whaley; J. Dongarra; A. Cleary (24 February 2016). "HPL – A Portable Implementation of the High-Performance Linpack Benchmark for Distributed-Memory Computers". ICL – UTK Computer Science Department. Archived from the original on 2 November 2000. Retrieved 22 September 2016.
- ^ "November 2024 | TOP500". www.top500.org. Retrieved 10 June 2025.
- ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af ag ah ai aj ak al am an ao "List Statistics". top500. top500.org. Retrieved 11 July 2025.
- ^ a b 王礼耀 (14 June 2022). "超级计算机最新"TOP500"榜单美国第一,中国超算掉队了吗?". 上观新闻 (in Simplified Chinese). Retrieved 1 July 2025.
- ^ a b Shah, Agam. "Top500: China Opts Out of Global Supercomputer Race". THENEWSTACK. Retrieved 11 July 2025.
- ^ "Lists". top500. top500.org. Retrieved 11 July 2025.
- ^ "An Interview with Jack Dongarra by Alan Beck, editor in chief HPCwire". 27 September 1996. Archived from the original on 28 September 2007.
- ^ a b "Statistics on Manufacturers and Continents". Archived from the original on 18 September 2007. Retrieved 10 March 2007.
- ^ "The TOP25 Supercomputer Sites". Archived from the original on 23 January 2016. Retrieved 4 January 2015.
- ^ "Where does Asia stand? This rising supercomputing power is reaching for real-world HPC leadership". Archived from the original on 29 November 2014. Retrieved 4 January 2015.
- ^ Rpeak – This is the theoretical peak performance of the system. Measured in PetaFLOPS.
- ^ "Sublist Generator". Archived from the original on 27 August 2012. Retrieved 26 June 2018.
- ^ "Matrix-2000 - NUDT". WikiChip. Archived from the original on 19 July 2019. Retrieved 6 October 2019.
- ^ "Advanced Computing System(PreE) - Sugon TC8600, Hygon Dhyana 32C 2GHz, Deep Computing Processor, 200Gb 6D-Torus | TOP500 Supercomputer Sites". www.top500.org. Archived from the original on 6 December 2018. Retrieved 6 December 2018.
- ^ "Astra - Apollo 70, Cavium ThunderX2 CN9975-2000 28C 2GHz, 4xEDR Infiniband | TOP500 Supercomputer Sites". www.top500.org. Archived from the original on 6 December 2018. Retrieved 6 December 2018.
- ^ "Top500 – List Statistics". www.top500.org. November 2017. Archived from the original on 19 November 2012. Retrieved 30 November 2017.
- ^ "Linux Runs All of the World's Fastest Supercomputers". Linux Foundation. The Linux Foundation. 20 November 2017. Archived from the original on 26 November 2017. Retrieved 26 March 2020.
- ^ "Microsoft Windows Azure". Archived from the original on 21 February 2019. Retrieved 6 December 2018.
- ^ "Magic Cube – Dawning 5000A, QC Opteron 1.9 GHz, Infiniband, Windows HPC 2008". Archived from the original on 21 February 2019. Retrieved 6 December 2018.
- ^ "System X - 1100 Dual 2.3 GHz Apple XServe/Mellanox Infiniband 4X/Cisco GigE | TOP500". www.top500.org. Archived from the original on 28 June 2021. Retrieved 28 June 2021.
- ^ "Origin 2000 195/250 MHz". Top500. Archived from the original on 16 November 2017. Retrieved 15 November 2017.
- ^ "Gyoukou - ZettaScaler-2.2 HPC system, Xeon D-1571 16C 1.3 GHz, Infiniband EDR, PEZY-SC2 700 MHz". Top 500. Archived from the original on 28 September 2021. Retrieved 1 September 2021.
- ^ "PEZY-SC2 - PEZY". Archived from the original on 14 November 2017. Retrieved 17 November 2017.
- ^ "The 2,048-core PEZY-SC2 sets a Green500 record". WikiChip Fuse. 1 November 2017. Archived from the original on 16 November 2017. Retrieved 15 November 2017.
Powering the ZettaScaler-2.2 is the PEZY-SC2. The SC2 is a second-generation chip featuring twice as many cores – i.e., 2,048 cores with 8-way SMT for a total of 16,384 threads. […] The first-generation SC incorporated two ARM926 cores and while that was sufficient for basic management and debugging its processing power was inadequate for much more. The SC2 uses a hexa-core P-Class P6600 MIPS processor which share the same memory address as the PEZY cores, improving performance and reducing data transfer overhead. With the powerful MIPS management cores, it is now also possible to entirely eliminate the Xeon host processor. However, PEZY has not done so yet.
- ^ "November 2020 | TOP500". www.top500.org. Archived from the original on 11 May 2021. Retrieved 21 November 2020.
- ^ "TOP500 Becomes a Petaflop Club for Supercomputers | TOP500 Supercomputer Sites". www.top500.org. Archived from the original on 24 June 2019. Retrieved 29 June 2019.
- ^ "Explorer-WUS3 - ND96_amsr_MI200_v4, AMD EPYC 7V12 48C 2.45GHz, AMD Instinct MI250X, Infiniband HDR | TOP500". www.top500.org. Retrieved 23 July 2023.
- ^ "Voyager-EUS2". www.top500.org. Archived from the original on 5 April 2022. Retrieved 28 June 2022.
- ^ "Wisteria/BDEC-01 (Odyssey) - PRIMEHPC FX1000, A64FX 48C 2.2GHz, Tofu interconnect D | TOP500". www.top500.org. Retrieved 23 July 2023.
- ^ "June 2025 | TOP500". www.top500.org. Retrieved 10 June 2025.
- ^ Turisini, Matteo; Cestari, Mirko; Amati, Giorgio (15 January 2024). "LEONARDO: A Pan-European Pre-Exascale Supercomputer for HPC and AI applications". Journal of Large-scale Research Facilities JLSRF. 9 (1). doi:10.17815/jlsrf-8-186. ISSN 2364-091X.
- ^ "TOP500 DESCRIPTION". www.top500.org. Archived from the original on 23 June 2020. Retrieved 23 June 2020.
- ^ "FREQUENTLY ASKED QUESTIONS". www.top500.org. Archived from the original on 3 April 2021. Retrieved 23 June 2020.
- ^ "TOP500 Supercomputer Sites". 13 November 2023. Archived from the original on 13 November 2023. Retrieved 28 January 2024.
- ^ "November 2024 | TOP500". top500.org. Retrieved 19 November 2024.
- ^ "TOP500 List - November 2023". November 2023. Archived from the original on 1 March 2024. Retrieved 24 April 2024.
- ^ "June 2020". June 2020. Archived from the original on 1 September 2022. Retrieved 25 June 2020.
- ^ Summit, an IBM-built supercomputer now running at the Department of Energy's (DOE) Oak Ridge National Laboratory (ORNL), captured the number one spot June 2018 with a performance of 122.3 petaflops on High Performance Linpack (HPL), the benchmark used to rank the TOP500 list. Summit has 4,356 nodes, each one equipped with two 22-core POWER9 CPUs, and six NVIDIA Tesla V100 GPUs. The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network."TOP500 List - June 2018". The TOP500 List of the 500 most powerful commercially available computer systems known. The TOP500 project. 30 June 2018. Archived from the original on 25 June 2018. Retrieved 10 July 2019.
- ^ Advanced reports that Oak Ridge National Laboratory was fielding the world's fastest supercomputer were proven correct when the 40th edition of the twice-yearly TOP500 List of the world's top supercomputers was released today (Nov. 12, 2012). Titan, a Cray XK7 system installed at Oak Ridge, achieved 17.59 Petaflop/s (quadrillions of calculations per second) on the Linpack benchmark. Titan has 560,640 processors, including 261,632 NVIDIA K20x accelerator cores."TOP500 List - November 2012". The TOP500 List of the 500 most powerful commercially available computer systems known. The TOP500 project. 12 November 2012. Archived from the original on 2 July 2018. Retrieved 10 July 2019.
- ^ For the first time since November 2009, a United States supercomputer sits atop the TOP500 list of the world's top supercomputers. Named Sequoia, the IBM BlueGene/Q system installed at the Department of Energy's Lawrence Livermore National Laboratory achieved an impressive 16.32 petaflop/s on the Linpack benchmark using 1,572,864 cores."TOP500 List - June 2012". The TOP500 List of the 500 most powerful commercially available computer systems known. The TOP500 project. 30 June 2012. Archived from the original on 2 July 2018. Retrieved 10 July 2019.
- ^ The DOE/IBM BlueGene/L beta-System was able to claim the No. 1 position on the new TOP500 list with its record Linpack benchmark performance of 70.72 Tflop/s ("teraflops" or trillions of calculations per second). This system, once completed, will be moved to the DOE's Lawrence Livermore National Laboratory in Livermore, Calif."TOP500 List - November 2004". The TOP500 List of the 500 most powerful commercially available computer systems known. The TOP500 project. 30 November 2004. Archived from the original on 2 July 2019. Retrieved 10 July 2019.
- ^ ASCI Red a Sandia National Laboratories machine with 7264 Intel cores nabbed the #1 position in June of 1997."TOP500 List -June 1997". The TOP500 List of the 500 most powerful commercially available computer systems known. The TOP500 project. 30 June 1997. Archived from the original on 2 July 2019. Retrieved 10 July 2019.
- ^ "List Statistics". Archived from the original on 18 July 2018. Retrieved 10 June 2025.
- ^ Balthasar, Felix. "US Government Funds $425 million to build two new Supercomputers". News Maine. Archived from the original on 19 November 2014. Retrieved 16 November 2014.
- ^ "Nuclear worries stop Intel from selling chips to Chinese supercomputers". CNN. 10 April 2015. Archived from the original on 8 December 2018. Retrieved 17 August 2016.
- ^ "US nuclear fears block Intel China supercomputer update". BBC News. 10 April 2015. Archived from the original on 16 June 2018. Retrieved 21 June 2018.
- ^ "Executive Order -- Creating a National Strategic Computing Initiative" (Executive order). The White House – Office of the Press Secretary. 29 July 2015. Archived from the original on 30 November 2018.
- ^ a b Morgan, Timothy Prickett (23 June 2016). "Inside Japan's Future Exascale ARM Supecomputer". The Next Platform. Archived from the original on 27 June 2016. Retrieved 13 July 2016.
- ^ "Highlights - June 2016 | TOP500 Supercomputer Sites". www.top500.org. Archived from the original on 5 April 2020. Retrieved 28 June 2019.
- ^ "Highlights - June 2017 | TOP500 Supercomputer Sites". www.top500.org. Archived from the original on 28 June 2019. Retrieved 28 June 2019.
- ^ "TOP500 List Refreshed, US Edged Out of Third Place | TOP500 Supercomputer Sites". www.top500.org. Archived from the original on 25 October 2020. Retrieved 28 June 2019.
- ^ "Supermicro, Inspur, Boston Limited Unveil High-Density GPU Servers | TOP500 Supercomputer Sites". www.top500.org. Archived from the original on 3 July 2017. Retrieved 13 June 2017.
- ^ "Highlights - June 2018 | TOP500 Supercomputer Sites". www.top500.org. Archived from the original on 5 April 2020. Retrieved 28 June 2019.
- ^ "U.S. Department of Energy and Intel to deliver first exascale supercomputer1". Argonne National Laboratory. 18 March 2019. Archived from the original on 8 July 2019. Retrieved 27 March 2019.
- ^ Clark, Don (18 March 2019). "Racing Against China, U.S. Reveals Details of $500 Million Supercomputer". The New York Times. Archived from the original on 28 June 2019. Retrieved 28 June 2019.
- ^ "U.S. Department of Energy and Cray to Deliver Record-Setting Frontier Supercomputer at ORNL". Oak Ridge National Laboratory. 7 May 2019. Archived from the original on 8 May 2019. Retrieved 8 May 2019.
- ^ "TOP500 Becomes a Petaflop Club for Supercomputers | TOP500 Supercomputer Sites". www.top500.org. Archived from the original on 24 June 2019. Retrieved 28 June 2019.
- ^ Emily Conover (1 June 2022). "The world's fastest supercomputer just broke the exascale barrier". ScienceNews. Archived from the original on 1 June 2022. Retrieved 1 June 2022.
- ^ Don Clark (30 May 2022). "U.S. Retakes Top Spot in Supercomputer Race". New York Times. Archived from the original on 1 June 2022. Retrieved 1 June 2022.
- ^ "June 2024 | TOP500". top500.org. Retrieved 10 October 2025.
- ^ "November 2024 | TOP500". top500.org. Retrieved 10 October 2025.
- ^ Blue Waters Opts Out of TOP500 (article), 16 November 2012, archived from the original on 13 December 2019, retrieved 30 June 2016
- ^ Kramer, William, Top500 versus Sustained Performance – Or the Ten Problems with the TOP500 List – And What to Do About Them. 21st International Conference On Parallel Architectures And Compilation Techniques (PACT12), 19–23 September 2012, Minneapolis, MN, US
- ^ "Three Chinese Exascale Systems Detailed at SC21: Two Operational and One Delayed". 24 November 2021. Archived from the original on 2 September 2022. Retrieved 7 September 2022.
- ^ "National Security Agency | TOP500". top500.org. Retrieved 10 October 2025.
- ^ Google demonstrates leading performance in latest MLPerf Benchmarks (article), 30 June 2021, archived from the original on 10 July 2021, retrieved 10 July 2021
- ^ "TPU v5p". Google Cloud. Retrieved 7 September 2024.
- ^ Peckham, Oliver (22 June 2021). "Ahead of 'Dojo,' Tesla Reveals Its Massive Precursor Supercomputer". HPCwire.
- ^ Swinhoe, Dan (23 June 2021). "Tesla details pre-Dojo supercomputer, could be up to 80 petaflops". Data Center Dynamics. Retrieved 14 April 2023.
- ^ Andreadis, Kosta (13 March 2024). "Meta has two new AI data centers equipped with over 24,000 NVIDIA H100 GPUs". TweakTown. Retrieved 7 September 2024.
- ^ "Meta Platforms (META) Q3 2024 Earnings Call Transcript". Yahoo Finance.
- ^ Gooding, Matthew (27 August 2024). "Elon Musk's xAI data center 'adding to Memphis air quality problems' - campaign group". Data Centre Dynamics.
- ^ Shah, Agam. "China's HPC Iron Curtain Creating a Top500 Problem". HPCwire. Retrieved 11 July 2025.
- ^ "Roadrunner – BladeCenter QS22/LS21 Cluster, PowerXCell 8i 3.2 GHz / Opteron DC 1.8 GHz, Voltaire Infiniband". Archived from the original on 2 January 2015. Retrieved 4 January 2015.
- ^ "Thunder – Intel Itanium2 Tiger4 1.4 GHz – Quadrics". Archived from the original on 2 January 2015. Retrieved 4 January 2015.
- ^ "Columbia – SGI Altix 1.5/1.6/1.66 GHz, Voltaire Infiniband". Archived from the original on 3 January 2015. Retrieved 4 January 2015.
- ^ "Japan Agency for Marine -Earth Science and Technology". Archived from the original on 2 January 2015. Retrieved 4 January 2015.
- ^ "IBM Flex System p460, POWER7 8C 3.550 GHz, Infiniband QDR". TOP500 Supercomputer Sites. Archived from the original on 3 October 2017. Retrieved 6 September 2017.
External links
- Official website
- LINPACK benchmarks at TOP500
TOP500
Overview
Definition and Purpose
The TOP500 is a biannual compilation ranking the 500 most powerful non-classified supercomputer systems worldwide, based on their measured performance using the High-Performance Linpack (HPL) benchmark.[2] This benchmark evaluates sustained computational capability by solving a dense system of linear equations, reporting results as Rmax, the achieved floating-point operations per second (FLOPS) under standardized conditions.[1] Unlike theoretical peak performance (Rpeak), which represents maximum hardware potential without workload constraints, Rmax captures realistic efficiency on a specific, memory-bound task, serving as a proxy for high-performance computing (HPC) hardware prowess rather than diverse real-world application performance.[1] Initiated in 1993 by Hans Werner Meuer of the University of Mannheim, Erich Strohmaier, and Jack Dongarra, the project built upon earlier supercomputer statistics to establish a consistent, verifiable metric for HPC progress.[4][7] The ranking excludes classified military systems, focusing instead on publicly disclosed, commercially oriented installations to provide transparency into accessible technology frontiers.[2]

The primary purpose of the TOP500 is to deliver an empirical overview of evolving HPC landscapes, including dominant processor architectures, system scales, and performance trajectories, thereby enabling researchers, vendors, and policymakers to identify trends in hardware innovation and deployment.[8] Lists are released every June during the International Supercomputing Conference (ISC) and every November at the Supercomputing Conference (SC), fostering community benchmarking and competition without prescribing operational utility beyond the HPL metric.[3] This approach prioritizes standardized comparability over comprehensive workload representation, highlighting aggregate shifts like the rise of accelerator-based designs while acknowledging HPL's limitations in mirroring scientific simulations.[9]
Ranking Methodology

The TOP500 list ranks supercomputers based on their performance in the High Performance Linpack (HPL) benchmark, which solves a dense system of linear equations Ax = b, where A is an n × n nonsymmetric matrix, using LU factorization with partial pivoting and iterative refinement to estimate the solution.[1] The measured performance, denoted Rmax, represents the highest achieved floating-point rate in gigaflops (GFlop/s) from a valid HPL run, with the problem size Nmax selected to maximize this value while ensuring numerical stability and convergence.[1] Theoretical peak performance, Rpeak, is calculated as the product of the number of cores, clock frequency in GHz, and the maximum double-precision floating-point operations per cycle per core (typically 8 for vectorized units or 16 with AVX-512 extensions), using advertised base clock rates without accounting for turbo boosts unless specified.[2][10]

System owners or vendors submit HPL results voluntarily via the official TOP500 portal, including detailed hardware specifications such as core count, processor architecture, interconnect topology, memory capacity, and power consumption measured at the facility level during the benchmark run.[11] Submissions occur biannually, with deadlines preceding the June and November releases, a schedule maintained since the inaugural list in June 1993.[11] Classified military systems are excluded, as their performance data is not publicly verifiable or submitted, ensuring the list reflects only disclosed, civilian-accessible installations.[12]

Rankings are determined by sorting submissions in descending order of Rmax; ties are resolved first by descending Rpeak, then by memory size per core, installation date, and alphabetical order of system name.[2] While HPL implementations may incorporate vendor-specific optimizations for libraries like BLAS or communication routines, the TOP500 requires reproducible results under standard conditions, with the project coordinators reserving the right to audit submissions for compliance, though no formal efficiency threshold (e.g., 80% of Rpeak) is mandated; top-ranked systems typically achieve 70-90% efficiency through balanced scaling of compute, memory bandwidth, and network performance.[1] Collected metadata beyond Rmax and Rpeak enables trend analyses, such as aggregate installed capacity (sum of Rmax across all 500 entries) and shifts in processor families or operating systems.[2]
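As an illustration of the Rpeak formula just described, the following sketch multiplies core count, base clock, and per-cycle FP64 throughput; the machine parameters are hypothetical, not taken from any listed system:

```python
def rpeak_gflops(cores: int, ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak: cores x clock (GHz) x double-precision FLOPs per cycle."""
    return cores * ghz * flops_per_cycle

# Hypothetical machine: 10,000 nodes x 64 cores at 2.0 GHz base clock,
# with 16 FP64 operations per cycle per core (AVX-512-class vector units).
peak_gflops = rpeak_gflops(cores=10_000 * 64, ghz=2.0, flops_per_cycle=16)
print(f"Rpeak = {peak_gflops / 1e6:.2f} PFLOPS")  # 20.48 PFLOPS
```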
History
Inception and Early Development
The TOP500 project originated in spring 1993, initiated by Hans Werner Meuer and Erich Strohmaier of the University of Mannheim, Germany, to systematically track advancements in high-performance computing through biannual rankings of the world's most powerful systems based on the Linpack benchmark.[8] Jack Dongarra, developer of the Linpack software, contributed to its methodology from the outset.[13] The inaugural list was published on June 24, 1993, during the International Supercomputing Conference (ISC'93) in Mannheim, amid a period of increasing commercialization in high-performance computing following the end of the Cold War, which facilitated greater transparency and reporting of system capabilities previously constrained by classification.[8] The June 1993 list ranked systems primarily using massively parallel processors, with the top entry being the Thinking Machines CM-5/1024 at Los Alamos National Laboratory, delivering 59.7 GFLOPS of sustained Linpack performance.[14] Early editions highlighted a pivotal shift from specialized vector processors—dominant in prior decades via vendors like Cray Research—to scalable massively parallel architectures, such as those from Thinking Machines and Intel, driven by the need for higher concurrency to handle growing computational demands in scientific simulations.[15] This transition reflected underlying engineering realities: vector systems excelled in sequential floating-point operations but scaled poorly beyond certain limits, whereas parallel designs leveraged commodity-like components for cost-effective expansion, though initial implementations faced challenges in interconnect efficiency and programming complexity.[16] By June 1997, the ninth list featured Intel's ASCI Red at Sandia National Laboratories as the first system to surpass 1 TFLOPS, achieving 1.068 TFLOPS with 7,264 Pentium Pro processors, underscoring the viability of microprocessor-based clusters for terascale computing.[17] Sustained submissions from HPC sites worldwide kept the list fully populated through the mid-1990s, transforming TOP500 into a de facto indicator of technological leadership and institutional prestige in supercomputing.[8]
Major Performance Milestones
The aggregate performance of the TOP500 list began modestly, totaling approximately 1.1 teraflops (TFLOPS) in June 1993.[18] This marked the inception of tracked exponential growth in high-performance computing (HPC), roughly paralleling advancements in semiconductor scaling akin to Moore's Law, with performance doubling approximately every 14 months through the 1990s and early 2000s.[18] A pivotal milestone occurred in June 2008 when the IBM Roadrunner supercomputer achieved 1.026 petaflops (PFLOPS), becoming the first system to surpass the petaflop barrier on the High Performance LINPACK (HPL) benchmark and topping the TOP500 list.[19] Roadrunner's hybrid architecture, combining AMD Opteron processors with IBM Cell chips, signaled the rise of heterogeneous computing, as commodity clusters began leveraging specialized accelerators for superior scalability. By June 2019, every system on the TOP500 delivered at least 1 PFLOPS, establishing the list as a universal "petaflop club."[20] The integration of graphics processing units (GPUs) post-2009 accelerated growth, with systems like China's Tianhe-1A in 2010 incorporating NVIDIA Fermi GPUs, contributing to sharper inflection points in aggregate performance. This shift propelled total TOP500 performance from roughly 100 petaflops in the early 2010s to multi-exaflop scales by the mid-2020s, while x86 architectures achieved near-total dominance over custom designs by the 2010s, comprising over 95% of systems due to their cost-effectiveness and ecosystem maturity. The exaflop era dawned in June 2022 with the U.S. Department of Energy's Frontier supercomputer debuting at over 1 exaflops (EFLOPS), specifically 1.102 EFLOPS on HPL, as the first verified exascale system.[21] Frontier's AMD-based design underscored the efficacy of integrated CPU-GPU processors for extreme-scale HPC. By June 2025, aggregate TOP500 performance approached 14 EFLOPS, driven by multiple exascale deployments, with El Capitan claiming the top spot at 1.742 EFLOPS, further exemplifying sustained scaling through advanced accelerators and interconnects.[22]
Current Statistics and Trends
Top Systems as of June 2025
As of the June 2025 TOP500 list, the El Capitan supercomputer at Lawrence Livermore National Laboratory, operated by the U.S. Department of Energy's National Nuclear Security Administration, ranks first with a LINPACK Rmax performance of 1.742 exaFLOPS.[22] This HPE Cray EX255a system employs AMD 4th Generation EPYC processors (24 cores at 1.8 GHz), AMD Instinct MI300A accelerators, Slingshot-11 interconnects, and the TOSS operating system, marking it as the third publicly verified exascale system following Frontier's deployment in 2022 and Aurora's in 2023.[22] El Capitan's architecture emphasizes integrated CPU-GPU computing for nuclear stockpile stewardship and high-energy physics simulations.[23] Frontier, at Oak Ridge National Laboratory under the DOE's Office of Science, holds the second position with 1.353 exaFLOPS Rmax, utilizing HPE Cray EX235a nodes with AMD 3rd Generation EPYC processors (64 cores at 2 GHz), AMD Instinct MI250X accelerators, and Slingshot-11 networking on HPE Cray OS.[22] Aurora, installed at Argonne National Laboratory and also DOE-funded, remains third at 1.012 exaFLOPS Rmax, based on HPE Cray EX architecture with Intel Xeon CPU Max processors and Intel Data Center GPU Max accelerators.[22] These top three systems, all U.S. Department of Energy installations, represent the only verified exascale capabilities on the list, underscoring a concentration of leading-edge performance in American federally sponsored facilities amid global competition constraints.[23] Beyond the top three, performance declines sharply, with the fourth-ranked JUPITER system—a BullSequana XH3000 deployment operated for the EuroHPC Joint Undertaking at Forschungszentrum Jülich in Germany—achieving roughly 0.79 exaFLOPS Rmax using NVIDIA GH200 Grace Hopper superchips.[22] No non-U.S. systems reach exascale thresholds, reflecting submission gaps from major competitors; for instance, China's Sunway TaihuLight, the list leader from June 2016 through November 2017, has been succeeded by domestic machines that were never submitted, as China has largely withheld new flagship results since around 2019 amid U.S. export controls limiting access to advanced semiconductors. This pattern highlights reliance on transparent, reproducible testing protocols in TOP500 rankings, which prioritize empirical verifiability over unconfirmed domestic claims.[23]
| Rank | System Name | Site | Rmax (exaFLOPS) | Architecture | Cores (millions) | Country |
|---|---|---|---|---|---|---|
| 1 | El Capitan | LLNL (DOE/NNSA) | 1.742 | HPE Cray EX255a (AMD EPYC + MI300A) | ~11.0 | United States[22] |
| 2 | Frontier | ORNL (DOE/SC) | 1.353 | HPE Cray EX235a (AMD EPYC + MI250X) | 8.7 | United States[22] |
| 3 | Aurora | ANL (DOE/SC) | 1.012 | HPE Cray EX (Intel Xeon Max + GPU Max) | ~9.3 | United States[22] |
| 4 | JUPITER | Forschungszentrum Jülich (EuroHPC) | ~0.79 | BullSequana XH3000 (NVIDIA GH200) | ~4.8 | Germany[22] |
Aggregate Performance and Growth Rates
The aggregate Rmax performance of the TOP500 list reached 13.84 exaflops (EFlop/s) as of the June 2025 edition, surpassing the previous November 2024 total of 11.72 EFlop/s and marking a semi-annual increase of approximately 18%.[23] This cumulative performance reflects the sustained scaling of high-performance computing (HPC) systems, driven primarily by accelerator integration and architectural optimizations, though constrained by power dissipation limits that have tempered growth in recent exascale-era lists.[23] Historically, the total Rmax has exhibited exponential growth since the inaugural June 1993 list, which recorded 1.13 TFlop/s across the top systems.[18] Over the subsequent 32 years, this represents a multiplication factor exceeding 12 million, implying a long-term compound annual growth rate (CAGR) of roughly 66%, calculated as CAGR = (13.84 EFlop/s ÷ 1.13 TFlop/s)^(1/32) − 1, where the exponent derives from the number of years between lists.[18] Early decades saw annual doublings or faster due to rapid advances in processor density and parallelism, outpacing Moore's Law; however, post-2022 exascale deployments have slowed this to semi-annual gains of 15-20%, or an annualized rate near 30-40%, attributable to diminishing returns from thermal and electrical power envelopes that cap feasible clock speeds and node densities.[18][24] Efficiency metrics, measured as the ratio of achieved Rmax to theoretical Rpeak, have trended upward across the list, rising from averages below 50% in vector-processor eras to over 60-70% in recent GPU-accelerated systems.[25] This improvement stems from specialized hardware like tensor cores and optimized linear algebra libraries that better exploit dense matrix operations in the High-Performance LINPACK (HPL) benchmark, with top entries routinely achieving 75-80% fractions.[26] Parallel scaling is evidenced by escalating core counts, with the average system concurrency reaching 275,414 cores in June 2025, up from 257,970 six months prior and far above the thousands typical in 1990s lists.[23] Aggregate cores across the TOP500 now exceed 100 million, enabling massive parallelism but highlighting reliance on heterogeneous computing to mitigate Amdahl's Law bottlenecks in communication overhead.[23]
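These figures can be checked directly from the totals quoted above. A short Python sketch, using only the list totals cited in this section, reproduces the semi-annual growth, the long-term CAGR, and the implied doubling time:

```python
import math

# Worked check of the growth figures quoted above (inputs are the list
# totals cited in this section, converted to FLOP/s).
rmax_1993 = 1.13e12    # total Rmax, June 1993
rmax_2024 = 11.72e18   # total Rmax, November 2024
rmax_2025 = 13.84e18   # total Rmax, June 2025

semiannual = rmax_2025 / rmax_2024 - 1
print(f"Semi-annual growth: {semiannual:.1%}")           # ~18%

years = 32
cagr = (rmax_2025 / rmax_1993) ** (1 / years) - 1
print(f"Long-term CAGR over {years} years: {cagr:.0%}")  # ~66%

# Equivalent doubling time in months at that long-term rate.
doubling_months = 12 * math.log(2) / math.log(1 + cagr)
print(f"Doubling time: {doubling_months:.1f} months")    # ~16 months
```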
Distribution and Dominance
By Country
As of the June 2025 TOP500 list, the United States maintains overwhelming dominance in both the number of listed systems and their aggregate computational performance, reflecting sustained federal investments in high-performance computing through agencies like the Department of Energy. The U.S. hosts 175 systems, comprising 35% of the total entries, and accounts for 48.4% of the list's combined Rmax performance, driven by exascale machines such as El Capitan, Frontier, and Aurora.[22][10] This leadership underscores policy priorities favoring unrestricted access to cutting-edge semiconductor technologies and substantial public funding, enabling rapid scaling to multi-exaflop capabilities. China's performance share has sharply declined from its late-2010s peak, when it held more than 200 systems, often comprising a mix of mid-tier installations that inflated entry counts but contributed modestly to performance shares. By June 2025, China fields 47 systems, or 9.4% of entries, whose collective Rmax equates to only about 2% of the list total—a share that has fallen steadily since U.S. export controls on advanced chips took effect around 2019. These restrictions, aimed at curbing proliferation of high-end processors like those from NVIDIA and AMD, have limited verified submissions of competitive systems, as Chinese supercomputers increasingly rely on domestic alternatives with inferior scaling.[10][27][28] Other nations trail significantly, with Europe's fragmented efforts—bolstered by EU-funded initiatives—yielding collective shares below U.S. levels despite standout entries like Germany's JUPITER at rank 4. Germany is third with 41 systems (8.2%), followed by Japan with 37 (7.4%), anchored by Fugaku at rank 7, France with 23 (4.6%), and the United Kingdom with 17 (3.4%). These distributions highlight how national policies on R&D funding and international tech collaborations shape outcomes, with no single non-U.S. country exceeding 10% of systems or performance.[22][10]
| Country | Systems | % of Systems | Approx. Total Rmax (PFlop/s) | % of Rmax |
|---|---|---|---|---|
| United States | 175 | 35.0 | ~6,700 | 48.4 |
| China | 47 | 9.4 | ~280 | ~2 |
| Germany | 41 | 8.2 | ~1,200 | ~9 |
| Japan | 37 | 7.4 | ~900 | ~7 |
| France | 23 | 4.6 | ~400 | ~3 |
By Institution and Funding Source
The leading positions in the TOP500 list are overwhelmingly occupied by supercomputers operated by U.S. Department of Energy (DOE) national laboratories, underscoring a heavy dependence on federal public funding for peak performance achievements. As of the June 2025 ranking, the top three exascale systems—El Capitan (1,742 PFlop/s at Lawrence Livermore National Laboratory), Frontier (1,353 PFlop/s at Oak Ridge National Laboratory), and Aurora (1,012 PFlop/s at Argonne National Laboratory)—are all deployed at DOE facilities under the Exascale Computing Project, a multiyear initiative that has secured over $1.8 billion in DOE appropriations since 2017 to deliver these systems for national security and scientific applications.[22][29][30] Beyond DOE labs, other government-backed research entities play secondary but significant roles, with funding drawn from national or supranational public sources. Japan's RIKEN Center for Computational Science operates systems like Fugaku (which held the top spot from June 2020 until Frontier's debut in June 2022), supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT) investments exceeding $1 billion for prior generations, reflecting Japan's strategy of state-directed high-performance computing development.[31] In Europe, the EuroHPC Joint Undertaking—an entity co-funded by the European Union (contributing the majority via its multiannual budget) and participating member states—manages multiple TOP500 entrants, including the fourth-ranked JUPITER (deployed in Germany) and others in the top 50, with total program funding approaching €8 billion through 2027 for petascale and exascale infrastructure.[32][33] Private industry involvement as operators remains marginal in the upper echelons of the TOP500, as commercial entities prioritize proprietary clusters optimized for workloads like artificial intelligence training over the High-Performance Linpack benchmark, often declining submissions to protect competitive advantages or due to internal classification. While vendors such as HPE, IBM, and Fujitsu supply hardware under government contracts, the scale of leading systems—requiring coordinated public subsidies in the billions across major economies—demonstrates that sustained dominance relies on taxpayer-funded programs rather than market-driven private investment alone.[22][34]
Technical Specifications
Processor Architectures and Vendors
The x86 architecture remains predominant in TOP500 supercomputers, powering over 90% of total cores across listed systems due to its established ecosystem and performance in high-performance computing workloads.[35] Intel processors equip 58.8% of the June 2025 list's systems, a decline from 61.8% in the prior edition, while AMD's EPYC series appears in 162 systems, including exascale machines like El Capitan and Frontier.[23][36] ARM-based designs hold a niche role, exemplified by Japan's Fugaku, which topped the list from 2020 to 2022 but now ranks lower as x86 hybrids with accelerators dominate top performance tiers.[35] Accelerators have become integral to top systems since the 2010s, with CPU-GPU hybrids enabling exaflop-scale computing; 232 of the June 2025 entries incorporate such accelerators.[37] NVIDIA GPUs historically lead adoption, powering a majority of accelerated systems through architectures like the H100, though AMD's Instinct MI300A has surged in compute share, notably in the top-ranked El Capitan with its integrated CPU-GPU design.[38][39] This shift reflects vendor strategies prioritizing unified memory and high-bandwidth integration for dense floating-point operations. Processor vendors Intel and AMD control the bulk of CPU deployments, with accelerators split between NVIDIA's CUDA ecosystem and AMD's ROCm platform, the latter gaining traction in U.S. Department of Energy systems amid diversification efforts.[22] System integrators like HPE, incorporating Cray EX platforms, dominate top placements, with seven of the top ten June 2025 systems using HPE hardware featuring Slingshot-11 interconnects for low-latency scaling.[40][41] InfiniBand holds a 34% share of interconnects, favored for its remote direct memory access capabilities, marking a transition from proprietary networks like older Cray designs to commoditized high-speed fabrics.[42] Domestic Chinese processors, such as Phytium's Arm-derived chips and the ShenWei designs used in Sunway systems, face marginalization in global rankings due to U.S. export controls enacted since 2020, which restrict access to advanced fabrication and components, limiting scalability and performance against Western x86-GPU stacks.[43][44] These sanctions, including entity list placements, have prompted TSMC to halt orders from Phytium, forcing reliance on older nodes and reducing China's presence in upper TOP500 echelons.[43]
Operating Systems and Interconnects
Linux-based operating systems have dominated the TOP500 lists since November 2017, with every one of the 500 fastest supercomputers running a Linux variant as of June 2025.[45] This complete market share reflects Linux's advantages in scalability, customizability, and open-source ecosystem support for high-performance computing (HPC) workloads. Common distributions include customized versions such as the Tri-Lab Operating System Suite (TOSS), developed for U.S. Department of Energy laboratories, SUSE Linux Enterprise Server for HPC, and Red Hat Enterprise Linux with HPC optimizations.[46][47] Earlier lists featured Unix derivatives and proprietary systems, but these were supplanted by Linux by the mid-2010s due to superior performance tuning and community-driven development.[48] High-speed interconnects enable efficient communication among thousands of nodes, with InfiniBand holding primacy for low-latency, high-bandwidth needs in top-ranked systems. NVIDIA's InfiniBand solutions, including HDR variants post-Mellanox acquisition, power 254 of the TOP500 systems as of November 2024, outperforming Ethernet in performance-critical deployments.[49] RoCE-enabled Ethernet connects 111 systems but trails in share among the highest performers, as InfiniBand's remote direct memory access (RDMA) features minimize overhead for parallel computing.[49] Specialized alternatives like HPE Cray Slingshot-11 underpin U.S. exascale machines such as El Capitan and Frontier, delivering sub-microsecond latencies optimized for extreme-scale simulations.[46] Recent trends emphasize ecosystem standardization, with containerization via tools like Apptainer gaining traction on Linux stacks to facilitate reproducible environments without compromising security or performance isolation. Remnants of non-Linux HPC operating systems, including Windows variants, have vanished from the lists, underscoring Linux's unchallenged position.[50]
Related and Alternative Rankings
Energy Efficiency via Green500
The Green500 list complements the TOP500 by ranking the same supercomputers according to their energy efficiency, calculated as HPL performance in gigaflops divided by power consumption in watts during the benchmark run (GFlops/W). This metric reveals the substantial electrical demands underlying high-performance computing, which the TOP500's focus on raw flops omits, thereby emphasizing trade-offs in system design where power efficiency may conflict with peak throughput.[51] In the June 2025 Green500 edition, the top-ranked system is JEDI (JUPITER Exascale Development Instrument), a prototype module of the EuroHPC JUPITER supercomputer operated by Forschungszentrum Jülich in Germany, attaining 72.73 GFlops/W alongside 4.5 PFlops of performance.[52] By contrast, El Capitan—the June 2025 TOP500 leader at 1.742 exaflops—ranks 25th on the Green500 at 58.89 GFlops/W, underscoring a weak correlation between peak performance and efficiency.[53][22] Such disparities arise because HPL favors dense linear algebra computations that underutilize I/O, memory bandwidth, and other subsystems critical to overall workload viability, allowing efficiency-optimized systems to outperform raw-power giants in flops-per-watt despite lower absolute speeds. Historical trends in the Green500 demonstrate energy efficiency roughly doubling with successive supercomputer generations, driven by advances in processors, accelerators, and cooling, though absolute power draw has escalated.[51] Exascale systems exemplify this, typically requiring 20-30 megawatts (MW) at peak—such as Frontier's approximately 21 MW or El Capitan's 30 MW—potentially scaling to 60 MW for future iterations amid denser integrations and higher clock rates.[54][55][56] This progression highlights ongoing challenges in balancing computational density with sustainable power envelopes, as efficiency gains lag behind performance scaling.[51]
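The Green500 metric is straightforward to compute from a system's Rmax and measured power draw. A minimal Python check using the figures quoted above; note that El Capitan's ~29.6 MW draw and JEDI's ~62 kW are assumptions back-derived from the cited efficiencies, not published measurements:

```python
# Green500 efficiency: sustained HPL GFlop/s divided by watts.
def gflops_per_watt(rmax_gflops: float, power_watts: float) -> float:
    return rmax_gflops / power_watts

# Rmax 1.742 EFlop/s = 1.742e9 GFlop/s; ~29.6 MW power draw (assumed).
el_capitan = gflops_per_watt(1.742e9, 29.6e6)
# JEDI: 4.5 PFlop/s = 4.5e6 GFlop/s at ~62 kW (assumed, back-derived).
jedi = gflops_per_watt(4.5e6, 61.9e3)

print(f"El Capitan: {el_capitan:.2f} GFlops/W")  # ~58.9, matching its entry
print(f"JEDI:       {jedi:.2f} GFlops/W")        # ~72.7
```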
Specialized Benchmarks for AI and Other Workloads
The HPL-MxP benchmark, an evolution of the HPL-AI proposal, adapts the High-Performance Linpack test for the mixed-precision floating-point operations common in AI training and inference, solving the same dense linear system in reduced precision and recovering double-precision accuracy through iterative refinement.[57] This variant measures sustained performance in lower-precision computations (e.g., FP16 or BF16), yielding higher throughput than double-precision HPL while better approximating AI workload demands.[58] As of November 2024, the Aurora system at Argonne National Laboratory topped HPL-MxP rankings with 11.6 Exaflop/s, followed by Frontier at 11.4 Exaflop/s, demonstrating exascale capabilities tailored for mixed workloads.[59] However, HPL-MxP submissions remain optional and sparse, with fewer than a dozen systems reporting results per TOP500 list, highlighting limited integration despite growing AI relevance.[57] Complementary benchmarks expose gaps in TOP500's dense linear algebra focus, emphasizing I/O, irregular access patterns, and end-to-end AI pipelines. The IO500 suite assesses holistic storage performance through bandwidth, metadata operations, and I/O patterns representative of HPC and AI data movement, with production lists updated biannually at ISC and SC conferences.[60] Systems like those powered by DDN storage have dominated recent IO500 rankings, achieving superior results in real-world AI/HPC scenarios where data ingestion bottlenecks exceed compute limits.[61] Similarly, the Graph500 evaluates breadth-first search and single-source shortest path kernels on large-scale graphs, targeting analytics workloads that stress irregular memory access over sustained FLOPS.[62] Top performers, such as NVIDIA-based clusters, underscore hardware optimizations for big data traversal, contrasting TOP500's bias toward predictable, compute-bound tasks.[63] MLPerf benchmarks provide rigorous, vendor-agnostic evaluations of AI training and inference across diverse models, including large language models and vision tasks, prioritizing time-to-train metrics over raw FLOPS.[64] In MLPerf Training v5.0 (June 2025), NVIDIA's Blackwell GPUs set records for scaling to thousands of accelerators, reflecting hardware tuned for tensor operations and massive parallelism in AI pipelines.[65] Unlike TOP500, MLPerf incorporates full-stack system effects like interconnect latency and software efficiency, revealing divergences where HPL-optimized machines underperform in sparse, memory-bound AI scenarios.[66] These adjunct lists illustrate HPC diversification, as AI-driven submissions to TOP500 incorporate MxP testing but retain HPL primacy, with exascale systems like Frontier prioritizing simulation fidelity over iterative model training demands.[59]
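The core idea behind HPL-MxP—perform the expensive factorization in low precision, then recover full accuracy cheaply—can be sketched in a few lines of NumPy. This is an illustrative toy, not the benchmark implementation: it uses float32 in place of FP16/BF16, and a real code would factor the matrix once and reuse the factors rather than re-solving:

```python
import numpy as np

# Toy sketch of mixed-precision solve plus iterative refinement.
rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

# Low-precision solve (stands in for the fast FP16/BF16 factorization).
x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)

# Refinement loop: compute the residual in double precision, correct the
# solution with another low-precision solve, and repeat. (A production code
# would factor A once in low precision and reuse the factors here.)
for _ in range(5):
    r = b - A @ x                                # float64 residual
    dx = np.linalg.solve(A.astype(np.float32), r.astype(np.float32))
    x += dx.astype(np.float64)

print("final residual norm:", np.linalg.norm(b - A @ x))
```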
Criticisms and Limitations
Methodological Flaws in Linpack Benchmark
The High-Performance Linpack (HPL) benchmark, which underpins TOP500 rankings, primarily evaluates floating-point arithmetic throughput by solving dense systems of linear equations via LU factorization with partial pivoting, emphasizing compute-intensive operations over other system capabilities.[1] This focus renders HPL largely compute-bound, with arithmetic intensity around 2.5 flops per byte for matrix multiplications, imposing modest demands on memory bandwidth—typically requiring 40-80 GB/s per socket for optimal runs—while largely disregarding irregular memory access patterns, latency sensitivities, and I/O dependencies prevalent in scientific simulations.[67][68] Real-world high-performance computing (HPC) applications, such as climate modeling or molecular dynamics, often exhibit memory-bound or communication-bound behaviors, achieving sustained performance at 10-30% of a system's HPL-measured Rmax (the benchmark's reported flops rate), compared to HPL's own 50-90% efficiency relative to theoretical peak (Rpeak).[69][70] This divergence arises because HPL's regular, predictable data access allows near-peak utilization of compute units, whereas applications involve sparse matrices, non-local dependencies, and filesystem interactions that amplify bandwidth and latency constraints, sometimes limiting effective throughput to fractions of HPL scores.[37] Vendors and system integrators extensively optimize HPL implementations—tuning parameters like block sizes (NB), process grids, and BLAS libraries (e.g., via vendor-specific accelerations)—to maximize Rmax, often at the expense of generalizability to untuned workloads.[1] Such "benchmark gaming" has led to architectures prioritized for HPL scalability over balanced I/O or sustained application performance, with reports of systems engineered specifically to inflate TOP500 entries rather than enhance broad HPC utility.[68][71] Additionally, TOP500 submissions exclude classified supercomputers, which national security programs operate without public disclosure, thereby skewing rankings toward unclassified, often academic or open-science systems and underrepresenting total global or national HPC capacity.[72][73] This omission favors transparent installations while potentially distorting perceptions of technological leadership in opaque domains like defense simulations.[69]
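The performance gap between HPL and memory-bound applications follows directly from the roofline model, in which attainable throughput is the lesser of the compute ceiling and memory bandwidth times arithmetic intensity. A minimal sketch with hypothetical, order-of-magnitude node figures (not measurements of any listed system):

```python
# Roofline model: attainable FLOP rate is capped by either the compute
# ceiling or by (memory bandwidth x arithmetic intensity).
def attainable_gflops(arith_intensity: float, peak_gflops: float,
                      bandwidth_gbs: float) -> float:
    return min(peak_gflops, bandwidth_gbs * arith_intensity)

peak = 50_000.0  # node peak, GFlop/s (assumed)
bw = 3_000.0     # memory bandwidth, GB/s (assumed)

for name, ai in [("sparse stencil kernel", 0.25), ("HPL-like dense GEMM", 50.0)]:
    g = attainable_gflops(ai, peak, bw)
    print(f"{name}: {g:,.0f} GFlop/s ({g / peak:.0%} of peak)")
```

Under these assumed numbers, the low-intensity kernel is capped at under 2% of peak while the dense-GEMM workload saturates the compute ceiling, illustrating directionally why application performance can sit so far below Rmax.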
Broader Interpretive and Geopolitical Issues
The TOP500 list is frequently misinterpreted as a comprehensive proxy for national technological innovation or overall computing prowess, despite measuring only peak performance on the High-Performance Linpack benchmark, which correlates poorly with real-world scientific utility or broader innovation capacity.[74][37] This overreliance has fueled geopolitical narratives, such as viewing supercomputer rankings as indicators of military or economic dominance, yet the list excludes undisclosed systems, private-sector deployments, and non-submitted entries, distorting assessments of aggregate national compute resources.[75] The sustained United States dominance in recent TOP500 rankings—holding the top three positions as of November 2024—owes significantly to export controls imposed since 2019, which have restricted China's access to advanced semiconductors and components, contributing to a sharp decline in Chinese-listed systems from a late-2010s peak of more than 200 to 47 by June 2025, with essentially no new flagship submissions since around 2019.[76][77] These measures, expanded under both Trump and Biden administrations to target high-performance chips like NVIDIA GPUs, have isolated China's high-performance computing sector and prompted non-participation in TOP500 submissions, though smuggling and circumvention may partially offset impacts on unlisted systems.[78][79] Pursuit of TOP500 prestige often prioritizes benchmark optimization over productive scientific output, with exascale systems like the U.S. Department of Energy's Frontier costing approximately $600 million yet yielding marginal advancements relative to input, as evidenced by the benchmark's narrow focus amid escalating expenses for power and maintenance exceeding $100 million annually per site.[80][81] China's assertions of superior unlisted supercomputing capacity, estimated by TOP500 co-founder Jack Dongarra to potentially exceed global totals, remain unverified due to opacity and lack of independent benchmarking, raising doubts about their scale and military applications amid U.S. sanctions.[82][83] The list's emphasis on government-submitted, publicly benchmarked systems biases it toward state-funded initiatives, understating commercial high-performance computing where private entities now control about 80% of global AI-oriented clusters by 2025, many of which—such as those from hyperscalers like Microsoft Azure or undisclosed corporate AI training setups—eschew TOP500 participation to avoid revealing proprietary capabilities or due to incompatible workloads.[84][85] This private-sector shift highlights how TOP500 captures only a fraction of deployable compute, particularly in AI-driven applications where clustered GPUs prioritize training efficiency over Linpack scores.[86]
Impact and Future Outlook
Contributions to Scientific Computing
Supercomputers tracked by the TOP500 list have enabled empirical advancements in plasma physics, particularly for fusion energy research, by performing simulations that capture multiscale turbulence effects unattainable with prior computational scales. The Frontier system, ranked first on the TOP500 from June 2022 until November 2024, facilitated gyrokinetic modeling using the CGYRO code to simulate plasma temperature fluctuations driven by ion-temperature-gradient turbulence, yielding data on particle and heat transport that inform confinement optimization in tokamak devices.[87] Similarly, Frontier-supported optimizations of fusion codes have pushed predictive modeling of energy losses in plasmas, aiding performance enhancements in experimental reactors.[88] In drug discovery, TOP500 systems like Frontier deliver exascale performance for molecular simulations, accelerating virtual screening and binding affinity predictions that process terabytes of chemical data in hours rather than years.[89] This capability stems from heterogeneous architectures combining CPUs and GPUs, as seen in DOE facilities, where such compute resolves protein-ligand interactions at atomic resolutions previously limited by classical methods.[90] Frontier's role exemplifies how TOP500-tracked exascale platforms enable precision medicine workflows, with outputs validated in peer-reviewed studies on therapeutic candidates. Climate modeling benefits from TOP500 systems' capacity for high-fidelity, petabyte-scale simulations of atmospheric and oceanic dynamics, resolving fine-scale phenomena like cloud microphysics that distributed computing clusters cannot match in resolution or speed. Systems such as Alps, ranked in the global top 10, integrate AI-driven parameterizations to refine ensemble forecasts, improving predictive accuracy for extreme weather events.[91] NVIDIA-accelerated TOP500 machines further these efforts by handling coupled Earth system models, producing verifiable hindcasts that align with observational data from satellites and ground stations.[92] The standardization of GPU architectures in TOP500 environments has spurred parallel computing optimizations that extend to broader scientific workflows, though return on investment relative to alternatives like cloud-distributed systems requires case-specific economic analysis beyond raw performance metrics.[92] Causal evidence for these contributions lies in domain-specific publications citing TOP500 hardware, rather than list rankings alone, underscoring the need for reproducible simulations over aggregate flops.
Exascale Achievements and Challenges
The United States achieved the first verified exascale supercomputers in the TOP500 list, with Frontier at Oak Ridge National Laboratory reaching 1.102 EFlop/s on the High-Performance Linpack benchmark in June 2022, marking the initial operational milestone for sustained exascale performance at 64-bit precision.[93] Aurora at Argonne National Laboratory followed, entering the TOP500 in 2023 and achieving 1.012 EFlop/s by June 2025, while El Capitan at Lawrence Livermore National Laboratory became the third system to surpass 1 EFlop/s, debuting at 1.742 EFlop/s in November 2024 and retaining the top position through June 2025.[39][94] These three Department of Energy systems, all built by Hewlett Packard Enterprise, dominate the list's upper ranks, with no other nations reporting independently verified exascale capabilities in TOP500 submissions as of mid-2025.[22] Europe and China have pursued exascale systems but lag in verified performance; EuroHPC's JUPITER, touted as Europe's first exascale machine, ranked in the global top 10 by June 2025 but did not exceed 1 EFlop/s on Linpack, while Chinese efforts remain unverified in TOP500 despite prior claims of advanced prototypes.[95][96] This U.S. lead stems from coordinated investments under the Exascale Computing Project, enabling full deployment of heterogeneous architectures combining AMD and Intel processors with advanced accelerators. Exascale systems face persistent challenges in power consumption, with Frontier operating at approximately 21 MW to deliver its performance, though ideal targets aimed for 20 MW per exaflop, necessitating efficiencies around 50 GFLOPS/W that remain difficult to scale uniformly.[97] Fault tolerance poses another barrier, as systems with millions of cores (e.g., Frontier's 8.7 million) experience mean times between failures dropping to minutes during full-scale runs, requiring software mechanisms for checkpointing and recovery amid extreme parallelism involving billions of tasks.[98] Cooling innovations, such as direct liquid cooling in El Capitan, address heat dissipation from dense node packing, reducing energy overheads compared to air-based methods but introducing complexities in maintenance and scalability for future zettascale designs.[40]
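The checkpointing trade-off mentioned above can be quantified with Young's classic approximation for the optimal checkpoint interval, t_opt ≈ sqrt(2 × C × MTBF), where C is the time to write one global checkpoint. A back-of-envelope Python sketch with assumed values (not published figures for any listed system):

```python
import math

# Young's formula: checkpoint interval minimizing expected lost work plus
# checkpoint overhead. Inputs below are illustrative assumptions.
def optimal_checkpoint_interval(checkpoint_seconds: float,
                                mtbf_seconds: float) -> float:
    return math.sqrt(2.0 * checkpoint_seconds * mtbf_seconds)

mtbf = 2 * 3600.0  # assume a 2-hour system-level MTBF at full scale
ckpt = 300.0       # assume 5 minutes to write a global checkpoint

t = optimal_checkpoint_interval(ckpt, mtbf)
overhead = ckpt / (t + ckpt)
print(f"Checkpoint every {t / 60:.0f} min; ~{overhead:.0%} of wall time spent checkpointing")
```

Under these assumptions the machine should checkpoint roughly every 35 minutes and loses on the order of a tenth of its wall time to checkpointing alone, which is why failure rates at full scale directly erode usable exascale throughput.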
Evolving Role in AI and Geopolitical Competition
As artificial intelligence workloads proliferate, the TOP500's reliance on the High Performance Linpack (HPL) benchmark, optimized for dense linear algebra in double precision, increasingly misaligns with AI demands for sparse operations and mixed-precision computing. Variants like HPL-MxP, which emulate AI training through reduced precision, have gained traction in submissions; for instance, Frontier reached 11.4 EFlop/s on HPL-MxP in the November 2024 evaluations, highlighting HPC-AI convergence.[57][68] Yet TOP500 has not integrated these as core metrics, limiting its relevance amid commercial shifts where NVIDIA's AI-optimized hardware, powering over half the top systems by November 2024, favors proprietary benchmarks like MLPerf over standardized HPC tests.[99] Geopolitical rivalries amplify these dynamics, with U.S. export controls since 2022 curtailing China's acquisition of advanced GPUs and interconnects, resulting in fewer disclosed Chinese entries and a pivot to indigenous chips like those from Huawei.[77][78] This has preserved U.S. leadership, with American systems claiming the top three spots in June 2025, while Europe advances sovereignty via projects like JUPITER, Europe's first exascale machine activated in September 2025 at Forschungszentrum Jülich, delivering exascale-class capability for AI and simulations under EU control.[100][101][102] Prospects for zettaflop-scale systems face thermodynamic and cost barriers, with energy demands exceeding practical limits for on-premises deployments; cloud AI clusters, such as Oracle's Zettascale10 unveiled in October 2025 with 16 zettaFLOPs peak from 800,000 NVIDIA GPUs, exemplify a trend toward scalable, proprietary infrastructure that bypasses TOP500 scrutiny.[104] If such clouds dominate AI innovation, TOP500 risks marginalization, supplanted by workload-specific rankings that better reflect economic viability over raw peak flops.[105]
References
- https://www.techradar.com/pro/oracle-claims-to-have-the-largest-ai-supercomputer-in-the-cloud-with-16-zettaflops-of-peak-performance-800-000-nvidia-gpus