TOP500
from Wikipedia

The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL benchmarks,[1] a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.

Key Information

The most recent edition of TOP500 was published in June 2025 as the 65th edition, and the next will be published in November 2025 as the 66th. As of June 2025, the United States' El Capitan is the most powerful supercomputer in the TOP500, reaching 1,742 petaFLOPS (1.742 exaFLOPS) on the LINPACK benchmarks.[2] Based on data submitted through June 2025, the United States has the most systems, with 175 supercomputers; China is second with 47, and Germany third with 41. The United States also has by far the highest share of total computing power on the list (48.4%).[3] Due to the secrecy of its latest programs, China's publicly known supercomputer performance represents only 2% of the global total as of June 2025.[3][4][5]

The TOP500 list is compiled by Jack Dongarra of the University of Tennessee, Knoxville, Erich Strohmaier and Horst Simon of the National Energy Research Scientific Computing Center (NERSC) and Lawrence Berkeley National Laboratory (LBNL), and, until his death in 2014, Hans Meuer of the University of Mannheim, Germany.[citation needed] The TOP500 project also includes related lists such as the Green500 (measuring energy efficiency) and the HPCG list (based on the memory-bandwidth-bound High Performance Conjugate Gradients benchmark).[6]

History

Figure: Rapid growth of supercomputer performance, based on data from the top500.org website; the logarithmic y-axis shows performance in GFLOPS. Curves show the combined performance of the 500 largest supercomputers, the fastest supercomputer, and the supercomputer in 500th place.

In the early 1990s, a new definition of supercomputer was needed to produce meaningful statistics. After experimenting with metrics based on processor count in 1992, the idea arose at the University of Mannheim to use a detailed listing of installed systems as the basis. In early 1993, Jack Dongarra was persuaded to join the project with his LINPACK benchmarks. A first test version was produced in May 1993, partly based on data available on the Internet, including the following sources:[7][8]

  • "List of the World's Most Powerful Computing Sites" maintained by Gunter Ahrendt[9]
  • David Kahaner, the director of the Asian Technology Information Program (ATIP),[10] published a 1992 report titled "Kahaner Report on Supercomputer in Japan"[8] which had an immense amount of data.[citation needed]

The information from those sources was used for the first two lists. Since June 1993, the TOP500 is produced bi-annually based on site and vendor submissions only. Since 1993, performance of the No. 1 ranked position has grown steadily in accordance with Moore's law, doubling roughly every 14 months. In June 2018, Summit was fastest with an Rpeak[11] of 187.6593 PFLOPS. For comparison, this is over 1,432,513 times faster than the Connection Machine CM-5/1024 (1,024 cores), which was the fastest system in November 1993 (twenty-five years prior) with an Rpeak of 131.0 GFLOPS.[12]
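As a rough sanity check, the growth figures quoted above can be recomputed; this sketch treats the roughly twenty-five-year span as exactly 300 months and uses the Rpeak values as printed:

```python
from math import log2

# Rpeak values quoted above, converted to a common unit (GFLOPS).
summit_gflops = 187.6593e6   # Summit, June 2018: 187.6593 PFLOPS
cm5_gflops = 131.0           # CM-5/1024, November 1993: 131.0 GFLOPS

ratio = summit_gflops / cm5_gflops          # ~1.43 million-fold speedup

# If performance doubles every T months, then ratio = 2 ** (months / T).
months = 25 * 12                            # approximate span of the comparison
doubling_months = months / log2(ratio)      # implied doubling time

print(f"speedup: {ratio:,.0f}x, doubling every {doubling_months:.1f} months")
```

The implied doubling time lands close to the "roughly every 14 months" rate cited for the No. 1 position.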

Architecture and operating systems

Figure: Share of processor families in TOP500 supercomputers by year.[needs update]

While Intel, or at least the x86-64 CPU architecture, previously dominated the supercomputer list, AMD now has more x86-64 systems in the top 10, including first and second place. Microsoft Azure has eight systems in the top 100, only two of which use Intel CPUs, though these include Azure's by far most powerful system, in 4th place (previously 3rd). AMD CPUs are usually coupled with AMD GPU accelerators, while Intel CPUs have historically very often been coupled with Nvidia GPUs, though the current third-place Intel system (previously 2nd) notably uses the Intel Data Center GPU Max. Arm-based systems are also notable on the list, in 4th, 7th (Fugaku, previously No. 1), and 8th place, and number at least 23 in total; they come not only from Fujitsu, which first took the top spot with an Arm-based system, but also from Nvidia, whose Grace "Superchip" is a CPU rather than a GPU.

As of June 2022, all supercomputers on the TOP500 are 64-bit, mostly based on CPUs with the x86-64 instruction set architecture: 384 are Intel EM64T-based and 101 are AMD AMD64-based, the latter including the top eight supercomputers. The remaining 15 supercomputers are based on RISC architectures, including six based on ARM64 and seven based on the Power ISA used by IBM Power microprocessors.[citation needed]

In recent years, heterogeneous computing has dominated the TOP500, mostly using Nvidia's graphics processing units (GPUs) or Intel's x86-based Xeon Phi as coprocessors, because of better performance-per-watt ratios and higher absolute performance. AMD GPUs have since taken the top spot and displaced Nvidia from part of the top 10. Recent exceptions include the aforementioned Fugaku, Sunway TaihuLight, and K computer. Tianhe-2A is also an interesting exception: US sanctions prevented use of the Xeon Phi, so it was instead upgraded to use the Chinese-designed Matrix-2000[13] accelerators.[citation needed]

Two computers which first appeared on the list in 2018 were based on architectures new to the TOP500. One was a new x86-64 microarchitecture from Chinese manufacturer Sugon, using Hygon Dhyana CPUs (these resulted from a collaboration with AMD, and are a minor variant of Zen-based AMD EPYC) and was ranked 38th, now 117th,[14] and the other was the first ARM-based computer on the list – using Cavium ThunderX2 CPUs.[15] Before the ascendancy of 32-bit x86 and later 64-bit x86-64 in the early 2000s, a variety of RISC processor families made up most TOP500 supercomputers, including SPARC, MIPS, PA-RISC, and Alpha.

Figure: Share of operating system families in TOP500 supercomputers over time.

All the fastest supercomputers since the Earth Simulator (which gained the top spot in 2002, kept it for two and a half years until June 2004, and was decommissioned in 2009) have used operating systems based on Linux, though other non-Linux systems remained on the list for longer. Since November 2017, all the listed supercomputers use an operating system based on the Linux kernel.[16][17]

Since November 2015, no computer on the list has run Windows (though Microsoft reappeared on the list in 2021 with Linux-based Ubuntu systems). In November 2014, the Windows Azure[18] cloud computer dropped off the list of fastest supercomputers (its best rank was 165th in 2012), leaving the Shanghai Supercomputer Center's Magic Cube as the only Windows-based supercomputer on the list, until it too dropped off. Magic Cube was ranked 436th in its final appearance, on the list released in June 2015; its best rank was 11th in 2008.[19] There are also no longer any Mac OS computers on the list. At most five such systems appeared at a time, one more than the later Windows systems, although Windows had a higher share of total performance; the relative performance share of either was similar and never high. In 2004, the System X supercomputer based on Mac OS X (Xserve, with 2,200 PowerPC 970 processors) ranked 7th.[20]

It has been well over a decade since MIPS systems dropped entirely off the list,[21] although the Gyoukou supercomputer that jumped to 4th place[22] in November 2017 used a MIPS-based design for a small part of its coprocessors. Its 2,048-core coprocessors (each paired with eight 6-core MIPS chips, so that they "no longer require to rely on an external Intel Xeon E5 host processor"[23]) made the supercomputer much more energy efficient than the rest of the top 10: it was 5th on the Green500, and other ZettaScaler-2.2-based systems took the first three spots.[24] At 19.86 million cores, it was by far the largest system by core count, with almost double that of the then-largest manycore system, the Chinese Sunway TaihuLight.

TOP500


As of June 2025, the number one supercomputer is El Capitan, while the leader on the Green500 is JEDI, a Bull Sequana XH3000 system using the Nvidia Grace Hopper GH200 Superchip. In June 2022, the top four systems on the Graph500 used both AMD CPUs and AMD accelerators. After an upgrade for the 56th TOP500 in November 2020:

Fugaku grew its HPL performance to 442 petaflops, a modest increase from the 416 petaflops the system achieved when it debuted in June 2020. More significantly, the ARMv8.2 based Fugaku increased its performance on the new mixed precision HPC-AI benchmark to 2.0 exaflops, besting its 1.4 exaflops mark recorded six months ago. These represent the first benchmark measurements above one exaflop for any precision on any type of hardware.[25]

Summit, previously the fastest supercomputer, is currently the highest-ranked IBM-built supercomputer, with IBM POWER9 CPUs. Sequoia was the last IBM Blue Gene/Q model to drop completely off the list; it had been ranked 10th on the 52nd list (and 1st on the June 2012, 41st list, after an upgrade).

For the first time, all 500 systems deliver a petaflop or more on the High Performance Linpack (HPL) benchmark, with the entry level to the list now at 1.022 petaflops. For a different benchmark, however, "Summit and Sierra remain the only two systems to exceed a petaflop on the HPCG benchmark, delivering 2.9 petaflops and 1.8 petaflops, respectively. The average HPCG result on the current list is 213.3 teraflops, a marginal increase from 211.2 six months ago."[26]

Microsoft is back on the TOP500 list with six Microsoft Azure instances (benchmarked with Ubuntu, so all the supercomputers remain Linux-based), with CPUs and GPUs from the same vendors; the fastest is currently 11th,[27] and an older, slower one previously reached 10th.[28] Amazon has one AWS instance, currently ranked 64th (previously 40th). There are six Arm-based supercomputers; all currently use the same Fujitsu CPU as the number two system, with the next-highest previously ranked 13th, now 25th.[29]

Top 10 positions of the 65th TOP500 in June 2025[30] (rank and movement; Rmax/Rpeak in PFLOPS; CPU, accelerator, and total cores; interconnect; manufacturer; site and country; year; operating system)

1. El Capitan, HPE Cray EX255a: Rmax 1,742.00 / Rpeak 2,746.38; 1,051,392 CPU cores (43,808 × 24-core Optimized 4th Generation EPYC 24C @1.8 GHz) + 9,988,224 accelerator cores (43,808 × 228 AMD Instinct MI300A) = 11,039,616 total; Slingshot-11; HPE; Lawrence Livermore National Laboratory, United States; 2024; Linux (TOSS)
2. Frontier, HPE Cray EX235a: Rmax 1,353.00 / Rpeak 2,055.72; 614,656 CPU cores (9,604 × 64-core Optimized 3rd Generation EPYC 64C @2.0 GHz) + 8,451,520 accelerator cores (38,416 × 220 AMD Instinct MI250X) = 9,066,176 total; Slingshot-11; HPE; Oak Ridge National Laboratory, United States; 2022; Linux (HPE Cray OS)
3. Aurora, HPE Cray EX: Rmax 1,012.00 / Rpeak 1,980.01; 1,104,896 CPU cores (21,248 × 52-core Intel Xeon Max 9470 @2.4 GHz) + 8,159,232 accelerator cores (63,744 × 128 Intel Max 1550) = 9,264,128 total; Slingshot-11; HPE; Argonne National Laboratory, United States; 2023; Linux (SUSE Linux Enterprise Server 15 SP4)
4. JUPITER (new entry), BullSequana XH3000: Rmax 793.40 / Rpeak 930.00; 1,694,592 CPU cores (23,536 × 72-core Nvidia Grace, Arm Neoverse V2, @3 GHz) + 3,106,752 accelerator cores (23,536 × 132 Nvidia Hopper H100) = 4,801,344 total; quad-rail NVIDIA NDR200 InfiniBand; Atos; EuroHPC JU, Jülich, Germany (European Union); 2025; Linux (RHEL)
5. Eagle (down), Microsoft NDv5: Rmax 561.20 / Rpeak 846.84; 172,800 CPU cores (3,600 × 48-core Intel Xeon Platinum 8480C @2.0 GHz) + 1,900,800 accelerator cores (14,400 × 132 Nvidia Hopper H100) = 2,073,600 total; NVIDIA InfiniBand NDR; Microsoft; Microsoft, United States; 2023; Linux (Ubuntu 22.04 LTS)
6. HPC6 (down), HPE Cray EX235a: Rmax 477.90 / Rpeak 606.97; 213,120 CPU cores (3,330 × 64-core Optimized 3rd Generation EPYC 64C @2.0 GHz) + 2,930,400 accelerator cores (13,320 × 220 AMD Instinct MI250X) = 3,143,520 total; Slingshot-11; HPE; Eni S.p.A., Ferrera Erbognone, Italy (European Union); 2024; Linux (RHEL 8.9)
7. Fugaku (down), Supercomputer Fugaku: Rmax 442.01 / Rpeak 537.21; 7,630,848 CPU cores (158,976 × 48-core Fujitsu A64FX @2.2 GHz), no accelerators, 7,630,848 total; Tofu interconnect D; Fujitsu; Riken Center for Computational Science, Japan; 2020; Linux (RHEL)
8. Alps (down), HPE Cray EX254n: Rmax 434.90 / Rpeak 574.84; 748,800 CPU cores (10,400 × 72-core Nvidia Grace, Arm Neoverse V2, @3.1 GHz) + 1,372,800 accelerator cores (10,400 × 132 Nvidia Hopper H100) = 2,121,600 total; Slingshot-11; HPE; CSCS Swiss National Supercomputing Centre, Switzerland; 2024; Linux (HPE Cray OS)
9. LUMI (down), HPE Cray EX235a: Rmax 379.70 / Rpeak 531.51; 186,624 CPU cores (2,916 × 64-core Optimized 3rd Generation EPYC 64C @2.0 GHz) + 2,566,080 accelerator cores (11,664 × 220 AMD Instinct MI250X) = 2,752,704 total; Slingshot-11; HPE; EuroHPC JU, Kajaani, Finland (European Union); 2022; Linux (HPE Cray OS)
10. Leonardo (down), BullSequana XH2000: Rmax 241.20 / Rpeak 306.31; 110,592 CPU cores (3,456 × 32-core Xeon Platinum 8358 @2.6 GHz) + 1,714,176 accelerator cores (15,872 × 108 Nvidia Ampere A100) = 1,824,768 total; quad-rail NVIDIA HDR100 InfiniBand; Atos; EuroHPC JU, Bologna, Italy (European Union); 2023; Linux (RHEL 8)[31]
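Since the list ranks by achieved Rmax rather than theoretical Rpeak, comparing the two is instructive; a small sketch using three PFLOPS pairs from the June 2025 top-10 table above:

```python
# HPL efficiency (Rmax / Rpeak) for selected June 2025 top-10 entries,
# using the PFLOPS values as printed in the table.
entries = {
    "El Capitan": (1742.00, 2746.38),
    "Frontier": (1353.00, 2055.72),
    "Fugaku": (442.01, 537.21),
}
efficiency = {name: rmax / rpeak for name, (rmax, rpeak) in entries.items()}
for name, eff in sorted(efficiency.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {100 * eff:.1f}% of peak")
```

Note that the CPU-only Fugaku sustains a noticeably higher fraction of its peak than the accelerator-heavy systems.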

Legend:[32]

  • Rank – Position within the TOP500 ranking. In the TOP500 list table, the computers are ordered first by their Rmax value. In the case of equal performances (Rmax value) for different computers, the order is by Rpeak. For sites that have the same computer, the order is by memory size and then alphabetically.
  • Rmax – The highest score measured using the LINPACK benchmarks suite. This is the number that is used to rank the computers. Measured in quadrillions of 64-bit floating point operations per second, i.e., petaFLOPS.[33]
  • Rpeak – This is the theoretical peak performance of the system. Computed in petaFLOPS.
  • Name – Some supercomputers are unique, at least at their location, and are thus named by their owners.
  • Model – The computing platform as it is marketed.
  • Processor – The instruction set architecture or processor microarchitecture, alongside GPU and accelerators when available.
  • Interconnect – The interconnect between computing nodes. InfiniBand is most used (38%) by performance share, while Gigabit Ethernet is most used (54%) by number of computers.
  • Manufacturer – The manufacturer of the platform and hardware.
  • Site – The name of the facility operating the supercomputer.
  • Country – The country in which the computer is located.
  • Year – The year of installation or last major update.
  • Operating system – The operating system that the computer uses.
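The ordering rules in the legend can be expressed as a single compound sort key; the systems below are hypothetical examples, not real list entries:

```python
# Sketch of the TOP500 ordering rules: sort by Rmax (descending), break
# ties by Rpeak (descending), then by memory size (descending), then
# alphabetically by name. All entries here are made up for illustration.
systems = [
    {"name": "B", "rmax": 100.0, "rpeak": 150.0, "memory": 2},
    {"name": "A", "rmax": 100.0, "rpeak": 150.0, "memory": 2},
    {"name": "C", "rmax": 120.0, "rpeak": 130.0, "memory": 1},
    {"name": "D", "rmax": 100.0, "rpeak": 160.0, "memory": 1},
]

ranked = sorted(
    systems,
    key=lambda s: (-s["rmax"], -s["rpeak"], -s["memory"], s["name"]),
)
print([s["name"] for s in ranked])  # C leads on Rmax; D wins its tie on Rpeak
```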

Top countries


Numbers below represent the number of computers in the TOP500 that are located in each of the listed countries or territories. As of June 2025, the United States has the most supercomputers on the list, with 175 machines. The United States also has the highest aggregate computational power, at 6,696 petaflops Rmax, with Japan second (1,229 Pflop/s) and Germany third (1,201 Pflop/s).

Distribution of supercomputers in the TOP500 list by country (as of June 2025)[3]
Country or territory Number of systems
United States 175
European Union 137
China 47
Germany 41
Japan 39
France 25
Italy 17
South Korea 15
Canada 13
United Kingdom 13
Brazil 9
Norway 9
Sweden 9
Taiwan 8
Poland 7
Netherlands 7
Russia 6
Saudi Arabia 6
India 6
Singapore 5
Australia 4
United Arab Emirates 4
Switzerland 4
Czechia 3
Spain 3
Finland 3
Thailand 2
Ireland 2
Slovenia 2
Turkey 2
Bulgaria 2
Austria 2
Israel 1
Denmark 1
Iceland 1
Morocco 1
Luxembourg 1
Belgium 1
Portugal 1
Argentina 1
Hungary 1
Vietnam 1

Other rankings

Distribution of supercomputers in the TOP500 list by country and by year.[3] Each row gives the system count for every list from June 2025 (leftmost) back to November 2006 (rightmost): Jun 2025, Nov 2024, Jun 2024, Nov 2023, Jun 2023, Nov 2022, Jun 2022, Nov 2021, Jun 2021, Nov 2020, Jun 2020, Nov 2019, Jun 2019, Nov 2018, Jun 2018, Nov 2017, Jun 2017, Nov 2016, Jun 2016, Nov 2015, Jun 2015, Nov 2014, Jun 2014, Nov 2013, Jun 2013, Nov 2012, Jun 2012, Nov 2011, Jun 2011, Nov 2010, Jun 2010, Nov 2009, Jun 2009, Nov 2008, Jun 2008, Nov 2007, Jun 2007, Nov 2006.
United States 175 173 171 161 150 127 128 149 122 113 114 117 116 109 124 143 168 171 165 199 233 231 232 264 252 251 252 263 255 274 282 277 291 290 257 283 281 309
EU 128 129 123 112 103 101 92 83 93 79 79 87 92 91 93 86 99 95 93 94 122 110 103 89 97 89 96 95 109 108 126 137 134 140 169 133 115 82
China 47 63 80 104 134 162 173 173 188 214 226 228 220 227 206 202 160 171 168 109 37 61 76 63 66 72 68 74 61 41 24 21 21 15 12 10 13 18
Germany 41 40 40 36 36 34 31 26 23 17 16 16 13 17 21 21 28 31 26 33 37 26 22 20 19 19 20 20 30 26 24 27 29 25 46 31 24 18
Japan 39 34 29 32 33 31 33 32 34 34 29 29 28 31 36 35 33 27 29 37 40 32 30 28 30 32 35 30 26 26 18 16 15 17 22 20 23 30
France 25 24 24 23 24 24 22 19 16 18 19 18 20 18 18 18 18 20 18 18 27 30 27 22 23 21 22 23 25 26 27 26 23 26 34 17 13 12
Italy 17 14 11 12 7 7 6 6 6 6 7 5 5 6 5 6 8 6 5 4 4 3 5 5 6 7 8 4 5 6 7 6 6 11 6 6 5 8
South Korea 15 13 13 12 8 8 6 7 5 3 3 3 5 6 7 5 8 4 7 10 9 9 8 5 4 4 3 3 4 3 1 2 0 1 1 1 5 6
United Kingdom 13 14 16 15 14 15 12 11 11 12 10 11 18 20 22 15 17 13 11 18 29 30 30 23 29 24 25 27 27 25 38 45 44 46 53 48 42 30
Canada 13 9 10 10 10 10 14 11 11 12 12 9 8 9 6 5 6 1 1 6 6 6 9 10 9 11 10 9 8 6 7 9 8 2 2 5 10 8
Brazil 9 9 8 9 9 8 6 5 6 4 4 3 3 1 1 0 2 3 4 6 6 4 4 3 3 2 3 2 2 2 1 1 0 2 1 1 2 4
Sweden 9 8 7 6 6 6 5 4 3 2 2 2 2 4 3 5 5 4 5 3 5 5 3 5 7 6 4 3 5 6 8 7 10 8 9 7 10 1
Norway 9 6 5 5 4 3 2 1 3 3 3 2 0 1 1 1 1 1 1 1 2 3 3 3 3 3 3 0 1 3 2 2 2 2 2 3 2 3
Taiwan 8 7 6 5 2 2 2 2 2 3 2 2 2 2 1 1 0 0 0 0 1 1 1 1 1 3 3 2 2 0 0 0 1 2 3 11 10 2
Netherlands 7 10 9 10 8 8 6 11 16 15 15 15 13 6 9 6 4 3 3 2 3 5 5 3 2 0 0 0 1 2 4 3 3 3 5 6 8 2
Poland 7 8 8 4 3 3 5 4 4 2 1 1 1 4 4 5 6 7 6 6 7 2 2 2 3 4 5 6 5 6 5 3 4 6 3 1 0 0
Saudi Arabia 6 7 8 7 6 6 6 6 6 5 3 3 3 3 4 4 6 5 5 6 7 4 4 3 4 3 3 3 4 6 4 4 2 0 0 0 2 4
India 6 6 4 4 4 3 3 3 3 3 2 2 3 4 5 4 4 5 9 11 11 9 9 12 11 8 5 2 2 4 5 3 6 8 6 9 8 10
Russia 6 6 7 7 7 7 7 7 3 2 2 3 2 3 4 3 3 5 7 7 8 9 5 5 8 8 5 5 12 11 11 8 5 8 9 7 5 2
Singapore 5 4 3 3 3 3 3 1 4 4 4 4 5 3 2 1 1 1 1 0 0 0 0 0 0 0 1 1 2 2 1 1 1 0 0 1 2 2
Switzerland 4 5 5 3 4 4 4 3 3 3 2 2 4 2 3 3 3 4 3 6 6 7 6 5 4 4 1 3 4 4 5 5 4 4 6 7 5 5
United Arab Emirates 4 3 2 1 1 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 1
Australia 4 4 5 6 5 5 5 3 2 2 2 3 5 5 5 4 4 3 5 4 6 9 6 5 5 7 6 4 6 4 1 1 1 1 1 1 4 4
Finland 3 3 3 3 3 3 4 3 2 2 2 1 2 1 1 2 3 2 5 2 2 3 2 2 2 3 1 1 2 1 3 2 1 1 1 5 3 1
Spain 3 3 3 3 1 1 1 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 2 3 2 4 3 2 3 3 6 5 6 7 9 6 7
Czechia 3 3 3 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Austria 2 3 2 2 2 2 2 1 1 1 1 1 1 0 0 2 3 3 5 1 1 1 1 1 1 1 1 2 2 1 2 8 5 0 0 0 0 0
Thailand 2 2 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Bulgaria 2 2 2 2 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0
Turkey 2 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 2 1 1
Slovenia 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0
Ireland 2 4 4 4 5 5 3 1 14 14 14 14 13 12 7 4 2 1 3 0 0 0 1 2 0 0 3 3 1 1 1 1 1 1 1 0 0 1
Denmark 1 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2 2 2 2 2 2 1 1 1 1 1 2 2 2 3 0 0 3 0 1 0 1
Vietnam 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
Israel 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 2 1 3 3 2 0 2 2 1 1 0 0 0 2
Iceland 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Luxembourg 1 1 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
Argentina 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Portugal 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Morocco 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Hungary 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Belgium 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 2 0 1 1 2 1 1 1 2 1 2 2 0 1 1 2 2 1 4 1
Hong Kong 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 2 1 0 0 0 0 1 1 1 1 0 0 0 0 0
South Africa 0 0 0 0 0 0 0 0 0 0 0 0 3 2 1 1 1 1 1 0 0 0 0 0 0 0 1 1 0 0 1 1 0 1 0 0 1 2
New Zealand 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 3 1 1 0 0 0 0 0 0 0 0 0 5 7 8 5 4 6 1 1 1
Mexico 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 2 1
Croatia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Greece 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Malaysia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 1 1 1 2 3 4 3
Slovak Republic 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0
Cyprus 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
Egypt 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0
Indonesia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0
Philippines 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0

Fastest supercomputer in TOP500 by country


(As of November 2023[34])

Systems ranked No. 1


Additional statistics


By number of systems as of June 2025:[43]

Top five accelerators/co-processors:
  • NVIDIA Ampere A100 (launched 2020): 24 systems
  • NVIDIA Hopper H100 SXM5 80 GB (launched 2022): 23 systems
  • NVIDIA Hopper H100 (launched 2022): 16 systems
  • NVIDIA Ampere A100 SXM4 40 GB (launched 2020): 16 systems
  • NVIDIA Tesla V100 (launched 2017): 16 systems

Top five manufacturers by system quantity:
  • Lenovo: 135
  • Hewlett Packard Enterprise: 132
  • Eviden: 55
  • Dell: 41
  • Nvidia: 27

Top five operating systems:
  • Linux: 152
  • CentOS: 33
  • HPE Cray OS: 32
  • Red Hat Enterprise Linux: 24
  • Cray Linux Environment: 17

Note: the operating systems of all TOP500 systems are Linux-family based; "Linux" above denotes a generic or unspecified Linux.

Sunway TaihuLight is the system with the most CPU cores (10,649,600). Tianhe-2 has the most GPU/accelerator cores (4,554,752). Aurora has the greatest power consumption, at 38,698 kilowatts.

New developments in supercomputing


In November 2014, it was announced that the United States was developing two new supercomputers to exceed China's Tianhe-2 as the world's fastest supercomputer. The two computers, Sierra and Summit, would each exceed Tianhe-2's 55 peak petaflops; Summit, the more powerful of the two, would deliver 150–300 peak petaflops.[44] On 10 April 2015, US government agencies banned Nvidia from selling chips to supercomputing centers in China as "acting contrary to the national security ... interests of the United States",[45] and banned Intel Corporation from providing Xeon chips to China due to their use, according to the US, in researching nuclear weapons – research to which US export control law bans US companies from contributing: "The Department of Commerce refused, saying it was concerned about nuclear research being done with the machine."[46]

On 29 July 2015, President Obama signed an executive order creating a National Strategic Computing Initiative calling for the accelerated development of an exascale (1000 petaflop) system and funding research into post-semiconductor computing.[47]

In June 2016, the Japanese firm Fujitsu announced at the International Supercomputing Conference that its future exascale supercomputer would feature processors of its own design implementing the ARMv8 architecture. The Flagship2020 program, by Fujitsu for RIKEN, planned to break the exaflops barrier by 2020 through the Fugaku supercomputer ("it looks like China and France have a chance to do so and that the United States is content – for the moment at least – to wait until 2023 to break through the exaflops barrier"[48]). These processors also implement extensions to the ARMv8 architecture equivalent to HPC-ACE2, which Fujitsu developed with Arm.[48]

In June 2016, Sunway TaihuLight became the No. 1 system with 93 petaflop/s (PFLOP/s) on the Linpack benchmark.[49]

In November 2016, Piz Daint was upgraded, moving it from 8th to 3rd place and leaving the US with no systems in the top 3 for the second time.[50][51]

Inspur, based in Jinan, China, is one of the largest HPC system manufacturers. As of May 2017, Inspur had become the third manufacturer to have built a 64-way system, a record previously held only by IBM and HP. The company has registered over $10B in revenue and has provided systems to countries such as Sudan, Zimbabwe, Saudi Arabia, and Venezuela. Inspur was also a major technology partner behind both the Tianhe-2 and TaihuLight supercomputers, which occupied the top two positions of the TOP500 list until November 2017. In May 2017, Inspur and Supermicro released several GPU-based platforms aimed at HPC, such as the SR-AI and AGX-2.[52]

In June 2018, Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, US, took the No. 1 spot with a performance of 122.3 petaflop/s (PFLOP/s), and Sierra, a very similar system at the Lawrence Livermore National Laboratory in California, took No. 3. These systems also took the first two spots on the HPCG benchmark. Thanks to Summit and Sierra, the US retook the lead in installed HPC performance, with 38.2% of the total, while China was second with 29.1%. For the first time, the leading HPC manufacturer was not a US company: Lenovo led with 23.8% of systems installed, followed by HPE with 15.8%, Inspur with 13.6%, Cray with 11.2%, and Sugon with 11%.[53]

On 18 March 2019, the United States Department of Energy and Intel announced the first exaFLOP supercomputer would be operational at Argonne National Laboratory by the end of 2021. The computer, named Aurora, was delivered to Argonne by Intel and Cray.[54][55]

On 7 May 2019, The U.S. Department of Energy announced a contract with Cray to build the "Frontier" supercomputer at Oak Ridge National Laboratory. Frontier, originally anticipated to be operational in 2021, was projected to be the world's most powerful computer, with a peak performance of greater than 1.5 exaflops.[56]

Since June 2019, all TOP500 systems deliver a petaflop or more on the High Performance Linpack (HPL) benchmark, with the entry level to the list now at 1.022 petaflops.[57]

In May 2022, the Frontier supercomputer broke the exascale barrier, completing more than a quintillion 64-bit floating point arithmetic calculations per second. Frontier clocked in at approximately 1.1 exaflops, beating out the previous record-holder, Fugaku.[58][59] In June 2024, Aurora was the second computer on the TOP500 to post an exascale Rmax value, at 1.012 exaflops.[60]

Since then, Frontier has been dethroned by El Capitan, hosted at Lawrence Livermore National Laboratory, with an HPL score of 1.742 exaflops.[61]

Large machines not on the list


Some major systems are not on the list. A prominent example is NCSA's Blue Waters, whose operators publicly announced the decision not to participate in the list[62] because they do not feel it accurately indicates the ability of any system to do useful work.[63]

Other organizations decide not to list systems for security and/or commercial competitiveness reasons. One example is the OceanLight supercomputer at the National Supercomputing Center in Qingdao, completed in March 2021, which was submitted for, and won, the Gordon Bell Prize. The computer is an exaflop machine, but it was not submitted to the TOP500 list; the first exaflop machine submitted to the TOP500 was Frontier. Analysts suspected that the NSCQ withheld what would otherwise have been the world's first exascale supercomputer to avoid inflaming political sentiments and fears within the United States, in the context of the US–China trade war.[64] Similarly, government agencies such as the National Security Agency formerly submitted their systems to the TOP500 but stopped after 1998.[65]

Additional purpose-built machines that are not capable of running, or do not run, the benchmark were not included, such as the RIKEN MDGRAPE-3 and MDGRAPE-4.

A Google Tensor Processing Unit v4 pod is capable of 1.1 exaflops of peak performance,[66] while the TPU v5p claims over 4 exaflops in the bfloat16 floating-point format.[67] However, these units are highly specialized to run machine-learning workloads, whereas the TOP500 measures a specific benchmark algorithm using a specific numeric precision.

Tesla Dojo's primary unnamed cluster using 5,760 Nvidia A100 graphics processing units (GPUs) was touted by Andrej Karpathy in 2021 at the fourth International Joint Conference on Computer Vision and Pattern Recognition (CCVPR 2021) to be "roughly the number five supercomputer in the world"[68] at approximately 81.6 petaflops, based on scaling the performance of the Nvidia Selene supercomputer, which uses similar components.[69]

In March 2024, Meta AI disclosed the operation of two data centers with 24,576 H100 GPUs each,[70] almost twice as many as Microsoft Azure's Eagle (No. 3 as of September 2024), which could have placed them 3rd and 4th in the TOP500; however, neither has been benchmarked. During the company's Q3 2024 earnings call in October, Mark Zuckerberg disclosed use of a cluster with over 100,000 H100s.[71]

The xAI Memphis Supercluster (also known as "Colossus") allegedly features 100,000 of the same H100 GPUs, which could have put it in first place, but it is reportedly not in full operation due to power shortages.[72]

After the onset of the US–China trade war, China has largely shrouded its newly online supercomputers and data centers in secrecy, opting out of reporting to the TOP500 list.[4] This is partly driven by fears that its domestic suppliers would be targeted by US sanctions.[73][5]

Computers and architectures that have dropped off the list


IBM Roadrunner[74] is no longer on the list (nor is any other system using the Cell coprocessor or PowerXCell).

Although Itanium-based systems reached second rank in 2004,[75][76] none now remain.

Similarly, (non-SIMD-style) vector processors (NEC-based systems such as the Earth Simulator, which was fastest in 2002[77]) have also fallen off the list, as have the Sun Starfire computers that occupied many spots in the past.

The last non-Linux computers on the list – two AIX systems running on POWER7 (ranked 494th and 495th in July 2017,[78] originally 86th and 85th) – dropped off the list in November 2017.

from Grokipedia
The TOP500 is a project that biannually ranks the 500 most powerful non-distributed supercomputers in the world based on voluntary submissions of their measured performance using the High-Performance LINPACK (HPL) benchmark, which evaluates the sustained floating-point operations per second (FLOPS) achieved when solving a dense system of linear equations. Launched in 1993 by researchers Hans Werner Meuer, Erich Strohmaier, Jack Dongarra, and Horst Simon to update and standardize earlier supercomputer statistics from the University of Mannheim, the TOP500 provides a reliable, comparable metric for tracking advancements in high-performance computing hardware, architectures, and vendors. The lists are published every June and November, coinciding with major international supercomputing conferences, and have become the de facto standard for assessing global HPC capabilities, revealing trends such as the shift toward accelerator-based systems and the progression toward exascale computing. While the HPL benchmark prioritizes peak theoretical performance under idealized conditions, it has been noted for not fully capturing diverse real-world workloads, though its consistency enables long-term trend analysis across decades of exponential growth in computational power.

Overview

Definition and Purpose

The TOP500 is a biannual compilation ranking the 500 most powerful non-classified systems worldwide, based on their measured performance using the High-Performance Linpack (HPL) benchmark. This benchmark evaluates sustained computational capability by solving a dense system of linear equations, reporting results as Rmax, the achieved floating-point operations per second (FLOPS) under standardized conditions. Unlike theoretical peak performance (Rpeak), which represents maximum hardware potential without workload constraints, Rmax captures realistic efficiency on a specific, compute-bound task, serving as a proxy for high-performance computing (HPC) hardware prowess rather than diverse real-world application performance. Initiated in 1993 by Hans Werner Meuer of the University of Mannheim, Erich Strohmaier, and Jack Dongarra, the project built upon earlier Mannheim supercomputer statistics to establish a consistent, verifiable metric for HPC progress. The ranking excludes classified military systems, focusing instead on publicly disclosed, commercially oriented installations to provide transparency into accessible technology frontiers. The primary purpose of the TOP500 is to deliver an empirical overview of evolving HPC landscapes, including dominant processor architectures, system scales, and performance trajectories, thereby enabling researchers, vendors, and policymakers to identify trends in hardware innovation and deployment. Lists are released every June during the International Supercomputing Conference (ISC) and every November at the Supercomputing Conference (SC), fostering community benchmarking and competition without prescribing operational utility beyond the HPL metric. This approach prioritizes standardized comparability over comprehensive workload representation, highlighting aggregate shifts like the rise of accelerator-based designs while acknowledging HPL's limitations in mirroring scientific simulations.

Ranking Methodology

The TOP500 list ranks supercomputers based on their performance in the High Performance Linpack (HPL) benchmark, which solves a dense system of linear equations Ax = b, where A is an n × n nonsymmetric matrix, using LU factorization with partial pivoting and iterative refinement to estimate the solution. The measured performance, denoted Rmax, represents the highest achieved floating-point rate in gigaflops (GFlop/s) from a valid HPL run, with the problem size Nmax selected to maximize this value while ensuring numerical stability and convergence. Theoretical peak performance, Rpeak, is calculated as the product of the number of cores, clock frequency in GHz, and the maximum double-precision floating-point operations per cycle per core (typically 8 for vectorized units or 16 with AVX-512 extensions), using advertised base clock rates without accounting for turbo boosts unless specified. System owners or vendors submit HPL results voluntarily via the official TOP500 portal, including detailed hardware specifications such as core count, processor architecture, interconnect topology, memory capacity, and power consumption measured at the facility level during the benchmark run. Submissions occur biannually, with deadlines preceding the June and November releases, a schedule maintained since the inaugural list in June 1993. Classified military systems are excluded, as their performance data is not publicly verifiable or submitted, ensuring the list reflects only disclosed, civilian-accessible installations. Rankings are determined by sorting submissions in descending order of Rmax; ties are resolved first by descending Rpeak, then by memory size per core, installation date, and alphabetical order of system name.
While HPL implementations may incorporate vendor-specific optimizations for libraries like BLAS or communication routines, the TOP500 requires reproducible results under standard conditions, with the project coordinators reserving the right to audit submissions for compliance, though no formal threshold (e.g., 80% of Rpeak) is mandated—top-ranked systems typically achieve 70-90% through balanced scaling of compute, memory, and interconnect. Collected metadata beyond Rmax and Rpeak enables trend analyses, such as aggregate installed capacity (sum of Rmax across all 500 entries) and shifts in processor families or operating systems.
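The sort order described above can be sketched in a few lines of Python. The entries and their Rmax/Rpeak values are illustrative placeholders (only SystemC's figures echo El Capitan's), and only the first two tie-break keys are modeled:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    name: str     # illustrative placeholder, not an official list entry
    rmax: float   # achieved HPL performance, GFlop/s
    rpeak: float  # theoretical peak, GFlop/s

def rpeak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Rpeak = cores x base clock (GHz) x double-precision flops per cycle per core."""
    return cores * clock_ghz * flops_per_cycle

def rank(entries: list[Entry]) -> list[Entry]:
    # Primary key: descending Rmax; first tie-breaker: descending Rpeak.
    return sorted(entries, key=lambda e: (-e.rmax, -e.rpeak))

systems = [
    Entry("SystemA", rmax=1.2e6, rpeak=1.6e6),
    Entry("SystemB", rmax=1.2e6, rpeak=1.5e6),  # Rmax tie with A, lower Rpeak
    Entry("SystemC", rmax=1.742e9, rpeak=2.746e9),
]
print([e.name for e in rank(systems)])  # SystemC first; A before B on the tie-break
# Example Rpeak: 64 cores at 2.0 GHz with 16 flops/cycle
print(rpeak_gflops(64, 2.0, 16))  # 2048.0
```

The negated sort keys keep the ordering descending on both criteria without a custom comparator.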

History

Inception and Early Development

The TOP500 project originated in spring 1993, initiated by Hans Werner Meuer and Erich Strohmaier of the University of Mannheim, Germany, to systematically track advancements in supercomputing through biannual rankings of the world's most powerful systems based on the Linpack benchmark. Jack Dongarra, developer of the Linpack software, contributed to its methodology from the outset. The inaugural list was published on June 24, 1993, during the International Supercomputing Conference (ISC'93) in Mannheim, amid a period of increasing commercialization in supercomputing following the end of the Cold War, which facilitated greater transparency and reporting of system capabilities previously constrained by classification. The June 1993 list was topped by the Thinking Machines CM-5/1024 at Los Alamos National Laboratory, delivering 59.7 GFLOPS of sustained Linpack performance. Early editions highlighted a pivotal shift from specialized vector processors—dominant in prior decades via vendors like Cray Research—to scalable massively parallel architectures, such as those from Thinking Machines and Intel, driven by the need for higher concurrency to handle growing computational demands in scientific simulations. This transition reflected underlying engineering realities: vector systems excelled in sequential floating-point operations but scaled poorly beyond certain limits, whereas parallel designs leveraged commodity-like components for cost-effective expansion, though initial implementations faced challenges in interconnect efficiency and programming complexity. By June 1997, the ninth list featured Intel's ASCI Red at Sandia National Laboratories as the first system to surpass 1 TFLOPS, achieving 1.068 TFLOPS with 7,264 processors, underscoring the viability of microprocessor-based clusters for terascale computing. Sustained submissions from global HPC sites enabled the lists to consistently reach 500 entries by the mid-1990s, transforming TOP500 into an indicator of technological leadership and institutional prestige in supercomputing.

Major Performance Milestones

The aggregate performance of the TOP500 list began modestly, totaling approximately 1.1 teraflops (TFLOPS) in June 1993. This marked the inception of tracked exponential growth in high-performance computing (HPC), roughly paralleling advancements in semiconductor scaling akin to Moore's Law, with performance doubling approximately every 14 months through the 1990s and early 2000s. A pivotal milestone occurred in June 2008 when the Roadrunner supercomputer achieved 1.026 petaflops (PFLOPS), becoming the first system to surpass the petaflop barrier on the High Performance LINPACK (HPL) benchmark and topping the TOP500 list. Roadrunner's hybrid architecture, combining AMD Opteron processors with IBM PowerXCell 8i chips, signaled the waning dominance of specialized vector processors, as commodity clusters began leveraging heterogeneous accelerators for superior scalability. By June 2019, every system on the TOP500 delivered at least 1 PFLOPS, establishing the list as a universal "petaflop club." The integration of graphics processing units (GPUs) post-2009 accelerated growth, with systems like China's Tianhe-1A in 2010 incorporating NVIDIA Fermi GPUs, contributing to sharper inflection points in aggregate performance. This shift propelled total TOP500 performance from under 100 petaflops (PFLOPS) in the early 2010s to multi-exaflop scales by the mid-2020s, while x86 architectures achieved near-total dominance over custom designs by the 2010s, comprising over 95% of systems due to their cost-effectiveness and ecosystem maturity. The exaflop era dawned in June 2022 with the U.S. Department of Energy's Frontier supercomputer debuting at over 1 EFLOPS, specifically 1.102 EFLOPS on HPL, as the first verified exascale system. Frontier's AMD-based design underscored the efficacy of integrated CPU-GPU processors for extreme-scale HPC.
By June 2025, aggregate TOP500 performance approached 14 EFLOPS, driven by multiple exascale deployments, with El Capitan claiming the top spot at 1.742 EFLOPS, further exemplifying sustained scaling through advanced accelerators and interconnects.

Top Systems as of June 2025

As of the June 2025 TOP500 list, the El Capitan supercomputer at Lawrence Livermore National Laboratory, operated by the U.S. Department of Energy's National Nuclear Security Administration, ranks first with a LINPACK Rmax performance of 1.742 exaFLOPS. This HPE Cray EX255a system employs AMD 4th Generation EPYC processors (24 cores at 1.8 GHz), AMD Instinct MI300A accelerators, Slingshot-11 interconnects, and the TOSS operating system, marking it as the third publicly verified exascale system following Frontier's deployment in 2022 and Aurora's in 2023. El Capitan's architecture emphasizes integrated CPU-GPU computing for nuclear stockpile stewardship and high-energy physics simulations. Frontier, at Oak Ridge National Laboratory under the DOE's Office of Science, holds the second position with 1.353 exaFLOPS Rmax, utilizing HPE Cray EX235a nodes with AMD 3rd Generation EPYC processors (64 cores at 2 GHz), AMD Instinct MI250X accelerators, and Slingshot-11 networking on HPE Cray OS. Aurora, installed at Argonne National Laboratory and also DOE-funded, remains third at approximately 1 exaFLOPS Rmax, based on HPE Cray EX architecture with Intel Xeon CPU Max processors and Intel Data Center GPU Max accelerators. These top three systems, all U.S. Department of Energy installations, represent the only verified exascale capabilities on the list, underscoring a concentration of leading-edge performance in American federally sponsored facilities amid global competition constraints. Beyond the top three, performance declines sharply, with the fourth-ranked system—a Fujitsu PRIMEHPC FX1000 deployment for Japan's RIKEN and the University of Tokyo—achieving under 0.5 exaFLOPS Rmax using A64FX processors and Tofu interconnects. No non-U.S. systems reach exascale thresholds, reflecting submission gaps from major competitors; for instance, China's Sunway TaihuLight, the list leader from 2016 to 2017, has not submitted updated results since November 2018 due to unverifiable High-Performance LINPACK results, exacerbated by U.S. export controls limiting access to advanced semiconductors for benchmark validation. This pattern highlights the reliance on transparent, reproducible testing protocols in TOP500 rankings, which prioritize empirical verifiability over unconfirmed domestic claims.
| Rank | System Name | Site | Rmax (exaFLOPS) | Architecture | Cores (millions) | Country |
|---|---|---|---|---|---|---|
| 1 | El Capitan | LLNL (DOE/NNSA) | 1.742 | HPE Cray EX255a (AMD EPYC + Instinct MI300A) | ~9.2 | United States |
| 2 | Frontier | ORNL (DOE/SC) | 1.353 | HPE Cray EX235a (AMD EPYC + Instinct MI250X) | 8.7 | United States |
| 3 | Aurora | ANL (DOE/SC) | ~1.0 | HPE Cray EX (Intel Xeon Max + GPU Max) | ~10 | United States |
| 4 | Fugaku | RIKEN/U. Tokyo | <0.5 | Fujitsu PRIMEHPC FX1000 (A64FX) | ~4 | Japan |

Aggregate Performance and Growth Rates

The aggregate Rmax performance of the TOP500 list reached 13.84 exaflops (EFlop/s) as of the June 2025 edition, surpassing the previous November 2024 total of 11.72 EFlop/s and marking a semi-annual increase of approximately 18%. This cumulative performance reflects the sustained scaling of high-performance computing (HPC) systems, driven primarily by accelerator integration and architectural optimizations, though constrained by power dissipation limits that have tempered growth in recent exascale-era lists. Historically, the total Rmax has exhibited exponential growth since the inaugural June 1993 list, which recorded 1.13 TFlop/s across the top systems. Over the subsequent 32 years, this represents a multiplication factor exceeding 12 million, implying a long-term compound annual growth rate (CAGR) of roughly 66%, calculated as (13.84 × 10^18 / 1.13 × 10^12)^(1/32) − 1, where the exponent derives from the number of years between lists. Early decades saw annual doublings or faster due to rapid advances in processor density and parallelism, outpacing Moore's Law; however, post-2022 exascale deployments have slowed this to semi-annual gains of 15-20%, or an annualized rate near 30-40%, attributable to diminishing returns from thermal and electrical power envelopes that cap feasible clock speeds and node densities. Efficiency metrics, measured as the ratio of achieved Rmax to theoretical Rpeak, have trended upward across the list, rising from averages below 50% in vector-processor eras to over 60-70% in recent GPU-accelerated systems. This improvement stems from specialized hardware like tensor cores and optimized linear algebra libraries that better exploit dense matrix operations in the High-Performance LINPACK (HPL) benchmark, with top entries routinely achieving 75-80% fractions.
Parallel scaling is evidenced by escalating core counts, with the average system concurrency reaching 275,414 cores in June 2025, up from 257,970 six months prior and a far cry from the thousands typical in 1990s lists. Aggregate cores across the list now exceed 100 million, enabling massive parallelism but highlighting reliance on heterogeneous computing to mitigate Amdahl's Law bottlenecks in communication overhead.
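The growth-rate arithmetic above can be checked directly; a minimal Python sketch using only the aggregate figures quoted in this section (1.13 TFlop/s in 1993, 13.84 EFlop/s in 2025):

```python
import math

# Growth arithmetic from the aggregate Rmax figures quoted above:
# 1.13 TFlop/s (June 1993) -> 13.84 EFlop/s (June 2025), 32 years apart.
r_1993 = 1.13e12   # flop/s
r_2025 = 13.84e18  # flop/s
years = 32

factor = r_2025 / r_1993              # total multiplication factor, > 12 million
cagr = factor ** (1 / years) - 1      # long-run compound annual growth rate
doubling_years = math.log(2) / math.log(1 + cagr)

print(f"factor: {factor:.3g}")                             # ~1.22e+07
print(f"CAGR: {cagr:.1%}")                                 # ~66.5% per year
print(f"doubling time: {doubling_years * 12:.1f} months")  # ~16 months at that average rate
```

The implied long-run doubling time of roughly 16 months is slower than the ~14-month pace of the early decades, consistent with the post-2022 slowdown described above.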

Distribution and Dominance

By Country

As of the June 2025 TOP500 list, the United States maintains overwhelming dominance in both the number of listed systems and their aggregate computational performance, reflecting sustained federal investments in high-performance computing through agencies like the Department of Energy. The U.S. hosts 171 systems, comprising 34% of the total entries, and accounts for over 60% of the list's combined Rmax performance, driven by exascale machines such as El Capitan, Frontier, and Aurora. This leadership underscores policy priorities favoring unrestricted access to cutting-edge technologies and substantial public funding, enabling rapid scaling to multi-exaflop capabilities. China's representation has sharply declined from its mid-2010s peak, when it held over 200 systems in November 2016, often comprising a mix of mid-tier installations that inflated entry counts but contributed modestly to performance shares. By June 2025, China fields only 7 systems, or 1.4% of entries, with a collective Rmax of approximately 158 PFlop/s, equating to under 2% of the total—far below 10% since U.S. export controls on advanced chips took effect in 2019. These restrictions, aimed at curbing the proliferation of high-end processors and GPU accelerators, have limited verified submissions of competitive systems, as Chinese supercomputers increasingly rely on domestic alternatives with inferior scaling. Other nations trail significantly, with Europe's fragmented efforts—bolstered by EU-funded initiatives—yielding collective shares below U.S. levels despite standout entries like JUPITER at rank 4. Japan follows with 37 systems (7.4%), anchored by Fugaku at rank 7, while Germany has 47 (9.4%), France 23 (4.6%), and the United Kingdom 17 (3.4%). These distributions highlight how national policies on R&D funding and international tech collaborations shape outcomes, with no single non-U.S. country exceeding 10% of systems or performance.
| Country | Systems | % of Systems | Approx. Total Rmax (PFlop/s) | % of Rmax |
|---|---|---|---|---|
| United States | 171 | 34.2 | 6,500 | >60 |
| Germany | 47 | 9.4 | 1,200 | ~10 |
| Japan | 37 | 7.4 | 900 | ~7 |
| France | 23 | 4.6 | 400 | ~3 |
| China | 7 | 1.4 | 158 | <2 |

By Institution and Funding Source

The leading positions in the TOP500 list are overwhelmingly occupied by supercomputers operated by U.S. Department of Energy (DOE) national laboratories, underscoring a heavy dependence on federal public funding for peak performance achievements. As of the June 2025 ranking, the top three exascale systems—El Capitan (1,742 PFlop/s at Lawrence Livermore National Laboratory), Frontier (1,353 PFlop/s at Oak Ridge National Laboratory), and Aurora (1,012 PFlop/s at Argonne National Laboratory)—are all deployed at DOE facilities under the Exascale Computing Project, a multiyear initiative that has secured over $1.8 billion in DOE appropriations since 2017 to deliver these systems for national security and scientific applications. Beyond DOE labs, other government-backed research entities play secondary but significant roles, with funding drawn from national or supranational public sources. Japan's RIKEN Center for Computational Science operates systems like the former top-ranked Fugaku (which held the top spot from 2020 to 2022), supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT) investments exceeding $1 billion for prior generations, reflecting Japan's strategy of state-directed development. In Europe, the EuroHPC Joint Undertaking—a supranational entity co-funded by the European Union (contributing the majority via its multiannual budget) and participating member states—manages multiple TOP500 entrants, including the fourth-ranked JUPITER (deployed in Germany) and others in the top 50, with total program funding approaching €8 billion through 2027 for petascale and exascale infrastructure. Private industry involvement as operators remains marginal in the upper echelons of the TOP500, as commercial entities prioritize proprietary clusters optimized for workloads like AI model training over the High-Performance Linpack benchmark, often declining submissions to protect competitive advantages or due to internal classification.
While vendors such as HPE supply hardware under government contracts, the scale of leading systems—requiring coordinated public subsidies in the billions across major economies—demonstrates that sustained dominance relies on taxpayer-funded programs rather than market-driven private investment alone.

Technical Specifications

Processor Architectures and Vendors

The x86 architecture remains predominant in TOP500 supercomputers, powering over 90% of total cores across listed systems due to its established ecosystem and performance in high-performance computing workloads. Intel processors equip 58.8% of the June 2025 list's systems, a decline from 61.8% in the prior edition, while AMD's EPYC series appears in 162 systems, including exascale machines like El Capitan and Frontier. ARM-based designs hold a niche role, exemplified by Japan's Fugaku, which briefly topped the list in 2020 but now ranks lower as x86 hybrids with accelerators dominate top performance tiers. Accelerators have become integral to top systems since the 2010s, with CPU-GPU hybrids enabling exaflop-scale computing; 232 of the June 2025 entries incorporate such accelerators. NVIDIA GPUs historically lead adoption, powering a majority of accelerated systems through architectures like the H100, though AMD's Instinct MI300A has surged in compute share, notably in the top-ranked El Capitan with its integrated CPU-GPU design. This shift reflects vendor strategies prioritizing unified memory and high-bandwidth integration for dense floating-point operations. Processor vendors Intel and AMD control the bulk of CPU deployments, with accelerators split between NVIDIA's ecosystem and AMD's platform, the latter gaining traction in U.S. Department of Energy systems amid diversification efforts. System integrators like HPE, incorporating Cray EX platforms, dominate top placements, with seven of the top ten June 2025 systems using HPE hardware featuring Slingshot-11 interconnects for low-latency scaling. InfiniBand holds a 34% share of interconnects, favored for its RDMA capabilities, marking a transition from proprietary networks to commoditized high-speed fabrics. Domestic Chinese processors, such as Phytium's ARM-derived chips, face marginalization in global rankings due to U.S. export controls enacted since 2019, which restrict access to advanced fabrication and components, limiting scalability and performance against Western x86-GPU stacks. These sanctions, including Entity List placements, have prompted TSMC to halt orders from Phytium, forcing reliance on older nodes and reducing China's presence in upper TOP500 echelons.

Operating Systems and Interconnects

Linux-based operating systems have dominated the TOP500 lists since November 2017, with every one of the 500 fastest supercomputers running a Linux variant as of June 2025. This complete market share reflects Linux's advantages in scalability, customizability, and open-source ecosystem support for high-performance computing (HPC) workloads. Common distributions include customized versions such as the Tri-Lab Operating System Suite (TOSS), developed for U.S. Department of Energy laboratories, SUSE Linux Enterprise Server for HPC, and other enterprise distributions with HPC optimizations. Earlier lists featured Unix derivatives and proprietary systems, but these were supplanted by Linux by the mid-2010s due to superior flexibility and community-driven development. High-speed interconnects enable efficient communication among thousands of nodes, with InfiniBand holding primacy for low-latency, high-bandwidth needs in top-ranked systems. NVIDIA's InfiniBand solutions, including HDR variants post-Mellanox acquisition, power 254 of the TOP500 systems as of November 2024, outperforming Ethernet in performance-critical deployments. RoCE-enabled Ethernet connects 111 systems but trails in share among the highest performers, as InfiniBand's remote direct memory access (RDMA) features minimize overhead for parallel computing. Specialized alternatives like HPE Cray Slingshot-11 underpin U.S. exascale machines such as El Capitan and Frontier, delivering sub-microsecond latencies optimized for extreme-scale simulations. Recent trends emphasize ecosystem standardization, with containerization via tools like Apptainer gaining traction on Linux stacks to facilitate reproducible environments without compromising security or performance isolation. Remnants of non-Linux HPC operating systems, including Windows variants, have vanished from the lists, underscoring Linux's unchallenged position.

Energy Efficiency via Green500

The Green500 list complements the TOP500 by ranking the same supercomputers according to their energy efficiency, calculated as HPL performance in gigaflops divided by power consumption in watts during the benchmark run (GFlops/W). This metric reveals the substantial electrical demands underlying modern supercomputing, which the TOP500's focus on raw flops omits, thereby emphasizing trade-offs in system design where power efficiency may conflict with peak throughput. In the June 2025 Green500 edition, the top-ranked system is JEDI (JUPITER Exascale Development Instrument), a module of the EuroHPC JUPITER system operated by Forschungszentrum Jülich in Germany, attaining 72.73 GFlops/W alongside 4.5 PFlops of performance. By contrast, El Capitan—the June 2025 TOP500 leader with over 2 exaflops of theoretical peak—ranks 25th on the Green500 at 58.89 GFlops/W, underscoring a weak correlation between peak performance and efficiency. Such disparities arise because HPL favors dense linear algebra computations that underutilize I/O, memory, and other subsystems critical to overall workload viability, allowing efficiency-optimized systems to outperform raw-power giants in flops-per-watt despite lower absolute speeds. Historical trends in the Green500 demonstrate energy efficiency roughly doubling with successive generations, driven by advances in processors, accelerators, and cooling, though absolute power draw has escalated. Exascale systems exemplify this, typically requiring 20-30 megawatts (MW) at peak—such as Frontier's approximately 21 MW or El Capitan's 30 MW—potentially scaling to 60 MW for future iterations amid denser integrations and higher clock rates. This progression highlights ongoing challenges in balancing computational density with sustainable power envelopes, as efficiency gains lag behind performance scaling.
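The efficiency metric is simple enough to reproduce. In this sketch the 30 MW figure for El Capitan is the rounded value quoted above, so the result only approximates the official 58.89 GFlops/W, and JEDI's power draw is back-derived from its quoted efficiency:

```python
def gflops_per_watt(rmax_gflops: float, power_watts: float) -> float:
    """Green500 metric: HPL Rmax (GFlop/s) divided by power draw during the run (W)."""
    return rmax_gflops / power_watts

# El Capitan: 1.742 EFlop/s at a rounded 30 MW; the official 58.89 GFlops/W
# implies a measured draw closer to ~29.6 MW.
el_capitan = gflops_per_watt(1.742e9, 30e6)
print(f"El Capitan: {el_capitan:.1f} GFlops/W")  # ~58.1

# JEDI: 72.73 GFlops/W at 4.5 PFlop/s implies roughly 62 kW of power.
jedi_power_kw = 4.5e6 / 72.73 / 1e3
print(f"JEDI implied power: {jedi_power_kw:.0f} kW")
```

The three-orders-of-magnitude gap in absolute power between the two systems illustrates why flops-per-watt leadership and raw-performance leadership rarely coincide.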

Specialized Benchmarks for AI and Other Workloads

The HPL-MxP benchmark, an evolution of the HPL-AI proposal, adapts the High-Performance Linpack test for the mixed-precision floating-point operations and data structures common in AI training and inference. This variant measures sustained performance in lower-precision computations (e.g., FP16 or BF16), yielding higher throughput than double-precision HPL while better approximating AI workload demands. As of 2024, the Aurora system at Argonne National Laboratory topped HPL-MxP rankings with 11.6 Exaflop/s, followed by Frontier at 11.4 Exaflop/s, demonstrating exascale capabilities tailored for mixed workloads. However, HPL-MxP submissions remain optional and sparse, with fewer than a dozen systems reporting results per TOP500 list, highlighting limited integration despite growing AI relevance. Complementary benchmarks expose gaps in TOP500's dense linear algebra focus, emphasizing I/O, irregular access patterns, and end-to-end AI pipelines. The IO500 suite assesses holistic storage performance through bandwidth, metadata operations, and I/O patterns representative of HPC and AI data movement, with production lists updated biannually at ISC and SC conferences. Systems powered by DDN storage have dominated recent IO500 rankings, achieving superior results in real-world AI/HPC scenarios where data ingestion bottlenecks exceed compute limits. Similarly, the Graph500 evaluates breadth-first search and single-source shortest path kernels on large-scale graphs, targeting analytics workloads that stress irregular memory access over sustained FLOPS. Top performers, such as NVIDIA-based clusters, underscore hardware optimizations for graph traversal, contrasting TOP500's bias toward predictable, compute-bound tasks. MLPerf benchmarks provide rigorous, vendor-agnostic evaluations of AI training and inference across diverse models, including large language models and vision tasks, prioritizing time-to-train metrics over raw FLOPS.
In MLPerf v5.0 (June 2025), NVIDIA's Blackwell GPUs set records for scaling to thousands of accelerators, reflecting hardware tuned for tensor operations and massive parallelism in AI pipelines. Unlike TOP500, MLPerf incorporates full-stack system effects like interconnect latency and software efficiency, revealing divergences where HPL-optimized machines underperform in sparse, memory-bound AI scenarios. These adjunct lists illustrate the diversification of HPC evaluation: AI-driven submissions to TOP500 increasingly incorporate MxP testing but retain HPL primacy, with exascale systems like Frontier prioritizing simulation fidelity over iterative model-training demands.

Criticisms and Limitations

Methodological Flaws in Linpack Benchmark

The High-Performance Linpack (HPL) benchmark, which underpins TOP500 rankings, primarily evaluates floating-point throughput by solving dense systems of linear equations via LU factorization with partial pivoting, emphasizing compute-intensive operations over other system capabilities. This focus renders HPL largely compute-bound, with arithmetic intensity around 2.5 flops per byte for matrix multiplications, imposing modest demands on memory bandwidth—typically requiring 40-80 GB/s per socket for optimal runs—while largely disregarding irregular memory access patterns, latency sensitivities, and I/O dependencies prevalent in scientific simulations. Real-world high-performance computing (HPC) applications, such as climate modeling, often exhibit memory-bound or communication-bound behaviors, achieving sustained performance at 10-30% of a system's HPL-measured Rmax (the benchmark's reported flops rate), compared to HPL's own 50-90% efficiency relative to theoretical peak (Rpeak). This divergence arises because HPL's regular, predictable data access allows near-peak utilization of compute units, whereas applications involve sparse matrices, non-local dependencies, and filesystem interactions that amplify bandwidth and latency constraints, sometimes limiting effective throughput to fractions of HPL scores. Vendors and system integrators extensively optimize HPL implementations—tuning parameters like block sizes (NB), process grids, and BLAS libraries (e.g., via vendor-specific accelerations)—to maximize Rmax, often at the expense of generalizability to untuned workloads. Such "benchmark gaming" has led to architectures prioritized for HPL over balanced I/O or sustained application performance, with reports of systems engineered specifically to inflate TOP500 entries rather than enhance broad HPC utility.
Additionally, TOP500 submissions exclude classified supercomputers, which national security programs operate without public disclosure, thereby skewing rankings toward unclassified, often academic or open-science systems and underrepresenting total global or national HPC capacity. This omission favors transparent installations while potentially distorting perceptions of technological leadership in opaque domains like defense simulations.

Broader Interpretive and Geopolitical Issues

The TOP500 list is frequently misinterpreted as a comprehensive proxy for national supercomputing or overall computing prowess, despite measuring only peak performance on the High-Performance Linpack benchmark, which correlates poorly with real-world scientific utility or broader computing capacity. This overreliance has fueled geopolitical narratives, such as viewing rankings as indicators of military or economic dominance, yet the list excludes undisclosed systems, private-sector deployments, and non-submitted entries, distorting assessments of aggregate national compute resources. The sustained U.S. dominance in recent TOP500 rankings—holding the top three positions as of November 2024—owes significantly to export controls imposed since 2019, which have restricted China's access to advanced semiconductors and components, leading to a sharp decline in verified Chinese submissions from over 200 in 2016 to fewer than 10 by 2023. These measures, expanded under both Trump and Biden administrations to target high-performance chips like advanced GPUs, have isolated China's HPC sector and prompted non-participation in TOP500 submissions since around 2020, though smuggling and circumvention may partially offset impacts on unlisted systems. Pursuit of TOP500 prestige often prioritizes benchmark optimization over productive scientific output, with exascale systems like the U.S. Department of Energy's Frontier costing approximately $600 million yet yielding marginal advancements relative to that investment, as evidenced by the benchmark's narrow focus amid escalating expenses for power and maintenance exceeding $100 million annually per site. China's assertions of superior unlisted supercomputing capacity, estimated by TOP500 co-founder Jack Dongarra to potentially exceed publicly listed totals, remain unverified due to opacity and lack of independent benchmarking, raising doubts about their scale and military applications amid U.S. sanctions.
The list's emphasis on government-submitted, publicly benchmarked systems biases it toward state-funded initiatives, understating commercial capacity: private entities now control about 80% of global AI-oriented clusters as of the mid-2020s, many of which—such as hyperscaler clouds or undisclosed corporate AI training setups—eschew TOP500 participation to avoid revealing proprietary capabilities or due to incompatible workloads. This private-sector shift highlights how TOP500 captures only a fraction of deployable compute, particularly in AI-driven applications where clustered GPUs prioritize training efficiency over Linpack scores.

Impact and Future Outlook

Contributions to Scientific Computing

Supercomputers tracked by the TOP500 list have enabled empirical advancements in plasma physics, particularly for fusion energy research, by performing simulations that capture multiscale turbulence effects unattainable with prior computational scales. The Frontier system, ranked first on the TOP500 since May 2022, facilitated gyrokinetic modeling using the CGYRO code to simulate plasma temperature fluctuations driven by ion-temperature-gradient turbulence, yielding data on particle and heat transport that inform confinement optimization in tokamak devices. Similarly, Frontier-supported optimizations of fusion codes have pushed predictive modeling of energy losses in plasmas, aiding performance enhancements in experimental reactors. In , TOP500 systems like deliver exascale performance for molecular simulations, accelerating and binding affinity predictions that process terabytes of chemical in hours rather than years. This capability stems from heterogeneous architectures combining CPUs and GPUs, as seen in DOE facilities, where such compute resolves protein-ligand interactions at atomic resolutions previously limited by classical methods. 's role exemplifies how TOP500-tracked exascale platforms enable precision medicine workflows, with outputs validated in peer-reviewed studies on therapeutic candidates. Climate modeling benefits from TOP500 systems' capacity for high-fidelity, petabyte-scale simulations of atmospheric and oceanic dynamics, resolving fine-scale phenomena like microphysics that clusters cannot match in resolution or speed. Systems such as , ranked in the global top 10, integrate AI-driven parameterizations to refine ensemble forecasts, improving predictive accuracy for events. NVIDIA-accelerated TOP500 machines further these efforts by handling coupled Earth system models, producing verifiable hindcasts that align with observational data from satellites and ground stations. 
The standardization of GPU architectures in TOP500 environments has spurred optimizations that extend to broader scientific workflows, though their cost-effectiveness relative to alternatives such as cloud-distributed systems requires case-specific economic analysis beyond raw performance metrics. Causal evidence for these contributions lies in domain-specific publications citing TOP500 hardware, rather than in the rankings alone, underscoring the need for reproducible simulations over aggregate flops.
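What an HPL-style score actually measures can be illustrated with a toy single-node sketch in Python, with NumPy standing in for an optimized HPL build. The flop formula used here, 2/3·n³ + 2·n², is the operation count HPL credits a run with; everything else (matrix size, seed) is illustrative.

```python
import time
import numpy as np

def hpl_flop_count(n: int) -> float:
    # Operation count HPL credits a run with: LU factorization
    # plus the triangular solves, 2/3 * n^3 + 2 * n^2 flops.
    return (2.0 / 3.0) * n**3 + 2.0 * n**2

def toy_linpack(n: int = 1000) -> float:
    """Solve a random dense system A x = b and report GFLOP/s."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    np.linalg.solve(a, b)      # LU with partial pivoting under the hood
    elapsed = time.perf_counter() - t0
    return hpl_flop_count(n) / elapsed / 1e9

print(f"{toy_linpack():.1f} GFLOP/s")  # single node, double precision
```

A laptop manages a few tens of GFLOP/s on this; El Capitan's 1.742 EFlop/s is roughly eight orders of magnitude more, which is the gap the list quantifies.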

Exascale Achievements and Challenges

The United States achieved the first verified exascale supercomputers on the TOP500 list: Frontier at Oak Ridge National Laboratory reached 1.102 EFlop/s on the High-Performance Linpack benchmark in June 2022, the first sustained exascale result at 64-bit precision. Aurora at Argonne National Laboratory followed, entering the TOP500 in 2023 and reaching 1.012 EFlop/s by June 2025, while El Capitan at Lawrence Livermore National Laboratory became the third system to surpass 1 EFlop/s, debuting at 1.742 EFlop/s in November 2024 and retaining the top position through June 2025. These three Department of Energy systems, all built by Hewlett Packard Enterprise, dominate the list's upper ranks, with no other nation reporting independently verified exascale capability in TOP500 submissions as of mid-2025. Europe and China have pursued exascale systems but lag in verified performance: EuroHPC's JUPITER, touted as Europe's first exascale machine, ranked in the global top 10 by June 2025 but did not exceed 1 EFlop/s on Linpack, while Chinese efforts remain unverified on the TOP500 despite prior claims of advanced prototypes. The U.S. lead stems from coordinated investment under the Exascale Computing Project, which enabled full deployment of heterogeneous architectures pairing general-purpose processors with advanced accelerators.

Exascale systems face persistent challenges in power consumption: Frontier draws approximately 21 MW to deliver its performance, while design targets aimed for 20 MW per exaflop, an efficiency of about 50 GFLOPS/W that remains difficult to scale uniformly. Fault tolerance poses another barrier, as systems with millions of cores (e.g., Frontier's roughly 8.7 million) see mean time between failures drop to minutes during full-scale runs, requiring software mechanisms for checkpointing and recovery amid extreme parallelism involving billions of tasks.
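The power figures above reduce to simple arithmetic; a minimal helper, using the values stated in the text, shows why a 20 MW-per-exaflop target equates to 50 GFLOPS/W:

```python
def gflops_per_watt(rmax_eflops: float, power_mw: float) -> float:
    """Convert an HPL result (EFlop/s) and power draw (MW) to GFLOPS/W."""
    return (rmax_eflops * 1e9) / (power_mw * 1e6)

# The 20 MW-per-exaflop design target implies 50 GFLOPS/W:
target = gflops_per_watt(1.0, 20.0)       # 50.0
# Frontier's figures from the text, 1.102 EFlop/s at ~21 MW:
frontier = gflops_per_watt(1.102, 21.0)   # ~52.5
```

This is the same performance-per-watt metric the companion Green500 list ranks systems by.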
Cooling innovations, such as the direct liquid cooling used in Frontier, address heat dissipation from dense node packing, reducing energy overheads compared with air-based methods but introducing complexities in maintenance and scalability for future zettascale designs.
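The checkpoint-and-recovery trade-off mentioned above is commonly reasoned about with Young's first-order approximation for the optimal checkpoint interval, τ ≈ √(2·C·MTBF), where C is the cost of writing one checkpoint. A sketch with hypothetical numbers (not figures from any specific TOP500 system):

```python
import math

def optimal_checkpoint_interval(ckpt_cost_s: float, mtbf_s: float) -> float:
    """Young's approximation: checkpoint every sqrt(2 * C * MTBF) seconds."""
    return math.sqrt(2.0 * ckpt_cost_s * mtbf_s)

# Hypothetical figures: a 60 s checkpoint on a machine whose full-scale
# MTBF has fallen to 30 minutes suggests checkpointing roughly every 8 min.
tau = optimal_checkpoint_interval(60.0, 30 * 60.0)   # ~465 s
```

The formula makes the scaling problem concrete: as MTBF shrinks with core count, the optimal interval shrinks too, and an ever-larger share of machine time goes to writing checkpoints rather than computing.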

Evolving Role in AI and Geopolitical Competition

As AI workloads proliferate, the TOP500's reliance on the High Performance Linpack (HPL) benchmark, optimized for dense linear algebra in double precision, increasingly misaligns with AI demands for sparse operations and mixed-precision computing. Variants like HPL-MxP, which emulate AI training through reduced-precision arithmetic, have gained traction in submissions; one leading system, for instance, achieved 8.73 EFlop/s on HPL-MxP in June 2025 evaluations, highlighting HPC-AI convergence. Yet TOP500 has not adopted these as core metrics, limiting its relevance amid commercial shifts in which NVIDIA's AI-optimized hardware, powering over half the top systems by 2024, favors workload benchmarks like MLPerf over standardized HPC tests.

Geopolitical rivalries amplify these dynamics. U.S. export controls imposed since 2022 have curtailed China's acquisition of advanced GPUs and interconnects, resulting in fewer disclosed Chinese entries and a pivot to indigenous chips such as Huawei's. This has preserved U.S. leadership, with American systems claiming the top three spots in June 2025, while Europe advances sovereignty through projects like JUPITER, Europe's first exascale machine, activated in September 2025 at Forschungszentrum Jülich and delivering over 1 exaFLOP/s for AI and simulation workloads under EU control. Prospects for zettaflop-scale systems face thermodynamic and cost barriers, with energy demands exceeding practical limits for on-premises deployments; cloud AI clusters, such as Oracle's Zettascale10 unveiled in October 2025 with 16 zettaFLOPS of peak performance from 800,000 GPUs, exemplify a trend toward scalable, proprietary infrastructure that bypasses TOP500 scrutiny. If such clouds dominate AI innovation, TOP500 risks marginalization, supplanted by workload-specific rankings that better reflect economic viability than raw peak flops.
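The idea behind HPL-MxP, performing the expensive factorization in low precision and then recovering double-precision accuracy through iterative refinement, can be sketched in a few lines of NumPy. This is an illustrative toy, not the benchmark's implementation; the matrix size and conditioning are chosen only so the refinement converges quickly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# A well-conditioned test system (diagonally dominant).
a64 = rng.standard_normal((n, n)) + n * np.eye(n)
b64 = rng.standard_normal(n)

# Step 1: solve in float32 -- the cheap, "AI-speed" part of the workload.
a32 = a64.astype(np.float32)
x = np.linalg.solve(a32, b64.astype(np.float32)).astype(np.float64)

# Step 2: iterative refinement against the float64 residual recovers
# full double-precision accuracy. (Real HPL-MxP reuses the low-precision
# LU factors for each correction; this sketch simply re-solves.)
for _ in range(5):
    r = b64 - a64 @ x                                   # exact residual
    x += np.linalg.solve(a32, r.astype(np.float32)).astype(np.float64)

rel_err = np.linalg.norm(a64 @ x - b64) / np.linalg.norm(b64)
```

Because the dominant O(n³) work runs at low precision, hardware with fast tensor units reports far higher HPL-MxP throughput than FP64 HPL, which is why the same machine can post 8.73 EFlop/s on HPL-MxP against roughly 1 EFlop/s on Linpack.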

References

  1. TechRadar, "Oracle claims to have the largest AI supercomputer in the cloud with 16 zettaflops of peak performance, 800,000 NVIDIA GPUs". https://www.techradar.com/pro/oracle-claims-to-have-the-largest-ai-supercomputer-in-the-cloud-with-16-zettaflops-of-peak-performance-800-000-nvidia-gpus