Computer performance by orders of magnitude

From Wikipedia, the free encyclopedia

This list compares various amounts of computing power, in instructions per second, organized by order of magnitude in FLOPS.
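
As a rough illustration of how the list is organized, here is a minimal Python sketch (not part of the article) that maps a FLOPS figure to the scale name used for the section headings below; the function name and prefix table are assumptions chosen to mirror those headings:

    # A minimal sketch: map a FLOPS figure to the order-of-magnitude
    # scale names used as section headings below. The prefix table and
    # function name are illustrative assumptions, not from the article.
    import math

    SCALE_PREFIXES = {3: "kilo", 6: "mega", 9: "giga", 12: "tera",
                      15: "peta", 18: "exa", 21: "zetta"}

    def scale_name(flops: float) -> str:
        exponent = math.floor(math.log10(flops))
        # Round down to the nearest exponent that has a heading (multiples of 3).
        covered = [e for e in SCALE_PREFIXES if e <= exponent]
        if not covered:
            return f"10^{exponent} FLOPS"
        return f"10^{exponent} FLOPS ({SCALE_PREFIXES[max(covered)]}scale)"

    print(scale_name(1.72e18))  # "10^18 FLOPS (exascale)", El Capitan's figure below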

Milliscale computing (10^−3)

  • 2×10^−3: average human multiplication of two 10-digit numbers using pen and paper without aids[1]

Deciscale computing (10^−1)

  • 1×10^−1: multiplication of two 10-digit numbers by a 1940s electromechanical desk calculator[1]
  • 3×10^−1: multiplication on Zuse Z3 and Z4, the first programmable digital computers, 1941 and 1945 respectively
  • 5×10^−1: computing power of the average human mental calculation[clarification needed] for multiplication using pen and paper

Scale computing (10^0)

  • 1.2 OP/S: addition on Z3, 1941, and multiplication on Bell Model V, 1946
  • 2.4 OP/S: addition on Z4, 1945

Decascale computing (10^1)

  • 1.8×10^1: ENIAC, the first programmable electronic digital computer, 1945[2]
  • 5×10^1: upper end of serialized human perception computation (light bulbs do not flicker to the human observer)
  • 7×10^1: Whirlwind I, a 1951 vacuum-tube computer, and the IBM 1620, a 1959 transistorized scientific minicomputer[2]

Hectoscale computing (10^2)

Kiloscale computing (10^3)

Megascale computing (10^6)

Gigascale computing (10^9)

Terascale computing (10^12)

Petascale computing (10^15)

  • 1.026×10^15: IBM Roadrunner supercomputer, 2009
  • 1.32×10^15: Nvidia GeForce RTX 4090 (GeForce 40 series) consumer graphics card in AI applications, October 2022[8]
  • 2×10^15: Nvidia DGX-2, a 2-petaflop machine-learning system (the newer DGX A100 offers 5 petaflops)
  • 11.5×10^15: Google TPU pod containing 64 second-generation TPUs, May 2017[9]
  • 17.17×10^15: IBM Sequoia's LINPACK performance, June 2013[10]
  • 20×10^15: roughly the hardware equivalent of the human brain according to Ray Kurzweil, published in his 1999 book The Age of Spiritual Machines: When Computers Exceed Human Intelligence[11]
  • 33.86×10^15: Tianhe-2's LINPACK performance, June 2013[10]
  • 36.8×10^15: 2001 estimate of the computational power required to simulate a human brain in real time[12]
  • 93.01×10^15: Sunway TaihuLight's LINPACK performance, June 2016[13]
  • 143.5×10^15: Summit's LINPACK performance, November 2018[14]

Exascale computing (10^18)

  • 1×10^18: Fugaku, 2020 Japanese supercomputer, in single-precision mode[15]
  • 1.1×10^18: Frontier, 2022 U.S. supercomputer
  • 1.72×10^18: El Capitan, the fastest non-distributed supercomputer in the world as of November 2024[18]
  • 1.88×10^18: peak throughput achieved by the U.S. Summit supercomputer while analysing genomic data with a mixture of numerical precisions[16]
  • 2.43×10^18: Folding@home distributed computing system during the COVID-19 pandemic response[17]

Zettascale computing (10^21)

  • 1×10^21: Accurate global weather estimation on the scale of approximately 2 weeks.[19] Assuming Moore's law remains applicable, such systems may be feasible around 2035.[20]
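
As a rough check of the extrapolation above, here is a Python sketch estimating when a 1-zettaFLOPS system might appear under an assumed performance-doubling period; the baseline (Frontier, 1.1×10^18 FLOPS in 2022) and both doubling periods are assumptions, not figures from the cited sources:

    # Back-of-envelope sketch: years to reach 1 zettaFLOPS under a
    # simple exponential-growth model. All inputs are assumptions.
    import math

    def years_to_target(current_flops, target_flops, doubling_period_years):
        doublings_needed = math.log2(target_flops / current_flops)
        return doublings_needed * doubling_period_years

    # Assumed baseline: Frontier at 1.1x10^18 FLOPS in 2022.
    print(2022 + years_to_target(1.1e18, 1e21, 1.2))  # ~2034 if performance doubles every ~1.2 years
    print(2022 + years_to_target(1.1e18, 1e21, 2.0))  # ~2042 if it doubles every ~2 years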

A zettascale computer system could generate more single-precision floating-point data in one second than was stored by all digital means on Earth in the first quarter of 2011.[citation needed]
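
A back-of-envelope check of that claim, assuming 4 bytes per single-precision result and one result per floating-point operation (both assumptions for illustration):

    # Rough arithmetic only; not a sourced figure.
    SINGLE_PRECISION_BYTES = 4            # IEEE 754 binary32 result size
    flops = 1e21                          # one zettaFLOPS; one result per operation assumed
    bytes_per_second = flops * SINGLE_PRECISION_BYTES
    print(f"{bytes_per_second:.1e} bytes/s")  # 4.0e+21 bytes/s, roughly 4 zettabytes per second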

Beyond zettascale computing (>10^21)

  • 1.12×10^36: Estimated computational power of a Matrioshka brain, assuming 1.87×10^26 W of power produced by solar panels and 6 GFLOPS/W efficiency[21] (a worked check follows this list)
  • 4×10^48: Estimated computational power of a Matrioshka brain powered by the Sun, with its outermost layer operating at 10 kelvins and its constituent parts operating at or near the Landauer limit, drawing power at the efficiency of a Carnot engine
  • 5×10^58: Estimated computational power of a galaxy with luminosity equivalent to the Milky Way, if converted entirely into Matrioshka brains
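
The first figure in the list above is the product of the assumed power budget and the assumed efficiency; a short Python check (inputs taken from that entry, the calculation itself is only a sketch):

    # Worked check of the 1.12x10^36 Matrioshka-brain figure:
    # total compute = available power x computational efficiency.
    solar_power_watts = 1.87e26            # solar-panel output assumed in the entry
    efficiency_flops_per_watt = 6e9        # 6 GFLOPS/W assumed in the entry
    total_flops = solar_power_watts * efficiency_flops_per_watt
    print(f"{total_flops:.3e} FLOPS")      # 1.122e+36, matching the listed 1.12x10^36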
