Timeline of scientific computing

The following is a timeline of scientific computing, also known as computational science.

Before modern computers

Before the 18th century

  • c. 500 BCE – Urdhva Tiryakbhyam algorithm, a Vedic method for fast integer multiplication, foundational for Indian mathematics.[6]
  • c. 300 BCE – Babylonian root extraction method, the earliest documented numerical algorithm for computing square roots (a sketch of the iteration follows the 18th-century list below).[7]
  • c. 250 BCE – Chinese remainder theorem, a systematic solution to simultaneous congruences, later used in cryptography.[8]
  • 850 CE – Al-Kindi's frequency analysis, the first systematic cryptanalysis technique for breaking substitution ciphers.[21]
  • 1206 – Al-Jazari's programmable orchestra, mechanical automata using pegged cylinders for sequence control (an early form of program storage).[22]
  • 1676 – Leibniz's chain rule, a foundation for the calculus-based optimisation later used in backpropagation.[22]

18th century

  • Thomas Simpson rediscovers Simpson's rule, a century after Johannes Kepler (who derived it in 1615 after seeing it used to estimate the volume of wine barrels).
  • 1733 – The French naturalist Comte de Buffon poses his needle problem, an early scheme for estimating a quantity by random sampling (see the Monte Carlo sketch below).[1][2]
  • 1738/1763 – Bernoulli's utility theory and Bayes' theorem, probabilistic frameworks later used in decision-making algorithms.[22]
  • Euler devises his simple numerical method for integrating ordinary differential equations, now called the Euler method (sketched below).[3][4][5]

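The Babylonian root-extraction method noted above is the iteration x ← (x + S/x) / 2, which converges rapidly to √S from any positive starting guess. A minimal Python sketch (the function name and tolerance are illustrative, not from the source):

    def babylonian_sqrt(s, tol=1e-12):
        """Approximate sqrt(s) with the Babylonian (Heron's) iteration."""
        if s < 0:
            raise ValueError("s must be non-negative")
        if s == 0:
            return 0.0
        x = s if s >= 1 else 1.0        # any positive starting guess works
        while abs(x * x - s) > tol * s:
            x = 0.5 * (x + s / x)       # average the guess with s / guess
        return x

    print(babylonian_sqrt(2))  # 1.4142135623730951
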
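Buffon's needle problem anticipates Monte Carlo estimation: a needle of length l dropped on a floor ruled with parallel lines a distance d apart (with l ≤ d) crosses a line with probability 2l/(πd), so counting crossings estimates π. A sketch under those assumptions (parameter names are illustrative):

    import math
    import random

    def estimate_pi_buffon(trials, needle_len=1.0, line_gap=2.0):
        """Estimate pi by simulating Buffon's needle (needle_len <= line_gap)."""
        hits = 0
        for _ in range(trials):
            center = random.uniform(0.0, line_gap / 2)   # needle centre to nearest line
            theta = random.uniform(0.0, math.pi / 2)     # needle angle against the lines
            if center <= (needle_len / 2) * math.sin(theta):
                hits += 1                                # the needle crosses a line
        # P(cross) = 2 l / (pi d)  =>  pi ~ 2 l trials / (d hits)
        return 2 * needle_len * trials / (line_gap * hits)

    print(estimate_pi_buffon(1_000_000))  # roughly 3.14
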
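Euler's method steps an initial value problem y' = f(t, y), y(t0) = y0 forward via y(t + h) ≈ y(t) + h·f(t, y). A minimal sketch; the test problem y' = y is chosen purely for illustration:

    def euler(f, t0, y0, h, steps):
        """Integrate y' = f(t, y) with the forward Euler method."""
        t, y = t0, y0
        for _ in range(steps):
            y += h * f(t, y)   # step along the slope at the current point
            t += h
        return y

    # y' = y with y(0) = 1 gives y(1) = e ~ 2.71828
    print(euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000))  # ~ 2.7169
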
19th century

  • First formulation of Gram–Schmidt orthogonalisation by Laplace,[9] to be further improved decades later[10][11][12][13] (a sketch of the procedure follows this list).
  • 1822 – Babbage begins work on a machine to compute values of polynomial functions automatically using the method of finite differences; it eventually becomes known as the Difference Engine.
  • 1842 – Lovelace's Note G on the Analytical Engine describes an algorithm for generating Bernoulli numbers. It is considered the first algorithm specifically tailored for implementation on a computer, and thus the first computer programme.[14][15] The engine was never completed, however, so her code was never tested.[16]
  • Adams–Bashforth method published.[17]
  • In applied mathematics, Jacobi develops his iterative technique for solving systems of linear equations (see the sketch after this list).[18][19][20]
  • The Gauss–Seidel iterative method is first published.
  • 1886 – The Harmonic Analyser is built to help with computing tides.

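Gram–Schmidt orthogonalisation builds an orthonormal basis by subtracting from each vector its components along the vectors already processed. A small sketch of the modified variant (one of the later numerical refinements alluded to above); the function name is illustrative:

    def gram_schmidt(vectors):
        """Orthonormalise a list of vectors (modified Gram-Schmidt)."""
        basis = []
        for v in vectors:
            w = list(v)
            for q in basis:
                dot = sum(wi * qi for wi, qi in zip(w, q))
                w = [wi - dot * qi for wi, qi in zip(w, q)]   # remove the component along q
            norm = sum(wi * wi for wi in w) ** 0.5
            if norm > 1e-12:                                  # drop (near-)dependent vectors
                basis.append([wi / norm for wi in w])
        return basis

    print(gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]]))
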
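The Jacobi method solves Ax = b by computing every component of the new iterate from the previous one: x_i ← (b_i − Σ_{j≠i} a_ij x_j) / a_ii. A sketch assuming a diagonally dominant matrix, for which the iteration converges:

    def jacobi(A, b, iterations=50):
        """Solve Ax = b iteratively (converges for diagonally dominant A)."""
        n = len(b)
        x = [0.0] * n
        for _ in range(iterations):
            x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]   # every component uses only the old iterate
        return x

    # 4x + y = 9 and x + 3y = 5 have the solution x = 2, y = 1
    print(jacobi([[4.0, 1.0], [1.0, 3.0]], [9.0, 5.0]))
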
1900s (decade)

1910s (decade)

1920s

1930s

This decade marks the first major strides toward the modern computer, and hence the start of the modern era.

1940s

  • 1947 – Metropolis algorithm for Monte Carlo simulation (named one of the top 10 algorithms of the 20th century)[30] invented at Los Alamos by von Neumann, Ulam and Metropolis (see the sketch after this list).[31][32][33]
  • George Dantzig introduces the simplex method (named one of the top 10 algorithms of the 20th century)[30] in 1947.[34]
  • Ulam and von Neumann introduce the notion of cellular automata.[35]
  • Turing formulates the LU decomposition method (sketched below).[36]
  • A. W. H. Phillips invents the MONIAC, a hydraulic analogue computer built at the LSE and also known as the Phillips Hydraulic Computer.[37][38]
  • The first hydrodynamic simulations are run at Los Alamos.[39][40]

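The Metropolis algorithm samples from a target density p by proposing a random move and accepting it with probability min(1, p(x′)/p(x)); rejected moves repeat the current state. A one-dimensional sketch (the step size, run length, and standard-normal target are illustrative choices, not from the source):

    import math
    import random

    def metropolis(log_p, x0=0.0, step=1.0, n_samples=10_000):
        """Random-walk Metropolis sampling from a 1-D density given its log."""
        x = x0
        samples = []
        for _ in range(n_samples):
            proposal = x + random.uniform(-step, step)    # symmetric proposal
            diff = log_p(proposal) - log_p(x)
            if diff >= 0 or random.random() < math.exp(diff):
                x = proposal                              # accept; otherwise keep x
            samples.append(x)
        return samples

    # Target: standard normal, log p(x) = -x**2 / 2 up to an additive constant
    chain = metropolis(lambda x: -x * x / 2)
    print(sum(chain) / len(chain))  # sample mean, roughly 0
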
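LU decomposition factors a matrix as A = LU, with L unit lower triangular and U upper triangular, so that solving Ax = b reduces to two triangular solves. A sketch of the Doolittle scheme without pivoting (it assumes no zero pivots are encountered):

    def lu_decompose(A):
        """Doolittle LU factorisation without pivoting: A = L U."""
        n = len(A)
        L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
        U = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i, n):        # fill row i of U
                U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
            for j in range(i + 1, n):    # fill column i of L
                L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
        return L, U

    L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
    print(L)  # [[1.0, 0.0], [1.5, 1.0]]
    print(U)  # [[4.0, 3.0], [0.0, -1.5]]
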
1950s

1960s

1970s

1980s

1990s

2000s

2010s


See also

References
