# Euclidean algorithm

## Algorithm for computing greatest common divisors / From Wikipedia, the free encyclopedia

In mathematics, the **Euclidean algorithm**,^{[note 1]} or **Euclid's algorithm**, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his *Elements* (c. 300 BC).
It is an example of an *algorithm*, a step-by-step procedure for performing a calculation according to well-defined rules,
and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.

The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, they are the GCD of the original two numbers. By reversing the steps or using the extended Euclidean algorithm, the GCD can be expressed as a linear combination of the two original numbers, that is the sum of the two numbers, each multiplied by an integer (for example, 21 = 5 × 105 + (−2) × 252). The fact that the GCD can always be expressed in this way is known as Bézout's identity.
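
This replacement principle can be sketched in a few lines of Python (an illustrative sketch assuming positive integer inputs; this is the slow subtraction form):

```python
def gcd_by_subtraction(a, b):
    # Replace the larger number by its difference with the smaller
    # until the two numbers are equal; that common value is the GCD.
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd_by_subtraction(252, 105))  # 21
```

The Bézout coefficients from the text can be checked the same way: 5 × 105 + (−2) × 252 does indeed equal 21.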

The version of the Euclidean algorithm described above (and by Euclid) can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844 (Lamé's Theorem),^{[1]}^{[2]} and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century.

The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations.

The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains.

The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers a and b. The greatest common divisor g is the largest natural number that divides both a and b without leaving a remainder. Synonyms for GCD include *greatest common factor* (GCF), *highest common factor* (HCF), *highest common divisor* (HCD), and *greatest common measure* (GCM). The greatest common divisor is often written as gcd(*a*, *b*) or, more simply, as (*a*, *b*),^{[3]} although the latter notation is ambiguous, also used for concepts such as an ideal in the ring of integers, which is closely related to GCD.

If gcd(*a*, *b*) = 1, then a and b are said to be coprime (or relatively prime).^{[4]} This property does not imply that a or b are themselves prime numbers.^{[5]} For example, 6 and 35 factor as 6 = 2 × 3 and 35 = 5 × 7, so they are not prime, but their prime factors are different, so 6 and 35 are coprime, with no common factors other than 1.

Let *g* = gcd(*a*, *b*). Since a and b are both multiples of g, they can be written *a* = *mg* and *b* = *ng*, and there is no larger number *G* > *g* for which this is true. The natural numbers m and n must be coprime, since any common factor could be factored out of m and n to make g greater. Thus, any other number c that divides both a and b must also divide g. The greatest common divisor g of a and b is the unique (positive) common divisor of a and b that is divisible by any other common divisor c.^{[6]}

The greatest common divisor can be visualized as follows.^{[7]} Consider a rectangular area a by b, and any common divisor c that divides both a and b exactly. The sides of the rectangle can be divided into segments of length c, which divides the rectangle into a grid of squares of side length c. The GCD g is the largest value of c for which this is possible. For illustration, a 24×60 rectangular area can be divided into a grid of: 1×1 squares, 2×2 squares, 3×3 squares, 4×4 squares, 6×6 squares or 12×12 squares. Therefore, 12 is the GCD of 24 and 60. A 24×60 rectangular area can be divided into a grid of 12×12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).

The greatest common divisor of two numbers a and b is the product of the prime factors shared by the two numbers, where each prime factor can be repeated as many times as it divides both a and b.^{[8]} For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the GCD of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors (with 3 repeated since 3 × 3 divides both). If two numbers have no common prime factors, their GCD is 1 (obtained here as an instance of the empty product); in other words, they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors.^{[9]}^{[10]} Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility.^{[11]}
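
For small numbers, this factor-based definition can be checked directly (an illustrative sketch using trial division, which is impractical for large inputs):

```python
from collections import Counter
import math

def prime_factors(n):
    # Return the multiset of prime factors of n, found by trial division.
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_by_factoring(a, b):
    # Multiset intersection keeps each shared prime with its minimum multiplicity.
    shared = prime_factors(a) & prime_factors(b)
    return math.prod(p ** e for p, e in shared.items())

print(gcd_by_factoring(1386, 3213))  # 63
```

Note that `math.prod` over an empty intersection returns 1, matching the empty-product convention for coprime inputs such as 6 and 35.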

Another definition of the GCD is helpful in advanced mathematics, particularly ring theory.^{[12]} The greatest common divisor g of two nonzero numbers a and b is also their smallest positive integral linear combination, that is, the smallest positive number of the form *ua* + *vb* where u and v are integers. The set of all integral linear combinations of a and b is actually the same as the set of all multiples of g (mg, where m is an integer). In modern mathematical language, the ideal generated by a and b is the ideal generated by g alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of a and b also divides the GCD (it divides both terms of *ua* + *vb*). The equivalence of this GCD definition with the other definitions is described below.
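
This linear-combination characterization can be checked by brute force for small numbers (a sketch; the coefficient bound of 50 is an arbitrary illustrative choice, adequate only for small inputs):

```python
import math

def smallest_positive_combination(a, b, bound=50):
    # Search all u*a + v*b with small integer coefficients u, v
    # and return the smallest positive value found.
    return min(u * a + v * b
               for u in range(-bound, bound + 1)
               for v in range(-bound, bound + 1)
               if u * a + v * b > 0)

print(smallest_positive_combination(252, 105))  # 21, equal to gcd(252, 105)
```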

The GCD of three or more numbers equals the product of the prime factors common to all the numbers,^{[13]} but it can also be calculated by repeatedly taking the GCDs of pairs of numbers.^{[14]} For example,

- gcd(*a*, *b*, *c*) = gcd(*a*, gcd(*b*, *c*)) = gcd(gcd(*a*, *b*), *c*) = gcd(gcd(*a*, *c*), *b*).

Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers.
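
In Python, this pairwise reduction is a one-liner using the standard library:

```python
from functools import reduce
from math import gcd

# gcd(a, b, c) = gcd(gcd(a, b), c): fold the two-argument GCD over the list.
print(reduce(gcd, [24, 60, 36]))  # 12
```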

### Procedure

The Euclidean algorithm can be thought of as constructing a sequence of non-negative integers that begins with the two given integers $r_{-2}=a$ and $r_{-1}=b$ and will eventually terminate with the integer zero: $\{r_{-2}=a,\ r_{-1}=b,\ r_{0},\ r_{1},\ \cdots ,\ r_{n-1},\ r_{n}=0\}$ with $r_{k+1}<r_{k}$. The integer $r_{n-1}$ will then be the GCD and we can state ${\text{gcd}}(a,b)=r_{n-1}$. The algorithm indicates how to construct the intermediate remainders $r_{k}$ via division-with-remainder on the preceding pair $(r_{k-2},\ r_{k-1})$ by finding an integer quotient $q_{k}$ so that:

- $r_{k-2}=q_{k}\cdot r_{k-1}+r_{k}{\text{, with }}\ r_{k-1}>r_{k}\geq 0.$

Because the sequence of non-negative integers $\{r_{k}\}$ is strictly decreasing, it eventually must terminate. In other words, since $r_{k}\geq 0$ for every $k$, and each $r_{k}$ is an integer that is strictly smaller than the preceding $r_{k-1}$, there eventually cannot be a non-negative integer smaller than zero, and hence the algorithm must terminate. In fact, the algorithm will always terminate at the n-th step with $r_{n}$ equal to zero.^{[15]}

To illustrate, suppose the GCD of 1071 and 462 is requested. The sequence is initially $\{r_{-2}=1071,\ r_{-1}=462\}$ and in order to find $r_{0}$, we need to find integers $q_{0}$ and $r_{0}<r_{-1}$ such that:

- $1071=q_{0}\cdot 462+r_{0}$.

This is the quotient $q_{0}=2$ since $1071=2\cdot 462+147$. This determines $r_{0}=147$ and so the sequence is now $\{1071,\ 462,\ r_{0}=147\}$. The next step is to continue the sequence to find $r_{1}$ by finding integers $q_{1}$ and $r_{1}<r_{0}$ such that:

- $462=q_{1}\cdot 147+r_{1}$.

This is the quotient $q_{1}=3$ since $462=3\cdot 147+21$. This determines $r_{1}=21$ and so the sequence is now $\{1071,\ 462,\ 147,\ r_{1}=21\}$. The next step is to continue the sequence to find $r_{2}$ by finding integers $q_{2}$ and $r_{2}<r_{1}$ such that:

- $147=q_{2}\cdot 21+r_{2}$.

This is the quotient $q_{2}=7$ since $147=7\cdot 21+0$. This determines $r_{2}=0$, and the sequence is complete: $\{1071,\ 462,\ 147,\ 21,\ r_{2}=0\}$. Since the remainder is zero, the algorithm stops. The penultimate remainder $21$ is therefore the requested GCD:

- ${\text{gcd}}(1071,\ 462)=21.$
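
The whole remainder sequence above can be generated by a short Python sketch:

```python
def remainder_sequence(a, b):
    # Build the sequence r_{-2}=a, r_{-1}=b, r_0, ..., ending at 0;
    # the penultimate entry is the GCD.
    seq = [a, b]
    while seq[-1] != 0:
        seq.append(seq[-2] % seq[-1])
    return seq

print(remainder_sequence(1071, 462))  # [1071, 462, 147, 21, 0]
```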

We can generalize slightly by dropping any ordering requirement on the initial two values $a$ and $b$. If $a=b$, the algorithm may continue and trivially find that ${\text{gcd}}(a,\ a)=a$ as the sequence of remainders will be $\{a,\ a,\ 0\}$. If $a<b$, then we can also continue since $a\equiv 0\cdot b+a$, suggesting the next remainder should be $a$ itself, and the sequence is $\{a,\ b,\ a,\ \cdots \}$. Normally, this would be invalid because it breaks the requirement $r_{0}<r_{-1}$ but now we have $a<b$ by construction, so the requirement is automatically satisfied and the Euclidean algorithm can continue as normal. Therefore, dropping any ordering between the first two integers does not affect the conclusion that the sequence must eventually terminate because the next remainder will always satisfy $r_{0}<b$ and everything continues as above. The only modifications that need to be made are that $r_{k}<r_{k-1}$ only for $k\geq 0$, and that the sub-sequence of non-negative integers $\{r_{k-1}\}$ for $k\geq 0$ is strictly decreasing, therefore excluding $a=r_{-2}$ from both statements.

### Proof of validity

The validity of the Euclidean algorithm can be proven by a two-step argument.^{[16]} In the first step, the final nonzero remainder *r*_{N−1} is shown to divide both *a* and *b*. Since it is a common divisor, it must be less than or equal to the greatest common divisor *g*. In the second step, it is shown that any common divisor of *a* and *b*, including *g*, must divide *r*_{N−1}; therefore, *g* must be less than or equal to *r*_{N−1}. These two opposite inequalities imply *r*_{N−1} = *g*.

To demonstrate that *r*_{N−1} divides both *a* and *b* (the first step), note that *r*_{N−1} divides its predecessor *r*_{N−2}

*r*_{N−2}=*q*_{N}*r*_{N−1}

since the final remainder *r*_{N} is zero. *r*_{N−1} also divides its next predecessor *r*_{N−3}

*r*_{N−3}=*q*_{N−1}*r*_{N−2}+*r*_{N−1}

because it divides both terms on the right-hand side of the equation. Iterating the same argument, *r*_{N−1} divides all the preceding remainders, including *a* and *b*. None of the preceding remainders *r*_{N−2}, *r*_{N−3}, etc. divide *a* and *b*, since they leave a remainder. Since *r*_{N−1} is a common divisor of *a* and *b*, *r*_{N−1} ≤ *g*.

In the second step, any natural number *c* that divides both *a* and *b* (in other words, any common divisor of *a* and *b*) divides the remainders *r*_{k}. By definition, *a* and *b* can be written as multiples of *c* : *a* = *mc* and *b* = *nc*, where *m* and *n* are natural numbers. Therefore, *c* divides the initial remainder *r*_{0}, since *r*_{0} = *a* − *q*_{0}*b* = *mc* − *q*_{0}*nc* = (*m* − *q*_{0}*n*)*c*. An analogous argument shows that *c* also divides the subsequent remainders *r*_{1}, *r*_{2}, etc. Therefore, the greatest common divisor *g* must divide *r*_{N−1}, which implies that *g* ≤ *r*_{N−1}. Since the first part of the argument showed the reverse (*r*_{N−1} ≤ *g*), it follows that *g* = *r*_{N−1}. Thus, *g* is the greatest common divisor of all the succeeding pairs:^{[17]}^{[18]}

*g*= gcd(*a*,*b*) = gcd(*b*,*r*_{0}) = gcd(*r*_{0},*r*_{1}) = … = gcd(*r*_{N−2},*r*_{N−1}) =*r*_{N−1}.
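
This chain of equalities can be verified step by step in Python (an illustrative check, not part of the proof):

```python
from math import gcd

a, b = 1071, 462
pair = (a, b)
while pair[1] != 0:
    # Each division step preserves the GCD of the pair.
    assert gcd(*pair) == gcd(a, b)
    pair = (pair[1], pair[0] % pair[1])
print(pair[0])  # 21, the last nonzero remainder
```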

### Worked example

For illustration, the Euclidean algorithm can be used to find the greatest common divisor of *a* = 1071 and *b* = 462. To begin, multiples of 462 are subtracted from 1071 until the remainder is less than 462. Two such multiples can be subtracted (*q*_{0} = 2), leaving a remainder of 147:

- 1071 = 2 × 462 + 147.

Then multiples of 147 are subtracted from 462 until the remainder is less than 147. Three multiples can be subtracted (*q*_{1} = 3), leaving a remainder of 21:

- 462 = 3 × 147 + 21.

Then multiples of 21 are subtracted from 147 until the remainder is less than 21. Seven multiples can be subtracted (*q*_{2} = 7), leaving no remainder:

- 147 = 7 × 21 + 0.

Since the last remainder is zero, the algorithm ends with 21 as the greatest common divisor of 1071 and 462. This agrees with the gcd(1071, 462) found by prime factorization above. In tabular form, the steps are:

Step k | Equation | Quotient and remainder |
---|---|---|
0 | 1071 = q_{0} 462 + r_{0} | q_{0} = 2 and r_{0} = 147 |
1 | 462 = q_{1} 147 + r_{1} | q_{1} = 3 and r_{1} = 21 |
2 | 147 = q_{2} 21 + r_{2} | q_{2} = 7 and r_{2} = 0; algorithm ends |

### Visualization

The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor.^{[19]} Assume that we wish to cover an *a*×*b* rectangle with square tiles exactly, where *a* is the larger of the two numbers. We first attempt to tile the rectangle using *b*×*b* square tiles; however, this leaves an *r*_{0}×*b* residual rectangle untiled, where *r*_{0} < *b*. We then attempt to tile the residual rectangle with *r*_{0}×*r*_{0} square tiles. This leaves a second residual rectangle *r*_{1}×*r*_{0}, which we attempt to tile using *r*_{1}×*r*_{1} square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. For example, the smallest square tile in the adjacent figure is 21×21 (shown in red), and 21 is the GCD of 1071 and 462, the dimensions of the original rectangle (shown in green).

### Euclidean division

At every step *k*, the Euclidean algorithm computes a quotient *q*_{k} and remainder *r*_{k} from two numbers *r*_{k−1} and *r*_{k−2}

*r*_{k−2}=*q*_{k}*r*_{k−1}+*r*_{k}

where *r*_{k} is non-negative and strictly less than the absolute value of *r*_{k−1}. The theorem that underlies the definition of Euclidean division ensures that such a quotient and remainder always exist and are unique.^{[20]}

In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, *r*_{k−1} is subtracted from *r*_{k−2} repeatedly until the remainder *r*_{k} is smaller than *r*_{k−1}. After that *r*_{k} and *r*_{k−1} are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed, thus one may replace Euclidean division by the modulo operation, which gives only the remainder. Thus the iteration of the Euclidean algorithm becomes simply

*r*_{k}=*r*_{k−2}mod*r*_{k−1}.

### Implementations

Implementations of the algorithm may be expressed in pseudocode. For example, the division-based version may be programmed as^{[21]}

```
function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a
```

At the beginning of the *k*th iteration, the variable *b* holds the latest remainder *r*_{k−1}, whereas the variable *a* holds its predecessor, *r*_{k−2}. The step *b* := *a* mod *b* is equivalent to the above recursion formula *r*_{k} ≡ *r*_{k−2} mod *r*_{k−1}. The temporary variable *t* holds the value of *r*_{k−1} while the next remainder *r*_{k} is being calculated. At the end of the loop iteration, the variable *b* holds the remainder *r*_{k}, whereas the variable *a* holds its predecessor, *r*_{k−1}.

(If negative inputs are allowed, or if the **mod** function may return negative values, the last line must be changed into **return abs**(a).)
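
As an illustrative translation of the pseudocode (not part of the original text), the division-based version might read in Python, where `%` follows the sign of the divisor:

```python
def gcd(a, b):
    # Division-based Euclidean algorithm. The final abs() handles
    # negative inputs, since with negative operands the loop can
    # finish with a negative value in a.
    while b != 0:
        a, b = b, a % b
    return abs(a)

print(gcd(1071, 462))   # 21
print(gcd(-1071, 462))  # 21
```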

In the subtraction-based version, which was Euclid's original version, the remainder calculation (`b := a mod b`) is replaced by repeated subtraction.^{[22]} Contrary to the division-based version, which works with arbitrary integers as input, the subtraction-based version supposes that the input consists of positive integers and stops when *a* = *b*:

```
function gcd(a, b)
    while a ≠ b
        if a > b
            a := a − b
        else
            b := b − a
    return a
```

The variables *a* and *b* alternate holding the previous remainders *r*_{k−1} and *r*_{k−2}. Assume that *a* is larger than *b* at the beginning of an iteration; then *a* equals *r*_{k−2}, since *r*_{k−2} > *r*_{k−1}. During the loop iteration, *a* is reduced by multiples of the previous remainder *b* until *a* is smaller than *b*. Then *a* is the next remainder *r*_{k}. Then *b* is reduced by multiples of *a* until it is again smaller than *a*, giving the next remainder *r*_{k+1}, and so on.

The recursive version^{[23]} is based on the equality of the GCDs of successive remainders and the stopping condition gcd(*r*_{N−1}, 0) = *r*_{N−1}.

```
function gcd(a, b)
    if b = 0
        return a
    else
        return gcd(b, a mod b)
```

(As above, if negative inputs are allowed, or if the **mod** function may return negative values, the instruction "**return** a" must be changed into "**return max**(a, −a)".)

For illustration, the gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21.
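
The recursive version translates directly into Python (a sketch assuming non-negative inputs):

```python
def gcd_recursive(a, b):
    # gcd(a, b) = gcd(b, a mod b), with gcd(x, 0) = x as the base case.
    return a if b == 0 else gcd_recursive(b, a % b)

print(gcd_recursive(1071, 462))  # 21
```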

### Method of least absolute remainders

In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder.^{[24]}^{[25]} Previously, the equation

*r*_{k−2}=*q*_{k}*r*_{k−1}+*r*_{k}

assumed that |*r*_{k−1}| > *r*_{k} > 0. However, an alternative negative remainder *e*_{k} can be computed:

*r*_{k−2}= (*q*_{k}+ 1)*r*_{k−1}+*e*_{k}

if *r*_{k−1} > 0 or

*r*_{k−2}= (*q*_{k}− 1)*r*_{k−1}+*e*_{k}

if *r*_{k−1} < 0.

If *r*_{k} is replaced by *e*_{k} when |*e*_{k}| < |*r*_{k}|, then one gets a variant of the Euclidean algorithm such that

- |*r*_{k}| ≤ |*r*_{k−1}| / 2

at each step.

Leopold Kronecker has shown that this version requires the fewest steps of any version of Euclid's algorithm.^{[24]}^{[25]} More generally, it has been proven that, for all input numbers *a* and *b*, the number of steps is minimal if and only if $q_{k}$ is chosen so that $\left|{\frac {r_{k+1}}{r_{k}}}\right|<{\frac {1}{\varphi }}\approx 0.618,$ where $\varphi$ is the golden ratio.^{[26]}
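
The step-count advantage can be illustrated with a small comparison in Python (a sketch; the pair 13 and 8, consecutive Fibonacci numbers, is a classic worst case for the standard version):

```python
def steps_standard(a, b):
    # Count division steps of the ordinary Euclidean algorithm.
    n = 0
    while b != 0:
        a, b = b, a % b
        n += 1
    return n

def steps_least_abs(a, b):
    # Count steps of the least-absolute-remainder variant: at each step
    # take whichever remainder (positive or negative) is smaller in magnitude.
    n = 0
    while b != 0:
        r = a % b
        if 2 * r > b:
            r -= b  # the negative remainder is smaller in magnitude
        a, b = b, abs(r)
        n += 1
    return n

print(steps_standard(13, 8), steps_least_abs(13, 8))  # 5 3
```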

The Euclidean algorithm is one of the oldest algorithms in common use.^{[27]} It appears in Euclid's *Elements* (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for real numbers. But lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units and there is no natural unit of length, area, or volume; the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths *a* and *b* corresponds to the greatest length *g* that measures *a* and *b* evenly; in other words, the lengths *a* and *b* are both integer multiples of the length *g*.

The algorithm was probably not discovered by Euclid, who compiled results from earlier mathematicians in his *Elements*.^{[28]}^{[29]} The mathematician and historian B. L. van der Waerden suggests that Book VII derives from a textbook on number theory written by mathematicians in the school of Pythagoras.^{[30]} The algorithm was probably known by Eudoxus of Cnidus (about 375 BC).^{[27]}^{[31]} The algorithm may even pre-date Eudoxus,^{[32]}^{[33]} judging from the use of the technical term ἀνθυφαίρεσις (*anthyphairesis*, reciprocal subtraction) in works by Euclid and Aristotle.^{[34]}

Centuries later, Euclid's algorithm was discovered independently both in India and in China,^{[35]} primarily to solve Diophantine equations that arose in astronomy and in making accurate calendars. In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer",^{[36]} perhaps because of its effectiveness in solving Diophantine equations.^{[37]} Although a special case of the Chinese remainder theorem had already been described in the Chinese book *Sunzi Suanjing*,^{[38]} the general solution was published by Qin Jiushao in his 1247 book *Shushu Jiuzhang* (數書九章 *Mathematical Treatise in Nine Sections*).^{[39]} The Euclidean algorithm was first described *numerically* and popularized in Europe in the second edition of Bachet's *Problèmes plaisants et délectables* (*Pleasant and enjoyable problems*, 1624).^{[36]} In Europe, it was likewise used to solve Diophantine equations and to develop continued fractions. The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson,^{[40]} who attributed it to Roger Cotes as a method for computing continued fractions efficiently.^{[41]}

In the 19th century, the Euclidean algorithm led to the development of new number systems, such as Gaussian integers and Eisenstein integers. In 1815, Carl Gauss used the Euclidean algorithm to demonstrate unique factorization of Gaussian integers, although his work was first published in 1832.^{[42]} Gauss mentioned the algorithm in his *Disquisitiones Arithmeticae* (published 1801), but only as a method for continued fractions.^{[35]} Peter Gustav Lejeune Dirichlet seems to have been the first to describe the Euclidean algorithm as the basis for much of number theory.^{[43]} Lejeune Dirichlet noted that many results of number theory, such as unique factorization, would hold true for any other system of numbers to which the Euclidean algorithm could be applied.^{[44]} Lejeune Dirichlet's lectures on number theory were edited and extended by Richard Dedekind, who used Euclid's algorithm to study algebraic integers, a new general type of number. For example, Dedekind was the first to prove Fermat's two-square theorem using the unique factorization of Gaussian integers.^{[45]} Dedekind also defined the concept of a Euclidean domain, a number system in which a generalized version of the Euclidean algorithm can be defined (as described below). In the closing decades of the 19th century, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of ideals.^{[46]}

> "[The Euclidean algorithm] is the granddaddy of all algorithms, because it is the oldest nontrivial algorithm that has survived to the present day."
>
> — Donald Knuth, *The Art of Computer Programming*, Vol. 2: *Seminumerical Algorithms*, 2nd edition (1981), p. 318.

Other applications of Euclid's algorithm were developed in the 19th century. In 1829, Charles Sturm showed that the algorithm was useful in the Sturm chain method for counting the real roots of polynomials in any given interval.^{[47]}

The Euclidean algorithm was the first integer relation algorithm, which is a method for finding integer relations between commensurate real numbers. Several novel integer relation algorithms have been developed, such as the algorithm of Helaman Ferguson and R.W. Forcade (1979)^{[48]} and the LLL algorithm.^{[49]}^{[50]}

In 1969, Cole and Davie developed a two-player game based on the Euclidean algorithm, called *The Game of Euclid*,^{[51]} which has an optimal strategy.^{[52]} The players begin with two piles of *a* and *b* stones. The players take turns removing *m* multiples of the smaller pile from the larger. Thus, if the two piles consist of *x* and *y* stones, where *x* is larger than *y*, the next player can reduce the larger pile from *x* stones to *x* − *my* stones, as long as the latter is a nonnegative integer. The winner is the first player to reduce one pile to zero stones.^{[53]}^{[54]}