# Euclidean algorithm

In number theory, the Euclidean algorithm (also called Euclid's algorithm) is an algorithm to determine the greatest common divisor (GCD) of two elements of any Euclidean domain (for example, the integers). Its major significance is that it does not require factoring the two integers, and it is also significant in that it is one of the oldest algorithms known, dating back to the ancient Greeks.

## History of the Euclidean algorithm

The Euclidean algorithm is one of the oldest algorithms known, since it appeared in Euclid's Elements around 300 BC (7th book, Proposition 2). Euclid originally formulated the problem geometrically, as the problem of finding the greatest common "measure" for two line lengths (a line that could be used to measure both lines without a remainder), and his algorithm proceeded by repeated subtraction of the shorter from the longer segment. However, the algorithm was probably not discovered by Euclid and it may have been known up to 200 years earlier. It was almost certainly known by Eudoxus of Cnidus (about 375 BC), and Aristotle (about 330 BC) hinted at it in his Topics, 158b, 29–35.

## Description of the algorithm

Given two natural numbers a and b, not both equal to zero: check whether b is zero. If so, a is the gcd. If not, repeat the process with b and the remainder after dividing a by b. The remainder after dividing a by b is usually written as a mod b.

These algorithms can be used in any context where division with remainder is possible. This includes rings of polynomials over a field as well as the ring of Gaussian integers, and in general all Euclidean domains. The application of the algorithm to these more general settings is discussed in more detail later in the article.

### Using recursion

Using recursion, the algorithm can be expressed:

```
function gcd(a, b)
    if b = 0 return a
    else return gcd(b, a mod b)
```

or in C/C++ as

```c
int gcd(int a, int b) {
    return (b != 0 ? gcd(b, a % b) : a);
}
```

### Using iteration

An efficient iterative method, useful when the compiler does not optimize tail recursion:

```
function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a
```

### The extended Euclidean algorithm

By keeping track of the quotients occurring during the algorithm, one can also determine integers p and q with ap + bq = gcd(a, b). This is known as the extended Euclidean algorithm.

### Original algorithm

The original algorithm as described by Euclid treated the problem geometrically, using repeated subtraction rather than mod (remainder).

```
function gcd(a, b)
    if a = 0 return b
    while b ≠ 0
        if a > b
            a := a − b
        else
            b := b − a
    return a
```

## An example

As an example, consider computing the gcd of 1071 and 1029, which is 21. Recall that “mod” means “the remainder after dividing.”

With the recursive algorithm:

| Computation | Explanation |
| --- | --- |
| gcd(1071, 1029) | The initial arguments |
| = gcd(1029, 42) | The second argument is 1071 mod 1029 |
| = gcd(42, 21) | The second argument is 1029 mod 42 |
| = gcd(21, 0) | The second argument is 42 mod 21 |
| = 21 | Since b = 0, we return a |

With the iterative algorithm:

| a | b | Explanation |
| --- | --- | --- |
| 1071 | 1029 | Step 1: the initial inputs |
| 1029 | 42 | Step 2: the remainder of 1071 divided by 1029 is 42, which becomes the new b, while the old divisor 1029 becomes the new a |
| 42 | 21 | Step 3: repeat the loop, dividing 1029 by 42, which leaves remainder 21 |
| 21 | 0 | Step 4: repeat the loop again; since 42 is divisible by 21, the remainder is 0 and the algorithm terminates. The value of a, namely 21, is the gcd. |

Observe that a > b in every call after the first, since a mod b < b. If initially b > a, there is no problem; the first iteration effectively swaps the two values.

## Proof

Suppose a and b are the natural numbers whose gcd has to be determined. Now, suppose b > 0, and the remainder of the division of a by b is r. Therefore a = qb + r where q is the quotient of the division.

Any common divisor of a and b is also a divisor of r. To see why this is true, consider that r can be written as r = a − qb. Now, if d is a common divisor of a and b, with a = sd and b = td, then r = sd − qtd = (s − qt)d. Since all these numbers, including s − qt, are whole numbers, it follows that r is divisible by d.

The above analysis holds for every common divisor d. Conversely, any common divisor of b and r also divides a, since a = qb + r. Hence the pairs (a, b) and (b, r) have exactly the same common divisors, so the greatest common divisor of a and b is also the greatest common divisor of b and r. Therefore it suffices to continue the search with the numbers b and r. Since each remainder is a non-negative integer strictly smaller than the previous one, we reach r = 0 after finitely many steps.

## Running time

When analyzing the running time of Euclid's algorithm, the inputs requiring the most divisions are two successive Fibonacci numbers (because their ratios are the convergents of the most slowly converging continued fraction expansion, that of the golden ratio), as proved by Gabriel Lamé, and the worst case requires O(n) divisions, where n is the number of digits in the input. However, the divisions themselves are not constant-time operations; the actual time complexity of the algorithm is $O(n^2)$. The reason is that division of two n-bit numbers takes time $O(n(m+1))$, where m is the length of the quotient. Consider the computation of gcd(a, b) where a and b have at most n bits, let $a_0, \dots, a_k$ be the sequence of numbers produced by the algorithm, and let $n_0, \dots, n_k$ be their lengths. Then $k = O(n)$, and since the quotient in step i has length roughly $n_i - n_{i+1}$, the running time is bounded by

$\sum_{i<k} O(n(n_i - n_{i+1} + 1)) = O\Bigl(n \sum_{i<k} (n_i - n_{i+1} + 1)\Bigr) = O(n(n_0 + k)) = O(n^2).$

This is considerably better than Euclid's original algorithm, in which the modulus operation is effectively performed using repeated subtraction in $O(2^n)$ steps. Consequently, that version of the algorithm requires $O(2^n n)$ time for n-digit numbers, or $O(m \log m)$ time for the number m.

Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. An alternative algorithm, the binary GCD algorithm, exploits the binary representation used by computers to avoid divisions and thereby increase efficiency, although it too is O(n²); it merely shrinks the constant hidden by the big-O notation on many real machines.

There are more complex algorithms that can reduce the running time to $O(n (\log n)^2 \log \log n)$. See Computational complexity of mathematical operations for more details.

## Relation with continued fractions

The quotients that appear when the Euclidean algorithm is applied to the inputs a and b are precisely the numbers occurring in the continued fraction representation of a/b. Take for instance the example of a = 1071 and b = 1029 used above. Here is the calculation with highlighted quotients:
1071 = 1029 × **1** + 42
1029 = 42 × **24** + 21
42 = 21 × **2** + 0

Consequently,

$\frac{1071}{1029} = \mathbf{1} + \frac{1}{\mathbf{24} + \frac{1}{\mathbf{2}}}$.
This method applies to arbitrary real inputs a and nonzero b; if a/b is irrational, then the Euclidean algorithm does not terminate, but the computed sequence of quotients still represents the (now infinite) continued fraction representation of a/b.

The quotients 1, 24, 2 count certain squares nested within a rectangle R having length 1071 and width 1029, in the following manner:

(1) there is 1 1029×1029 square in R whose removal leaves a 42×1029 rectangle, R1;

(2) there are 24 42×42 squares in R1 whose removal leaves a 21×42 rectangle, R2;

(3) there are 2 21×21 squares in R2 whose removal leaves nothing.

The "visual Euclidean algorithm" of nested squares applies to an arbitrary rectangle R. If the (length)/(width) of R is an irrational number, then the visual Euclidean algorithm extends to a visual continued fraction.

## Generalization to Euclidean domains

The Euclidean algorithm can be applied to some rings, not just the integers. The most general context in which the algorithm terminates with the greatest common divisor is in a Euclidean domain. For instance, the Gaussian integers and polynomial rings over a field are both Euclidean domains.

As an example, consider the ring of polynomials with rational coefficients. In this ring, division with remainder is carried out using long division. The resulting polynomials are then made monic by factoring out the leading coefficient.

We calculate the greatest common divisor of

$x^4-4x^3+4x^2-3x+14 = \left(x^2-5x+7\right)\left(x^2+x+2\right)$

and

$x^4+8x^3+12x^2+17x+6 = \left(x^2+7x+3\right)\left(x^2+x+2\right).$

Following the algorithm gives these values:

| a | b |
| --- | --- |
| $x^4+8x^3+12x^2+17x+6$ | $x^4-4x^3+4x^2-3x+14$ |
| $x^4-4x^3+4x^2-3x+14$ | $x^3+\tfrac{2}{3}x^2+\tfrac{5}{3}x-\tfrac{2}{3}$ |
| $x^3+\tfrac{2}{3}x^2+\tfrac{5}{3}x-\tfrac{2}{3}$ | $x^2+x+2$ |
| $x^2+x+2$ | $0$ |

This agrees with the explicit factorization. For general Euclidean domains, the proof of correctness is by induction on some size function. For the integers, this size function is just the identity. For rings of polynomials over a field, it is the degree of the polynomial (note that each step in the above table reduces the degree by at least one).