In number theory, the Euclidean algorithm (also called Euclid's algorithm) is an algorithm for computing the greatest common divisor (GCD) of two elements of any Euclidean domain (for example, the integers). Its major significance is that it does not require factoring the two integers; it is also one of the oldest algorithms known, dating back to the ancient Greeks.
These algorithms can be used in any context where division with remainder is possible. This includes rings of polynomials over a field, the ring of Gaussian integers, and in general every Euclidean domain. The application of the algorithm to cases more general than the natural numbers is discussed later in the article.
The algorithm can be expressed recursively as:

```
function gcd(a, b)
    if b = 0 return a
    else return gcd(b, a mod b)
```
or in C/C++ as

```c
unsigned gcd(unsigned a, unsigned b)
{
    return b != 0 ? gcd(b, a % b) : a;
}
```
An equivalent iterative version:

```
function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a
```
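The iterative pseudocode translates directly into C; the following is a minimal sketch (the unsigned types are a choice for this illustration):

```c
/* Iterative Euclidean algorithm: repeatedly replace (a, b)
   with (b, a mod b) until b reaches 0; a then holds the gcd. */
unsigned gcd(unsigned a, unsigned b)
{
    while (b != 0) {
        unsigned t = b;   /* t := b       */
        b = a % t;        /* b := a mod b */
        a = t;            /* a := t       */
    }
    return a;             /* gcd(1071, 1029) == 21 */
}
```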
A subtraction-based version, closer to Euclid's original formulation, replaces the mod operation with repeated subtraction:

```
function gcd(a, b)
    if a = 0 return b
    while b ≠ 0
        if a > b
            a := a − b
        else
            b := b − a
    return a
```
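A C rendering of the subtraction-based variant (a sketch; the function name is illustrative):

```c
/* Subtraction-only Euclidean algorithm: valid for nonnegative
   inputs, since a mod b can be computed by repeated subtraction. */
unsigned gcd_sub(unsigned a, unsigned b)
{
    if (a == 0) return b;
    while (b != 0) {
        if (a > b)
            a = a - b;
        else
            b = b - a;   /* when a == b this sets b to 0, ending the loop */
    }
    return a;
}
```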
With the recursive algorithm:
| Computation | Explanation |
|---|---|
| gcd(1071, 1029) | The initial arguments |
| = gcd(1029, 42) | The second argument is 1071 mod 1029 |
| = gcd(42, 21) | The second argument is 1029 mod 42 |
| = gcd(21, 0) | The second argument is 42 mod 21 |
| = 21 | Since b = 0, we return a |
With the iterative algorithm:
| a | b | Explanation |
|---|---|---|
| 1071 | 1029 | Step 1: the initial inputs |
| 1029 | 42 | Step 2: the remainder of 1071 divided by 1029 is 42, which is put on the right, and the divisor 1029 is put on the left |
| 42 | 21 | Step 3: we repeat the loop, dividing 1029 by 42, and get 21 as remainder |
| 21 | 0 | Step 4: we repeat the loop again; since 42 is divisible by 21, the remainder is 0 and the algorithm terminates. The number on the left, 21, is the gcd |
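The iterative trace above can be reproduced by printing the pair (a, b) on each pass through the loop; `trace_gcd` is a hypothetical helper name used only for this sketch:

```c
#include <stdio.h>

/* Prints each (a, b) pair visited by the iterative algorithm,
   reproducing the trace table above, then returns the gcd. */
unsigned trace_gcd(unsigned a, unsigned b)
{
    while (b != 0) {
        printf("a = %u, b = %u\n", a, b);
        unsigned t = b;
        b = a % t;
        a = t;
    }
    printf("a = %u, b = 0  ->  gcd = %u\n", a, a);
    return a;
}
```

Calling `trace_gcd(1071, 1029)` prints the same three (a, b) rows as the table and returns 21.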
Observe that a ≥ b in each call. If initially b > a, there is no problem; the first iteration effectively swaps the two values.
Write a = qb + r, where q is the quotient and r the remainder of dividing a by b. Any common divisor of a and b is also a divisor of r. To see why this is true, consider that r can be written as r = a − qb. Now, if there is a common divisor d of a and b such that a = sd and b = td, then r = (s − qt)d. Since all these numbers, including s − qt, are whole numbers, it can be seen that r is divisible by d.
The above analysis is true for any divisor d; thus, the greatest common divisor of a and b is also the greatest common divisor of b and r. Therefore it suffices to continue the search for the greatest common divisor with the pair (b, r). Since r is smaller in absolute value than b, we reach r = 0 after finitely many steps.
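The key fact of this argument, that gcd(a, b) = gcd(b, a mod b), can be spot-checked numerically; this is a small sketch and the helper names are illustrative:

```c
#include <assert.h>

/* Reference gcd used for checking the invariant. */
static unsigned gcd(unsigned a, unsigned b)
{
    while (b != 0) { unsigned t = b; b = a % t; a = t; }
    return a;
}

/* Checks the step invariant used above: for b > 0, the pairs
   (a, b) and (b, a mod b) have the same greatest common divisor. */
void check_invariant(unsigned a, unsigned b)
{
    assert(b != 0);
    assert(gcd(a, b) == gcd(b, a % b));
}
```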
When analyzing the running time of Euclid's algorithm, the inputs requiring the most divisions are two successive Fibonacci numbers (because their ratios are the convergents of the most slowly converging continued fraction expansion, that of the golden ratio), as proved by Gabriel Lamé, and the worst case requires O(n) divisions, where n is the number of digits in the input. However, the divisions themselves are not constant-time operations; the actual time complexity of the algorithm is $O(n^2)$. The reason is that division of two n-bit numbers takes time $O(n(m+1))$, where m is the length of the quotient. Consider the computation of gcd(a, b) where a and b have at most n bits, let $a_0, \dots, a_k$ be the sequence of numbers produced by the algorithm, and let $n_0, \dots, n_k$ be their lengths. Then $k = O(n)$, and the running time is bounded by

$$O\left(\sum_{i<k} n_i (n_i - n_{i+1} + 2)\right) \subseteq O\left(n \sum_{i<k} (n_i - n_{i+1} + 2)\right) \subseteq O(n (n_0 + 2k)) \subseteq O(n^2).$$
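Lamé's worst case can be observed empirically by counting division steps; in this sketch (`division_steps` is an illustrative name), consecutive Fibonacci numbers such as 13 and 8 maximize the count among inputs of comparable size:

```c
/* Counts the number of division (mod) steps Euclid's algorithm
   performs on the given inputs. */
unsigned division_steps(unsigned a, unsigned b)
{
    unsigned steps = 0;
    while (b != 0) {
        unsigned t = b;
        b = a % t;
        a = t;
        steps++;
    }
    return steps;
}
```

For example, `division_steps(13, 8)` is 5 (13 and 8 are consecutive Fibonacci numbers), while the similarly sized pair in `division_steps(1071, 1029)` needs only 3.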
This is considerably better than Euclid's original algorithm, in which the modulus operation is effectively performed using repeated subtraction in $O(2^n)$ steps. Consequently, that version of the algorithm requires $O(2^n\, n)$ time for n-digit numbers, or $O(m \log m)$ time for the number m.
Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. An alternative algorithm, the binary GCD algorithm, exploits the binary representation used by computers to avoid divisions and thereby increase efficiency, although it too is O(n²); it merely shrinks the constant hidden by the big-O notation on many real machines.
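A common formulation of the binary GCD algorithm (Stein's algorithm) in C looks roughly like the following; the exact shift-based structure varies by implementation:

```c
/* Binary GCD (Stein's algorithm): replaces division with shifts
   and subtraction, exploiting the binary representation. */
unsigned binary_gcd(unsigned a, unsigned b)
{
    if (a == 0) return b;
    if (b == 0) return a;

    /* Factor out the power of two common to both arguments. */
    int shift = 0;
    while (((a | b) & 1u) == 0) { a >>= 1; b >>= 1; shift++; }

    while ((a & 1u) == 0) a >>= 1;    /* make a odd */
    do {
        while ((b & 1u) == 0) b >>= 1;  /* make b odd */
        if (a > b) { unsigned t = a; a = b; b = t; }
        b = b - a;                      /* odd - odd is even */
    } while (b != 0);

    return a << shift;                  /* restore the common power of two */
}
```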
There are more complex algorithms that can reduce the running time to $O(n (\log n)^2 \log \log n)$. See Computational complexity of mathematical operations for more details.
The quotients produced in computing gcd(1071, 1029) above are 1, 24, and 2. Consequently,

$$\frac{1071}{1029} = 1 + \cfrac{1}{24 + \cfrac{1}{2}}.$$
The quotients 1,24,2 count certain squares nested within a rectangle R having length 1071 and width 1029, in the following manner:
(1) there is one 1029×1029 square in R whose removal leaves a 42×1029 rectangle, R_{1};
(2) there are 24 squares of size 42×42 in R_{1} whose removal leaves a 21×42 rectangle, R_{2};
(3) there are 2 squares of size 21×21 in R_{2} whose removal leaves nothing.
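The square counts in this picture are exactly the quotient sequence of the Euclidean algorithm; the following sketch collects them (the function name is illustrative):

```c
/* Writes the Euclidean quotient sequence for (length, width) into q,
   i.e. how many width x width squares fit at each stage of the
   nested-squares picture. Returns the number of quotients produced. */
int euclid_quotients(unsigned length, unsigned width, unsigned *q, int max)
{
    int n = 0;
    while (width != 0 && n < max) {
        q[n++] = length / width;   /* squares removed at this stage */
        unsigned r = length % width;
        length = width;
        width = r;
    }
    return n;
}
```

For the 1071×1029 rectangle this produces the sequence 1, 24, 2 described above.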
The "visual Euclidean algorithm" of nested squares applies to an arbitrary rectangle R. If the (length)/(width) of R is an irrational number, then the visual Euclidean algorithm extends to a visual continued fraction.
As an example, consider the ring of polynomials with rational coefficients. In this ring, division with remainder is carried out using long division. The resulting remainders are then made monic by dividing by the leading coefficient.
We calculate the greatest common divisor of

$$x^4 + 8x^3 + 12x^2 + 17x + 6 = (x^2 + 7x + 3)(x^2 + x + 2)$$

and

$$x^4 - 4x^3 + 4x^2 - 3x + 14 = (x^2 - 5x + 7)(x^2 + x + 2).$$
Following the algorithm gives these values:
| a | b |
|---|---|
| $x^4+8x^3+12x^2+17x+6$ | $x^4-4x^3+4x^2-3x+14$ |
| $x^4-4x^3+4x^2-3x+14$ | $x^3+\tfrac{2}{3}x^2+\tfrac{5}{3}x-\tfrac{2}{3}$ |
| $x^3+\tfrac{2}{3}x^2+\tfrac{5}{3}x-\tfrac{2}{3}$ | $x^2+x+2$ |
| $x^2+x+2$ | $0$ |
This agrees with the explicit factorization. For general Euclidean domains, the proof of correctness is by induction on some size function. For the integers, this size function is just the identity. For rings of polynomials over a field, it is the degree of the polynomial (note that each step in the above table reduces the degree by at least one).
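The polynomial computation above can be sketched in C using double-precision coefficient arrays. This is only an illustration (fixed maximum degree, epsilon comparisons, and all names are choices made for this sketch), not a robust polynomial library:

```c
#include <math.h>

#define MAXD 16      /* maximum degree handled by this sketch */
#define EPS  1e-9    /* tolerance for "is this coefficient zero?" */

/* A polynomial is an array of MAXD+1 doubles; p[i] is the x^i coefficient. */

static int poly_deg(const double *p)
{
    int d = MAXD;
    while (d > 0 && fabs(p[d]) < EPS) d--;
    return d;
}

static int poly_is_zero(const double *p)
{
    return poly_deg(p) == 0 && fabs(p[0]) < EPS;
}

/* Divide through by the leading coefficient, making the polynomial monic. */
static void make_monic(double *p)
{
    double lead = p[poly_deg(p)];
    for (int i = 0; i <= MAXD; i++) p[i] /= lead;
}

/* Replace a by the remainder of a divided by b (polynomial long division). */
static void poly_mod(double *a, const double *b)
{
    int db = poly_deg(b);
    while (!poly_is_zero(a) && poly_deg(a) >= db) {
        int da = poly_deg(a);
        double f = a[da] / b[db];
        for (int i = 0; i <= db; i++)
            a[da - db + i] -= f * b[i];
        a[da] = 0.0;          /* force exact cancellation of the top term */
        if (da == 0) break;   /* degree-0 remainder fully reduced; stop */
    }
}

/* Euclidean algorithm on polynomials; the monic gcd ends up in a. */
static void poly_gcd(double *a, double *b)
{
    while (!poly_is_zero(b)) {
        poly_mod(a, b);                       /* a := a mod b */
        for (int i = 0; i <= MAXD; i++) {     /* swap a and b */
            double t = a[i]; a[i] = b[i]; b[i] = t;
        }
        if (!poly_is_zero(b)) make_monic(b);  /* keep remainders monic */
    }
    make_monic(a);
}
```

Running this on the two quartics of the example reproduces the table row by row and leaves $x^2+x+2$ in `a`. A real implementation would use exact rational arithmetic rather than doubles, since floating-point cancellation can misjudge whether a remainder is zero.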