
In mathematics, Graeffe's method is an algorithm for finding all of the roots of a polynomial. It was developed independently by Karl Heinrich Gräffe, Germinal Pierre Dandelin, and Nikolai Lobachevsky. The method separates the roots of a polynomial by squaring them repeatedly, and then uses the Vieta relations to approximate the roots.
## Graeffe iteration

Let $p(x)$ be an $n$th degree polynomial with roots $x_1, \dots, x_n$. Then

- $p(x) = (x-x_1)(x-x_2)\cdots(x-x_n)$

- $p(-x) = (-1)^n (x+x_1)(x+x_2)\cdots(x+x_n)$

- $q(x^2) = p(x)\,p(-x) = (-1)^n (x^2-x_1^2)(x^2-x_2^2)\cdots(x^2-x_n^2)$

The roots of $q(x^2)$, viewed as a polynomial in the variable $x^2$, are $x_1^2, x_2^2, \dots, x_n^2$: we have squared the roots of the original polynomial $p(x)$. Iterating this procedure several times separates the roots with respect to their magnitudes. Repeating it $k$ times gives a polynomial of degree $n$ in the variable $y := x^{2^k}$. Write this polynomial as

- $q^k(y) = y^n + a_1 y^{n-1} + \dots + a_n$

with roots $y_1 = x_1^{2^k},\; y_2 = x_2^{2^k},\; \dots,\; y_n = x_n^{2^k}$.
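One squaring step, taking the coefficients of $p$ to those of the root-squared polynomial, can be sketched in Python. This is an illustrative sketch: the function name and the highest-degree-first coefficient convention are choices made here, not from the source.

```python
def graeffe_step(coeffs):
    """One Graeffe squaring step for a monic polynomial.

    coeffs lists the coefficients highest degree first,
    e.g. x^2 - 3x + 2 -> [1, -3, 2]; the result has the
    squares of the original roots as its roots.
    """
    n = len(coeffs) - 1
    # p(-x): flip the sign of every odd-degree coefficient
    p_neg = [c if (n - i) % 2 == 0 else -c for i, c in enumerate(coeffs)]
    # p(x) * p(-x) by convolving the two coefficient lists
    prod = [0.0] * (2 * n + 1)
    for i, a in enumerate(coeffs):
        for j, b in enumerate(p_neg):
            prod[i + j] += a * b
    # the product contains only even powers of x; read it as a degree-n
    # polynomial in y = x^2, restoring monicity with the sign (-1)^n
    sign = (-1) ** n
    return [sign * prod[i] for i in range(0, 2 * n + 1, 2)]
```

For example, $p(x) = x^2 - 3x + 2$ with roots 1 and 2 is mapped in one step to $y^2 - 5y + 4$, whose roots are 1 and 4.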

## Classical Graeffe's method

Next, the Vieta relations are used:

- $a_1 = -(y_1+y_2+\dots+y_n)$

- $a_2 = y_1 y_2 + y_1 y_3 + \dots + y_{n-1} y_n$

- $a_n = (-1)^n (y_1 y_2 \cdots y_n)$

If the roots $x_1, \dots, x_n$ are sufficiently separated, say by a factor $c>1$, that is $|x_m| \ge c\,|x_{m+1}|$, then the iterated powers $y_1, y_2, \dots, y_n$ of the roots are separated by the factor $c^{2^k}$, which quickly becomes very large.

The coefficients of the iterated polynomial can then be approximated by their leading term:

- $a_1 \approx -y_1$

- $a_2 \approx y_1 y_2$

and so on.

Finally, logarithms are used in order to recover the roots of the original polynomial: since $y_m \approx -a_m/a_{m-1}$ and $|y_m| = |x_m|^{2^k}$, one obtains $\log|x_m| \approx 2^{-k}\left(\log|a_m| - \log|a_{m-1}|\right)$ (with $a_0 = 1$).
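The classical method end to end, repeated squaring followed by the logarithmic read-off of the coefficient ratios, can be sketched in Python. This is a sketch under assumptions: a monic real polynomial with well-separated simple root magnitudes, and the function name and the choice of $k$ are illustrative.

```python
def root_magnitudes(coeffs, k=6):
    """Classical Graeffe: estimate the root magnitudes of a monic polynomial.

    coeffs lists the coefficients highest degree first,
    e.g. x^2 - 3x + 2 -> [1, -3, 2]. Assumes no intermediate
    coefficient vanishes (i.e. root magnitudes are well separated).
    """
    n = len(coeffs) - 1
    a = list(map(float, coeffs))
    for _ in range(k):
        # one Graeffe squaring step: replace a by (-1)^n p(x)p(-x) in y = x^2
        p_neg = [c if (n - i) % 2 == 0 else -c for i, c in enumerate(a)]
        prod = [0.0] * (2 * n + 1)
        for i, u in enumerate(a):
            for j, v in enumerate(p_neg):
                prod[i + j] += u * v
        sign = (-1) ** n
        a = [sign * prod[i] for i in range(0, 2 * n + 1, 2)]
    # |x_m| ~ |a_m / a_{m-1}| ** (2^-k), largest magnitude first
    return [abs(a[m] / a[m - 1]) ** (0.5 ** k) for m in range(1, n + 1)]
```

Note how small $k$ must stay in plain floating point: the coefficients grow doubly exponentially in $k$, which is exactly the numerical problem discussed below.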

Graeffe's method works best for polynomials with simple real roots, though it can be adapted for polynomials with complex roots and double roots. The method is problematic, however, since the coefficients of the iterated polynomials quickly span many orders of magnitude, which implies serious numerical errors. A second concern is that many different polynomials lead to the same Graeffe iterates.

## Tangential Graeffe method

If $\varepsilon$ is an "algebraic infinitesimal" with $\varepsilon^2=0$, then the polynomial $p(x+\varepsilon) = p(x) + \varepsilon\,p'(x)$ has roots $x_m-\varepsilon$, with powers

- $(x_m-\varepsilon)^{2^k} = x_m^{2^k} - \varepsilon\,2^k\,x_m^{2^k-1} = y_m + \varepsilon\,\dot y_m.$

This kind of computation with infinitesimals is easy to implement, analogous to the computation with complex numbers. Since $\dot y_m = -2^k x_m^{2^k-1} = -2^k\,y_m/x_m$, each root can be recovered as $x_m = -2^k\,y_m/\dot y_m$. If one assumes complex coordinates or an initial shift by some randomly chosen complex number, then all roots of the polynomial will be distinct and consequently recoverable with the iteration.
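The infinitesimal arithmetic is that of dual numbers $a + \varepsilon b$ with $\varepsilon^2 = 0$; a minimal sketch in Python (the `Dual` class and the sample values are illustrative, not from the source):

```python
class Dual:
    """Number a + eps*b with eps**2 == 0, analogous to complex arithmetic."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + eps b1)(a2 + eps b2) = a1 a2 + eps (a1 b2 + b1 a2)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

# repeatedly squaring x_m - eps tracks y_m and its infinitesimal part:
x = Dual(3.0, -1.0)      # represents x_m - eps with x_m = 3
for _ in range(2):       # k = 2 squarings gives (x_m - eps)^(2^2)
    x = x * x
# now x.a = x_m^(2^k) = y_m and x.b = -2^k x_m^(2^k - 1) = dot-y_m,
# so the root is recovered as x_m = -2^k * y_m / dot-y_m
```

Here $-2^k\,y_m/\dot y_m = -4 \cdot 81 / (-108) = 3$ recovers the root exactly.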

## Renormalization

Every polynomial can be scaled in domain and range such that in the result the first and the last coefficient have size one and all intermediate coefficients have a size smaller than one. This implies that all roots are located in an annulus between the radii 1/2 and 2. If the size of the inner coefficients is bounded by $M$, then the size of the inner coefficients after one stage of the Graeffe iteration is bounded by $nM^2$. After $k$ stages this gives the bound $n^{2^k-1} M^{2^k}$ for the inner coefficients, so the coefficient magnitudes quickly exceed any fixed-precision number range.

To overcome the limit posed by the growth of the powers, Malajovich and Zubelli propose to represent coefficients and intermediate results in the $k$th stage of the algorithm by a scaled polar form

- $c = \alpha\, e^{-2^k r}$

where $\alpha = \tfrac{c}{|c|}$ is a complex number of unit length and $r = -2^{-k} \log|c|$ is a positive real. Splitting off the power $2^k$ in the exponent reduces the absolute value of $c$ to the corresponding dyadic root. Since this preserves the magnitude of the (representation of the) initial coefficients, this process was named renormalization.

Multiplication of two numbers of this type is straightforward, whereas addition is performed following the factorization $c_3 = c_1 + c_2 = |c_1|\left(\alpha_1 + \alpha_2\,\tfrac{|c_2|}{|c_1|}\right)$, where $c_1$ is chosen as the larger of both numbers, that is, $r_1 \le r_2$. Thus

- $\alpha_3 = \tfrac{s}{|s|}$ and $r_3 = r_1 - 2^{-k} \log|s|$, with $s = \alpha_1 + \alpha_2\, e^{2^k (r_1 - r_2)}$.
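Arithmetic on such scaled polar pairs might be sketched in Python as follows. This is an illustrative sketch, not the authors' implementation; the function names are assumptions, and the inputs are assumed nonzero with absolute value at most one so that $r \ge 0$.

```python
import cmath
import math

def renorm(c, k):
    # scaled polar form of a nonzero complex c: the pair (alpha, r)
    # with c = alpha * exp(-2**k * r) and |alpha| == 1
    return (c / abs(c), -math.log(abs(c)) / 2**k)

def mul(x, y, k):
    # multiplication: unit parts multiply, the r exponents add
    (a1, r1), (a2, r2) = x, y
    return (a1 * a2, r1 + r2)

def add(x, y, k):
    # addition: factor out the summand of larger magnitude (smaller r),
    # so the correction factor exp(2^k (r1 - r2)) has modulus <= 1
    (a1, r1), (a2, r2) = sorted([x, y], key=lambda p: p[1])
    s = a1 + a2 * cmath.exp(2**k * (r1 - r2))
    return (s / abs(s), r1 - math.log(abs(s)) / 2**k)
```

For instance, with $k=4$, adding the representations of $0.5$ and $0.25$ yields a pair $(\alpha, r)$ with $\alpha\,e^{-16r} = 0.75$, without ever forming the huge powers explicitly.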

The coefficients $a_0, a_1, \dots, a_n$ of the final stage $k$ of the Graeffe iteration, for some reasonably large value of $k$, are represented by pairs $(\alpha_m, r_m)$, $m=0,\dots,n$. By identifying the corners of the convex envelope of the point set $\{(m, r_m) : m=0,\dots,n\}$ one can determine the multiplicities of the roots of the polynomial. Combining this renormalization with the tangent iteration, one can extract directly from the coefficients at the corners of the envelope the roots of the original polynomial.
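The corner-finding step is a convex-hull computation on the $n+1$ points $(m, r_m)$. A minimal sketch of the lower convex envelope (the function name is illustrative; gaps between consecutive corner indices group coefficients belonging to roots of one magnitude):

```python
def hull_corners(rs):
    """Indices of the corners of the lower convex envelope of (m, rs[m])."""
    pts = list(enumerate(rs))
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) <= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return [m for m, _ in hull]
```

A point $(m, r_m)$ lying above the envelope corresponds to a coefficient dominated by its neighbors, so only the corner indices carry root information.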


## References

- G. Malajovich, J. P. Zubelli: "Tangent Graeffe iteration". Numerische Mathematik 89, No. 4, 749–782 (2001). ISSN 0029-599X
- John H. Mathews: Module for Graeffe's Method.

Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.
