In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial P can be expressed as a polynomial in elementary symmetric polynomials: P can be given by an expression involving only addition and multiplication of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree d in n variables for each d ≤ n, and it is formed by adding together all distinct products of d distinct variables.
## Definition

(Sometimes the notation σ_{k} is used instead of e_{k}.) In general, for 0 ≤ k ≤ n, the polynomial $e_k(X_1, \ldots, X_n)$ is the sum of all products of k distinct variables chosen among $X_1, \ldots, X_n$.
## Examples

The following lists the $n$ elementary symmetric polynomials for the first four positive values of $n$. (In every case, $e_0 = 1$ is also one of the polynomials.)

## Properties

## The fundamental theorem of symmetric polynomials

### Proof sketch

Here the lacunary part consists of those monomials of P containing only a proper subset of the variables X_{1}, ..., X_{n}, i.e., where at least one variable X_{j} is missing. Setting $X_n = 0$ in $\sigma_{j,n}$ gives $\sigma_{j,n-1}$ for every $j < n$; therefore in the three polynomials R, $\tilde{P}$ and $P_{\mbox{lacunary}}$ all monomials involving only X_{1}, ..., X_{n−1} are identical. Therefore the difference P−R has zero lacunary part and is of the form $X_1 \cdots X_n \cdot Q(X_1, \ldots, X_n)$. The first factor coincides with the elementary symmetric polynomial $\sigma_{n,n}$, and the second factor Q is a homogeneous symmetric polynomial of lower degree $d - n$, which by the induction assumption can be expressed as a polynomial in the elementary symmetric functions. Combining the representations for P−R and R one finds a polynomial representation for P.

### An alternative proof


The elementary symmetric polynomials in $n$ variables X_{1}, …, X_{n}, written e_{k}(X_{1}, …, X_{n}) for k = 0, 1, ..., n, can be defined as

- $\begin{align} e_0 (X_1, X_2, \dots, X_n) &= 1, \\ e_1 (X_1, X_2, \dots, X_n) &= \textstyle\sum_{1 \leq j \leq n} X_j, \\ e_2 (X_1, X_2, \dots, X_n) &= \textstyle\sum_{1 \leq j < k \leq n} X_j X_k, \\ e_3 (X_1, X_2, \dots, X_n) &= \textstyle\sum_{1 \leq j < k < l \leq n} X_j X_k X_l, \end{align}$

and so forth, down to

- $e_n (X_1, X_2, \dots, X_n) = X_1 X_2 \cdots X_n.$

- $e_k (X_1, \ldots, X_n) = \sum_{1 \le j_1 < j_2 < \cdots < j_k \le n} X_{j_1} \cdots X_{j_k}.$

Thus, for each positive integer $k$ less than or equal to $n$, there exists exactly one elementary symmetric polynomial of degree $k$ in $n$ variables. To form the one of degree $k$, we take all products of $k$ distinct variables and add these terms together.
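This construction can be sketched directly in Python (a minimal illustration; the function name is our own choice) by summing the products over all $k$-element subsets of the variables:

```python
from itertools import combinations

def elementary_symmetric(k, xs):
    """e_k(xs): the sum, over all k-element subsets of xs,
    of the product of the chosen entries."""
    total = 0
    for subset in combinations(xs, k):  # all index choices j_1 < ... < j_k
        product = 1
        for x in subset:
            product *= x
        total += product
    return total

# e_2(X_1, X_2, X_3) evaluated at (1, 2, 3): 1*2 + 1*3 + 2*3 = 11
print(elementary_symmetric(2, [1, 2, 3]))
```

Note that `combinations(xs, 0)` yields exactly one empty tuple, so the convention $e_0 = 1$ falls out automatically.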

The fact that $X_1 X_2 = X_2 X_1$, and so forth, reflects that we are working in a commutative setting: the ring formed by taking all linear combinations of products of the elementary symmetric polynomials is a commutative ring.

For $n = 1$:

- $e_1(X_1) = X_1.$

For $n = 2$:

- $\begin{align} e_1(X_1, X_2) &= X_1 + X_2, \\ e_2(X_1, X_2) &= X_1 X_2. \end{align}$

For $n = 3$:

- $\begin{align} e_1(X_1, X_2, X_3) &= X_1 + X_2 + X_3, \\ e_2(X_1, X_2, X_3) &= X_1 X_2 + X_1 X_3 + X_2 X_3, \\ e_3(X_1, X_2, X_3) &= X_1 X_2 X_3. \end{align}$

For $n = 4$:

- $\begin{align} e_1(X_1, X_2, X_3, X_4) &= X_1 + X_2 + X_3 + X_4, \\ e_2(X_1, X_2, X_3, X_4) &= X_1 X_2 + X_1 X_3 + X_1 X_4 + X_2 X_3 + X_2 X_4 + X_3 X_4, \\ e_3(X_1, X_2, X_3, X_4) &= X_1 X_2 X_3 + X_1 X_2 X_4 + X_1 X_3 X_4 + X_2 X_3 X_4, \\ e_4(X_1, X_2, X_3, X_4) &= X_1 X_2 X_3 X_4. \end{align}$

The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity

- $\prod_{j=1}^n (\lambda - X_j) = \lambda^n - e_1(X_1, \ldots, X_n)\lambda^{n-1} + e_2(X_1, \ldots, X_n)\lambda^{n-2} - \cdots + (-1)^n e_n(X_1, \ldots, X_n).$
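This identity (essentially Vieta's formulas) can be checked numerically for concrete values of the variables. The sketch below (the helper names are our own, not a standard API) expands the product on the left and compares each coefficient with the corresponding signed elementary symmetric polynomial:

```python
from itertools import combinations

def elementary(k, xs):
    # e_k: sum over all k-element subsets of the product of their entries
    total = 0
    for subset in combinations(xs, k):
        product = 1
        for x in subset:
            product *= x
        total += product
    return total

def coefficients_from_roots(roots):
    # Expand prod_j (lambda - x_j); coefficients from lambda^n down to lambda^0.
    coeffs = [1]                        # start with the constant polynomial 1
    for r in roots:
        new = coeffs + [0]              # multiply by lambda
        for i, c in enumerate(coeffs):  # subtract r times the old polynomial
            new[i + 1] -= r * c
        coeffs = new
    return coeffs

xs = [1, 2, 3]
coeffs = coefficients_from_roots(xs)
# The coefficient of lambda^(n-k) is (-1)^k * e_k(x_1, ..., x_n).
for k in range(len(xs) + 1):
    assert coeffs[k] == (-1) ** k * elementary(k, xs)
```

For `xs = [1, 2, 3]` the expansion is $\lambda^3 - 6\lambda^2 + 11\lambda - 6$, matching $e_1 = 6$, $e_2 = 11$, $e_3 = 6$.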

The characteristic polynomial of a linear operator is an example of this. The roots are the eigenvalues of the operator. When we substitute these eigenvalues into the elementary symmetric polynomials, we obtain the coefficients of the characteristic polynomial, which are numerical invariants of the operator. This fact is useful in linear algebra and its applications and generalizations, like tensor algebra and disciplines which extensively employ tensor fields, such as differential geometry.
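As a small numerical sketch of this connection (using NumPy; the matrix here is just a hypothetical example), the coefficients of the characteristic polynomial match the signed elementary symmetric polynomials of the eigenvalues, which in the 2 × 2 case are the trace and the determinant:

```python
import numpy as np

# Hypothetical 2 x 2 example; upper triangular, so its eigenvalues are 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigenvalues = np.linalg.eigvals(A)  # roots of the characteristic polynomial
coefficients = np.poly(A)           # characteristic polynomial, leading coefficient first

e1 = eigenvalues.sum()              # e_1 of the eigenvalues = trace of A
e2 = eigenvalues.prod()             # e_2 of the eigenvalues = determinant of A

# lambda^2 - e_1 * lambda + e_2
assert np.allclose(coefficients, [1.0, -e1, e2])
```

Here $e_1$ and $e_2$ of the eigenvalues recover the trace and determinant, two basic numerical invariants of the operator.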

The set of elementary symmetric polynomials in $n$ variables generates the ring of symmetric polynomials in $n$ variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring $\mathbb{Z}[e_1(X_1, \ldots, X_n), \ldots, e_n(X_1, \ldots, X_n)]$. (See below for a more general statement and proof.) This fact is one of the foundations of invariant theory. For other systems of symmetric polynomials with a similar property see power sum symmetric polynomials and complete homogeneous symmetric polynomials.

For any commutative ring A, denote the ring of symmetric polynomials in the variables $X_1, \ldots, X_n$ with coefficients in A by $A[X_1, \ldots, X_n]^{S_n}$.

- $A[X_1, \ldots, X_n]^{S_n}$ is a polynomial ring in the $n$ elementary symmetric polynomials $e_k(X_1, \ldots, X_n)$ for k = 1, ..., n.

This means that every symmetric polynomial $P(X_1, \ldots, X_n) \in A[X_1, \ldots, X_n]^{S_n}$ has a unique representation

- $P(X_1, \ldots, X_n) = Q(e_1(X_1, \ldots, X_n), \ldots, e_n(X_1, \ldots, X_n))$

for some polynomial $Q$.

The theorem may be proved for symmetric homogeneous polynomials by a double mathematical induction with respect to the number of variables n and, for fixed n, with respect to the degree of the homogeneous polynomial. The general case then follows by splitting an arbitrary symmetric polynomial into its homogeneous components (which are again symmetric).

In the case n = 1 the result is obvious because every polynomial in one variable is automatically symmetric.

Assume now that the theorem has been proved for all symmetric polynomials in $m < n$ variables and for all symmetric polynomials in n variables of degree < d. Every homogeneous symmetric polynomial P in $A[X_1, \ldots, X_n]^{S_n}$ can be decomposed as a sum of two homogeneous symmetric polynomials

- $P(X_1, \ldots, X_n) = P_{\mbox{lacunary}}(X_1, \ldots, X_n) + X_1 \cdots X_n \cdot Q(X_1, \ldots, X_n).$

Because P is symmetric, the lacunary part is determined by the coefficients of all monomials which only depend on the variables X_{1}, ..., X_{n−1}. The sum of all these monomials is equal to a polynomial $\tilde{P}(X_1, \ldots, X_{n-1}) = P(X_1, \ldots, X_{n-1}, 0)$, which is symmetric in n−1 variables. According to the induction assumption, $\tilde{P}(X_1, \ldots, X_{n-1})$ can be written as

- $\tilde{P}(X_1, \ldots, X_{n-1}) = \tilde{Q}(\sigma_{1,n-1}, \ldots, \sigma_{n-1,n-1}),$

where $\sigma_{k,m}$ denotes the elementary symmetric polynomial $e_k$ in the $m$ variables $X_1, \ldots, X_m$.

Consider now the polynomial

- $R(X_1, \ldots, X_n) := \tilde{Q}(\sigma_{1,n}, \ldots, \sigma_{n-1,n}).$

The uniqueness of the representation can be proved inductively in a similar way. (It is equivalent to the fact that the n polynomials $e_1, \ldots, e_n$ are algebraically independent over the quotient field of A.) The fact that the polynomial representation is unique implies that $A[X_1, \ldots, X_n]^{S_n}$ is isomorphic to $A[Y_1, \ldots, Y_n]$.

The following proof is also inductive, but does not involve polynomials other than those symmetric in $X_1, \ldots, X_n$, and it also leads to a fairly direct procedure for effectively writing a symmetric polynomial as a polynomial in the elementary symmetric ones. Assume the symmetric polynomial to be homogeneous of degree d; different homogeneous components can be decomposed separately. Order the monomials in the variables X_{i} lexicographically, where the individual variables are ordered $X_1 > \cdots > X_n$; in other words, the dominant term of a polynomial is one with the highest occurring power of X_{1}, and among those the one with the highest power of X_{2}, and so on.

Furthermore, parametrize all products of elementary symmetric polynomials that have degree d (they are in fact homogeneous) by partitions of d, as follows. Order the individual elementary symmetric polynomials $e_i(X_1, \ldots, X_n)$ in the product so that those with larger indices i come first, then build for each such factor a column of i boxes, and arrange those columns from left to right to form a Young diagram containing d boxes in all. The shape of this diagram is a partition of d, and each partition λ of d arises for exactly one product of elementary symmetric polynomials, which we shall denote by e_{λt} (X_{1},…,X_{n}) (the "t" is present only because traditionally this product is associated to the transpose partition of λ). The essential ingredient of the proof is the following simple property, which uses multi-index notation for monomials in the variables X_{i}.

Lemma. The leading term of e_{λt} (X_{1},…,X_{n}) is X^{λ}.

- Proof. To get the leading term of the product one must select the leading term in each factor $e_i(X_1, \ldots, X_n)$, which is clearly $X_1 X_2 \cdots X_i$, and multiply these together. To count the occurrences of the individual variables in the resulting monomial, fill the column of the Young diagram corresponding to the factor concerned with the numbers 1, …, i of the variables; then all boxes in the first row contain 1, those in the second row 2, and so forth, which means the leading term is X^{λ} (its coefficient is 1 because there is only one choice that leads to this monomial).

Now one proves by induction on the leading monomial in lexicographic order that any nonzero homogeneous symmetric polynomial P of degree d can be written as a polynomial in the elementary symmetric polynomials. Since P is symmetric, its leading monomial has weakly decreasing exponents, so it is some X^{λ} with λ a partition of d. Let the coefficient of this term be c; then P − ce_{λt} (X_{1},…,X_{n}) is either zero or a symmetric polynomial with a strictly smaller leading monomial. Writing this difference inductively as a polynomial in the elementary symmetric polynomials, and adding back ce_{λt} (X_{1},…,X_{n}) to it, one obtains the sought polynomial expression for P.
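This induction is effectively an algorithm. As an illustration (a minimal sketch; the dictionary encoding of polynomials by exponent tuples and the function names are our own choices, not a standard API), the following repeatedly subtracts c·e_{λt} for the current leading monomial X^{λ} until nothing remains:

```python
from itertools import combinations

def e_poly(k, n):
    # e_k in n variables, as a dict {exponent tuple: coefficient}
    poly = {}
    for subset in combinations(range(n), k):
        expo = [0] * n
        for j in subset:
            expo[j] = 1
        poly[tuple(expo)] = 1
    return poly

def multiply(p, q, n):
    # Product of two polynomials in the same dict representation.
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            key = tuple(a[i] + b[i] for i in range(n))
            out[key] = out.get(key, 0) + ca * cb
    return {k: c for k, c in out.items() if c != 0}

def in_elementary(p, n):
    """Write a symmetric polynomial p (dict: exponent tuple -> coefficient)
    as a list of terms (c, [i_1, ..., i_m]) meaning c * e_{i_1} * ... * e_{i_m}."""
    p = {k: c for k, c in p.items() if c != 0}
    terms = []
    while p:
        lam = max(p)  # leading monomial in lexicographic order (a partition)
        c = p[lam]
        # Column lengths of the Young diagram of lam: the transpose partition.
        cols = [sum(1 for part in lam if part >= i) for i in range(1, max(lam) + 1)]
        prod = {tuple([0] * n): 1}
        for i in cols:
            prod = multiply(prod, e_poly(i, n), n)
        for expo, coeff in prod.items():  # subtract c * e_{lambda^t}
            p[expo] = p.get(expo, 0) - c * coeff
            if p[expo] == 0:
                del p[expo]
        terms.append((c, cols))
    return terms

# Power sum X_1^2 + X_2^2 reduces to e_1^2 - 2 e_2:
print(in_elementary({(2, 0): 1, (0, 2): 1}, 2))  # -> [(1, [1, 1]), (-2, [2])]
```

By the lemma, each subtraction strictly lowers the leading monomial, so the loop terminates for any symmetric input.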

The fact that this expression is unique, or equivalently that all the products (monomials) e_{λt} (X_{1},…,X_{n}) of elementary symmetric polynomials are linearly independent, is also easily proved. The lemma shows that all these products have different leading monomials, and this suffices: if a nontrivial linear combination of the e_{λt} (X_{1},…,X_{n}) were zero, one focuses on the contribution in the linear combination with nonzero coefficient and with (as a polynomial in the variables X_{i}) the largest leading monomial; the leading term of this contribution cannot be cancelled by any other contribution of the linear combination, which gives a contradiction.


Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Sunday June 15, 2008 at 16:28:05 PDT (GMT -0700)

