# Intermediate treatment of tensors

*Note:* The following is a component-based treatment of tensors. Read the article tensor for a simple description of tensors, or see the component-free treatment of tensors for a more abstract treatment; for a still more traditional approach, see the classical treatment of tensors. Note that the word "tensor" is often used as shorthand for "tensor field", a concept which assigns a tensor value to every point of a manifold. To understand tensor fields, one must first understand tensors.
In mathematics and physics, a tensor is an idealized geometric or physical quantity whose numerical description, relative to a particular frame of reference, consists of a multiply indexed array of numbers. A vector, for example, is a tensor with a single index; thus, tensors can be regarded as a multi-index generalization of the vector concept. As with vectors, a change of reference frame induces a transformation of the components.

This way of viewing tensors, called tensor analysis, was used by Einstein and is generally preferred by physicists. It is, roughly speaking, a generalization of the concept of vectors, matrices, and linear transformations, and allows equations to be written independently of any given coordinate system.

## Overview

Tensor quantities may be categorized by the number of indices inherent in their description. Scalar quantities are those that can be represented by a single number (no indices are needed): speed, mass, and temperature, for example. There are also vector-like quantities, such as force, that require a list of numbers for their description (one index is required, so that direction can be accounted for). Finally, quantities such as quadratic forms naturally require a multiply indexed array for their representation. These latter quantities can only be conceived of as tensors. Some well-known examples of tensors in geometry are quadratic forms and the curvature tensor. Examples of physical tensors are the energy-momentum tensor and the polarization tensor.

Actually, the tensor notion is quite general and applies to all of the above examples; scalars and vectors are special kinds of tensors. The feature that distinguishes a scalar from a vector, and distinguishes both of those from a more general tensor quantity is the number of indices in the representing array. This number is called the rank (or the order) of a tensor. Thus, scalars are rank zero tensors (with no indices at all) and vectors are rank one tensors.

It should be noted that the array-of-numbers representation of a tensor is not the same thing as the tensor. An image and the object represented by the image are not the same thing. The mass of a stone is not a number. Rather, the mass can be described by a number relative to some specified unit mass. Similarly, a given numerical representation of a tensor only makes sense in a particular coordinate system.

It is also necessary to distinguish between two types of indices, depending on whether the corresponding numbers transform covariantly or contravariantly relative to a change in the frame of reference. Contravariant indices are written as superscripts, while covariant indices are written as subscripts. The type (or valence) of a tensor is the pair $(p,q)$, where $p$ is the number of contravariant and $q$ the number of covariant indices. Note that a tensor of type $(p,q)$ has rank $p + q$. It is customary to represent the actual tensor, as a standalone entity, by a bold-face symbol such as $\mathbf{T}$. The corresponding array of numbers for a type $(p,q)$ tensor is denoted by the symbol $T^{i_1\ldots i_p}_{j_1\ldots j_q}$, where the superscripts and subscripts are indices that vary from $1$ to $n$. The number $n$, the range of the indices, is called the dimension of the tensor; the total number of degrees of freedom required for the specification of a particular tensor is the dimension raised to the power of the tensor's rank, i.e. $n^{p+q}$.
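The bookkeeping above can be made concrete with a small sketch: a type $(p,q)$ tensor array in dimension $n$ is naturally stored as an array with $p + q$ axes, each of length $n$ (the particular values $p=1$, $q=2$, $n=3$ below are chosen purely for illustration).

```python
import numpy as np

# Illustration with assumed values: a type (1, 2) tensor in dimension n = 3
# has one contravariant and two covariant indices.
p, q, n = 1, 2, 3
T = np.zeros((n,) * (p + q))     # one axis per index; shape (3, 3, 3)

rank = p + q                     # rank = number of indices
degrees_of_freedom = n ** rank   # n raised to the rank

print(T.shape)              # (3, 3, 3)
print(degrees_of_freedom)   # 27
```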

Again, it must be emphasized that the tensor $\mathbf{T}$ and the representing array $T^{i_1\ldots i_p}_{j_1\ldots j_q}$ are not the same thing. The values of the representing array are given relative to some frame of reference, and undergo a linear transformation when the frame is changed.

Finally, it must be mentioned that most physical and geometric applications are concerned with tensor fields, that is to say, tensor-valued functions, rather than tensors themselves. Some care is required, because it is common to see a tensor field called simply a tensor. There is a difference, however: the entries of a tensor array $T^{i_1\ldots i_p}_{j_1\ldots j_q}$ are numbers, whereas the entries of a tensor field are functions. The present entry treats the purely algebraic aspect of tensors. Tensor field concepts, which typically involve derivatives of some kind, are discussed elsewhere.

## Definition

The formal definition of a tensor quantity begins with a finite-dimensional vector space $\mathcal{U}$, which furnishes the uniform "building blocks" for tensors of all valences. In typical applications, $\mathcal{U}$ is the tangent space at a point of a manifold; the elements of $\mathcal{U}$ typically represent physical quantities such as velocities or forces. The space of $(p,q)$-valent tensors, denoted here by $\mathcal{U}^{p,q}$, is obtained by taking the tensor product of $p$ copies of $\mathcal{U}$ and $q$ copies of the dual vector space $\mathcal{U}^*$. To wit,

$\mathcal{U}^{p,q} = \underbrace{\mathcal{U}\otimes\cdots\otimes\mathcal{U}}_{p\text{ times}} \otimes \underbrace{\mathcal{U}^*\otimes\cdots\otimes\mathcal{U}^*}_{q\text{ times}}.$

In order to represent a tensor by a concrete array of numbers, we require a frame of reference, which is essentially a basis of $\mathcal{U}$, say $\mathbf{e}_1,\ldots,\mathbf{e}_n \in \mathcal{U}.$ Every vector in $\mathcal{U}$ can be "measured" relative to this basis, meaning that for every $\mathbf{v}\in\mathcal{U}$ there exist unique scalars $v^i$ such that (note the use of the Einstein notation)

$\mathbf{v} = v^i\,\mathbf{e}_i.$

These scalars are called the components of $\mathbf{v}$ relative to the frame in question.
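Numerically, finding the components $v^i$ amounts to solving a linear system: if the basis vectors are the columns of a matrix $E$, then $\mathbf{v} = v^i\,\mathbf{e}_i$ reads $E\,v_{\text{comp}} = v$. A minimal sketch, with an assumed basis of $\mathbb{R}^3$ chosen for illustration:

```python
import numpy as np

# Assumed basis of R^3: the columns of E are e_1, e_2, e_3.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 1.0])   # the vector, in standard coordinates

# Components v^i satisfy v = v^i e_i, i.e. E @ components = v.
components = np.linalg.solve(E, v)
print(components)   # [0. 2. 1.]

# Reconstructing v from its components recovers the original vector.
assert np.allclose(E @ components, v)
```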

Let $\varepsilon^1,\ldots,\varepsilon^n \in \mathcal{U}^*$ be the corresponding dual basis, i.e.,

$\varepsilon^i(\mathbf{e}_j) = \delta^i{}_j,$

where the latter is the Kronecker delta array. For every covector $\boldsymbol{\alpha}\in\mathcal{U}^*$ there exists a unique array of components $\alpha_i$ such that

$\boldsymbol{\alpha} = \alpha_i\,\varepsilon^i.$
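In coordinates, the dual basis has a simple description: if the $\mathbf{e}_j$ are the columns of a matrix $E$, the $\varepsilon^i$ are the rows of $E^{-1}$, and the defining relation $\varepsilon^i(\mathbf{e}_j) = \delta^i{}_j$ is exactly $E^{-1}E = I$. A sketch with the same assumed basis as above:

```python
import numpy as np

# Assumed basis: columns of E are e_1, e_2, e_3.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

dual = np.linalg.inv(E)   # row i is the dual basis covector epsilon^i

# Kronecker delta check: epsilon^i(e_j) = (row i of dual) . (column j of E)
assert np.allclose(dual @ E, np.eye(3))
```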

More generally, every tensor $\mathbf{T}\in\mathcal{U}^{p,q}$ has a unique representation in terms of components. That is to say, there exists a unique array of scalars $T^{i_1\ldots i_p}_{j_1\ldots j_q}$ such that

$\mathbf{T} = T^{i_1\ldots i_p}_{j_1\ldots j_q}\, \mathbf{e}_{i_1} \otimes \cdots \otimes \mathbf{e}_{i_p} \otimes \varepsilon^{j_1} \otimes \cdots \otimes \varepsilon^{j_q}.$

## Transformation rules

Next, suppose that a change is made to a different frame of reference, say $\hat{\mathbf{e}}_1,\ldots,\hat{\mathbf{e}}_n \in \mathcal{U}.$ Any two frames are uniquely related by an invertible transition matrix $A^i{}_j$, having the property that for all values of $j$ we have the frame transformation rule

$\hat{\mathbf{e}}_j = A^i{}_j\, \mathbf{e}_i.$

Let $\mathbf{v}\in\mathcal{U}$ be a vector, and let $v^i$ and $\hat{v}^i$ denote the corresponding component arrays relative to the two frames. From

$\mathbf{v} = v^i\,\mathbf{e}_i = \hat{v}^i\,\hat{\mathbf{e}}_i,$

and from the frame transformation rule we infer the vector transformation rule

$\hat{v}^i = B^i{}_j\, v^j,$

where $B^i{}_j$ is the matrix inverse of $A^i{}_j$, i.e.,

$A^i{}_k B^k{}_j = \delta^i{}_j.$
Thus, the transformation rule for a vector's components is contravariant to the transformation rule for the frame of reference. It is for this reason that the superscript indices of a vector are called contravariant.
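The contravariant rule can be verified numerically: transforming the frame by $A$ and the components by $B = A^{-1}$ leaves the abstract vector itself unchanged. A sketch with made-up data (the particular matrices and components are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

E = np.eye(3)                    # old frame: columns are e_i (standard basis)
A = rng.normal(size=(3, 3))      # random transition matrix (invertible here)
B = np.linalg.inv(A)             # B is the matrix inverse of A

E_hat = E @ A                    # frame rule: e_hat_j = A^i_j e_i
v = np.array([1.0, -2.0, 0.5])   # components v^i in the old frame
v_hat = B @ v                    # contravariant rule: v_hat^i = B^i_j v^j

# The underlying vector is frame-independent: v^i e_i = v_hat^i e_hat_i.
assert np.allclose(E @ v, E_hat @ v_hat)
```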

To establish the transformation rule for covectors, we note that the transformation rule for the dual basis takes the form

$\hat{\varepsilon}^i = B^i{}_j\, \varepsilon^j,$

and that

$v^i = \varepsilon^i(\mathbf{v}),$

while

$\hat{v}^i = \hat{\varepsilon}^i(\mathbf{v}).$

The transformation rule for covector components is covariant. Let $\boldsymbol{\alpha}\in\mathcal{U}^*$ be a given covector, and let $\alpha_i$ and $\hat{\alpha}_i$ be the corresponding component arrays. Then

$\hat{\alpha}_j = A^i{}_j\, \alpha_i.$

The above relation is easily established. We need only remark that

$\alpha_i = \boldsymbol{\alpha}(\mathbf{e}_i)$

and that

$\hat{\alpha}_j = \boldsymbol{\alpha}(\hat{\mathbf{e}}_j),$

and then use the transformation rule for the frame of reference.
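A consequence worth checking numerically is that the pairing $\boldsymbol{\alpha}(\mathbf{v}) = \alpha_i v^i$ is a scalar, the same in every frame: the factor of $A$ in the covariant rule cancels the factor of $B = A^{-1}$ in the contravariant rule. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.normal(size=(3, 3))        # random transition matrix (invertible here)
B = np.linalg.inv(A)

alpha = np.array([1.0, 2.0, 3.0])  # covector components alpha_i, old frame
v = np.array([0.5, -1.0, 2.0])     # vector components v^i, old frame

alpha_hat = A.T @ alpha            # covariant rule: alpha_hat_j = A^i_j alpha_i
v_hat = B @ v                      # contravariant rule: v_hat^i = B^i_j v^j

# alpha_i v^i is invariant under the change of frame.
assert np.isclose(alpha @ v, alpha_hat @ v_hat)
```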

In light of the above discussion, we see that the transformation rule for a general type $(p,q)$ tensor takes the form

$\hat{T}^{i_1\ldots i_p}_{j_1\ldots j_q} = B^{i_1}{}_{k_1}\cdots B^{i_p}{}_{k_p}\, A^{l_1}{}_{j_1}\cdots A^{l_q}{}_{j_q}\, T^{k_1\ldots k_p}_{l_1\ldots l_q}.$

That is, each contravariant index transforms with a factor of $B$, and each covariant index with a factor of $A$.
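For the simplest mixed case, a type $(1,1)$ tensor, the general rule reduces to a familiar similarity transform. A sketch with made-up data, using `einsum` to contract the indices exactly as written in the rule:

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.normal(size=(3, 3))    # random transition matrix (invertible here)
B = np.linalg.inv(A)
T = rng.normal(size=(3, 3))    # components T^k_l of a (1,1) tensor, old frame

# T_hat^i_j = B^i_k A^l_j T^k_l: one B factor for the contravariant index,
# one A factor for the covariant index.
T_hat = np.einsum('ik,lj,kl->ij', B, A, T)

# For a (1,1) tensor this is the similarity transform B T A.
assert np.allclose(T_hat, B @ T @ A)
```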