# Rank (linear algebra)

The column rank of a matrix A is the maximal number of linearly independent columns of A. Likewise, the row rank is the maximal number of linearly independent rows of A.

Since the column rank and the row rank are always equal, they are simply called the rank of A; for the proofs, see, e.g., Murase (1960), Andrea & Wong (1960), Williams & Cater (1968), Mackiw (1995). It is commonly denoted by either rk(A) or rank A.

The rank of an $m \times n$ matrix is at most $\min(m, n)$. A matrix that has rank as large as possible is said to have full rank; otherwise, the matrix is rank deficient.
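As a quick sanity check, this bound can be observed numerically. The sketch below assumes NumPy is available and uses its `matrix_rank` routine:

```python
import numpy as np

# A 2-by-3 matrix: its rank can be at most min(2, 3) = 2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

r = np.linalg.matrix_rank(A)
print(r)  # 2, so this matrix has full rank
```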

## Alternative definitions

The maximal number of linearly independent columns of the m-by-n matrix A with entries in the field F is equal to the dimension of the column space of A (the column space being the subspace of $F^m$ generated by the columns of A). Since the column rank and the row rank are the same, we can also define the rank of A as the dimension of the row space of A.

If one considers the matrix A as a linear map

$f : F^n \to F^m$

with the rule

$f(x) = Ax,$

then the rank of A can also be defined as the dimension of the image of f (see linear map for a discussion of image and kernel). This definition has the advantage that it can be applied to any linear map without need for a specific matrix. The rank can also be defined as n minus the dimension of the kernel of f; by the rank-nullity theorem, this is the same as the dimension of the image of f.

Another equivalent definition of the rank of a matrix is the order of the greatest non-vanishing minor in the matrix.
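This characterization can be turned into a (highly inefficient) algorithm: search for the largest order k such that some k-by-k minor is non-zero. The sketch below assumes NumPy; the helper name `rank_via_minors` is ours and is for illustration only:

```python
import numpy as np
from itertools import combinations

def rank_via_minors(A, tol=1e-10):
    """Return the order of the largest non-vanishing minor of A."""
    m, n = A.shape
    # Try orders from largest to smallest; the first non-zero minor wins.
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k
    return 0

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second row is twice the first
print(rank_via_minors(A))  # 1
```

The number of minors grows exponentially, so this is useful only as a definition check; elimination or the SVD is used in practice.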

## Properties

We assume that A is an m-by-n matrix over the field F and describes a linear map f as above.

• Only the zero matrix has rank 0.
• $\operatorname{rank} A \leq \min(m, n)$
• f is injective if and only if A has rank n (in this case, we say that A has full column rank).
• f is surjective if and only if A has rank m (in this case, we say that A has full row rank).
• If A is a square matrix (i.e., m = n), then A is invertible if and only if A has rank n (that is, A has full rank).
• If B is any n-by-k matrix, then

$\operatorname{rank}(AB) \leq \min(\operatorname{rank} A, \operatorname{rank} B)$
As an example of the "<" case, consider the product

$\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}.$

Both factors have rank 1, but the product has rank 0.
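The same example can be checked numerically (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[0.0, 0.0],
              [1.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])

# Both factors have rank 1, yet the product is the zero matrix (rank 0).
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 1 1
print(np.linalg.matrix_rank(A @ B))                        # 0
```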

• If B is an n-by-k matrix with rank n, then

$\operatorname{rank}(AB) = \operatorname{rank}(A)$

• If C is an l-by-m matrix with rank m, then

$\operatorname{rank}(CA) = \operatorname{rank}(A)$

• The rank of A is equal to r if and only if there exist an invertible m-by-m matrix X and an invertible n-by-n matrix Y such that

$XAY = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix},$

where $I_r$ denotes the r-by-r identity matrix.

• Sylvester’s rank inequality: If A and B are any n-by-n matrices, then

$\operatorname{rank}(A) + \operatorname{rank}(B) - n \leq \operatorname{rank}(AB)$

• Subadditivity: $\operatorname{rank}(A + B) \leq \operatorname{rank}(A) + \operatorname{rank}(B)$ when A and B have the same dimensions. As a consequence, a rank-k matrix can be written as a sum of k rank-1 matrices, but not fewer.
• The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix (this is the "rank theorem" or the "rank-nullity theorem").
• The rank of a real matrix A equals the rank of its corresponding Gram matrix:

$\operatorname{rank}(A^T A) = \operatorname{rank}(A A^T) = \operatorname{rank}(A)$

This can be shown by proving that their null spaces are equal. The null space of the Gram matrix consists of the vectors $x$ for which $A^T A x = 0$. If this condition is fulfilled, then also $0 = x^T A^T A x = \|Ax\|^2$, and hence $Ax = 0$.
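Numerically, the equality of these three ranks can be spot-checked (a sketch assuming NumPy; the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 4.0, 1.0, 3.0],
              [-1.0, -2.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 2.0]])

r = np.linalg.matrix_rank(A)
# Both Gram matrices have the same rank as A itself.
print(r,
      np.linalg.matrix_rank(A.T @ A),
      np.linalg.matrix_rank(A @ A.T))  # 2 2 2
```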

## Computation

The easiest way to compute the rank of a matrix A is Gaussian elimination. The row echelon form of A produced by Gaussian elimination has the same rank as A, and its rank can be read off as the number of non-zero rows.

Consider for example the 4-by-4 matrix


$A = \begin{bmatrix} 2 & 4 & 1 & 3 \\ -1 & -2 & 1 & 0 \\ 0 & 0 & 2 & 2 \\ 3 & 6 & 2 & 5 \end{bmatrix}.$

We see that the second column is twice the first column, and that the fourth column equals the sum of the first and the third. The first and the third columns are linearly independent, so the rank of A is two. This can be confirmed with the Gauss algorithm. It produces the following row echelon form of A:


$\begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$

which has two non-zero rows.
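The elimination procedure is easy to implement directly. The following illustrative sketch (assuming NumPy; the helper name `rank_by_elimination` is ours) uses partial pivoting and counts the pivot rows:

```python
import numpy as np

def rank_by_elimination(A, tol=1e-10):
    """Rank = number of pivot rows found by Gaussian elimination."""
    A = A.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        if row == m:
            break
        # Partial pivoting: pick the largest entry in this column.
        pivot = row + int(np.argmax(np.abs(A[row:, col])))
        if abs(A[pivot, col]) < tol:
            continue  # no usable pivot in this column
        A[[row, pivot]] = A[[pivot, row]]                   # swap pivot row up
        A[row] = A[row] / A[row, col]                       # scale pivot to 1
        A[row + 1:] -= np.outer(A[row + 1:, col], A[row])   # eliminate below
        row += 1
    return row

A = np.array([[2.0, 4.0, 1.0, 3.0],
              [-1.0, -2.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 2.0],
              [3.0, 6.0, 2.0, 5.0]])
print(rank_by_elimination(A))  # 2, matching the hand computation above
```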

When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less expensive choices, such as QR decomposition with pivoting, which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
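A minimal SVD-based rank estimate might look as follows (a sketch assuming NumPy; the tolerance heuristic mimics a common default but is an assumption, not a universal rule):

```python
import numpy as np

def numerical_rank(A, rtol=None):
    """Count singular values above a relative tolerance."""
    s = np.linalg.svd(A, compute_uv=False)
    if rtol is None:
        # Common heuristic: scale machine epsilon by the larger dimension.
        rtol = max(A.shape) * np.finfo(A.dtype).eps
    return int(np.sum(s > rtol * s[0]))

A = np.array([[2.0, 4.0],
              [1.0, 2.0]])  # second column is twice the first
print(numerical_rank(A))  # 1
```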

## Applications

One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. The system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank.
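This rank test is straightforward to carry out numerically. The sketch below assumes NumPy, and the helper name `classify_system` is ours:

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b by comparing rank(A) with the rank of [A | b]."""
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    n = A.shape[1]
    if r_aug > r:
        return "inconsistent"        # no solution exists
    if r == n:
        return "unique solution"
    return f"solutions with {n - r} free parameter(s)"

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # rank 1, two variables
print(classify_system(A, np.array([1.0, 2.0])))  # solutions with 1 free parameter(s)
print(classify_system(A, np.array([1.0, 3.0])))  # inconsistent
```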

In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable.

## Generalization

There are different generalisations of the concept of rank to matrices over arbitrary rings. In those generalisations, the column rank, row rank, dimension of the column space, and dimension of the row space of a matrix may differ from one another, or may not exist.

There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative.

Matrix rank should not be confused with tensor order, which is sometimes also called tensor rank; a matrix is a tensor of order 2.