In 1995, Peter Montgomery published an algorithm, based on the Lanczos algorithm, for finding elements of the nullspace of a large sparse matrix over GF(2); since the set of people interested in large sparse matrices over finite fields and the set of people interested in large eigenvalue problems scarcely overlap, this is often also called the block Lanczos algorithm without causing unreasonable confusion. See Block Lanczos algorithm for nullspace of a matrix over a finite field.
If $A = U \operatorname{diag}(\lambda_1, \dots, \lambda_n)\, U^{-1}$ is the eigenvalue decomposition of $A$, then $A^k = U \operatorname{diag}(\lambda_1^k, \dots, \lambda_n^k)\, U^{-1}$. As $k$ gets very large, the diagonal matrix of eigenvalue powers will be dominated by whichever eigenvalue is largest in magnitude (neglecting the case where there are several eigenvalues of equal, maximal magnitude, of course). As this happens, the Rayleigh quotient of the normalized iterate $A^k b / \lVert A^k b \rVert$ will converge to the largest eigenvalue, and the iterate itself to the associated eigenvector. If the largest eigenvalue is multiple, the iterate will converge to a vector in the subspace spanned by the eigenvectors associated with those largest eigenvalues. After you get the first eigenvector/value, you can successively restrict the algorithm to the orthogonal complement of the known eigenvectors to get the other eigenvectors/values.
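As a concrete illustration, the power method described above can be sketched in a few lines. This is a minimal sketch, not a production implementation; the function name, test matrix, and iteration count are illustrative choices, not from the original text.

```python
import numpy as np

def power_iteration(A, num_iters=500, seed=0):
    """Power method sketch: repeatedly apply A and renormalize."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(A.shape[0])
    b /= np.linalg.norm(b)
    for _ in range(num_iters):
        b = A @ b
        b /= np.linalg.norm(b)  # normalized iterate A^k b / ||A^k b||
    # The Rayleigh quotient approximates the dominant eigenvalue.
    return b @ A @ b, b

# Small symmetric test matrix; eigenvalues are (5 ± sqrt(5)) / 2.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_iteration(A)
```

After enough iterations, `lam` approximates the dominant eigenvalue and `v` the corresponding unit eigenvector.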
In practice, this simple algorithm does not work very well for computing more than a few of the eigenvectors, because round-off error tends to reintroduce slight components of the more significant eigenvectors into the computation, degrading its accuracy. Pure power methods can also converge slowly, even for the first eigenvector.
The Lanczos algorithm can be viewed as a simplification of Arnoldi's algorithm, specialized to Hermitian matrices. It transforms the original matrix $A$ into a tridiagonal matrix $T$ which is real and symmetric.
The diagonal elements of $T$ are denoted by $\alpha_j = t_{jj}$, and the off-diagonal elements are denoted by $\beta_j = t_{j-1,j}$.
Note that $t_{j-1,j} = t_{j,j-1}$, due to the symmetry of $T$.
There are in principle four ways to write the iteration procedure. Paige and subsequent analyses show that the following procedure is the most numerically stable.
$v_1 \leftarrow$ random vector with norm 1.
$v_0 \leftarrow 0$; $\beta_1 \leftarrow 0$.
Iterate for $j = 1, 2, \dots, m-1$:
$w_j \leftarrow A v_j$
$\alpha_j \leftarrow (w_j, v_j)$
$w_j \leftarrow w_j - \alpha_j v_j - \beta_j v_{j-1}$
$\beta_{j+1} \leftarrow \lVert w_j \rVert$
$v_{j+1} \leftarrow w_j / \beta_{j+1}$
Finally, $w_m \leftarrow A v_m$ and $\alpha_m \leftarrow (w_m, v_m)$.
Note that $(x, y)$ represents the dot product of vectors $x$ and $y$ here.
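The iteration procedure can be sketched as follows. This is an illustrative implementation without reorthogonalization (so it is only reliable for modest numbers of steps); the function name and test setup are assumptions for the example.

```python
import numpy as np

def lanczos(A, m, seed=0):
    """Lanczos iteration sketch for a real symmetric matrix A.
    Returns the alpha/beta coefficients and the Lanczos vectors V.
    No reorthogonalization: fine for small m, unstable for large m."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)              # v_1: random vector with norm 1
    v_prev = np.zeros(n)                # v_0 = 0
    beta = 0.0                          # beta_1 = 0
    alphas, betas, V = [], [], [v]
    for j in range(m):
        w = A @ v
        alpha = w @ v                   # alpha_j = (w_j, v_j)
        w = w - alpha * v - beta * v_prev
        alphas.append(alpha)
        if j < m - 1:
            beta = np.linalg.norm(w)    # beta_{j+1} = ||w_j||
            betas.append(beta)
            v_prev, v = v, w / beta     # v_{j+1} = w_j / beta_{j+1}
            V.append(v)
    return np.array(alphas), np.array(betas), np.column_stack(V)
```

Running `m = n` steps on a small symmetric matrix reproduces its full spectrum, since $T$ is then similar to $A$ (up to round-off).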
After the iteration, we get the $\alpha_j$ and $\beta_j$, which construct the tridiagonal matrix

$$T = \begin{pmatrix}
\alpha_1 & \beta_2 & & & \\
\beta_2 & \alpha_2 & \beta_3 & & \\
& \beta_3 & \alpha_3 & \ddots & \\
& & \ddots & \ddots & \beta_m \\
& & & \beta_m & \alpha_m
\end{pmatrix}.$$
The vectors $v_j$ (Lanczos vectors) generated on the fly construct the transformation matrix $V = (v_1, v_2, \dots, v_m)$,
which is useful for calculating the eigenvectors (see below). In practice, $V$ could be saved after generation (but this takes a lot of memory), or the $v_j$ could be regenerated when needed, as long as one keeps the first vector $v_1$.
After the matrix $T$ is calculated, one can solve for its eigenvalues $\lambda_i^{(m)}$ and their corresponding eigenvectors $u_i^{(m)}$. This step is computationally cheap because $T$ is a tridiagonal matrix.
It can be proved that the eigenvalues $\lambda_i^{(m)}$ are approximate eigenvalues of the original matrix $A$.
The Ritz eigenvectors $x_i$ of $A$ can be calculated by $x_i = V u_i^{(m)}$, where $V$ is the transformation matrix whose column vectors are $v_1, v_2, \dots, v_m$.
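The whole pipeline can be sketched end to end: a small full-rank Lanczos run, an eigensolve of the tridiagonal $T$, and the Ritz vectors $x_i = V u_i$. The matrix, dimensions, and variable names below are illustrative assumptions; the tridiagonal eigenproblem is solved here with SciPy's specialized `eigh_tridiagonal` routine.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = (M + M.T) / 2                      # small symmetric test matrix

# Minimal Lanczos run (m = 8 full steps) to obtain alpha, beta and V.
m = 8
v = rng.standard_normal(8)
v /= np.linalg.norm(v)
v_prev, beta = np.zeros(8), 0.0
alphas, betas, V = [], [], [v]
for j in range(m):
    w = A @ v
    alpha = w @ v
    w = w - alpha * v - beta * v_prev
    alphas.append(alpha)
    if j < m - 1:
        beta = np.linalg.norm(w)
        betas.append(beta)
        v_prev, v = v, w / beta
        V.append(v)
V = np.column_stack(V)

# Eigenpairs of the tridiagonal T, via a solver that exploits its structure.
theta, u = eigh_tridiagonal(alphas, betas)

# Ritz vectors of A: column i of X is x_i = V @ u_i.
X = V @ u
```

With a full-rank run ($m = n$), the values `theta` match the eigenvalues of $A$ up to round-off, and each column of `X` is an (approximate) eigenvector of $A$.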
For the Lanczos algorithm, it can be proved that with exact arithmetic the set of vectors $v_1, v_2, \dots, v_m$ constitutes an orthonormal basis, and the eigenvalues/vectors solved are good approximations to those of the original matrix. However, in practice (since round-off errors are inevitable in floating-point arithmetic), the orthogonality is quickly lost, and in some cases the new vector may even be linearly dependent on the set that is already constructed. As a result, some of the eigenvalues of the resulting tridiagonal matrix may not be approximations to eigenvalues of the original matrix. Therefore, the Lanczos algorithm is not very stable.
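The loss of orthogonality can be observed directly. The sketch below runs the iteration with and without full reorthogonalization (one standard remedy) on a matrix with a wide spectrum, where extreme Ritz values converge quickly and trigger the breakdown; the function name and test spectrum are illustrative assumptions.

```python
import numpy as np

def lanczos_basis(A, m, reorthogonalize=False, seed=0):
    """Run m Lanczos steps and return the basis V, to inspect orthogonality.
    Illustrative sketch; full reorthogonalization shown as the simple fix."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    v_prev, beta = np.zeros(n), 0.0
    V = [v]
    for _ in range(m - 1):
        w = A @ v
        w = w - (w @ v) * v - beta * v_prev
        if reorthogonalize:
            for q in V:                 # re-project against every v_j kept so far
                w -= (w @ q) * q
        beta = np.linalg.norm(w)
        v_prev, v = v, w / beta
        V.append(v)
    return np.column_stack(V)

# Wide spectrum: convergence of the extreme Ritz values triggers the loss.
A = np.diag(np.logspace(0, 8, 200))
V_plain = lanczos_basis(A, 60)
V_reo = lanczos_basis(A, 60, reorthogonalize=True)
err_plain = np.abs(V_plain.T @ V_plain - np.eye(60)).max()
err_reo = np.abs(V_reo.T @ V_reo - np.eye(60)).max()
```

Here `err_plain` grows far above machine precision, while full reorthogonalization keeps `err_reo` near round-off level, at the cost of $O(m)$ extra dot products per step.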
Users of this algorithm must be able to find and remove those "spurious" eigenvalues. Practical implementations of the Lanczos algorithm go in three directions to address this stability issue:
Many implementations of the Lanczos algorithm restart after a certain number of iterations. One of the most influential restarted variations is the implicitly restarted Lanczos method, which is implemented in ARPACK. This has led to a number of other restarted variations, such as restarted Lanczos bidiagonalization. Another successful restarted variation is the thick-restart Lanczos method, which has been implemented in a software package called TRLan.
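For example, SciPy exposes ARPACK's implicitly restarted Lanczos method through `scipy.sparse.linalg.eigsh`; a minimal usage sketch (the test matrix and parameter choices here are illustrative):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# ARPACK's implicitly restarted Lanczos, via SciPy's eigsh wrapper.
rng = np.random.default_rng(0)
M = rng.standard_normal((300, 300))
A = (M + M.T) / 2                      # symmetric test matrix

# Compute the k = 4 largest-magnitude eigenpairs; ARPACK restarts internally.
vals, vecs = eigsh(A, k=4, which='LM')
```

The same call accepts sparse matrices and `LinearOperator`s, which is where the restarted Lanczos approach pays off: only matrix-vector products with $A$ are required.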