Least mean squares (LMS) algorithms are used in adaptive filters to find the filter coefficients that minimize the mean square of the error signal (the difference between the desired and the actual signal). It is a stochastic gradient descent method in that the filter is adapted based only on the error at the current time. It was invented in 1960 by Stanford University professor Bernard Widrow and his first Ph.D. student, Ted Hoff.
Most linear adaptive filtering problems can be formulated using the block diagram above. That is, an unknown system h(n) is to be identified, and the adaptive filter attempts to adapt the filter ĥ(n) to make it as close as possible to h(n), while using only the observable signals x(n), d(n) and e(n); but y(n), v(n) and ĥ(n) are not directly observable. Its solution is closely related to the Wiener filter.
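The system-identification setup above can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation: the unknown FIR system h, the step size mu, and the noise-free desired signal are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown FIR system h(n) to be identified (assumed for illustration).
h = np.array([0.5, -0.3, 0.1])
M = len(h)

n_samples = 5000
x = rng.standard_normal(n_samples)    # observable input x(n)
d = np.convolve(x, h)[:n_samples]     # desired signal d(n) = (h * x)(n), noise-free here

mu = 0.05                             # step size (assumed; must be small enough for stability)
w = np.zeros(M)                       # adaptive estimate of the filter, ĥ(n)

for n in range(M - 1, n_samples):
    x_vec = x[n - M + 1:n + 1][::-1]  # the M most recent input samples, newest first
    y = w @ x_vec                     # filter output y(n)
    e = d[n] - y                      # error e(n) = d(n) - y(n)
    w = w + mu * e * x_vec            # LMS update: steepest descent on the instantaneous error

print(w)  # after adaptation, w should be close to h
```

Only x(n), d(n) and e(n) are used in the update; the true h(n) appears solely to generate the data, mirroring the fact that it is not directly observable.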
The idea behind LMS filters is to use the method of steepest descent to find a coefficient vector which minimizes a cost function.
We start the discussion by defining the cost function as