In mathematics, interpolation is the estimation of a value between two known data points. A simple example is calculating the mean (see mean, median, and mode) of two population counts made 10 years apart to estimate the population in the fifth year, midway between the counts. Estimating outside the known data points (e.g., predicting the population five years after the second population count) is called extrapolation. If more than two data points are available, a curve may fit the data better than a line. The simplest such curve is a polynomial: for any n data points with distinct x-values, there is exactly one polynomial of degree at most n − 1, the interpolating polynomial, that passes through them all.
In engineering and science, one often has a number of data points, obtained by sampling or experimentation, and tries to construct a function that closely fits those data points. This is called curve fitting or regression analysis. Interpolation is a specific case of curve fitting, in which the function must pass exactly through the data points.
A different problem, closely related to interpolation, is the approximation of a complicated function by a simple one. Suppose we know the function but it is too complex to evaluate efficiently. We could then pick a few data points from the complicated function, creating a lookup table, and interpolate those data points to construct a simpler function. Of course, when using the simple function to calculate new data points we usually do not obtain the same result as with the original function, but depending on the problem domain and the interpolation method used, the gain in simplicity may offset the error.
It should be mentioned that there is another very different kind of interpolation in mathematics, namely the "interpolation of operators". The classical results about interpolation of operators are the Riesz-Thorin theorem and the Marcinkiewicz theorem. There also are many other subsequent results.
From inter, meaning between, and pole, the points or nodes. Any means of calculating a new point between two existing data points is therefore interpolation.
There are many methods for doing this, many of which involve fitting some sort of function to the data and evaluating that function at the desired point. This does not exclude other means such as statistical methods of calculating interpolated data.
The simplest form of interpolation is to take the mean of the values at two adjacent data points to estimate the value at their midpoint. This gives the same result as linear interpolation evaluated at the midpoint.
Given a sequence of n distinct numbers x_k called nodes and, for each x_k, a second number y_k, we are looking for a function f such that

    f(x_k) = y_k,   k = 1, ..., n.
A pair (x_k, y_k) is called a data point and f is called an interpolant for the data points.
When the numbers y_k are given by a known function evaluated at the nodes, we sometimes write f_k for y_k.
There are many different interpolation methods, some of which are described below. Some of the concerns to take into account when choosing an appropriate algorithm are: How accurate is the method? How expensive is it? How smooth is the interpolant? How many data points are needed?
The simplest interpolation method is to locate the nearest data value, and assign the same value. In one dimension, there are seldom good reasons to choose this one over linear interpolation, which is almost as cheap, but in higher dimensions, in multivariate interpolation, this can be a favourable choice for its speed and simplicity.
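As a concrete illustration, here is a minimal Python sketch of this piecewise-constant approach. The data are hypothetical: samples of sin(x) at the integers 0 through 6, the running example used in the rest of this section.

```python
import numpy as np

# Hypothetical running example: sin(x) sampled at the integers 0..6.
xs = np.arange(7.0)
ys = np.sin(xs)

def nearest(x):
    """Return the value at the data point whose node is closest to x."""
    k = np.argmin(np.abs(xs - x))
    return ys[k]

print(nearest(2.5))  # argmin breaks the tie towards the lower node: sin(2), approx 0.9093
```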
One of the simplest methods is linear interpolation (sometimes known as lerp). Suppose, for example, that we have sampled the function sin(x) at the integers x = 0, 1, ..., 6 and want to estimate f(2.5). Since 2.5 is midway between 2 and 3, it is reasonable to take f(2.5) midway between f(2) = 0.9093 and f(3) = 0.1411, which yields 0.5252.
Generally, linear interpolation takes two data points, say (x_a, y_a) and (x_b, y_b), and the interpolant is given by

    f(x) = y_a + (y_b − y_a) · (x − x_a) / (x_b − x_a)   for x in [x_a, x_b].
Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the interpolant is not differentiable at the data points x_k.
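In code, the formula above is a one-liner. A minimal Python sketch, using the hypothetical sine samples of the running example:

```python
def lerp(xa, ya, xb, yb, x):
    """Linearly interpolate between (xa, ya) and (xb, yb) at x."""
    return ya + (yb - ya) * (x - xa) / (xb - xa)

# Running example: f(2) = sin(2), approx 0.9093, and f(3) = sin(3), approx 0.1411.
print(lerp(2.0, 0.9093, 3.0, 0.1411, 2.5))  # 0.5252, the mean of the two endpoints
```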
The following error estimate shows that linear interpolation is not very precise. Denote the function we want to interpolate by g, and suppose that x lies between x_a and x_b and that g is twice continuously differentiable. Then the linear interpolation error satisfies

    |f(x) − g(x)| ≤ C (x_b − x_a)^2,   where C = (1/8) max |g''(y)| over y in [x_a, x_b].

In other words, the error is proportional to the square of the distance between the data points. In the sine example, |g''| ≤ 1 and x_b − x_a = 1, so the bound is 1/8 = 0.125; the actual error |0.5252 − sin(2.5)| ≈ 0.073 is comfortably within it.
Consider again the sine example above. The following sixth-degree polynomial goes through all seven points:

    f(x) = −0.0001521 x^6 − 0.003130 x^5 + 0.07321 x^4 − 0.3577 x^3 + 0.2255 x^2 + 0.9038 x.

Substituting x = 2.5 gives f(2.5) ≈ 0.597.
Generally, if we have n data points, there is exactly one polynomial of degree at most n−1 going through all the data points. The interpolation error is proportional to the distance between the data points to the power n. Furthermore, the interpolant is a polynomial and thus infinitely differentiable. So, we see that polynomial interpolation solves all the problems of linear interpolation.
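As a sketch of this computation (not the source's own code), the interpolating polynomial can be obtained with NumPy: a degree-6 least-squares fit through exactly seven points has zero residual, so polyfit returns the unique interpolant.

```python
import numpy as np

# Hypothetical running example: sin(x) sampled at the integers 0..6.
xs = np.arange(7.0)
ys = np.sin(xs)

# Degree n-1 = 6 polynomial through n = 7 points: the unique interpolant.
p = np.poly1d(np.polyfit(xs, ys, deg=6))

print(np.allclose(p(xs), ys))  # True: the polynomial hits every data point
print(p(2.5))                  # approx 0.597, close to sin(2.5), approx 0.5985
```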
However, polynomial interpolation also has some disadvantages. Calculating the interpolating polynomial is computationally expensive (see computational complexity) compared to linear interpolation. Furthermore, polynomial interpolation may not be so exact after all, especially at the end points (see Runge's phenomenon). These disadvantages can be avoided by using spline interpolation.
Remember that linear interpolation uses a linear function on each of the intervals [x_k, x_{k+1}]. Spline interpolation instead uses low-degree polynomials on each of the intervals and chooses the polynomial pieces so that they fit smoothly together. The resulting function is called a spline.
For instance, the natural cubic spline is piecewise cubic and twice continuously differentiable; furthermore, its second derivative is zero at the end points. The natural cubic spline interpolating the seven sine samples above consists of six cubic pieces, one per unit interval, whose coefficients are found by solving a small tridiagonal linear system.
In this case we get f(2.5)=0.5962.
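A sketch of this computation, assuming SciPy is available; scipy.interpolate.CubicSpline with the 'natural' boundary condition imposes exactly the zero-second-derivative end conditions described above.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical running example: sin(x) sampled at the integers 0..6.
xs = np.arange(7.0)
ys = np.sin(xs)

# 'natural' sets the second derivative to zero at both end points.
spline = CubicSpline(xs, ys, bc_type='natural')

print(spline(2.5))  # approx 0.596, in line with the value quoted above
```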
Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation and the interpolant is smoother. However, the interpolant is easier to evaluate than the high-degree polynomials used in polynomial interpolation. It also does not suffer from Runge's phenomenon.
Other forms of interpolation can be constructed by picking a different class of interpolants. For instance, rational interpolation is interpolation by rational functions, and trigonometric interpolation is interpolation by trigonometric polynomials. The discrete Fourier transform is a special case of trigonometric interpolation. Another possibility is to use wavelets.
The Whittaker–Shannon interpolation formula can be used if the number of data points is infinite; it reconstructs a bandlimited function from equally spaced samples.
Multivariate interpolation is the interpolation of functions of more than one variable. Methods include bilinear interpolation and bicubic interpolation in two dimensions, and trilinear interpolation in three dimensions.
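For example, here is a minimal sketch of bilinear interpolation on the unit square, assuming the function values at the four corners are known (the corner values below are hypothetical). It applies linear interpolation twice in x and once in y:

```python
def bilinear(q00, q10, q01, q11, x, y):
    """Bilinearly interpolate on the unit square.

    qXY is the value at corner (X, Y); 0 <= x, y <= 1.
    """
    bottom = q00 + (q10 - q00) * x  # linear interpolation along the edge y = 0
    top = q01 + (q11 - q01) * x     # linear interpolation along the edge y = 1
    return bottom + (top - bottom) * y

# Hypothetical corner values.
print(bilinear(0.0, 1.0, 2.0, 3.0, 0.5, 0.5))  # 1.5, the mean of the four corners
```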
Sometimes we know not only the value of the function that we want to interpolate at some points, but also its derivative there. This leads to Hermite interpolation problems.
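A sketch using SciPy's cubic Hermite interpolator, which matches both the values and the first derivatives at the nodes; the sine data and its cosine derivatives are again the hypothetical running example.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hypothetical running example: values and derivatives of sin(x) at 0..6.
xs = np.arange(7.0)
ys = np.sin(xs)
dydx = np.cos(xs)  # the derivative of sin is cos

# The interpolant reproduces both ys and dydx at every node.
hermite = CubicHermiteSpline(xs, ys, dydx)
print(hermite(2.5))  # very close to sin(2.5), approx 0.5985
```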
The term extrapolation is used when we want to estimate data points outside the range of the known data points.
In curve fitting problems, the constraint that the interpolant has to go exactly through the data points is relaxed. It is only required to approach the data points as closely as possible. This requires parameterizing the potential interpolants and having some way of measuring the error. In the simplest case this leads to least squares approximation.
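For instance, a minimal least-squares sketch with NumPy, fitting a straight line through hypothetical noisy data; the fitted line approaches the points as closely as possible in the sum-of-squares sense but need not pass through any of them:

```python
import numpy as np

# Hypothetical noisy measurements.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.1, 0.9, 2.2, 2.8, 4.1])

# Degree-1 fit: minimizes the sum of squared residuals.
slope, intercept = np.polyfit(xs, ys, deg=1)

print(slope, intercept)
print(ys - (slope * xs + intercept))  # residuals are small but nonzero
```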
Approximation theory studies how to find the best approximation to a given function by another function from some predetermined class, and how good this approximation is. This clearly yields a bound on how well the interpolant can approximate the unknown function.