
# Finite difference

A finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems.

In mathematical analysis, operators involving finite differences are studied. A difference operator is an operator which maps a function f to a function whose values are the corresponding finite differences.

## Forward, backward, and central differences

Three forms are commonly considered: forward, backward, and central differences.

A forward difference is an expression of the form

$\Delta_h[f](x) = f(x + h) - f(x).$

Depending on the application, the spacing h may be variable or held constant.

A backward difference uses the function values at x and x − h, instead of the values at x + h and x:

$\nabla_h[f](x) = f(x) - f(x - h).$

Finally, the central difference is given by

$\delta_h[f](x) = f\left(x + \tfrac12 h\right) - f\left(x - \tfrac12 h\right).$
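As a concrete illustration, the three differences translate directly into code. This is a minimal sketch (function names are illustrative), with the function `f`, point `x`, and spacing `h` supplied by the caller:

```python
def forward_difference(f, x, h):
    # Delta_h[f](x) = f(x + h) - f(x)
    return f(x + h) - f(x)

def backward_difference(f, x, h):
    # nabla_h[f](x) = f(x) - f(x - h)
    return f(x) - f(x - h)

def central_difference(f, x, h):
    # delta_h[f](x) = f(x + h/2) - f(x - h/2)
    return f(x + h / 2) - f(x - h / 2)
```

For example, for f(x) = x² at x = 1 with h = 0.1, the forward difference is 1.1² − 1² = 0.21.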

## Relation with derivatives

The derivative of a function f at a point x is defined by the limit

$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}.$

If h has a fixed (non-zero) value, instead of approaching zero, then the right-hand side is

$\frac{f(x + h) - f(x)}{h} = \frac{\Delta_h[f](x)}{h}.$

Hence, the forward difference divided by h approximates the derivative when h is small. The error in this approximation can be derived from Taylor's theorem. Assuming that f is twice differentiable, the error is

$\frac{\Delta_h[f](x)}{h} - f'(x) = O(h) \quad (h \to 0).$

The same formula holds for the backward difference:

$\frac{\nabla_h[f](x)}{h} - f'(x) = O(h).$

However, the central difference yields a more accurate approximation. Its error is proportional to the square of the spacing (if f is twice continuously differentiable):

$\frac{\delta_h[f](x)}{h} - f'(x) = O(h^2).$
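These error orders can be observed numerically. The sketch below (helper names are illustrative) compares the forward and central difference quotients of exp at x = 1 as h is halved: the forward error shrinks roughly by a factor of 2 (order h), the central error by a factor of 4 (order h²):

```python
import math

def forward_quotient(f, x, h):
    # First-order approximation of f'(x)
    return (f(x + h) - f(x)) / h

def central_quotient(f, x, h):
    # Second-order approximation of f'(x)
    return (f(x + h / 2) - f(x - h / 2)) / h

x = 1.0
exact = math.exp(x)  # the derivative of exp is exp
for h in (0.1, 0.05, 0.025):
    fwd_err = abs(forward_quotient(math.exp, x, h) - exact)
    cen_err = abs(central_quotient(math.exp, x, h) - exact)
    print(f"h = {h:<6}  forward error = {fwd_err:.2e}  central error = {cen_err:.2e}")
```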

## Higher-order differences

In an analogous way one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula for $f'(x + h/2)$ and $f'(x - h/2)$ and applying a central difference formula for the derivative of $f'$ at x, we obtain the central difference approximation of the second derivative of f:

$f''(x) \approx \frac{\delta_h^2[f](x)}{h^2} = \frac{f(x + h) - 2 f(x) + f(x - h)}{h^2}.$
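In code, this second-derivative formula is a one-liner; a sketch (the function name is illustrative):

```python
import math

def second_derivative(f, x, h):
    # Central difference for f''(x): (f(x+h) - 2 f(x) + f(x-h)) / h^2, error O(h^2)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# sin''(x) = -sin(x), so this should be close to -sin(0.5)
approx = second_derivative(math.sin, 0.5, 1e-4)
```

Note that h should not be taken too small here: the numerator subtracts nearly equal quantities, so floating-point cancellation eventually dominates the truncation error.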

More generally, the nth-order forward, backward, and central differences are respectively given by:

$\Delta^n_h[f](x) = \sum_{i=0}^{n} (-1)^i \binom{n}{i} f\bigl(x + (n - i) h\bigr),$

$\nabla^n_h[f](x) = \sum_{i=0}^{n} (-1)^i \binom{n}{i} f(x - i h),$

$\delta^n_h[f](x) = \sum_{i=0}^{n} (-1)^i \binom{n}{i} f\left(x + \left(\tfrac{n}{2} - i\right) h\right).$

Note that the central difference will, for odd $n$, have $h$ multiplied by non-integers. If this is a problem (it usually is), it may be remedied by taking the average of $\delta^n_h[f](x - h/2)$ and $\delta^n_h[f](x + h/2)$.
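The three binomial-sum formulas above can be transcribed directly; a minimal sketch (function names are illustrative):

```python
from math import comb

def forward_difference_n(f, x, h, n):
    # Delta^n_h[f](x) = sum_i (-1)^i C(n, i) f(x + (n - i) h)
    return sum((-1) ** i * comb(n, i) * f(x + (n - i) * h) for i in range(n + 1))

def backward_difference_n(f, x, h, n):
    # nabla^n_h[f](x) = sum_i (-1)^i C(n, i) f(x - i h)
    return sum((-1) ** i * comb(n, i) * f(x - i * h) for i in range(n + 1))

def central_difference_n(f, x, h, n):
    # delta^n_h[f](x) = sum_i (-1)^i C(n, i) f(x + (n/2 - i) h)
    return sum((-1) ** i * comb(n, i) * f(x + (n / 2 - i) * h) for i in range(n + 1))
```

A quick sanity check: the n-th difference of the monomial xⁿ with step h equals n! hⁿ, and any polynomial of degree below n is annihilated.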

The relationship of these higher-order differences with the respective derivatives is very straightforward:

$\frac{d^n f}{d x^n}(x) = \frac{\Delta_h^n[f](x)}{h^n} + O(h) = \frac{\nabla_h^n[f](x)}{h^n} + O(h) = \frac{\delta_h^n[f](x)}{h^n} + O(h^2).$

Higher-order differences can also be used to construct better approximations. As mentioned above, the first-order difference approximates the first-order derivative up to a term of order h. However, the combination

$\frac{\Delta_h[f](x) - \frac12 \Delta_h^2[f](x)}{h} = -\frac{f(x + 2h) - 4 f(x + h) + 3 f(x)}{2h}$

approximates f'(x) up to a term of order $h^2$. This can be proven by expanding the above expression in a Taylor series, or by using the calculus of finite differences, explained below.
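Numerically, this combination indeed behaves like a second-order one-sided formula; a small sketch under the same assumptions (f smooth; the function name is illustrative):

```python
import math

def forward_d1_second_order(f, x, h):
    # (Delta_h[f](x) - Delta_h^2[f](x)/2) / h expands to the one-sided stencil
    # -(f(x + 2h) - 4 f(x + h) + 3 f(x)) / (2 h), with error O(h^2)
    return -(f(x + 2 * h) - 4 * f(x + h) + 3 * f(x)) / (2 * h)

approx = forward_d1_second_order(math.exp, 1.0, 1e-3)  # should be close to e
```

Unlike the central difference, this stencil only looks forward from x, which is useful at the left edge of a grid.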

If necessary, the finite difference can be centered about any point by mixing forward, backward, and central differences.

### Properties

• For all positive k and n,

$\Delta^n_{kh}(f, x) = \sum_{i_1=0}^{k-1} \sum_{i_2=0}^{k-1} \cdots \sum_{i_n=0}^{k-1} \Delta^n_h(f, x + i_1 h + i_2 h + \cdots + i_n h).$

• Leibniz rule:

$\Delta^n_h(fg, x) = \sum_{k=0}^{n} \binom{n}{k} \Delta^k_h(f, x) \, \Delta^{n-k}_h(g, x + kh).$
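The product rule can be checked numerically on polynomials, for which all the differences are computed exactly; a sketch (the helper name `fwd_n` is illustrative):

```python
from math import comb

def fwd_n(f, x, h, n):
    # n-th forward difference via the binomial sum
    return sum((-1) ** i * comb(n, i) * f(x + (n - i) * h) for i in range(n + 1))

f = lambda t: t ** 2
g = lambda t: t ** 3
x, h, n = 1.0, 0.5, 2

# Left side: Delta^2 of the product fg; right side: the Leibniz-type sum,
# with g shifted by k h in the k-th term.
lhs = fwd_n(lambda t: f(t) * g(t), x, h, n)
rhs = sum(comb(n, k) * fwd_n(f, x, h, k) * fwd_n(g, x + k * h, h, n - k)
          for k in range(n + 1))
```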

## Finite difference methods

An important application of finite differences is in numerical analysis, especially in the numerical solution of ordinary and partial differential equations. The idea is to replace the derivatives appearing in the differential equation by finite differences that approximate them. The resulting methods are called finite difference methods.

Common applications of the finite difference method are in computational science and engineering disciplines, such as thermal engineering, fluid mechanics, etc.
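As a minimal sketch of the idea, consider the two-point boundary value problem u'' = −π² sin(πx) on [0, 1] with u(0) = u(1) = 0, whose exact solution is u = sin(πx); the problem choice and function names here are illustrative. Replacing u'' with the central second difference on a uniform grid yields a tridiagonal linear system, solved below with the Thomas algorithm:

```python
import math

def solve_bvp(n):
    # Discretize u'' = -pi^2 sin(pi x), u(0) = u(1) = 0, on n interior points:
    # (u[i-1] - 2 u[i] + u[i+1]) / h^2 = -pi^2 sin(pi x[i])
    h = 1.0 / (n + 1)
    xs = [(i + 1) * h for i in range(n)]
    rhs = [-math.pi ** 2 * math.sin(math.pi * x) * h * h for x in xs]
    # Constant tridiagonal system with stencil (1, -2, 1); Thomas algorithm.
    a, b, c = 1.0, -2.0, 1.0
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c / b
    dp[0] = rhs[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (rhs[i] - a * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return xs, u

xs, u = solve_bvp(50)
err = max(abs(ui - math.sin(math.pi * x)) for x, ui in zip(xs, u))
```

Since the central second difference is second-order accurate, refining the grid by a factor of 2 should reduce `err` by roughly a factor of 4.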

## Calculus of finite differences

The forward difference can be considered as a difference operator, which maps the function f to Δh[f]. This operator satisfies

$\Delta_h = T_h - I,$
where $T_h$ is the shift operator with step $h$, defined by $T_h[f](x) = f(x + h)$, and $I$ is the identity operator.

Finite differences of higher orders can be defined recursively as $\Delta^n_h(f, x) := \Delta_h(\Delta^{n-1}_h(f, x), x)$ or, in operator notation, $\Delta^n_h := \Delta_h(\Delta^{n-1}_h).$ Another possible (and equivalent) definition is $\Delta^n_h = [T_h - I]^n.$

The difference operator Δh is linear and satisfies the Leibniz rule. Similar statements hold for the backward and central differences.

Taylor's theorem can now be expressed by the formula

$\Delta_h = hD + \frac{1}{2!} h^2 D^2 + \frac{1}{3!} h^3 D^3 + \cdots = \mathrm{e}^{hD} - 1,$

where D denotes the derivative operator, mapping f to its derivative f'. Formally inverting the exponential suggests that

$hD = \log(1 + \Delta_h) = \Delta_h - \tfrac12 \Delta_h^2 + \tfrac13 \Delta_h^3 - \cdots.$

This formula holds in the sense that both operators give the same result when applied to a polynomial. Even for analytic functions, the series on the right is not guaranteed to converge; it may be an asymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation to $f\text{'}\left(x\right)$ mentioned at the end of the section Higher-order differences.
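The claim that both operators agree on polynomials can be made concrete: for a cubic, Δ⁴ and all higher differences vanish, so the log series terminates after three terms and reproduces h f'(x) exactly. A sketch (the helper name is illustrative):

```python
from math import comb

def fwd_n(f, x, h, n):
    # n-th forward difference via the binomial sum
    return sum((-1) ** i * comb(n, i) * f(x + (n - i) * h) for i in range(n + 1))

f = lambda t: t ** 3  # cubic: Delta^4 f and beyond are zero
x, h = 2.0, 0.5
series = (fwd_n(f, x, h, 1)
          - fwd_n(f, x, h, 2) / 2
          + fwd_n(f, x, h, 3) / 3)  # log(1 + Delta) truncates exactly
# h * f'(x) = 0.5 * 12 = 6; the terminated series matches this exactly
```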

The analogous formulas for the backward and central difference operators are

$hD = -\log(1 - \nabla_h) \quad\text{and}\quad hD = 2 \operatorname{arcsinh}\left(\tfrac12 \delta_h\right).$

The calculus of finite differences is related to the umbral calculus in combinatorics.

## Generalizations

A generalized finite difference is usually defined as

$\Delta_h^\mu[f](x) = \sum_{k=0}^{N} \mu_k f(x + kh),$

where $\mu = (\mu_0, \ldots, \mu_N)$ is its coefficient vector. An infinite difference is a further generalization, in which the finite sum above is replaced by an infinite series. Another generalization is to make the coefficients $\mu_k$ depend on the point $x$: $\mu_k = \mu_k(x)$, giving a weighted finite difference. One may also make the step $h$ depend on the point $x$: $h = h(x)$. Such generalizations are useful for constructing different moduli of continuity.
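A direct transcription of this definition, with μ as a plain sequence of coefficients (names are illustrative):

```python
import math

def generalized_difference(mu, f, x, h):
    # sum_{k=0}^{N} mu_k f(x + k h) for a coefficient vector mu = (mu_0, ..., mu_N)
    return sum(mu_k * f(x + k * h) for k, mu_k in enumerate(mu))

# mu = (-1, 1) recovers the forward difference; mu = (-3/2, 2, -1/2),
# divided by h, gives a second-order one-sided derivative approximation.
d = generalized_difference([-1.5, 2.0, -0.5], math.exp, 1.0, 1e-3) / 1e-3
```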

## Finite difference in several variables

Finite differences can be considered in more than one variable. They are analogous to partial derivatives in several variables.
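For instance, a central difference for the mixed partial derivative ∂²f/∂x∂y combines four corner evaluations; a sketch (the function name is illustrative):

```python
def mixed_partial(f, x, y, h):
    # Central approximation of d^2 f / (dx dy), error O(h^2):
    # (f(x+h, y+h) - f(x+h, y-h) - f(x-h, y+h) + f(x-h, y-h)) / (4 h^2)
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)
```

For f(x, y) = x²y, whose mixed partial is 2x, the formula is exact up to rounding, since f is quadratic in x and linear in y.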