
# Green's function

In mathematics, a Green's function is a function used to solve inhomogeneous differential equations subject to boundary conditions. The term is also used in physics, specifically in quantum field theory and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition; for this sense, see Correlation function (quantum field theory) and Green's function (many-body theory).

Green's function is named after the British mathematician George Green, who first developed the concept in the 1830s.

## Definition and uses

Technically, a Green's function, $G(x,s)$, of a linear operator $L$ acting on distributions over a subset of $\mathbb{R}^n$, at a point $s$, is any solution of

$L\,G(x,s) = \delta(x-s) \qquad (1)$

where $\delta$ is the Dirac delta function. This technique can be used to solve differential equations of the form:

$L\,u(x) = f(x) \qquad (2)$

If the kernel of L is nontrivial, then the Green's function is not unique. In practice, however, some combination of symmetry, boundary conditions, and/or other externally imposed criteria yields a unique Green's function. Note also that Green's functions are in general distributions, not necessarily proper functions.

Green's functions are also a useful tool in condensed matter theory, where they allow the resolution of the diffusion equation, and in quantum mechanics, where the Green's function of the Hamiltonian is a key concept, with important links to the concept of density of states. The Green's functions used in those two domains are highly similar, due to the analogy in the mathematical structure of the diffusion equation and Schrödinger equation.

## Motivation

Loosely speaking, if such a function G can be found for the operator L, then if we multiply equation (1) for the Green's function by f(s) and then integrate in the s variable, we obtain:

$\int L\,G(x,s)\,f(s)\,ds = \int \delta(x-s)\,f(s)\,ds = f(x).$

The right-hand side is, by equation (2), equal to $L\,u(x)$, thus:

$L\,u(x) = \int L\,G(x,s)\,f(s)\,ds.$

Because the operator $L$ is linear and acts only on the variable $x$ (not on the variable of integration $s$), we may take $L$ outside the integral on the right-hand side, obtaining:

$L\,u(x) = L\left(\int G(x,s)\,f(s)\,ds\right).$

This implies:

$u(x) = \int G(x,s)\,f(s)\,ds. \qquad (3)$

Thus, we can obtain the function u(x) from knowledge of the Green's function in equation (1) and the source term on the right-hand side of equation (2). The process relies on the linearity of the operator L.

In other words, the solution of equation (2), u(x), can be determined by the integration given in equation (3). Although f(x) is known, this integration cannot be performed unless G is also known. The problem now lies in finding the Green's function G that satisfies equation (1).

Not every operator L admits a Green's function. A Green's function can also be thought of as a right inverse of L. Aside from the difficulty of finding a Green's function for a particular operator, the integral in equation (3) may be quite difficult to evaluate. However, the method gives a theoretically exact result.
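The right-inverse picture can be made concrete by discretization. In the sketch below (an illustration with an assumed operator, $L = d^2/dx^2$ on $[0,1]$ with $u(0) = u(1) = 0$, not taken from the text), central differences turn $L$ into a matrix; solving the resulting linear system plays the role of the integral in equation (3), and the entries of the matrix inverse are, up to a factor of the grid spacing $h$, samples of $G(x_i, x_j)$.

```python
# Illustrative sketch: a discretized Green's function as a matrix inverse.
# Assumed operator: L = d^2/dx^2 on [0, 1] with u(0) = u(1) = 0.
# (A^{-1})_{ij} ~ h * G(x_i, x_j), so solving A u = f is the discrete
# analogue of u(x) = integral of G(x, s) f(s) ds.

n = 50                      # number of interior grid points
h = 1.0 / (n + 1)           # grid spacing; x_i = (i + 1) * h

# Tridiagonal central-difference matrix for the second derivative.
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = -2.0 / h**2
    if i > 0:
        A[i][i - 1] = 1.0 / h**2
    if i < n - 1:
        A[i][i + 1] = 1.0 / h**2

f = [1.0] * n               # constant source: solve u'' = 1

# Gaussian elimination with back-substitution (no pivoting is needed for
# this symmetric, diagonally dominant system).
M = [row[:] + [rhs] for row, rhs in zip(A, f)]
for col in range(n):
    for r in range(col + 1, n):
        m = M[r][col] / M[col][col]
        for c in range(col, n + 1):
            M[r][c] -= m * M[col][c]
u = [0.0] * n
for r in range(n - 1, -1, -1):
    u[r] = (M[r][n] - sum(M[r][c] * u[c] for c in range(r + 1, n))) / M[r][r]

# Exact solution of u'' = 1, u(0) = u(1) = 0 is u(x) = x(x - 1)/2; the
# central-difference scheme reproduces quadratic solutions exactly.
x_mid = (n // 2 + 1) * h
print(u[n // 2], x_mid * (x_mid - 1) / 2)
```

Because the exact solution here is a quadratic, the discrete solution matches it at the grid points up to rounding error.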

Convolving with a Green's function gives solutions to inhomogeneous differential equations, most commonly those arising from a Sturm-Liouville problem. If G is the Green's function of an operator L, then the solution for u of the equation Lu = f is given by

$u(x) = \int f(s)\,G(x,s)\,ds.$

This can be thought of as an expansion of f according to a Dirac delta function basis (projecting f over δ(x − s)) and a superposition of the solution on each projection. Such an integral is known as a Fredholm integral equation, the study of which constitutes Fredholm theory.

## Green's function for solving inhomogeneous boundary value problems

The primary use of Green's functions in mathematics is to solve inhomogeneous boundary value problems. In modern theoretical physics, Green's functions are also commonly used as propagators in Feynman diagrams (and the phrase "Green's function" is often used for any correlation function).

### Framework

Let $L$ be the Sturm-Liouville operator, a linear differential operator of the form
$L = \frac{d}{dx}\left[p(x)\frac{d}{dx}\right] + q(x)$
and let $D$ be the boundary-conditions operator
$Du = \begin{cases} \alpha_1 u'(0) + \beta_1 u(0) \\ \alpha_2 u'(\ell) + \beta_2 u(\ell). \end{cases}$

Let $f(x)$ be a continuous function in $[0,\ell]$. We shall also suppose that the problem

$\begin{cases} Lu = f \\ Du = 0 \end{cases}$
is regular, i.e., only the trivial solution exists for the homogeneous problem.

### Theorem

There is one and only one solution u(x) which satisfies

$\begin{cases} Lu = f \\ Du = 0 \end{cases}$

and it is given by

$u(x) = \int_0^\ell f(s)\,G(x,s)\,ds$

where $G(x,s)$ is the Green's function, satisfying the following conditions:

1. $G(x,s)$ is continuous in $x$ and $s$.
2. For $x \ne s$, $L\,G(x,s) = 0$.
3. For $s \ne 0, \ell$, $D\,G(x,s) = 0$.
4. Derivative "jump": $G'(s_{+0}, s) - G'(s_{-0}, s) = 1/p(s)$.
5. Symmetry: $G(x,s) = G(s,x)$.

## Finding Green's functions

### Eigenvalue expansions

If a differential operator L admits a set of eigenvectors $\Psi_n(x)$ (i.e., a set of functions $\Psi_n(x)$ and scalars $\lambda_n$ such that $L\Psi_n = \lambda_n \Psi_n$) that is complete, then we can construct a Green's function from these eigenvectors and eigenvalues.

By complete, we mean that the set of functions $\Psi_n(x)$ satisfies the following completeness relation:

$\delta(x - x') = \sum_{n=0}^\infty \Psi_n(x)\,\Psi_n(x').$

We can prove the following:

$G(x, x') = \sum_{n=0}^\infty \frac{\Psi_n(x)\,\Psi_n(x')}{\lambda_n}.$

Now consider applying the operator L to each side of this equation: since $L\Psi_n = \lambda_n \Psi_n$ cancels the $\lambda_n$ in each denominator, we recover the completeness relation, which was assumed to hold.

The general study of the Green's function written in the above form, and its relationship to the function spaces formed by the eigenvectors, is known as Fredholm theory.
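As a concrete (and purely illustrative, assumed) check of this construction, take $L = d^2/dx^2$ on $[0,\pi]$ with Dirichlet conditions $u(0) = u(\pi) = 0$. The eigenfunctions are $\Psi_n(x) = \sqrt{2/\pi}\,\sin(nx)$ with $\lambda_n = -n^2$ (the sum starts at $n = 1$, since there is no $n = 0$ mode for these boundary conditions), and the truncated series can be compared against the known closed form $G(x,s) = x(s-\pi)/\pi$ for $x < s$:

```python
import math

# Eigenfunction expansion of G for the assumed operator L = d^2/dx^2 on
# [0, pi] with u(0) = u(pi) = 0.  Eigenfunctions Psi_n(x) = sqrt(2/pi) sin(nx),
# eigenvalues lambda_n = -n^2 (n = 1, 2, ...).

def G_series(x, s, terms=20000):
    """Truncated sum  G(x, s) = sum_n Psi_n(x) Psi_n(s) / lambda_n."""
    return sum((2.0 / math.pi) * math.sin(n * x) * math.sin(n * s) / (-n * n)
               for n in range(1, terms + 1))

def G_exact(x, s):
    """Closed form: G(x, s) = x (s - pi) / pi for x < s, symmetric in (x, s)."""
    x, s = min(x, s), max(x, s)
    return x * (s - math.pi) / math.pi

print(G_series(1.0, 2.0), G_exact(1.0, 2.0))   # the two agree closely
```

The series converges like $1/n$ in the worst case, so many terms are needed for tight agreement; this slow convergence is typical of delta-function expansions.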

## Green's function for the Laplacian

Green's functions for linear differential operators involving the Laplacian may readily be put to use by means of the second of Green's identities.

To derive Green's theorem, begin with the divergence theorem (also known as Gauss's theorem):

$\int_V \nabla \cdot \hat{A}\,dV = \int_S \hat{A} \cdot d\hat{\sigma}.$

Let $\hat{A} = \phi\,\nabla\psi - \psi\,\nabla\phi$ and substitute into the divergence theorem. Compute $\nabla\cdot\hat{A}$ and apply the product rule for the $\nabla$ operator:

$\nabla\cdot\hat{A} = \nabla\cdot(\phi\,\nabla\psi - \psi\,\nabla\phi) = (\nabla\phi)\cdot(\nabla\psi) + \phi\,\nabla^2\psi - (\nabla\phi)\cdot(\nabla\psi) - \psi\,\nabla^2\phi = \phi\,\nabla^2\psi - \psi\,\nabla^2\phi.$

Plugging this into the divergence theorem, we arrive at Green's theorem:

$\int_V \left(\phi\,\nabla^2\psi - \psi\,\nabla^2\phi\right) dV = \int_S \left(\phi\,\nabla\psi - \psi\,\nabla\phi\right)\cdot d\hat{\sigma}.$

Suppose that our linear differential operator L is the Laplacian, $nabla^2$, and that we have a Green's function G for the Laplacian. The defining property of the Green's function still holds:

$L\,G(x,x') = \nabla^2 G(x,x') = \delta(x - x').$

Let $psi = G$ in Green's theorem. We get:

$\int_V \left[\phi(x')\,\delta(x - x') - G(x,x')\,\nabla'^2\phi(x')\right] d^3x' = \int_S \left[\phi(x')\,\nabla' G(x,x') - G(x,x')\,\nabla'\phi(x')\right]\cdot d\hat{\sigma}'.$

Using this expression, we can solve Laplace's equation $\nabla^2\phi(x)=0$ or Poisson's equation $\nabla^2\phi(x)=-\rho(x)$, subject to either Neumann or Dirichlet boundary conditions. In other words, we can solve for $\phi(x)$ everywhere inside a volume where either (1) the value of $\phi(x)$ is specified on the bounding surface of the volume (Dirichlet boundary conditions), or (2) the normal derivative of $\phi(x)$ is specified on the bounding surface (Neumann boundary conditions).

Suppose we're interested in solving for $\phi(x)$ inside the region. Then the integral

$\int_V \phi(x')\,\delta(x-x')\,d^3x'$
reduces to simply $\phi(x)$ due to the defining property of the Dirac delta function, and we have:

$\phi(x) = \int_V G(x,x')\,\rho(x')\,d^3x' + \int_S \left[\phi(x')\,\nabla' G(x,x') - G(x,x')\,\nabla'\phi(x')\right]\cdot d\hat{\sigma}'.$

This form expresses the well-known property of harmonic functions that if the value or normal derivative is known on a bounding surface, then the value of the function inside the volume is known everywhere.

In electrostatics, we interpret $\phi(x)$ as the electric potential, $\rho(x)$ as the electric charge density, and the normal derivative $\nabla\phi(x')\cdot d\hat{\sigma}'$ as the normal component of the electric field.

If we're interested in solving a Dirichlet boundary value problem, we choose our Green's function such that $G\left(x,x\text{'}\right)$ vanishes when either x or x' is on the bounding surface; conversely, if we're interested in solving a Neumann boundary value problem, we choose our Green's function such that its normal derivative vanishes on the bounding surface. Thus we are left with only one of the two terms in the surface integral.

With no boundary conditions, the Green's function for the Laplacian (the Green's function for the three-variable Laplace equation) is:

$G(\hat{x}, \hat{x}') = \frac{1}{|\hat{x} - \hat{x}'|}.$

Supposing that our bounding surface goes out to infinity, and plugging in this expression for the Green's function, we arrive at the familiar expression for the electric potential in terms of the electric charge density (in the CGS unit system) as

$\phi(\hat{x}) = \int_V \frac{\rho(\hat{x}')}{|\hat{x} - \hat{x}'|}\, d^3x'.$
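As a sanity check (a sketch with an assumed test charge distribution, not part of the text), this integral can be evaluated numerically for a uniformly charged unit ball; at a point outside the ball the potential should reduce to the point-charge value $Q/d$:

```python
import math

# Numerical check of phi(x) = integral of rho(x') / |x - x'| d^3x' for a
# uniformly charged unit ball (rho0 = 1, an assumed test case).  Outside the
# ball the exact potential is Q / d with Q = (4/3) * pi * rho0.
n = 40                       # grid cells per axis over [-1, 1]^3
h = 2.0 / n                  # cell size
d = 2.0                      # field point at (0, 0, d), outside the ball

phi = 0.0
for i in range(n):
    x = -1.0 + (i + 0.5) * h
    for j in range(n):
        y = -1.0 + (j + 0.5) * h
        for k in range(n):
            z = -1.0 + (k + 0.5) * h
            if x * x + y * y + z * z <= 1.0:      # cell center inside the ball
                r = math.sqrt(x * x + y * y + (z - d) ** 2)
                phi += h ** 3 / r                 # rho0 = 1

exact = (4.0 / 3.0) * math.pi / d
print(phi, exact)            # midpoint sum agrees to within a few percent
```

The agreement improves as the grid is refined; the dominant error comes from the staircase approximation of the ball's surface.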

## Example

Given the problem

$Lu = u'' + u = f(x)$
$u(0) = 0, \qquad u\!\left(\frac{\pi}{2}\right) = 0.$

Find the Green's function.

First step: From condition 2 ($L\,g = 0$ for $x \ne s$) we see that

$g(x,s) = c_1(s)\cos x + c_2(s)\sin x.$

For $x < s$, condition 3 gives

$g(0,s) = c_1(s)\cdot 1 + c_2(s)\cdot 0 = 0, \quad\text{so}\quad c_1(s) = 0.$

The condition $g\!\left(\frac{\pi}{2},s\right) = 0$ is not applied here because $x \ne \frac{\pi}{2}$ when $x < s$ and $s \ne \frac{\pi}{2}$.

For $x > s$, condition 3 gives

$g\!\left(\frac{\pi}{2},s\right) = c_1(s)\cdot 0 + c_2(s)\cdot 1 = 0, \quad\text{so}\quad c_2(s) = 0.$

The condition $g(0,s) = 0$ is not applied here for similar reasons.

Summarizing the results:

$g(x,s)=\begin{cases} a(s)\,\sin x, & x < s \\ b(s)\,\cos x, & s < x \end{cases}$

Second step: now we determine $a(s)$ and $b(s)$.

Using condition 1 (continuity at $x = s$) we get

$a(s)\sin s = b(s)\cos s.$

Using condition 4 (here $p(s) = 1$) we get

$b(s)\cdot(-\sin s) - a(s)\cdot\cos s = \frac{1}{1} = 1.$

Solving for $a(s)$ and $b(s)$, by Cramer's rule or by inspection, we obtain

$a(s) = -\cos s, \qquad b(s) = -\sin s.$

One can check that this automatically satisfies condition 5 (symmetry).

So our Green's function for this problem is:

$g(x,s)=\begin{cases} -\cos s \,\sin x, & x < s, \\ -\sin s \,\cos x, & s < x. \end{cases}$
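The result can be checked by applying this Green's function to a concrete source, say $f(x) = x$ (an assumed test case). The exact solution of $u'' + u = x$ with $u(0) = u(\pi/2) = 0$ is $u(x) = x - \frac{\pi}{2}\sin x$, and a simple quadrature reproduces it:

```python
import math

# Check of the Green's function derived above, applied to the assumed test
# source f(x) = x.  Exact solution of u'' + u = x, u(0) = u(pi/2) = 0:
#   u(x) = x - (pi/2) * sin(x).

def g(x, s):
    """Piecewise Green's function from the example."""
    if x < s:
        return -math.cos(s) * math.sin(x)
    return -math.sin(s) * math.cos(x)

def u_green(x, n=4000):
    """Midpoint rule for u(x) = integral over [0, pi/2] of f(s) g(x, s) ds."""
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += s * g(x, s)          # f(s) = s
    return total * h

for x in (0.3, 0.8, 1.2):
    print(u_green(x), x - (math.pi / 2) * math.sin(x))
```

The quadrature and the closed-form solution agree to well within the discretization error of the midpoint rule.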

## Further examples

For the Laplacian on the quarter-plane $x > 0$, $y > 0$, with a Dirichlet condition on the boundary $x = 0$ and a Neumann condition on the boundary $y = 0$, the method of images gives

$G(x, y, x_0, y_0)=\frac{1}{2\pi}\left[\ln\sqrt{(x-x_0)^2+(y-y_0)^2}-\ln\sqrt{(x+x_0)^2+(y-y_0)^2}\right]$
$\qquad+\frac{1}{2\pi}\left[\ln\sqrt{(x-x_0)^2+(y+y_0)^2}-\ln\sqrt{(x+x_0)^2+(y+y_0)^2}\right].$
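This two-dimensional Green's function, built from logarithmic image terms, vanishes on the line $x = 0$ (a Dirichlet boundary) and has vanishing normal derivative on the line $y = 0$ (a Neumann boundary). A quick numerical check, with arbitrary test points:

```python
import math

def G(x, y, x0, y0):
    """Quarter-plane Green's function from the text (image-charge form)."""
    def lnr(a, b):
        return 0.5 * math.log(a * a + b * b)   # ln sqrt(a^2 + b^2)
    return (lnr(x - x0, y - y0) - lnr(x + x0, y - y0)
            + lnr(x - x0, y + y0) - lnr(x + x0, y + y0)) / (2.0 * math.pi)

x0, y0 = 0.7, 0.5            # arbitrary source location in the quarter-plane

# Dirichlet boundary x = 0: G vanishes identically.
print(G(0.0, 1.3, x0, y0))

# Neumann boundary y = 0: the normal derivative dG/dy vanishes
# (estimated here by a central difference).
eps = 1e-6
print((G(0.9, eps, x0, y0) - G(0.9, -eps, x0, y0)) / (2.0 * eps))
```

Both printed values are zero up to rounding: on $x = 0$ the image terms cancel pairwise, and $G$ is an even function of $y$, which forces the normal derivative on $y = 0$ to vanish.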

## References

• Eyges, Leonard, The Classical Electromagnetic Field, Dover Publications, New York, 1972. ISBN 0-486-63947-9. (Chapter 5 contains a very readable account of using Green's functions to solve boundary value problems in electrostatics.)
• A. D. Polyanin and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003. ISBN 1-58488-297-2
• A. D. Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, Boca Raton, 2002. ISBN 1-58488-299-9