# Feynman diagram

In quantum field theory a Feynman diagram is an intuitive graphical representation of the transition amplitude or other physical quantity of a quantum system.

Within the canonical formulation of quantum field theory a Feynman diagram represents a term in the Wick expansion of the perturbative S-matrix. The transition amplitude is the matrix element of the S-matrix between the initial and the final states of the quantum system.

Alternatively the path integral formulation of quantum field theory represents the transition amplitude as a weighted sum of all possible paths of the system from the initial to the final state. A Feynman diagram is then identified with a particular path of the system contributing to the transition amplitude.

Feynman diagrams are named after Richard Feynman.

## Motivation and history

When calculating scattering cross sections in particle physics, the interaction between particles can be described by starting from a free field which describes the incoming and outgoing particles, and including an interaction Hamiltonian to describe how the particles deflect one another. The amplitude for scattering is the sum of each possible interaction history over all possible intermediate particle states. The number of times the interaction Hamiltonian acts is the order of the perturbation expansion, and the time-dependent perturbation theory for fields is known as the Dyson series. When the intermediate states at intermediate times are energy eigenstates, collections of particles with a definite momentum, the series is called old-fashioned perturbation theory.

The Dyson series can alternatively be rewritten as a sum over Feynman diagrams, where at each interaction vertex both energy and momentum are conserved, but where the length of the energy-momentum four-vector is not equal to the mass. The Feynman diagrams are much easier to keep track of than old-fashioned terms, because the old-fashioned way treats the particle and antiparticle contributions as separate. Each Feynman diagram is the sum of exponentially many old-fashioned terms, because each internal line can separately represent either a particle or an antiparticle. In a non-relativistic theory, there are no antiparticles and there is no doubling, so each Feynman diagram includes only one term.

Feynman gave a prescription for calculating the amplitude for any given diagram from a field theory Lagrangian, the Feynman rules. Each internal line corresponds to a factor of the corresponding virtual particle's propagator; each vertex where lines meet gives a factor derived from an interaction term in the Lagrangian, and incoming and outgoing lines carry an energy, momentum, and spin.

In addition to their value as a mathematical tool, Feynman diagrams provide deep physical insight into the nature of particle interactions. Particles interact in every way available; in fact, intermediate virtual particles are allowed to propagate faster than light. The probability of each final state is then obtained by summing over all such possibilities. This is closely tied to the functional integral formulation of quantum mechanics, also invented by Feynman; see path integral formulation.

The naïve application of such calculations often produces diagrams whose amplitudes are infinite, because the short-distance particle interactions require a careful limiting procedure to include particle self-interactions. The technique of renormalization, pioneered by Feynman, Schwinger, and Tomonaga, compensates for this effect and eliminates the troublesome infinite terms. After such renormalization, calculations using Feynman diagrams often match experimental results with very good accuracy.

Feynman diagram and path integral methods are also used in statistical mechanics.

### Alternative names

Murray Gell-Mann always referred to Feynman diagrams as Stückelberg diagrams, after the Swiss physicist Ernst Stückelberg, who devised a similar notation many years earlier. Stückelberg was motivated by the need for a manifestly covariant formalism for quantum field theory, but did not provide as automated a way to handle symmetry factors and loops, although he was the first to find the correct physical interpretation in terms of forward-and-backward-in-time particle paths, all without the path integral. Historically, the diagrams were sometimes called Feynman-Dyson diagrams or Dyson graphs, because when they were introduced the path integral was unfamiliar, and Freeman Dyson's derivation from old-fashioned perturbation theory was easier for physicists trained in earlier methods to follow.

## Description

A Feynman diagram represents a perturbative contribution to the amplitude of a quantum transition from some initial quantum state to some final quantum state.

For example, in the process of electron-positron annihilation the initial state is one electron and one positron, and the final state is two photons.

The initial state is often assumed to be at the right of the diagram and the final state at the left (although other conventions are also used quite often).

A Feynman diagram consists of points, called vertexes, and lines attached to the vertexes.

The particles in the initial state are depicted by lines sticking out in the direction of the initial state (e.g. to the right); the particles in the final state are represented by lines sticking out in the direction of the final state (e.g. to the left).

In QED there are two types of particles: electrons/positrons (called fermions) and photons (called gauge bosons). They are represented in Feynman diagrams as follows:

1. An electron in the initial state is represented by a solid line with an arrow pointing toward the vertex (•←).
2. An electron in the final state is represented by a solid line with an arrow pointing away from the vertex (←•).
3. A positron in the initial state is represented by a solid line with an arrow pointing away from the vertex (•→).
4. A positron in the final state is represented by a solid line with an arrow pointing toward the vertex (→•).
5. A photon in the initial and the final state is represented by a wavy line (•~ and ~•).

In a gauge theory (of which QED is a fine example) a vertex always has three lines attached to it: one bosonic line, one fermionic line with arrow toward the vertex, and one fermionic line with arrow away from the vertex.

The vertexes might be connected by a bosonic or fermionic propagator. A bosonic propagator is represented by a wavy line connecting two vertexes (•~•). A fermionic propagator is represented by a solid line (with an arrow in one direction or the other) connecting two vertexes (•←•).

The number of vertexes gives the order of the term in the perturbation series expansion of the transition amplitude.

$e^+e^-\to 2\gamma$

```
γ        e⁻
  ~•←
   ↓
  ~•→
γ        e⁺
```

For example, this second order Feynman diagram contributes to a process (called electron-positron annihilation) where in the initial state (at the right) there is one electron (•←) and one positron (•→) and in the final state (at the left) there are two photons (~•).
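The vertex rule stated above (in a gauge theory every vertex joins one bosonic line, one fermion line with arrow in, and one with arrow out) can be checked mechanically for this example diagram. The following sketch is illustrative and not from the source; the vertex names and the encoding of lines as `(vertex, line_type)` pairs are assumptions made for the example.

```python
# Illustrative sketch (not from the source): check the gauge-theory vertex
# rule -- every vertex has exactly one bosonic line, one fermion line with
# arrow toward it, and one fermion line with arrow away from it.
from collections import Counter

# Hypothetical encoding of the e+ e- -> 2 gamma diagram above: the electron
# enters v1, the positron enters v2, an internal fermion line runs from v1
# to v2, and a photon leaves each vertex.
incidences = [
    ("v1", "fermion_in"),   # incoming electron, arrow toward v1
    ("v1", "boson"),        # photon attached at v1
    ("v1", "fermion_out"),  # internal fermion line, arrow away from v1
    ("v2", "fermion_in"),   # internal fermion line, arrow toward v2
    ("v2", "boson"),        # photon attached at v2
    ("v2", "fermion_out"),  # incoming positron, arrow away from v2
]

def valid_qed_vertex(lines):
    """A QED vertex must carry exactly one line of each kind."""
    return Counter(lines) == Counter(
        {"boson": 1, "fermion_in": 1, "fermion_out": 1})

by_vertex = {}
for v, kind in incidences:
    by_vertex.setdefault(v, []).append(kind)

print(all(valid_qed_vertex(kinds) for kinds in by_vertex.values()))  # prints True
```

Any diagram that violates the rule (for example, a vertex with two photons) would fail this check, mirroring the fact that such a vertex does not occur in QED.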

## Canonical quantization formulation

### Perturbative S-matrix

The probability amplitude for a transition of a quantum system from the initial state $|i\rangle$ to the final state $|f\rangle$ is given by the matrix element

$S_{fi}=\langle f|S|i\rangle\,,$

where $S$ is the S-matrix.

In canonical quantum field theory the S-matrix is represented within the interaction picture by a perturbation series in powers of the interaction Lagrangian,

$S=\sum_{n=0}^{\infty}\frac{i^n}{n!}\int\prod_{j=1}^n d^4x_j\; T\prod_{j=1}^n L_v(x_j)\equiv\sum_{n=0}^{\infty}S^{(n)}\,,$

where $L_v$ is the interaction Lagrangian and $T$ signifies the time-product of operators.

A Feynman diagram is a graphical representation of a term in the Wick expansion of the time product in the $n$-th order term $S^{(n)}$ of the S-matrix,

$T\prod_{j=1}^n L_v(x_j)=\sum_{\mathrm{all\ possible\ contractions}}(\pm)\, N\prod_{j=1}^n L_v(x_j)\,,$

where $N$ signifies the normal-product of the operators and $(\pm)$ takes care of the possible sign change when commuting the fermionic operators to bring them together for a contraction (a propagator).

### Feynman rules

The diagrams are drawn according to the Feynman rules, which depend upon the interaction Lagrangian. For the QED interaction Lagrangian, $L_v=-g\bar\psi\gamma^\mu\psi A_\mu$, describing the interaction of a fermionic field $\psi$ with a bosonic gauge field $A_\mu$, the Feynman rules can be formulated in coordinate space as follows:

1. Each integration coordinate $x_j$ is represented by a point (sometimes called a vertex);
2. A bosonic propagator is represented by a wavy line connecting two points;
3. A fermionic propagator is represented by a solid line connecting two points;
4. A bosonic field $A_\mu(x_i)$ is represented by a wavy line attached to the point $x_i$;
5. A fermionic field $\psi(x_i)$ is represented by a solid line attached to the point $x_i$ with an arrow toward the point;
6. A fermionic field $\bar\psi(x_i)$ is represented by a solid line attached to the point $x_i$ with an arrow away from the point.

### Example: second order processes in QED

The second order perturbation term in the S-matrix is

$S^{(2)}=\frac{(ie)^2}{2!}\int d^4x\, d^4x'\; T\,\bar\psi(x)\gamma^\mu\psi(x)A_\mu(x)\;\bar\psi(x')\gamma^\nu\psi(x')A_\nu(x')\,.$

#### Scattering of fermions

The Wick expansion of the integrand gives (among others) the following term:

$N\,\bar\psi(x)\gamma^\mu\psi(x)\,\bar\psi(x')\gamma^\nu\psi(x')\,\underline{A_\mu(x)A_\nu(x')}\,,$

where

$\underline{A_\mu(x)A_\nu(x')}=\int\frac{d^4k}{(2\pi)^4}\frac{ig_{\mu\nu}}{k^2+i0}\,e^{-ik(x-x')}$

is the electromagnetic contraction (propagator) in the Feynman gauge. This term is represented by a Feynman diagram in which two vertexes are connected by a photon propagator, with two external fermion lines at each vertex. It gives contributions to the following processes:

1. $e^-e^-$ scattering (initial state at the right, final state at the left of the diagram);
2. $e^+e^+$ scattering (initial state at the left, final state at the right of the diagram);
3. $e^-e^+$ scattering (initial state at the bottom/top, final state at the top/bottom of the diagram).

#### Compton scattering and annihilation/generation of $e^-e^+$ pairs

Another interesting term in the expansion is

$N\,\bar\psi(x)\gamma^\mu\underline{\psi(x)\bar\psi(x')}\gamma^\nu\psi(x')\,A_\mu(x)A_\nu(x')\,,$

where

$\underline{\psi(x)\bar\psi(x')}=\int\frac{d^4p}{(2\pi)^4}\frac{i}{\gamma p-m+i0}\,e^{-ip(x-x')}$

is the fermionic contraction (propagator).
## Path integral formulation

In a path integral, the field Lagrangian, integrated over all possible field histories, defines the probability amplitude to go from one field configuration to another. In order to make sense, the field theory should have a good ground state, and the integral should be performed a little bit rotated into imaginary time.
### Scalar field Lagrangian

A simple example is the free relativistic scalar field in $d$ dimensions, whose action integral is:

$S = \int \frac{1}{2}\partial_\mu\phi\,\partial^\mu\phi\; d^dx\,.$

The probability amplitude for a process is:

$\int_A^B e^{iS}\, D\phi\,,$

where $A$ and $B$ are space-like hypersurfaces which define the boundary conditions. The collection of all the $\phi(A)$ on the starting hypersurface gives the initial value of the field, analogous to the starting position for a point particle, and the field values $\phi(B)$ at each point of the final hypersurface define the final field value, which is allowed to vary, giving a different amplitude to end up at different values. This is the field-to-field transition amplitude.

The path integral gives the expectation value of operators between the initial and final state:

$\int_A^B e^{iS}\,\phi(x_1)\cdots\phi(x_n)\, D\phi = \langle A|\,\phi(x_1)\cdots\phi(x_n)\,|B\rangle\,,$

and in the limit that $A$ and $B$ recede to the infinite past and the infinite future, the only contribution that matters is from the ground state (this is only rigorously true if the path integral is defined slightly rotated into imaginary time). The path integral should be thought of as analogous to a probability distribution, and it is convenient to define it so that multiplying by a constant doesn't change anything:

$\frac{\int e^{iS}\,\phi(x_1)\cdots\phi(x_n)\, D\phi}{\int e^{iS}\, D\phi} = \langle 0|\,\phi(x_1)\cdots\phi(x_n)\,|0\rangle\,.$

The normalization factor on the bottom is called the partition function for the field, and it coincides with the statistical mechanical partition function at zero temperature when rotated into imaginary time.

The initial-to-final amplitudes are ill-defined if one thinks of the continuum limit right from the beginning, because the fluctuations in the field can become unbounded.
So the path integral should be thought of as on a discrete square lattice with lattice spacing $a$, and the limit $a\to 0$ should be taken carefully. If the final results do not depend on the shape of the lattice or the value of $a$, then the continuum limit exists.

On a lattice, the field can be expanded in Fourier modes:

$\phi(x) = \int \frac{d^dk}{(2\pi)^d}\, \phi(k)\, e^{ik\cdot x} = \int_k \phi(k)\, e^{ikx}\,,$
where the integration domain is over $k$ restricted to a cube of side length $2\pi/a$, so that large values of $k$ are not allowed. It is important to note that the $k$-measure contains the factors of $2\pi$ from Fourier transforms; this is the standard convention for $k$-integrals in QFT. The lattice means that fluctuations at large $k$ are not allowed to contribute right away; they only start to contribute in the limit $a\to 0$. Sometimes, instead of a lattice, the field modes are just cut off at high values of $k$.

It is also convenient from time to time to consider the space-time volume to be finite, so that the $k$ modes form a lattice as well. This is not strictly as necessary as the space-lattice limit, because interactions in $k$ are not localized, but it is convenient for keeping track of the factors in front of the $k$-integrals and the momentum-conserving delta functions which will arise.

On a lattice, the action needs to be discretized:
$S = \sum_{\langle x,y\rangle} \frac{1}{2}\big(\phi(x)-\phi(y)\big)^2\,,$

where $\langle x,y\rangle$ means that $x$ and $y$ are nearest lattice neighbors. The discretization should be thought of as defining what the derivative $\partial_\mu\phi$ means.

In terms of the lattice Fourier modes, the action can be written:

$S = \int_k \Big((1-\cos k_1)+(1-\cos k_2)+\cdots+(1-\cos k_d)\Big)\,\phi^*_k\,\phi_k\,,$

which for $k$ near zero is:

$S = \int_k \frac{1}{2}\, k^2\, |\phi(k)|^2\,.$
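The approach of the lattice kinetic term to its continuum form can be checked numerically. The following sketch (illustrative, not from the source) compares the per-direction lattice factor $1-\cos k$ with the continuum factor $k^2/2$ in one dimension:

```python
# Illustrative check (not from the source): the lattice kinetic factor
# (1 - cos k) approaches the continuum factor k^2/2 as k -> 0; the Taylor
# expansion 1 - cos k = k^2/2 - k^4/24 + ... gives a relative error k^2/12.
import math

for k in (1.0, 0.1, 0.01):
    lattice = 1.0 - math.cos(k)
    continuum = 0.5 * k * k
    rel_err = abs(lattice - continuum) / continuum
    print(f"k={k}: lattice={lattice:.8f}, continuum={continuum:.8f}, "
          f"rel err={rel_err:.2e}")

# At k = 0.01 the relative error is of order 1e-5.
assert abs((1.0 - math.cos(0.01)) / (0.5 * 0.01**2) - 1.0) < 1e-4
```

This is the quantitative content of the statement that the two forms of the action coincide in the continuum limit $a\to 0$.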
This is the continuum Fourier transform of the original action. In finite volume, the quantity $d^dk$ is not infinitesimal, but becomes the volume of the box made by neighboring Fourier modes, $(2\pi)^d/V$.

The field $\phi$ is real valued, so the Fourier transform obeys:

$\phi(k)^* = \phi(-k)\,.$

In terms of real and imaginary parts, the real part of $\phi(k)$ is an even function of $k$, while the imaginary part is odd. The Fourier transform avoids double-counting, so that the action can be written:

$S = \int_k \frac{1}{2}\, k^2\, \phi(k)\,\phi(-k)$

over an integration domain which integrates over each pair $(k,-k)$ exactly once. For a complex scalar field with action

$S = \int \frac{1}{2}\partial_\mu\phi^*\,\partial^\mu\phi\; d^dx\,,$

the Fourier transform is unconstrained:

$S = \int_k \frac{1}{2}\, k^2\, |\phi(k)|^2\,,$

and the integral is over all $k$.

Integrating over all different values of $\phi(x)$ is equivalent to integrating over all Fourier modes, because taking a Fourier transform is a unitary linear transformation of field coordinates. When you change coordinates in a multidimensional integral by a linear transformation, the value of the new integral is given by the determinant of the transformation matrix. If
$y_i = A_{ij} x_j\,,$

then

$\det(A)\int dx_1\, dx_2\cdots dx_n = \int dy_1\, dy_2\cdots dy_n\,.$
If $A$ is a rotation, then

$A^T A = I\,,$
so that $\det A = \pm 1$, and the sign depends on whether the rotation includes a reflection or not.

The matrix which changes coordinates from $\phi(x)$ to $\phi(k)$ can be read off from the definition of a Fourier transform:

$A_{kx} = e^{ikx}\,,$

and the Fourier inversion theorem gives the inverse:

$A^{-1}_{kx} = e^{-ikx}\,,$

which is the complex conjugate-transpose, up to factors of $2\pi$. On a finite volume lattice the determinant is nonzero and independent of the field values,
$\det A = 1\,,$

and the path integral is a separate factor at each value of $k$:

$\int \exp\Big(\frac{i}{2}\sum_k k^2\,\phi^*(k)\,\phi(k)\Big)\, D\phi = \prod_k \int_{\phi_k} e^{\frac{i}{2} k^2 |\phi_k|^2\, d^dk}\,,$

and each separate factor is an oscillatory Gaussian. In imaginary time, the Euclidean action becomes positive definite, and can be interpreted as a probability distribution. The probability of a field having values $\phi_k$ is
$e^{\int_k -\frac{1}{2} k^2\,\phi^*_k \phi_k} = \prod_k e^{-k^2 |\phi_k|^2\, d^dk}\,.$

The expectation value of the field is the statistical expectation value of the field when chosen according to the probability distribution:
$\langle \phi(x_1)\cdots\phi(x_n)\rangle = \frac{\int e^{-S}\,\phi(x_1)\cdots\phi(x_n)\, D\phi}{\int e^{-S}\, D\phi}\,.$
Since the probability distribution of $\phi_k$ is a product, the value of $\phi(k)$ at each separate value of $k$ is independently Gaussian distributed. The variance of the Gaussian is $1/(k^2\, d^dk)$, which is formally infinite; this just means that the fluctuations are unbounded in infinite volume. In any finite volume, the integral is replaced by a discrete sum, and the variance of each mode is $V/k^2$.
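The claim above that the Fourier transform is a unitary change of field coordinates can be checked directly on a small lattice. This sketch is illustrative and not from the source; the $1/\sqrt{N}$ normalization of the discrete Fourier matrix is an assumption chosen to make unitarity exact.

```python
# Illustrative check (not from the source): the normalized discrete Fourier
# matrix A_{kx} = e^{2*pi*i*k*x/N} / sqrt(N) on an N-site lattice satisfies
# A A^dagger = I, so the change of variables phi(x) -> phi(k) is unitary.
import cmath

N = 4
A = [[cmath.exp(2j * cmath.pi * k * x / N) / N**0.5 for x in range(N)]
     for k in range(N)]

# Multiply A by its conjugate transpose.
product = [[sum(A[i][m] * A[j][m].conjugate() for m in range(N))
            for j in range(N)] for i in range(N)]

identity = all(abs(product[i][j] - (1 if i == j else 0)) < 1e-12
               for i in range(N) for j in range(N))
print(identity)  # prints True
```

In particular $|\det A| = 1$, which is why the Jacobian of the change of variables drops out of normalized expectation values.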
### Monte Carlo

The path integral defines a probabilistic algorithm to generate a Euclidean scalar field configuration. Randomly pick the real and imaginary parts of each Fourier mode at wavenumber $k$ to be a Gaussian random variable with variance $1/k^2$. This generates a configuration $\phi_C(k)$ at random, and the Fourier transform gives $\phi_C(x)$. For real scalar fields, the algorithm must generate only one of each pair $\phi(k),\phi(-k)$ and make the second the complex conjugate of the first.

To find any correlation function, generate a field again and again by this procedure, and find the statistical average:

$\langle\phi(x_1)\cdots\phi(x_n)\rangle = \lim_{|C|\to\infty} \frac{\sum_C \phi_C(x_1)\cdots\phi_C(x_n)}{|C|}\,,$

where $|C|$ is the number of configurations and the sum is of the product of the field values on each configuration. The Euclidean correlation function is just the same as the correlation function in statistics or statistical mechanics. The quantum mechanical correlation functions are an analytic continuation of the Euclidean correlation functions.

For free fields with a quadratic action, the probability distribution is a high-dimensional Gaussian, and the statistical average is given by an explicit formula. But the Monte Carlo method also works well for bosonic interacting field theories where there is no closed form for the correlation functions.
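The sampling procedure can be made concrete for a single mode. This sketch is illustrative and not from the source; the chosen wavenumber, seed, and sample count are arbitrary.

```python
# Illustrative Monte Carlo sketch (not from the source): sample a single
# Euclidean field mode phi(k) as a Gaussian with variance 1/k^2, and check
# that the sample average of phi(k)^2 reproduces the propagator 1/k^2.
import random

random.seed(0)
k = 2.0
sigma = 1.0 / k          # standard deviation, so the variance is 1/k^2
n_samples = 100_000

samples = [random.gauss(0.0, sigma) for _ in range(n_samples)]
estimate = sum(x * x for x in samples) / n_samples

print(f"estimated <phi^2> = {estimate:.4f}, exact 1/k^2 = {1.0 / k**2:.4f}")
assert abs(estimate - 1.0 / k**2) < 0.01
```

For a full field configuration the same draw is repeated independently at every allowed $k$, with the reality constraint $\phi(-k)=\phi(k)^*$ imposed as described above.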
### Scalar propagator

Each mode is independently Gaussian distributed. The expectation of field modes is easy to calculate:

$\langle\phi(k)\phi(k')\rangle = 0$

for $k\ne k'$, since then the two Gaussian random variables are independent and both have zero mean. In finite volume $V$, when the two $k$-values coincide,

$\langle\phi(k)\phi(k)\rangle = \frac{V}{k^2}\,,$

since this is the variance of the Gaussian. In the infinite volume limit,

$\langle\phi(k)\phi(k')\rangle = \frac{\delta(k-k')}{k^2}\,.$

Strictly speaking, this is an approximation: the lattice propagator is

$\langle\phi(k)\phi(k')\rangle = \frac{\delta(k-k')}{2\big((1-\cos k_1)+\cdots+(1-\cos k_d)\big)}\,.$

But near $k=0$, for field fluctuations long compared to the lattice spacing, the two forms coincide.

It is important to emphasize that the delta functions contain factors of $2\pi$, so that they cancel out the $2\pi$ factors in the measure for $k$-integrals:

$\delta(k) = (2\pi)^d\, \delta_D(k_1)\,\delta_D(k_2)\cdots\delta_D(k_d)\,,$

where $\delta_D(k)$ is the ordinary one-dimensional Dirac delta function. This convention for delta functions is not universal: some authors keep the factors of $2\pi$ in the delta functions (and in the $k$-integration) explicit.
### Equation of motion

The form of the propagator can be found more easily by using the equation of motion for the field. From the Lagrangian, the equation of motion is

$\partial_\mu\partial^\mu\phi = 0\,,$

and in an expectation value this says:
$\partial_\mu\partial^\mu\,\langle\phi(x)\phi(y)\rangle = 0\,,$
where the derivatives act on $x$. The identity holds everywhere except where $x$ and $y$ coincide, where the operator order matters. The form of the singularity can be understood from the canonical commutation relations to be a delta function. Defining the (Euclidean) Feynman propagator $\Delta$ as the Fourier transform of the time-ordered two-point function (the one that comes from the path integral),

$\partial^2 \Delta(x) = i\delta(x)\,,$

so that

$\Delta(k) = \frac{i}{k^2}\,.$

If the equations of motion are linear, the propagator will always be the reciprocal of the quadratic-form matrix which defines the free Lagrangian, since this gives the equations of motion. This is also easy to see directly from the path integral. The factor of $i$ disappears in the Euclidean theory.
### Wick's theorem

Because each field mode is an independent Gaussian, the expectation values for the product of many field modes obey Wick's theorem:

$\langle\phi(k_1)\phi(k_2)\cdots\phi(k_n)\rangle$

is zero unless the field modes coincide in pairs. This means that it is zero for an odd number of $\phi$'s, while for an even number of $\phi$'s it is equal to a sum of contributions, one from each pairing, with a delta function for each pair:

$\langle\phi(k_1)\cdots\phi(k_{2n})\rangle = \sum \prod_{i,j} \frac{\delta(k_i-k_j)}{k_i^2}\,,$

where the sum is over each partition of the field modes into pairs, and the product is over the pairs. For example,

$\langle\phi(k_1)\phi(k_2)\phi(k_3)\phi(k_4)\rangle = \frac{\delta(k_1-k_2)}{k_1^2}\frac{\delta(k_3-k_4)}{k_3^2} + \frac{\delta(k_1-k_3)}{k_3^2}\frac{\delta(k_2-k_4)}{k_2^2} + \frac{\delta(k_1-k_4)}{k_1^2}\frac{\delta(k_2-k_3)}{k_2^2}\,.$

An interpretation of Wick's theorem is that each field insertion can be thought of as a dangling line, and the expectation value is calculated by linking up the lines in pairs, putting in a delta-function factor that ensures that the momenta of the partners in each pair are equal, and including a factor of the propagator $1/k^2$ for each pair.
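The combinatorics of Wick's theorem, pairings of $2n$ insertions, can be enumerated directly. This sketch (illustrative, not from the source) lists the pair partitions and recovers the three terms of the four-point example:

```python
# Illustrative sketch (not from the source): enumerate the pair partitions
# appearing in Wick's theorem. For 2n field insertions there are
# (2n-1)!! = 1*3*5*...*(2n-1) pairings.
def pair_partitions(labels):
    """Yield every way to split an even-length list of labels into pairs."""
    if not labels:
        yield []
        return
    first, rest = labels[0], labels[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pair_partitions(remaining):
            yield [(first, partner)] + tail

four_point = list(pair_partitions(["k1", "k2", "k3", "k4"]))
print(len(four_point))  # prints 3, one per term in the four-point function
print(four_point[0])    # prints [('k1', 'k2'), ('k3', 'k4')]

# Six insertions give 5!! = 15 pairings, eight give 7!! = 105.
assert len(list(pair_partitions(list(range(6))))) == 15
assert len(list(pair_partitions(list(range(8))))) == 105
```

Each emitted partition corresponds to one product of delta functions and propagators in the expansion.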
### Higher Gaussian moments: completing Wick's theorem

There is a subtle point left before Wick's theorem is proved: what if more than two of the $\phi$'s have the same momentum? If it is an odd number, the integral is zero: negative values cancel with positive values. But if the number is even, the integral is positive. The previous demonstration assumed that the $\phi$'s would only match up in pairs.

But the theorem is correct even when arbitrarily many of the $\phi$'s are equal, and this is a notable property of Gaussian integration:
$I = \int e^{-ax^2/2}\, dx = \sqrt{\frac{2\pi}{a}}\,,$
$\frac{\partial^n I}{\partial a^n} = (-1)^n \int \frac{x^{2n}}{2^n}\, e^{-ax^2/2}\, dx = (-1)^n\, \frac{1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 2\cdot 2\cdots 2}\, \sqrt{2\pi}\; a^{-\frac{2n+1}{2}}\,.$

Dividing by $I$,

$\langle x^{2n}\rangle = \frac{\int x^{2n}\, e^{-ax^2/2}\, dx}{\int e^{-ax^2/2}\, dx} = 1\cdot 3\cdot 5\cdots(2n-1)\; \frac{1}{a^n}\,,$
so in particular

$\langle x^2\rangle = \frac{1}{a}\,.$

If Wick's theorem were correct, the higher moments would be given by all possible pairings of a list of $2n$ $x$'s:

$\langle x_1\, x_2\, x_3\cdots x_{2n}\rangle\,,$

where the $x$'s are all the same variable; the index is just there to keep track of the number of ways to pair them. The first $x$ can be paired with $2n-1$ others, leaving $2n-2$. The next unpaired $x$ can be paired with $2n-3$ different $x$'s, leaving $2n-4$, and so on. This means that Wick's theorem, uncorrected, says that the expectation value of $x^{2n}$ should be:

$\langle x^{2n}\rangle = (2n-1)\cdot(2n-3)\cdots 5\cdot 3\cdot 1\; \big(\langle x^2\rangle\big)^n\,,$

and this is in fact the correct answer. So Wick's theorem holds no matter how many of the momenta of the internal variables coincide.
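The double-factorial moment formula can be verified by direct numerical integration. This sketch is illustrative and not from the source; the value of $a$ and the integration grid are arbitrary choices.

```python
# Illustrative numerical check (not from the source) of the Gaussian moments
# <x^{2n}> = 1*3*...*(2n-1) / a^n, here for n = 1 and n = 2.
import math

a = 2.0
h = 0.001  # grid spacing; the integrand is negligible beyond |x| = 10
xs = [i * h for i in range(-10_000, 10_001)]

def moment(power):
    num = sum(x**power * math.exp(-a * x * x / 2.0) for x in xs) * h
    den = sum(math.exp(-a * x * x / 2.0) for x in xs) * h
    return num / den

print(f"<x^2> = {moment(2):.6f}  (exact 1/a   = {1.0 / a:.6f})")
print(f"<x^4> = {moment(4):.6f}  (exact 3/a^2 = {3.0 / a**2:.6f})")
assert abs(moment(2) - 1.0 / a) < 1e-6
assert abs(moment(4) - 3.0 / a**2) < 1e-6
```

The $n=2$ case, $\langle x^4\rangle = 3\langle x^2\rangle^2$, is exactly the three-pairing count of the four-point Wick expansion above.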
### Interaction
Interactions are represented by higher-order contributions, since quadratic contributions are always Gaussian. The simplest interaction is the quartic self-interaction, with action

$S = \int \partial^\mu\phi\,\partial_\mu\phi + \frac{\lambda}{4!}\phi^4\,.$

The reason for the combinatorial factor $4!$ will be clear soon. Writing the action in terms of the lattice (or continuum) Fourier modes:

$S = \int_k k^2 |\phi(k)|^2 + \frac{\lambda}{4!}\int_{k_1k_2k_3k_4} \phi(k_1)\phi(k_2)\phi(k_3)\phi(k_4)\,\delta(k_1+k_2+k_3+k_4) = S_F + X\,,$

where $S_F$ is the free action, whose correlation functions are given by Wick's theorem. The exponential of $S$ in the path integral can be expanded in powers of $\lambda$, giving a series of corrections to the free action:

$e^{-S} = e^{-S_F}\Big(1 + X + \frac{1}{2!}XX + \frac{1}{3!}XXX + \cdots\Big)\,.$

The path integral for the interacting action is then a power series of corrections to the free action. The term represented by $X$ should be thought of as four half-lines, one for each factor of $\phi(k)$. The half-lines meet at a vertex, which contributes a delta function that enforces that the momenta sum to zero.

To compute a correlation function in the interacting theory, there is now a contribution from the $X$ terms. For example, the path integral for the four-field correlator

$\langle\phi(k_1)\phi(k_2)\phi(k_3)\phi(k_4)\rangle = \frac{\int e^{-S}\,\phi(k_1)\phi(k_2)\phi(k_3)\phi(k_4)\, D\phi}{Z}\,,$

which in the free field was only nonzero when the momenta $k$ were equal in pairs, is now nonzero for all values of $k$. The momenta of the insertions $\phi(k_i)$ can now match up with the momenta of the $X$'s in the expansion.
The insertions should also be thought of as half-lines, four in this case, which carry a momentum $k$ that is not integrated over. The lowest-order contribution comes from the first nontrivial term $e^{-S_F}X$ in the Taylor expansion of the action. Wick's theorem requires that the momenta in the $X$ half-lines, the $\phi(k)$ factors in $X$, match up with the momenta of the external half-lines in pairs. The new contribution is equal to

$\lambda\, \frac{1}{k_1^2}\frac{1}{k_2^2}\frac{1}{k_3^2}\frac{1}{k_4^2}\,.$

The $4!$ inside $X$ is canceled because there are exactly $4!$ ways to match the half-lines in $X$ to the external half-lines. Each of these different ways of matching the half-lines together in pairs contributes exactly once, regardless of the values of the $k$'s, by Wick's theorem.
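The cancellation of the $4!$ can be made explicit by brute-force counting. In this sketch (illustrative, not from the source; the half-line labels are hypothetical) the four external half-lines are matched to the four half-lines of a single $X$ in every possible way:

```python
# Illustrative counting sketch (not from the source): there are exactly
# 4! = 24 ways to pair four distinct external half-lines with the four
# half-lines of a single X vertex, cancelling the 1/4! inside X.
import math
from itertools import permutations

external = ["k1", "k2", "k3", "k4"]
vertex_half_lines = ["x1", "x2", "x3", "x4"]

matchings = [list(zip(external, perm))
             for perm in permutations(vertex_half_lines)]

print(len(matchings))  # prints 24 = 4!
assert len(matchings) == math.factorial(4)
```

Since each of the 24 matchings contributes the same product of propagators, the net coefficient of the lowest-order term is $\lambda/4! \times 4! = \lambda$, as stated above.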
### Feynman diagrams

The expansion of the action in powers of $X$ gives a series of terms with a progressively higher number of $X$'s. The contribution from the term with exactly $n$ $X$'s is called $n$-th order. The $n$-th order term has:

1. $4n$ internal half-lines, which are the factors of $\phi(k)$ from the $X$'s. These all end on a vertex, and are integrated over all possible $k$.
2. External half-lines, which come from the $\phi(k)$ insertions in the integral.
By Wick's theorem, each pair of half-lines must be paired together to make a line, and this line gives a factor of

$\frac{\delta(k_1 + k_2)}{k_1^2}\,,$

which multiplies the contribution. This means that the two half-lines that make a line are forced to have equal and opposite momentum. The line itself should be labelled by an arrow, drawn parallel to the line, and by the momentum in the line, $k$. The half-line at the tail end of the arrow carries momentum $k$, while the half-line at the head end carries momentum $-k$. If one of the two half-lines is external, this kills the integral over the internal $k$, since it forces the internal $k$ to be equal to the external $k$. If both are internal, the integral over $k$ remains.

The diagrams which are formed by linking the half-lines in the $X$'s with the external half-lines, representing insertions, are the Feynman diagrams of this theory. Each line carries a factor of $\frac{1}{k^2}$, the propagator, and either goes from vertex to vertex or ends at an insertion. If it is internal, it is integrated over. At each vertex, the total incoming $k$ is equal to the total outgoing $k$.

The number of ways of making a diagram by joining half-lines into lines almost completely cancels the factorial factors coming from the Taylor series of the exponential and the $4!$ at each vertex.
### Loop order

A tree diagram is one where the momentum of every internal line is completely determined by the external lines and the condition that the incoming and outgoing momenta are equal at each vertex. The contribution of these diagrams is a product of propagators, without any integration. An example of a tree diagram is the one where each of four external lines ends on an $X$. Another is when eight external lines end on two $X$'s. A third is when three external lines end on an $X$, the remaining half-line joins up with another $X$, and the remaining half-lines of this $X$ run off to external lines. It is easy to verify that in all these cases, the momenta on all the internal lines are determined by the external momenta and the condition of momentum conservation at each vertex.

A diagram which is not a tree diagram is called a loop diagram; an example is one where two lines of an $X$ are joined to external lines, while the remaining two lines are joined to each other. The two lines joined to each other can have any momentum at all, since they both enter and leave the same vertex. A more complicated example is one where two $X$'s are joined to each other by matching the legs one to the other. This diagram has no external lines at all.

The reason loop diagrams are called loop diagrams is that the number of $k$-integrals left undetermined by momentum conservation is equal to the number of independent closed loops in the diagram, where independent loops are counted as in homology theory. The homology is real-valued (actually $\mathbb{R}^d$-valued); the value associated with each line is the momentum. The boundary operator takes each line to the sum of its end-vertices with a positive sign at the head and a negative sign at the tail. The condition that the momentum is conserved is exactly the condition that the boundary of the $k$-valued weighted graph is zero.

A set of $k$-values can be relabeled whenever there is a closed loop going from vertex to vertex, never revisiting the same vertex. Such a cycle can be thought of as the boundary of a 2-cell. The $k$-labelings of a graph which conserve momentum (which have zero boundary), up to redefinitions of $k$ (up to boundaries of 2-cells), define the first homology of the graph. The number of independent momenta which are not determined is then equal to the number of independent homology loops. For many graphs, this is equal to the number of loops as counted in the most intuitive way.
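For a graph, this homology count reduces to the first Betti number: loops = lines − vertices + connected components. The following sketch is illustrative and not from the source; the diagram encodings are hypothetical, with external lines modeled as edges ending on extra one-valent vertices.

```python
# Illustrative sketch (not from the source): count independent loops of a
# Feynman graph as (lines) - (vertices) + (connected components), the first
# Betti number of the graph.
def loop_count(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    for a, b in edges:                     # union-find over the edges
        parent[find(a)] = find(b)
    components = len({find(v) for v in vertices})
    return len(edges) - len(vertices) + components

# Tree: one X with four external lines -> no loops.
tree = loop_count(["X", "e1", "e2", "e3", "e4"],
                  [("X", "e1"), ("X", "e2"), ("X", "e3"), ("X", "e4")])

# One loop: an X with two external lines and its remaining legs joined.
tadpole = loop_count(["X", "e1", "e2"],
                     [("X", "e1"), ("X", "e2"), ("X", "X")])

# Vacuum bubble: two X's joined by all four legs -> three loops.
bubble = loop_count(["X1", "X2"], [("X1", "X2")] * 4)

print(tree, tadpole, bubble)  # prints 0 1 3
```

The three values match the examples in the text: tree diagrams need no momentum integrals, the self-joined $X$ leaves one undetermined momentum, and the two-$X$ vacuum bubble leaves three.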
## Symmetry factors

The number of ways to form a given Feynman diagram by joining together half-lines is large, and by Wick's theorem, each way of pairing up the half-lines contributes equally. Often, this completely cancels the factorials in the denominator of each term, but the cancellation is sometimes incomplete. The uncancelled denominator is called the symmetry factor of the diagram, and the contribution of each diagram to the correlation function must be divided by its symmetry factor.

For example, consider the Feynman diagram formed from two external lines joined to one X, with the remaining two half-lines of the X joined to each other. There are 4*3 ways to join the external half-lines to the X, and then there is only one way to join the two remaining lines to each other. The X comes divided by 4! = 4*3*2, but the number of ways to link up the X half-lines to make the diagram is only 4*3, so the contribution of this diagram is divided by two.

For another example, consider the diagram formed by joining all the half-lines of one X to all the half-lines of another X. This diagram is called a vacuum bubble, because it does not link up to any external lines. There are 4! ways to form this diagram, but the denominator includes a 2! (from the expansion of the exponential, since there are two X's) and two factors of 4!. The contribution is multiplied by 4!/(2*4!*4!) = 1/48.

Another example is the Feynman diagram formed from two X's where each X links up to two external lines, and the remaining two half-lines of each X are joined to each other. The number of ways to link an X to two external lines is 4*3, and either X could link up to either pair, giving an additional factor of 2. The remaining two half-lines in the two X's can be linked to each other in two ways, so the total number of ways to form the diagram is 4*3*4*3*2*2, while the denominator is 4!*4!*2!. The total symmetry factor is 2, and the contribution of this diagram is divided by two.

The symmetry factor theorem gives the symmetry factor for a general diagram: the contribution of each Feynman diagram must be divided by the order of its group of automorphisms, the number of symmetries that it has. An automorphism of a Feynman graph is a permutation M of the lines and a permutation N of the vertices with the following properties:
If a line l goes from vertex v to vertex v', then M(l) goes from N(v) to N(v'). If the line is undirected, as it is for a real scalar field, then M(l) can go from N(v') to N(v) too.
If a line l ends on an external line, M(l) ends on the same external line.
If there are different types of lines, M(l) should preserve the type.
This theorem has an interpretation in terms of particle paths: when identical particles are present, the integral over all intermediate particles must not double-count states which differ only by interchanging identical particles.

Proof: To prove this theorem, label all the internal and external lines of a diagram with a unique name. Then form the diagram by linking each half-line to a name and then to the other half-line. Now count the number of ways to form the named diagram. Each permutation of the X's gives a different pattern of linking names to half-lines, and this is a factor of n!. Each permutation of the half-lines in a single X gives a factor of 4!. So a named diagram can be formed in exactly as many ways as the denominator of the Feynman expansion. But the number of unnamed diagrams is smaller than the number of named diagrams by the order of the automorphism group of the graph.
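The pairing counts in the examples above can be checked by brute force: enumerate every way of joining half-lines and count those that produce a given diagram. A sketch (the labeling scheme is illustrative):

```python
# Brute-force check of a symmetry factor: count the perfect matchings of
# half-lines that form the two-X vacuum bubble.
from math import factorial

def perfect_matchings(items):
    """Yield every way to pair up an even-length list of half-lines."""
    if not items:
        yield ()
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in perfect_matchings(remaining):
            yield ((first, partner),) + tail

# Two X's ('A' and 'B'), four half-lines each; the vacuum bubble joins
# every half-line of A to a half-line of B.
halves = [('A', i) for i in range(4)] + [('B', i) for i in range(4)]
ways = sum(1 for m in perfect_matchings(halves)
           if all(a[0] != b[0] for a, b in m))
print(ways)                                     # 24 ways, i.e. 4!
print(factorial(2) * factorial(4)**2 // ways)   # symmetry factor: 48
```

The 48 matches the automorphism count of this graph: 4! permutations of the four parallel lines times 2 for swapping the two vertices.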
## Connected diagrams

A diagram is connected when it is connected as a graph, meaning that there is a sequence of attached lines and vertices linking any line or vertex to any other. The connected diagrams suffice to reconstruct the full Feynman series; this is the linked cluster theorem. The full series is the sum over all diagrams, which may include several connected components, each of which can occur multiple times. The automorphisms of the full graph consist of the automorphisms of the connected components, together with an extra factor of $n_i!$ for permutations of $n_i$ identical copies of the $i$-th connected component:

$$\sum \prod_i \frac{C_i^{n_i}}{n_i!}$$

But this can be seen to be a product of separate factors, one for each connected graph:

$$\prod_i \sum_{n_i} \frac{C_i^{n_i}}{n_i!} = \prod_i \exp(C_i) = \exp\Big(\sum_i C_i\Big).$$

This is the linked cluster theorem: the sum of all diagrams is the exponential of the sum of the connected ones.
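The factorization behind the linked cluster theorem can be checked numerically in a toy setting, with two connected "diagram values" C1 and C2 chosen as arbitrary numbers for illustration:

```python
# The sum over multiplicities n1, n2 of C1^n1/n1! * C2^n2/n2! factorizes
# into exp(C1) * exp(C2) = exp(C1 + C2).
from math import exp, factorial

C1, C2 = 0.3, -0.7
N = 30   # truncation order; the dropped tail is negligible here
total = sum(C1**n1 / factorial(n1) * C2**n2 / factorial(n2)
            for n1 in range(N) for n2 in range(N))
print(abs(total - exp(C1 + C2)) < 1e-12)   # True
```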
## Vacuum bubbles
An immediate consequence of the linked cluster theorem is that all vacuum bubbles, diagrams without external lines, cancel when calculating correlation functions. A correlation function is given by a ratio of path integrals:

$$\langle \phi_1(x_1) \cdots \phi_n(x_n)\rangle = \frac{\int e^{-S}\, \phi_1(x_1) \cdots \phi_n(x_n)\, D\phi}{\int e^{-S}\, D\phi}.$$

The numerator is the sum over all Feynman diagrams, including disconnected diagrams which do not link up to external lines at all. In terms of the connected diagrams, the numerator includes the same contributions of vacuum bubbles as the denominator:

$$\int e^{-S}\, \phi_1(x_1) \cdots \phi_n(x_n)\, D\phi = \Big(\sum_i E_i\Big) \exp\Big(\sum_i C_i\Big),$$

where the sum over E diagrams includes only those diagrams each of whose connected components ends on at least one external line. The vacuum bubbles are the same whatever the external lines, and give an overall multiplicative factor. The denominator is the sum over all vacuum bubbles, and dividing gets rid of the second factor.

The vacuum bubbles are then only useful for determining Z itself, which from the definition of the path integral is equal to:

$$Z = \int e^{-S}\, D\phi = e^{-HT} = e^{-\rho V},$$

where $\rho$ is the energy density in the vacuum. Each vacuum bubble contains a factor of $\delta(k)$ zeroing the total k at each vertex, and when there are no external lines, this contains a factor of $\delta(0)$, because the momentum conservation is over-enforced. In finite volume, this factor can be identified as the total volume of spacetime. Dividing by the volume, the remaining integral for the vacuum bubble has an interpretation: it is a contribution to the energy density of the vacuum.
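The cancellation in the ratio can be made concrete in a zero-dimensional toy model, where the "field" is a single real variable and the path integrals become ordinary integrals (an illustrative toy, not from the text; `lam` is an arbitrary small coupling):

```python
# Zero-dimensional "path integral": correlation functions are ratios of
# ordinary integrals, and the common vacuum-bubble factor cancels between
# numerator and denominator.
import numpy as np

lam = 0.1
phi = np.linspace(-10.0, 10.0, 200001)
weight = np.exp(-phi**2 / 2 - lam * phi**4 / 24)

# <phi phi> as numerator / denominator; the overall normalization of the
# weight (the analogue of the vacuum bubbles) drops out of the ratio.
two_point = (phi**2 * weight).sum() / weight.sum()
print(two_point)   # slightly below the free-field value 1
```

With the repulsive quartic term the fluctuations are suppressed, so the ratio comes out a little below the free Gaussian value of 1.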
## Sources

Correlation functions are the sum of the connected Feynman diagrams, but the formalism treats the connected and disconnected diagrams differently. Internal lines end on vertices, while external lines go off to insertions. Introducing sources unifies the formalism by making new vertices where one line can end.

Sources are external fields, fields which contribute to the action but are not dynamical variables. A scalar field source is another scalar field h which contributes a term to the (Lorentz) Lagrangian:

$$\int h(x)\, \phi(x)\, d^dx = \int h(k)\, \phi(k)\, d^dk.$$

In the Feynman expansion, this contributes H terms with one half-line ending on a vertex. Lines in a Feynman diagram can now end either on an X vertex or on an H vertex, and only one line enters an H vertex. The Feynman rule for an H vertex is that a line from an H with momentum k gets a factor of h(k).

The sum of the connected diagrams in the presence of sources includes a term for each connected diagram in the absence of sources, except now the diagrams can end on the source. Traditionally, a source is represented by a little "x" with one line extending out, exactly as an insertion.

$$\log(Z[h]) = \sum_{n,C} h(k_1)\, h(k_2) \cdots h(k_n)\, C(k_1,\dots,k_n),$$

where $C(k_1,\dots,k_n)$ is the connected diagram with n external lines carrying momentum as indicated. The sum is over all connected diagrams, as before.

The field h is not dynamical, which means that there is no path integral over h: h is just a parameter in the Lagrangian, which varies from point to point. The path integral for the field is:

$$Z[h] = \int e^{iS + i\int h\phi}\, D\phi,$$

and it is a function of the values of h at every point. One way to interpret this expression is that it is taking the Fourier transform in field space.
If there is a probability density on R^n, the Fourier transform of the probability density is:

$$\int \rho(y)\, e^{i h \cdot y}\, d^n y = \langle e^{i h \cdot y} \rangle = \Big\langle \prod_{i=1}^{n} e^{i h_i y_i} \Big\rangle,$$

so the Fourier transform is the expectation of an oscillatory exponential. The path integral in the presence of a source h(x) is:

$$Z[h] = \int e^{iS}\, e^{i\int_x h(x)\phi(x)}\, D\phi = \langle e^{i h \phi} \rangle,$$

which, on a lattice, is the product of an oscillatory exponential for each field value:

$$\Big\langle \prod_x e^{i h_x \phi_x} \Big\rangle.$$

The Fourier transform of a delta function is a constant, which gives a formal expression for a delta function:

$$\delta(x-y) = \int e^{ik(x-y)}\, \frac{dk}{2\pi}.$$

This tells you what a field delta function looks like in a path integral. For two scalar fields $\phi$ and $\eta$,

$$\delta(\phi - \eta) = \int e^{i\int h(x)(\phi(x) - \eta(x))\, d^dx}\, Dh,$$

which integrates over the Fourier transform coordinate, over h. This expression is useful for formally changing field coordinates in the path integral, much as a delta function is used to change coordinates in an ordinary multi-dimensional integral.

The partition function is now a function of the field h, and the physical partition function is the value when h is the zero function. The correlation functions are derivatives of the path integral with respect to the source:

$$\langle \phi(x) \rangle = \frac{1}{Z} \frac{\partial}{\partial h(x)} Z[h] = \frac{\partial}{\partial h(x)} \log(Z[h]).$$
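The source trick can again be illustrated in zero dimensions, where Z(h) is an ordinary integral over one variable and derivatives of log Z at h = 0 give connected correlators (a toy sketch with a Euclidean Gaussian weight):

```python
# Z(h) = integral of exp(-phi^2/2 + h*phi) dphi; the second derivative of
# log Z at h = 0 is the connected two-point function, here the free-field
# value <phi^2> = 1.
import numpy as np

phi = np.linspace(-10.0, 10.0, 200001)

def Z(h):
    w = np.exp(-phi**2 / 2 + h * phi)
    return w.sum() * (phi[1] - phi[0])

eps = 1e-3
d2 = (np.log(Z(eps)) - 2 * np.log(Z(0.0)) + np.log(Z(-eps))) / eps**2
print(d2)   # approximately 1
```

Here log Z(h) = const + h^2/2 exactly, so the finite difference reproduces 1 up to integration and rounding error.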
## Spin 1/2: Grassmann integrals

The preceding discussion can be extended to the Fermi case, but only if the notion of integration is expanded.
## Particle-path interpretation

A Feynman diagram is a representation of quantum field theory processes in terms of particle paths. In a Feynman diagram, particles are represented by lines, which can be squiggly or straight, with an arrow or without, depending on the type of particle. A point where lines connect to other lines is called an interaction vertex, or vertex for short. There are three different types of lines: internal lines connect two vertices, incoming lines extend from "the past" to a vertex and represent an initial state, and outgoing lines extend from a vertex to "the future" and represent the final state.

There are several conventions for where to represent the past and the future. Sometimes the bottom of the diagram represents the past and the top the future; other times the past is to the left and the future to the right. When calculating correlation functions instead of scattering amplitudes, there is no past and future and all the lines are internal. The particle lines then begin and end on small x's, which represent the positions of the operators whose correlation is being calculated. The LSZ reduction formula is the standardized argument that shows that the correlation functions and scattering diagrams are the same.

Feynman diagrams are a pictorial representation of a contribution to the total amplitude for a process which can happen in several different ways. When a group of incoming particles scatter off each other, the process can be thought of as one where the particles travel over all possible paths, including paths that go backward in time. Each diagram is a term in the perturbative expansion of the scattering amplitude for the experiment defined by the incoming and outgoing lines.
In some quantum field theories (notably quantum electrodynamics), one can obtain an excellent approximation of the scattering amplitude from a few terms of the perturbative expansion, corresponding to a few simple Feynman diagrams with the same incoming and outgoing lines connected by different vertices and internal lines.

The method, although originally invented for particle physics, is useful in any part of physics where there are statistical or quantum fields. In condensed matter physics, there are many-body Feynman diagrams with dashed lines which represent an instantaneous potential interaction, while phonons take the place of photons. In statistical physics, there are statistical Feynman diagrams which represent the way in which correlations travel along paths.

Feynman diagrams are often confused with spacetime diagrams and bubble chamber images because they all seek to represent particle scattering. Feynman diagrams are graphs that represent the trajectories of particles in intermediate stages of a scattering process. Unlike a bubble chamber picture, only the sum of all the Feynman diagrams represents any given particle interaction; particles do not choose a particular diagram each time they interact. The law of summation is in accord with the principle of superposition: every diagram contributes a term to the total amplitude for the process.
## Scattering

The correlation functions of a quantum field theory describe the scattering of particles. The definition of "particle" in relativistic field theory is not self-evident, because if you try to determine the position so that the uncertainty is less than the Compton wavelength, the uncertainty in energy is large enough to produce more particles and antiparticles of the same type from the vacuum. This means that the notion of a single-particle state is to some extent incompatible with the notion of an object localized in space.

In the 1930s, Wigner gave a mathematical definition for single-particle states: they are a collection of states which form an irreducible representation of the Poincaré group. Single-particle states describe an object with a finite mass, a well-defined momentum, and a spin. This definition is fine for protons and neutrons, electrons and photons, but it excludes quarks, which are permanently confined, so the modern point of view is more accommodating: a particle is anything whose interaction can be described in terms of Feynman diagrams, which have an interpretation as a sum over particle trajectories.

A field operator can act to produce a one-particle state from the vacuum, which means that the field operator $\phi(x)$ produces a superposition of Wigner particle states. In the free field theory, the field produces one-particle states only. But when there are interactions, the field operator can also produce 3-particle, 5-particle (and, if there is no +/- symmetry, also 2-, 4-, 6-particle) states.
Computing the scattering amplitude for single-particle states only requires a careful limit, sending the fields to infinity and integrating over space to get rid of the higher-order corrections. The relation between scattering and correlation functions is the LSZ theorem: the scattering amplitude for n particles to go to m particles in a scattering event is given by the sum of the Feynman diagrams that go into the correlation function for n+m field insertions, leaving out the propagators for the external legs.

For example, for the $\lambda \phi^4$ interaction of the previous section, the order $\lambda$ contribution to the (Lorentz) correlation function is:

$$\langle \phi(k_1)\, \phi(k_2)\, \phi(k_3)\, \phi(k_4) \rangle = \frac{i}{k_1^2}\, \frac{i}{k_2^2}\, \frac{i}{k_3^2}\, \frac{i}{k_4^2}\, i\lambda.$$

Stripping off the external propagators, that is, removing the factors of $i/k^2$, gives the invariant scattering amplitude M:

$$M = i\lambda,$$

which is a constant, independent of the incoming and outgoing momenta. The interpretation of the scattering amplitude is that the sum of $|M|^2$ over all possible final states is the probability for the scattering event. The normalization of the single-particle states must be chosen carefully, however, to ensure that M is a relativistic invariant.

Non-relativistic single-particle states are labeled by the momentum k, and they are chosen to have the same norm at every value of k. This is because the nonrelativistic unit operator on single-particle states is:

$$\int dk\, |k\rangle \langle k|.$$

In relativity, the integral over the k-states for a particle of mass m integrates over a hyperbola in E,k space defined by the energy-momentum relation:

$$E^2 - k^2 = m^2.$$

If the integral weighs each k point equally, the measure is not Lorentz invariant.
The invariant measure integrates over all values of k and E, restricting to the hyperbola with a Lorentz-invariant delta function:

$$\int \delta(E^2 - k^2 - m^2)\, |E,k\rangle \langle E,k|\, dE\, dk = \int \frac{dk}{2E}\, |k\rangle \langle k|,$$

so the normalized k-states differ from the relativistically normalized k-states by a factor of $\sqrt{E} = (k^2 + m^2)^{1/4}$. The invariant amplitude M is then the probability amplitude for relativistically normalized incoming states to become relativistically normalized outgoing states.

For nonrelativistic values of k, the relativistic normalization is the same as the nonrelativistic normalization (up to a constant factor $\sqrt{m}$). In this limit, the $\phi^4$ invariant scattering amplitude is still constant: the particles created by the field $\phi$ scatter in all directions with equal amplitude.

The nonrelativistic potential which scatters in all directions with an equal amplitude (in the Born approximation) is one whose Fourier transform is constant: a delta-function potential. The lowest-order scattering of the theory reveals the non-relativistic interpretation of this theory: it describes a collection of particles with a delta-function repulsion. Two such particles have an aversion to occupying the same point at the same time.
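The $1/2E$ factor follows from the standard reduction of a delta function of a function, $\delta(f(E)) = \sum_i \delta(E - E_i)/|f'(E_i)|$; with $f(E) = E^2 - k^2 - m^2$ and the positive-energy root $E_k$, this gives:

```latex
\int_0^\infty \delta(E^2 - k^2 - m^2)\, dE
  = \int_0^\infty \frac{\delta(E - E_k)}{|2E_k|}\, dE
  = \frac{1}{2E_k},
\qquad E_k = \sqrt{k^2 + m^2}.
```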
## LSZ theorem
## Nonperturbative effects
Thinking of Feynman diagrams as a perturbation series, nonperturbative effects like tunneling do not show up, because any effect which goes to zero faster than any polynomial does not affect the Taylor series. Even bound states are absent, since at any finite order particles are only exchanged a finite number of times, and to make a bound state the binding force must last forever.

But this point of view is misleading, because the diagrams not only describe scattering; they are also a representation of the short-distance field theory correlations. They encode not only asymptotic processes like particle scattering, but also the multiplication rules for fields, the operator product expansion. Nonperturbative tunneling processes involve field configurations which on average get big when the coupling constant gets small, but each configuration is a coherent superposition of particles whose local interactions are described by Feynman diagrams. When the coupling is small, these become collective processes which involve large numbers of particles, but where the interactions between each of the particles are simple. This means that nonperturbative effects show up asymptotically in resummations of infinite classes of diagrams, and these diagrams can be locally simple. The graphs determine the local equations of motion, while the allowed large-scale configurations describe non-perturbative physics. But because Feynman propagators are nonlocal in time, translating a field process to a coherent particle language is not completely intuitive, and has only been explicitly worked out in certain special cases. In the case of nonrelativistic bound states, the Bethe-Salpeter equation describes the class of diagrams to include to describe a relativistic atom. For quantum chromodynamics, the Shifman-Vainshtein-Zakharov sum rules describe non-perturbatively excited long-wavelength field modes in particle language, but only in a phenomenological way.

The number of Feynman diagrams at high orders of perturbation theory is very large, because there are as many diagrams as there are graphs with a given number of nodes. Nonperturbative effects leave a signature on the way in which the number of diagrams and resummations diverge at high order. It is only because non-perturbative effects appear in hidden form in diagrams that it was possible to analyze nonperturbative effects in string theory, where in many cases a Feynman description is the only one available.
## Mathematical details
A Feynman diagram can be considered a graph. When considering a field composed of particles, the edges will represent (sections of) particle world lines; the vertices represent virtual interactions. Since only certain interactions are permitted, the graph is constrained to have only certain types of vertices. The type of field of an edge is its field label; the permitted types of interaction are interaction labels. The value of a given diagram can be derived from the graph; the value of the interaction as a whole is obtained by summing over all diagrams.
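The constrained-graph description above can be sketched as a small data structure; the names (`Vertex`, `Edge`, `Diagram`, the `'ext'` marker for external lines) are illustrative, not from the text:

```python
# A minimal labeled-graph representation of a Feynman diagram: field
# labels on edges, interaction labels on vertices, and a validity check
# that each vertex has the degree its interaction type permits.
from dataclasses import dataclass

@dataclass
class Vertex:
    interaction: str          # interaction label, e.g. "phi^4"

@dataclass
class Edge:
    field_label: str          # field label, e.g. "phi"
    ends: tuple               # pair of vertex indices, or 'ext' for external

@dataclass
class Diagram:
    vertices: list
    edges: list

    def is_valid(self, allowed_degrees):
        """Check each vertex has exactly the degree its interaction permits."""
        degree = [0] * len(self.vertices)
        for e in self.edges:
            for end in e.ends:
                if end != 'ext':
                    degree[end] += 1
        return all(degree[i] == allowed_degrees[v.interaction]
                   for i, v in enumerate(self.vertices))

# phi^4 tadpole: one 4-valent vertex, two external lines, one self-loop.
d = Diagram([Vertex("phi^4")],
            [Edge("phi", ('ext', 0)), Edge("phi", (0, 'ext')),
             Edge("phi", (0, 0))])
print(d.is_valid({"phi^4": 4}))   # True
```

The "only certain types of vertices" constraint is exactly the degree check: a phi^4 vertex must have four incident half-lines, counting a self-loop twice.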
## Mathematical interpretation

Feynman diagrams are really a graphical way of keeping track of DeWitt indices, much like Penrose's graphical notation for indices in multilinear algebra. There are several different types of indices, one for each field (this depends on how the fields are grouped: for instance, if the up quark field and the down quark field are treated as different fields, each is assigned its own type, but if they are treated as a single multicomponent field with "flavors", there is only one type). The edges (i.e., propagators) are tensors of rank (2,0) in DeWitt's notation (i.e., with two contravariant indices and no covariant indices), while the vertices of degree n are rank-n covariant tensors which are totally symmetric among all bosonic indices of the same type and totally antisymmetric among all fermionic indices of the same type. The contraction of a propagator with a rank-n covariant tensor is indicated by an edge incident to a vertex (there is no ambiguity in which "slot" to contract with, because the vertices correspond to totally symmetric tensors). The external vertices correspond to the uncontracted contravariant indices.

A derivation of the Feynman rules using Gaussian functional integrals is given in the functional integral article. Each Feynman diagram on its own does not have a physical significance; only the infinite sum over all possible (bubble-free) Feynman diagrams gives physical results, and this infinite sum is usually only asymptotically convergent.
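The index bookkeeping can be illustrated in a finite-dimensional toy setting, where the field has N components, the propagator is a rank-2 tensor, a phi^4-type vertex is a totally symmetric rank-4 tensor, and attaching an edge is an index contraction (the specific tensors are illustrative):

```python
# Finite-dimensional sketch: edges are rank-(2,0) tensors, vertices are
# totally symmetric covariant tensors, and an edge incident to a vertex
# is an index contraction (here via numpy.einsum).
import numpy as np

N = 3
G = np.eye(N)                  # toy propagator, two contravariant indices
V = np.zeros((N, N, N, N))     # 4-point vertex, totally symmetric
for a in range(N):
    for b in range(N):
        V[a, a, b, b] += 1.0   # delta_ij delta_kl
        V[a, b, a, b] += 1.0   # delta_ik delta_jl
        V[a, b, b, a] += 1.0   # delta_il delta_jk

# Tadpole: contract two of the vertex's slots with one propagator,
# leaving two free indices for the external legs.
tadpole = np.einsum('abcd,cd->ab', V, G)
print(tadpole.shape)      # (3, 3)
print(tadpole[0, 0])      # (N + 2) * delta_ab gives 5.0 on the diagonal
```

Because the vertex tensor is totally symmetric, it does not matter which two of its four slots the propagator is contracted into, which is the "no ambiguity in which slot" remark above.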
## See also

Invariance mechanics
Penguin diagram
## External links

Feynman diagram page at SLAC
AMS article: "What's New in Mathematics: Finite-dimensional Feynman Diagrams"
WikiTeX supports editing Feynman diagrams directly in Wiki articles.
Drawing Feynman diagrams with FeynDiagram C++ library that produces PostScript output.
Feynman Diagram Examples using Thorsten Ohl's Feynmf LaTeX package.
JaxoDraw A Java program for drawing Feynman diagrams.