# Integration by parts

In calculus, and more generally in mathematical analysis, integration by parts is a rule that transforms the integral of products of functions into other, hopefully simpler, integrals. The rule arises from the product rule of differentiation.

## The rule

Suppose f(x) and g(x) are two continuously differentiable functions. The product rule states

$(f(x)\,g(x))' = f(x)\,g'(x) + f'(x)\,g(x)$

Integrating both sides gives

$f(x)\,g(x) = \int f(x)\,g'(x)\,dx + \int f'(x)\,g(x)\,dx$

Rearranging terms

$\int f(x)\,g'(x)\,dx = f(x)\,g(x) - \int f'(x)\,g(x)\,dx$

From above one can derive integration by parts rule which states that given an interval with endpoints a, b, one has

$\int_a^b f(x)\,g'(x)\,dx = \left[f(x)\,g(x)\right]_a^b - \int_a^b f'(x)\,g(x)\,dx$

where we use the common notation

$\left[f(x)\,g(x)\right]_a^b = f(b)\,g(b) - f(a)\,g(a).$
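As a quick numerical sanity check, the definite-integral form of the rule can be verified with a short script. The choices f(x) = x² and g(x) = sin x below are arbitrary illustrations, not taken from the text:

```python
import math

def trapz(fn, a, b, n=10_000):
    # composite trapezoidal rule on [a, b] with n subintervals
    h = (b - a) / n
    return h * (0.5 * (fn(a) + fn(b)) + sum(fn(a + i * h) for i in range(1, n)))

# arbitrary smooth choices: f(x) = x^2, g(x) = sin x
f  = lambda x: x * x
fp = lambda x: 2 * x       # f'
g  = math.sin
gp = math.cos              # g'

a, b = 0.0, 1.0
# left-hand side: integral of f * g'
lhs = trapz(lambda x: f(x) * gp(x), a, b)
# right-hand side: boundary term minus integral of f' * g
rhs = (f(b) * g(b) - f(a) * g(a)) - trapz(lambda x: fp(x) * g(x), a, b)
assert abs(lhs - rhs) < 1e-6
```

Both sides agree to within the quadrature error of the trapezoidal rule.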

The rule is shown to be true by using the product rule for derivatives and the fundamental theorem of calculus. Thus


$\begin{align} f(b)g(b) - f(a)g(a) &= \int_a^b \frac{d}{dx}\bigl(f(x)\,g(x)\bigr)\,dx \\ &= \int_a^b f'(x)\,g(x)\,dx + \int_a^b f(x)\,g'(x)\,dx. \end{align}$

In the traditional calculus curriculum, this rule is often stated using indefinite integrals in the form

$\int f(x)\,g'(x)\,dx = f(x)\,g(x) - \int f'(x)\,g(x)\,dx$

or in an even shorter form: if we let u = f(x) and v = g(x), so that the differentials are du = f′(x) dx and dv = g′(x) dx, then it takes the form in which it is most often seen:

$\int u\,dv = uv - \int v\,du.$

Note that the original integral contains the derivative of g; in order to be able to apply the rule, the antiderivative g must be found, and then the resulting integral ∫g f′ dx must be evaluated.

One can also formulate a discrete analogue for sequences, called summation by parts.

An alternative notation has the advantage that the factors of the original expression are identified as f and g, but has the twin drawbacks of a nested integral and total unsuitability for the definite integral case:

$\int f g\,dx = f \int g\,dx - \int \left(f' \int g\,dx\right) dx.$

This formula is valid whenever f is continuously differentiable and g is continuous.

More general formulations of integration by parts exist for the Riemann-Stieltjes integral and Lebesgue-Stieltjes integral.

Note: More complicated forms such as the one below are also valid:

$\int u v\,dw = u v w - \int u w\,dv - \int v w\,du.$

## Strategy

Integration by parts is a heuristic rather than a purely mechanical process for solving integrals; given a single function to integrate, the typical strategy is to carefully separate it into a product of two functions ƒ(x)g(x) such that the integral produced by the integration by parts formula is easier to evaluate than the original one. The following form is useful in illustrating the best strategy to take:

$\int f g\,dx = f \int g\,dx - \int \left(f' \int g\,dx\right) dx.$

Note that on the right-hand side, ƒ is differentiated and g is integrated; consequently it is useful to choose ƒ as a function that simplifies when differentiated, and/or to choose g as a function that simplifies when integrated. As a simple example, consider:

$\int \frac{\ln x}{x^2}\,dx.$

Since ln x simplifies to 1/x when differentiated, we make it part of ƒ; since 1/x² simplifies to −1/x when integrated, we make it part of g. The formula now yields:

$\int \frac{\ln x}{x^2}\,dx = -\frac{\ln x}{x} - \int \left(\frac{1}{x}\right)\left(-\frac{1}{x}\right) dx.$

The remaining integral, ∫ −1/x² dx, can be completed with the power rule; it equals 1/x, so the final result is −(ln x)/x − 1/x + C.
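The antiderivative obtained by parts can be checked numerically by differentiating it with a central difference and comparing against the original integrand, a quick sketch of a sanity test:

```python
import math

F = lambda x: -math.log(x) / x - 1 / x      # antiderivative obtained by parts
f = lambda x: math.log(x) / x ** 2          # original integrand

h = 1e-6
# central-difference derivative of F, compared with f at several points
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (0.5, 1.0, 2.0, 5.0))
assert max_err < 1e-6   # F'(x) agrees with the integrand
```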

Alternatively, we may choose ƒ and g such that the product $f'\left(\int g\,dx\right)$ simplifies due to cancellation. For example, suppose we wish to integrate:

$\int \frac{\ln(\sin x)}{\cos^2 x}\,dx$

If we choose ƒ(x) = ln(sin x) and g(x) = 1/cos² x, then ƒ differentiates to cot x = 1/tan x by the chain rule and g integrates to tan x; so the formula gives:

$\int \frac{\ln(\sin x)}{\cos^2 x}\,dx = \ln(\sin x)\tan x - \int \left(\frac{1}{\tan x}\right)(\tan x)\,dx.$

The integrand simplifies to 1, so the integral is ln(sin x) tan x − x + C. Finding a simplifying combination frequently involves experimentation.
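The result of this cancellation, ln(sin x) tan x − x + C, can likewise be verified by numerical differentiation (sample points chosen inside (0, π/2), where the integrand is defined):

```python
import math

F = lambda x: math.log(math.sin(x)) * math.tan(x) - x   # result after cancellation
f = lambda x: math.log(math.sin(x)) / math.cos(x) ** 2  # original integrand

h = 1e-6
# central-difference derivative of F, compared with f
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (0.4, 0.8, 1.2))
assert max_err < 1e-6
```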

In some applications, it may not be necessary to ensure that the integral produced by integration by parts has a simple form; for example, in numerical analysis, it may suffice that it has small magnitude and so contributes only a small error term. Some other special techniques are demonstrated in the examples below.

## Examples

### Integrands with powers of x or e^x

In order to calculate:

$\int x\cos(x)\,dx$

Let:

u = x, so du = dx,
dv = cos(x) dx, so v = sin(x).

Then:


$\begin{align} \int x\cos(x)\,dx &= \int u\,dv \\ &= uv - \int v\,du \\ &= x\sin(x) - \int \sin(x)\,dx \\ &= x\sin(x) + \cos(x) + C, \end{align}$

where C is an arbitrary constant of integration.
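As a quick sketch of a check, the antiderivative just found can be differentiated numerically and compared with the original integrand:

```python
import math

F = lambda x: x * math.sin(x) + math.cos(x)   # antiderivative found above
f = lambda x: x * math.cos(x)                 # original integrand

h = 1e-6
# central-difference derivative of F, compared with f at sample points
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (0.0, 0.7, 1.5, 3.0))
assert max_err < 1e-6
```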

By repeatedly using integration by parts, integrals such as

$\int x^3 \sin(x)\,dx \quad \text{and} \quad \int x^2 e^x\,dx$

can be computed in the same fashion: each application of the rule lowers the power of x by one.

An interesting example that is commonly seen is:

$\int e^x \cos(x)\,dx$

where, remarkably, the remaining integral never needs to be evaluated directly.

This example uses integration by parts twice. First let:

u = cos(x); thus du = −sin(x) dx
dv = ex dx; thus v = ex

Then:

$\int e^x \cos(x)\,dx = e^x \cos(x) + \int e^x \sin(x)\,dx.$

Now, to evaluate the remaining integral, we use integration by parts again, with:

u = sin(x); thus du = cos(x) dx
dv = ex dx; thus v = ex

Then:

$\int e^x \sin(x)\,dx = e^x \sin(x) - \int e^x \cos(x)\,dx.$
Putting these together, we get

$\int e^x \cos(x)\,dx = e^x \cos(x) + e^x \sin(x) - \int e^x \cos(x)\,dx.$

Notice that the same integral shows up on both sides of this equation. So we can simply add the integral to both sides to get:

$2 \int e^x \cos(x)\,dx = e^x \left(\sin(x) + \cos(x)\right) + C$

$\int e^x \cos(x)\,dx = \frac{e^x \left(\sin(x) + \cos(x)\right)}{2} + C'$

where, again, C (and C' = C/2) is an arbitrary constant of integration.

A similar trick is used to find the integral of secant cubed.
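The closed form obtained by the double application of the rule can be sketched as a numerical check, differentiating the result and comparing with the original integrand:

```python
import math

F = lambda x: math.exp(x) * (math.sin(x) + math.cos(x)) / 2   # result derived above
f = lambda x: math.exp(x) * math.cos(x)                       # original integrand

h = 1e-6
# central-difference derivative of F, compared with f
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (-1.0, 0.0, 1.0, 2.0))
assert max_err < 1e-6
```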

### Interchange of the order of integration

The above formulation includes the technique of interchange of the order of integration, which is not usually viewed in this manner. Consider the double integral:

$\int_a^z dx \int_a^x dy\, h(y).$
In the order written above, the strip of width dx is integrated first over the y-direction, as shown in the left panel of the figure, which is inconvenient, especially when the function h(y) is not easily integrated. The integral can be reduced to a single integration by reversing the order of integration, as shown in the right panel. To accomplish this interchange, the strip of width dy is first integrated from the line x = y to the limit x = z, and the result is then integrated from y = a to y = z, resulting in:
$\int_a^z dx \int_a^x dy\, h(y) = \int_a^z dy\, h(y) \int_y^z dx = \int_a^z (z - y)\, h(y)\, dy.$
This result can be seen to be an example of the above formula for integration by parts, repeated below:
$\int_a^z f(x)\,g'(x)\,dx = \left[f(x)\,g(x)\right]_a^z - \int_a^z f'(x)\,g(x)\,dx$
Substitute:
$g(x) = \int_a^x h(y)\,dy$  and  $f(x) = z - x.$
However, exchange of the order of integration has the merit that it generates the function f in a natural manner.
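The interchange identity can be sketched numerically; the choice h(y) = cos y and the interval [0, 1] below are hypothetical illustrations:

```python
import math

def trapz(fn, a, b, n=500):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    return h * (0.5 * (fn(a) + fn(b)) + sum(fn(a + i * h) for i in range(1, n)))

h_fn = math.cos        # hypothetical choice of h(y)
a, z = 0.0, 1.0

# original order: integrate h over [a, x], then over x in [a, z]
double = trapz(lambda x: trapz(h_fn, a, x), a, z)
# after interchanging the order: a single integral of (z - y) h(y)
single = trapz(lambda y: (z - y) * h_fn(y), a, z)
assert abs(double - single) < 1e-4
```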

### More examples

Two other well-known examples are when integration by parts is applied to a function expressed as a product of 1 and itself. This works if the derivative of the function is known, and the integral of this derivative times x is also known.

The first example is ∫ ln(x) dx. We write this as:

$\int \ln(x) \cdot 1\,dx.$

Let:

u = ln(x); du = 1/x dx
v = x; dv = 1·dx

Then:

$\int \ln(x)\,dx = x \ln(x) - \int \frac{x}{x}\,dx = x \ln(x) - \int 1\,dx$

$\int \ln(x)\,dx = x \ln(x) - x + C$

$\int \ln(x)\,dx = x \left(\ln(x) - 1\right) + C$

where, again, C is the arbitrary constant of integration.
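A minimal numerical check of this result, differentiating x(ln x − 1) and comparing with ln x:

```python
import math

F = lambda x: x * (math.log(x) - 1)   # antiderivative found above
f = math.log                          # original integrand

h = 1e-6
# central-difference derivative of F, compared with f
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (0.5, 1.0, 3.0))
assert max_err < 1e-6
```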

The second example is ∫ arctan(x) dx, where arctan(x) is the inverse tangent function. Re-write this as

$\int \arctan(x) \cdot 1\,dx.$

Now let:

u = arctan(x); du = 1/(1 + x²) dx
v = x; dv = 1·dx

Then

$\int \arctan(x)\,dx = x \arctan(x) - \int \frac{x}{1 + x^2}\,dx$
$= x \arctan(x) - \frac{1}{2} \ln\left(1 + x^2\right) + C$

using substitution (the reverse chain rule) together with the standard integral of 1/u giving a natural logarithm.
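This result can also be sketched as a numerical check, differentiating the antiderivative and comparing with arctan x:

```python
import math

F = lambda x: x * math.atan(x) - 0.5 * math.log(1 + x * x)   # antiderivative found above
f = math.atan                                                 # original integrand

h = 1e-6
# central-difference derivative of F, compared with f
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (-2.0, 0.0, 1.0, 4.0))
assert max_err < 1e-6
```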

Here is an example in which the rule is applied with u = x and dv = dx (so v = x and du = dx):

$\int x\,dx = x^2 - \int x\,dx$
$2 \int x\,dx = x^2$
$\int x\,dx = \frac{x^2}{2} + C$

## The LIATE rule

A rule of thumb for choosing which of two functions is to be u and which is to be dv is to choose u by whichever function comes first in this list:

L: logarithmic functions: ln x, $\log_2(x)$, etc.
I: inverse trigonometric functions: arctan x, arcsec x, etc.
A: algebraic functions: $x^2$, $3x^{50}$, etc.
T: trigonometric functions: sin x, tan x, etc.
E: exponential functions: $e^x$, $13^x$, etc.

Then make dv the other function. You can remember the list by the mnemonic LIATE. The reason for this is that functions lower on the list have easier antiderivatives than the functions above them.

To demonstrate this rule, consider the integral

$\int x\cos x\,dx.$

Following the LIATE rule, u = x and dv = cos x dx; hence du = dx and v = sin x, which makes the integral become

$x\sin x - \int 1 \cdot \sin x\,dx,$
which equals
$x\sin x + \cos x + C.$

In general, one tries to choose u and dv such that du is simpler than u and dv is easy to integrate. If instead cos x was chosen as u and x as dv, we would have the integral

$\frac{x^2}{2}\cos x + \int \frac{x^2}{2}\sin x\,dx,$

which, after recursive application of the integration by parts formula, would clearly result in an infinite recursion and lead nowhere.

Although a useful rule of thumb, there are exceptions to the LIATE rule. A common alternative is to consider the rules in the "ILATE" order instead. Also, in some cases, polynomial terms need to be split in non-trivial ways. For example, to integrate

$\int x^3 e^{x^2}\,dx,$

we would set

$u = x^2, \quad dv = x e^{x^2}\,dx.$

This results in

$\int x^3 e^{x^2}\,dx = \frac{1}{2} e^{x^2} \left(x^2 - 1\right) + C.$
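A short numerical check of this non-trivial split, differentiating the claimed antiderivative and comparing with the integrand:

```python
import math

F = lambda x: 0.5 * math.exp(x * x) * (x * x - 1)   # claimed antiderivative
f = lambda x: x ** 3 * math.exp(x * x)              # original integrand

h = 1e-6
# central-difference derivative of F, compared with f
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (0.0, 0.5, 1.0, 1.5))
assert max_err < 1e-5
```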

## Recursive integration by parts

Integration by parts can often be applied recursively on the $\int v\,du$ term to provide the following formula

$\int uv = u v_1 - u' v_2 + u'' v_3 - \cdots + (-1)^{n}\, u^{(n)} v_{n+1}.$

Here, $u'$ is the first derivative of $u$, $u''$ is the second derivative, and $u^{(n)}$ denotes its nth derivative (with respect to the variable that u and v are functions of). Another notation has been adopted:

$v_{n+1}(x) = \int\!\!\int \cdots \int v\,(dx)^{n+1}.$

There are n + 1 integrals.

Note that the integrand above ($uv$) differs from the previous equation. The $dv$ factor has been written as $v$ purely for convenience.

The above-mentioned form is convenient because it can be evaluated by differentiating the first term and integrating the second (with a sign reversal each time), starting with $u v_1$. It is especially useful when $u^{(k+1)}$ becomes zero for some k + 1; the evaluation can then stop once the $u^{(k)}$ term has been reached.

## Tabular integration by parts

While the aforementioned recursive definition is correct, it is often tedious to remember and implement. A much easier visual representation of this process is often taught to students and is dubbed either "the tabular method," "rapid repeated integration" or "the tic-tac-toe method." This method works best when one of the two functions in the product is a polynomial, that is, after differentiating it several times one obtains zero. It may also be extended to work for functions that will repeat themselves.

For example, consider the integral

$\int x^3 \cos x\,dx.$

Let $u = x^3$. Begin with this function and list in a column all the subsequent derivatives until zero is reached. Then begin with the function v (in this case $\cos x$) and list each integral of v until the size of the column matches that of u. The result should appear as follows.

| Derivatives of u (Column A) | Integrals of v (Column B) |
| --- | --- |
| $x^3$ | $\cos x$ |
| $3x^2$ | $\sin x$ |
| $6x$ | $-\cos x$ |
| $6$ | $-\sin x$ |
| $0$ | $\cos x$ |

Now simply pair the 1st entry of column A with the 2nd entry of column B, the 2nd entry of column A with the 3rd entry of column B, and so on, with alternating signs (beginning with a positive sign). Continue until further pairing is impossible. The result is the following (note the alternating signs in each term):

$(+)(x^3)(\sin x) + (-)(3x^2)(-\cos x) + (+)(6x)(-\sin x) + (-)(6)(\cos x) + C.$

This simplifies to the result

$x^3 \sin x + 3x^2 \cos x - 6x \sin x - 6\cos x + C.$
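The tabular result can be sketched as a quick numerical check, differentiating it and comparing against the original integrand:

```python
import math

# antiderivative read off from the table
F = lambda x: (x ** 3 * math.sin(x) + 3 * x ** 2 * math.cos(x)
               - 6 * x * math.sin(x) - 6 * math.cos(x))
f = lambda x: x ** 3 * math.cos(x)   # original integrand

h = 1e-6
# central-difference derivative of F, compared with f
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (0.0, 1.0, 2.5))
assert max_err < 1e-5
```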

With proper understanding of the tabular method, it can be extended.

$\int e^x \cos x\,dx.$

| Derivatives of u (Column A) | Integrals of v (Column B) |
| --- | --- |
| $e^x$ | $\cos x$ |
| $e^x$ | $\sin x$ |
| $e^x$ | $-\cos x$ |

In this case, the last step is to integrate the product of the two entries in the bottom row, obtaining:

$\int e^x \cos x\,dx = e^x \sin x + e^x \cos x - \int e^x \cos x\,dx,$

which is then solvable in the usual way.

## Higher dimensions

The formula for integration by parts can be extended to functions of several variables. Instead of an interval, one integrates over an n-dimensional set, and the derivative is replaced by a partial derivative.

More specifically, suppose Ω is an open bounded subset of $\mathbb{R}^n$ with a piecewise smooth boundary ∂Ω. If u and v are two continuously differentiable functions on the closure of Ω, then the formula for integration by parts is

$\int_{\Omega} \frac{\partial u}{\partial x_i}\, v\,dx = \int_{\partial\Omega} u\, v\, \nu_i\,d\sigma - \int_{\Omega} u\, \frac{\partial v}{\partial x_i}\,dx$
where $\nu$ is the outward unit normal to ∂Ω, $\nu_i$ is its i-th component, and i ranges from 1 to n. Replacing v in the above formula with $v_i$ and summing over i gives the vector formula
$\int_{\Omega} \nabla u \cdot \mathbf{v}\,dx = \int_{\partial\Omega} u\, \mathbf{v} \cdot \nu\,d\sigma - \int_{\Omega} u\, \nabla \cdot \mathbf{v}\,dx$
where v is a vector-valued function with components v1, ..., vn.

Setting u equal to the constant function 1 in the above formula gives the divergence theorem. For $\mathbf{v} = \nabla v$, where $v \in C^2(\bar{\Omega})$, one gets

$\int_{\Omega} \nabla u \cdot \nabla v\,dx = \int_{\partial\Omega} u\, \nabla v \cdot \nu\,d\sigma - \int_{\Omega} u\, \Delta v\,dx,$
which is the first Green's identity.

The regularity requirements of the theorem can be relaxed. For instance, the boundary ∂Ω need only be Lipschitz continuous. In the first formula above, only $u, v \in H^1(\Omega)$ is necessary (where $H^1$ is a Sobolev space); the other formulas have similarly relaxed requirements.

For reference, consult Appendix C of Evans or the applied math notes of Arbogast and Bona.

## Cultural references

• The method of tabular integration by parts is featured in the 1988 film Stand and Deliver.