In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed.
## Formal definition

Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z_{1}, Z_{2}, …, Z_{n}}, written ρ_{XY·Z}, is the correlation between the residuals R_{X} and R_{Y} resulting from the linear regression of X with Z and of Y with Z, respectively. In fact, the first-order partial correlation is simply the difference between a correlation and the product of the removable correlations, divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation to joint variance through correlation, are discussed in Guilford (1973, pp. 344–345).
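This definition can be checked numerically. The following is a minimal NumPy sketch (not part of the original article; the simulated data and variable names are illustrative) that estimates ρ_{XY·Z} for a single controlling variable in two ways: from the correlation of the regression residuals, and from the equivalent first-order formula $(\rho_{XY} - \rho_{XZ}\rho_{YZ})/\sqrt{(1-\rho_{XZ}^2)(1-\rho_{YZ}^2)}$. The two estimates coincide, and are near zero here because X and Y are conditionally independent given Z by construction:

```python
import numpy as np

# Simulate data where X and Y both depend on Z but are otherwise independent.
rng = np.random.default_rng(0)
N = 10_000
z = rng.normal(size=N)
x = 2.0 * z + rng.normal(size=N)   # X depends on Z plus independent noise
y = -1.0 * z + rng.normal(size=N)  # Y depends on Z plus independent noise

# 1) Definition: correlate the residuals of X ~ Z and Y ~ Z (with intercept).
Z = np.column_stack([np.ones(N), z])
rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
partial_resid = np.corrcoef(rx, ry)[0, 1]

# 2) First-order formula applied to the pairwise sample correlations.
r_xy = np.corrcoef(x, y)[0, 1]
r_xz = np.corrcoef(x, z)[0, 1]
r_yz = np.corrcoef(y, z)[0, 1]
partial_formula = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

print(partial_resid, partial_formula)  # equal up to rounding; both near 0
```

The two routes agree exactly (up to floating point), which is the identity between the residual-based definition and the first-order recursive formula discussed below.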
## Computation

### Using linear regression

The obvious way to compute a (sample) partial correlation is to solve the two associated linear regression problems, get the residuals, and calculate the correlation between the residuals. If we write x_{i}, y_{i} and z_{i} to denote i.i.d. samples of some joint probability distribution over X, Y and Z, solving the linear regression problem amounts to finding

- $\mathbf{w}_X^* = \arg\min_{\mathbf{w}} \left\{ \sum_{i=1}^N (x_i - \langle \mathbf{w}, \mathbf{z}_i \rangle)^2 \right\}$

- $\mathbf{w}_Y^* = \arg\min_{\mathbf{w}} \left\{ \sum_{i=1}^N (y_i - \langle \mathbf{w}, \mathbf{z}_i \rangle)^2 \right\}$

with N being the number of samples and $\langle \mathbf{v},\mathbf{w} \rangle$ the scalar product between the vectors v and w. The residuals are then

- $r_{X,i} = x_i - \langle \mathbf{w}_X^*,\mathbf{z}_i \rangle$

- $r_{Y,i} = y_i - \langle \mathbf{w}_Y^*,\mathbf{z}_i \rangle$

and the sample partial correlation is

- $\hat{\rho}_{XY\cdot\mathbf{Z}}=\frac{N\sum_{i=1}^N r_{X,i}r_{Y,i}-\sum_{i=1}^N r_{X,i}\sum_{i=1}^N r_{Y,i}}{\sqrt{N\sum_{i=1}^N r_{X,i}^2-\left(\sum_{i=1}^N r_{X,i}\right)^2}\;\sqrt{N\sum_{i=1}^N r_{Y,i}^2-\left(\sum_{i=1}^N r_{Y,i}\right)^2}}.$

### Using recursive formula

It can be computationally expensive to solve the linear regression problems. In fact, the nth-order partial correlation (i.e., with |Z| = n) can be easily computed from three (n − 1)th-order partial correlations. The zeroth-order partial correlation ρ_{XY·Ø} is defined to be the regular correlation coefficient ρ_{XY}. It holds, for any $Z_0 \in \mathbf{Z}$:

- $\rho_{XY\cdot \mathbf{Z}} = \frac{\rho_{XY\cdot\mathbf{Z}\setminus\{Z_0\}} - \rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}\,\rho_{YZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}}{\sqrt{1-\rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^2}\,\sqrt{1-\rho_{YZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^2}}.$

Naïvely implementing this computation as a recursive algorithm yields an exponential time complexity. However, this computation has the overlapping subproblems property, such that using dynamic programming or simply caching the results of the recursive calls yields a complexity of $\mathcal{O}(n^3)$.

Note that in the case where Z is a single variable, this reduces to:

$\rho_{XY\cdot Z} = \frac{\rho_{XY} - \rho_{XZ}\rho_{YZ}}{\sqrt{1-\rho_{XZ}^2}\,\sqrt{1-\rho_{YZ}^2}}.$

### Using matrix inversion

Another approach allows all partial correlations between any two variables X_{i} and X_{j} of a set V of cardinality n, given all others (i.e., $\mathbf{V} \setminus \{X_i,X_j\}$), to be computed in $\mathcal{O}(n^3)$, provided the correlation matrix Ω = (ω_{ij}), where $\omega_{ij} = \rho_{X_iX_j}$, is invertible. If we define P = Ω^{−1}, we have:

- $\rho_{X_iX_j\cdot \mathbf{V} \setminus \{X_i,X_j\}} = -\frac{p_{ij}}{\sqrt{p_{ii}p_{jj}}}.$

## Interpretation

### Geometrical

Let three variables X, Y, Z (where X is the independent variable, Y is the dependent variable, and Z is the "control" or "extra" variable) be chosen from a joint probability distribution over n variables V. Further let v_{i}, 1 ≤ i ≤ N, be N n-dimensional i.i.d. samples taken from the joint probability distribution over V. We then consider the N-dimensional vectors x (formed by the successive values of X over the samples), y (formed by the values of Y) and z (formed by the values of Z).

It can be shown that the residuals R_{X} coming from the linear regression of X using Z, if also considered as an N-dimensional vector r_{X}, have a zero scalar product with the vector z generated by Z. This means that the residuals vector lies on a hyperplane S_{z} that is perpendicular to z.

The same also applies to the residuals R_{Y}, generating a vector r_{Y}. The desired partial correlation is then the cosine of the angle φ between the projections r_{X} and r_{Y} of x and y, respectively, onto the hyperplane perpendicular to z.

### As conditional independence test

With the assumption that all involved variables are multivariate Gaussian, the partial correlation ρ_{XY·Z} is zero if and only if X is conditionally independent from Y given Z. This property does not hold in the general case.

In order to test if a sample partial correlation $\hat{\rho}_{XY\cdot\mathbf{Z}}$ vanishes, Fisher's z-transform of the partial correlation can be used:

- $z(\hat{\rho}_{XY\cdot\mathbf{Z}}) = \frac{1}{2} \ln\left(\frac{1+\hat{\rho}_{XY\cdot\mathbf{Z}}}{1-\hat{\rho}_{XY\cdot\mathbf{Z}}}\right).$

The null hypothesis is $H_0: \rho_{XY\cdot\mathbf{Z}} = 0$, to be tested against the two-tailed alternative $H_A: \rho_{XY\cdot\mathbf{Z}} \neq 0$. We reject H_{0} with significance level α if:

- $\sqrt{N - |\mathbf{Z}| - 3}\cdot |z(\hat{\rho}_{XY\cdot\mathbf{Z}})| > \Phi^{-1}(1-\alpha/2),$

where Φ(·) is the cumulative distribution function of a Gaussian distribution with zero mean and unit standard deviation, and N is the sample size. This z-transform is approximate, and the actual distribution of the sample (partial) correlation coefficient is not straightforward. However, an exact t-test based on a combination of the partial regression coefficient, the partial correlation coefficient and the partial variances is available. The distribution of the sample partial correlation was described by Fisher.

## Use in time series analysis

In time series analysis, the partial autocorrelation function (sometimes "partial correlation function") of a time series is defined, for lag h, as

- $\phi(h)= \rho_{X_0X_h\cdot \{X_1,\dots,X_{h-1}\}}.$
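The lag-h partial autocorrelation can be estimated in the regression-residual way described earlier: regress $X_t$ and $X_{t-h}$ on the intervening lags and correlate the residuals. Below is a hedged NumPy sketch (the function name and the simulated AR(1) data are illustrative, not from the article); for an AR(1) process with coefficient 0.6, the PACF is about 0.6 at lag 1 and near zero at every higher lag:

```python
import numpy as np

def pacf_at_lag(x, h):
    """Sample partial autocorrelation at lag h >= 1: correlate the
    residuals of x_t and x_{t-h} after regressing both (with an
    intercept) on the intervening lags x_{t-1}, ..., x_{t-h+1}."""
    n = len(x)
    y0 = x[h:]          # x_t        for t = h .. n-1
    yh = x[:n - h]      # x_{t-h}    for t = h .. n-1
    ones = np.ones(n - h)
    if h == 1:
        design = ones[:, None]  # no intervening lags: intercept only
    else:
        lags = [x[h - k:n - k] for k in range(1, h)]  # x_{t-1} .. x_{t-h+1}
        design = np.column_stack([ones] + lags)
    r0 = y0 - design @ np.linalg.lstsq(design, y0, rcond=None)[0]
    rh = yh - design @ np.linalg.lstsq(design, yh, rcond=None)[0]
    return np.corrcoef(r0, rh)[0, 1]

# Simulate an AR(1) series: x_t = 0.6 * x_{t-1} + noise.
rng = np.random.default_rng(1)
n, phi = 20_000, 0.6
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

print(pacf_at_lag(x, 1))  # close to 0.6
print(pacf_at_lag(x, 2))  # close to 0
```

This cutoff behaviour (nonzero up to the autoregressive order, then near zero) is what makes the PACF a standard tool for identifying the order of an AR model.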

## See also

- Linear regression
- Conditional independence

## External links

- What is a partial correlation?
- Mathematical formulae in the "Description" section of the IMSL PCORR routine
- A three-variable example

## References

- Guilford J. P., Fruchter B. (1973). *Fundamental statistics in psychology and education*. Tokyo: McGraw-Hill Kogakusha, Ltd.

Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Monday October 06, 2008 at 05:26:16 PDT (GMT -0700)

