# Pearson product-moment correlation coefficient

In statistics, the Pearson product-moment correlation coefficient (sometimes referred to as the PPMCC or PMCC, and typically denoted by r) is a common measure of the correlation between two variables X and Y. In accordance with the usual convention, when calculated for an entire population, the Pearson product-moment correlation is typically designated by the analogous Greek letter, which in this case is rho (ρ). Hence its designation by the Latin letter r implies that it has been computed for a sample (to provide an estimate for that of the underlying population). For these reasons, it is sometimes called "Pearson's r." Pearson's correlation reflects the degree of linear relationship between two variables. It ranges from +1 to −1. A correlation of +1 means that there is a perfect positive linear relationship between the variables; a correlation of −1 means that there is a perfect negative linear relationship; and a correlation of 0 means there is no linear relationship between the two variables. In practice, correlations are rarely exactly 0, +1, or −1; the sign of the computed value indicates whether the relationship is positive or negative.

The statistic is defined as the sum of the products of the standard scores of the two measures divided by the degrees of freedom. If the data comes from a sample, then

$r = \frac{1}{n - 1} \sum_{i=1}^{n} \left(\frac{X_i - \bar{X}}{s_X}\right) \left(\frac{Y_i - \bar{Y}}{s_Y}\right)$

where

$\frac{X_i - \bar{X}}{s_X}, \bar{X}, \text{ and } s_X$

are the standard score, sample mean, and sample standard deviation (calculated using n − 1 in the denominator).
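The sample formula above can be checked numerically. The sketch below (with hypothetical data values chosen for illustration) computes the standard scores explicitly and sums their products over n − 1, then compares the result against NumPy's built-in correlation coefficient:

```python
import numpy as np

# Hypothetical illustrative data (not from the text)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])

n = len(x)
# Standard scores using the sample standard deviation (ddof=1, i.e. n - 1 in the denominator)
zx = (x - x.mean()) / x.std(ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)

# Sum of the products of standard scores, divided by n - 1
r = (zx * zy).sum() / (n - 1)

# Agrees with NumPy's built-in Pearson correlation
assert np.isclose(r, np.corrcoef(x, y)[0, 1])
```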

If the data comes from a population, then

$\rho = \frac{1}{n} \sum_{i=1}^{n} \left(\frac{X_i - \mu_X}{\sigma_X}\right) \left(\frac{Y_i - \mu_Y}{\sigma_Y}\right)$

where

$\frac{X_i - \mu_X}{\sigma_X}, \mu_X, \text{ and } \sigma_X$

are the standard score, population mean, and population standard deviation (calculated using n in the denominator).

The result obtained is equivalent to dividing the covariance between the two variables by the product of their standard deviations.
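This equivalence is easy to verify: dividing the sample covariance by the product of the sample standard deviations gives the same r as the standard-score formula. A minimal check, using hypothetical data values:

```python
import numpy as np

# Hypothetical illustrative data (not from the text)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])

# Sample covariance divided by the product of the sample standard deviations
cov_xy = np.cov(x, y, ddof=1)[0, 1]
r = cov_xy / (x.std(ddof=1) * y.std(ddof=1))

# Same value as NumPy's correlation coefficient
assert np.isclose(r, np.corrcoef(x, y)[0, 1])
```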

The coefficient ranges from −1 to 1. A value of 1 shows that a linear equation describes the relationship perfectly and positively, with all data points lying on a single line and with Y increasing with X. A value of −1 shows that all data points lie on a single line but that Y decreases as X increases. A value of 0 shows that there is no linear relationship between the variables, so a linear model is inappropriate.

The linear equation that best describes the relationship between X and Y can be found by linear regression. This equation can be used to "predict" the value of one measurement from knowledge of the other. That is, for each value of X the equation calculates a value which is the best estimate of the value of Y corresponding to that specific value of X. We denote this predicted value by Y′.
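For simple linear regression, the least-squares slope is r·s_Y/s_X and the line passes through the point of means. A sketch of computing the predicted values Y′ this way, on hypothetical data, compared against NumPy's least-squares fit:

```python
import numpy as np

# Hypothetical illustrative data (not from the text)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])

# Least-squares line: slope = r * s_y / s_x, passing through (mean x, mean y)
r = np.corrcoef(x, y)[0, 1]
slope = r * y.std(ddof=1) / x.std(ddof=1)
intercept = y.mean() - slope * x.mean()

y_pred = intercept + slope * x  # the predicted values Y'

# Matches NumPy's degree-1 least-squares polynomial fit
coeffs = np.polyfit(x, y, 1)
assert np.allclose(y_pred, np.polyval(coeffs, x))
```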

Any value of Y can therefore be defined as the sum of Y′ and the difference between Y and Y′:

$Y = Y' + (Y - Y').$

The variance of Y is equal to the sum of the variances of the two components of Y:

$s_y^2 = s_{y'}^2 + s_{y \cdot x}^2.$

Since the coefficient of determination implies that $s_{y \cdot x}^2 = s_y^2(1 - r^2)$, we can derive the identity

$r^2 = \frac{s_{y'}^2}{s_y^2}.$

The square of r is conventionally used as a measure of the association between X and Y. For example, if r2 is 0.90, then 90% of the variance of Y can be "accounted for" by changes in X and the linear relationship between X and Y.
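Both the variance decomposition and the identity r² = s_{y′}²/s_y² can be verified numerically. The sketch below (hypothetical data values) fits the least-squares line, splits Y into the predicted part Y′ and the residual part, and checks that the variances add up and that r² equals the share of variance accounted for by the fit:

```python
import numpy as np

# Hypothetical illustrative data (not from the text)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])

# Least-squares predictions Y' and residuals Y - Y'
r = np.corrcoef(x, y)[0, 1]
slope = r * y.std(ddof=1) / x.std(ddof=1)
y_pred = y.mean() + slope * (x - x.mean())
resid = y - y_pred

# Variance decomposition: s_y^2 = s_{y'}^2 + s_{y.x}^2
var_y = y.var(ddof=1)
var_pred = y_pred.var(ddof=1)
var_resid = resid.var(ddof=1)
assert np.isclose(var_y, var_pred + var_resid)

# r^2 is the fraction of the variance of Y accounted for by the linear fit
assert np.isclose(r**2, var_pred / var_y)
```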