In statistics, Mallows' Cp, named after Colin Mallows, is often used as a stopping rule for various forms of stepwise regression. This was not Mallows' intention: he proposed the statistic only as a way of facilitating comparisons among many alternative subset regressions, and warned against its use as a decision rule. Collinearity in a regression model commonly results from putting too many regressors into the model; many of the supposedly independent variables may have highly correlated effects that cannot be estimated separately. A model that includes too many regressors (variables whose coefficients must be estimated) is said to be "over-fit". In the worst case, the number of parameters to be estimated exceeds the number of observations, so that some effects cannot be estimated at all. The Cp statistic can be used as a criterion for selecting a reduced model that avoids these problems.
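
As a concrete illustration of that worst case, the short numpy sketch below (entirely hypothetical data and dimensions, not from the original article) shows that with more regressors than observations the design matrix is rank-deficient, so distinct coefficient vectors reproduce exactly the same fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical worst case: N = 5 observations but K = 8 regressors.
N, K = 5, 8
X = rng.normal(size=(N, K))
y = rng.normal(size=N)

# A 5 x 8 design matrix has rank at most 5, so three coefficient
# directions are not estimable at all.
print(np.linalg.matrix_rank(X))             # 5, not 8

# lstsq returns the minimum-norm solution, but adding any null-space
# direction of X yields different coefficients with an identical fit:
# the individual effects cannot be separated.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
null_dir = np.linalg.svd(X)[2][-1]          # X @ null_dir is ~ 0
print(np.allclose(X @ beta, X @ (beta + null_dir)))   # True
```
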
If P regressors are selected from a set of K > P regressors, Cp is defined as

$C_p = \frac{SSE_p}{S^2} - N + 2P,$

where

- $SSE_p = \sum_{i=1}^{N} (Y_i - \hat{Y}_{pi})^2$ is the error sum of squares for the model with P regressors,
- $\hat{Y}_{pi}$ is the predicted value of the i-th observation of Y from the regression on the P regressors,
- $S^2$ is the residual mean square after regression on the complete set of K regressors,
- and N is the sample size.

If the model used to form $S^2$ fits without bias, then $N^{-1} S^2 C_p$ is an unbiased estimator of the mean squared prediction error (MSPE).
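
The definition translates directly into code. The following is a minimal sketch, not part of the original article: it assumes numpy, a design matrix whose columns are the candidate regressors (including any intercept column), and illustrative names such as `mallows_cp`.

```python
import numpy as np

def sse(X, y):
    """Error sum of squares from an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def mallows_cp(X_full, y, subset):
    """Compute Cp = SSE_p / S^2 - N + 2P for the columns in `subset`.

    S^2 is the residual mean square from the full model using all K
    columns of X_full; an intercept, if wanted, must be one of the
    columns. These names are illustrative, not a standard API.
    """
    N, K = X_full.shape
    s2 = sse(X_full, y) / (N - K)        # residual mean square, full fit
    P = len(subset)                      # parameters in the submodel
    sse_p = sse(X_full[:, subset], y)    # SSE_p for the P-regressor model
    return sse_p / s2 - N + 2 * P

# Hypothetical example: of five columns (intercept plus four noise
# regressors), only columns 1 and 2 actually drive y.
rng = np.random.default_rng(1)
N, K = 100, 5
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
y = 1.0 + 2.0 * X[:, 1] - 3.0 * X[:, 2] + rng.normal(size=N)

print(mallows_cp(X, y, [0, 1, 2]))   # close to P = 3: no appreciable bias
print(mallows_cp(X, y, [0, 1]))      # far above P = 2: omits a real effect
```

In this toy run the unbiased subset scores near its parameter count, while the subset that omits a real effect is heavily inflated, in line with the expectation result discussed below.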

## Practical use

A common misconception is that the "best" model is the one minimizing Cp. While it is true that, for independent Gaussian errors of constant variance, the model minimizing the MSPE is in some sense optimal, the same need not hold for the minimizer of Cp. Rather, because Cp is a random variable, it is important to consider its distribution; for example, one may form confidence intervals for Cp under its null distribution, that is, when the bias is zero.

Cp is similar to the Akaike information criterion and, as a reliable measure of the "goodness of fit" of a model, tends to be less dependent than R² on the number of effects in the model. Hence, Cp tends to find the best subset that includes only the important predictors of the dependent variable. Under a model not suffering from appreciable lack of fit (bias), Cp has expectation nearly equal to P; otherwise the expectation is roughly P plus a positive bias term. Nevertheless, even though its expectation is greater than or equal to P, nothing prevents Cp < P, or even Cp < 0, in extreme cases. Equally, it is a misconception that one should simply choose a subset with Cp approximately equal to P. A comparison of Cp values across candidate subsets is sketched below.
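
To make this concrete, here is a small continuation of the earlier sketch (again hypothetical; it reuses the `mallows_cp` helper and the simulated `X`, `y`, and `K` from above) that scores every subset containing the intercept and prints the best few alongside their parameter counts:

```python
from itertools import combinations

# Score every candidate subset that keeps the intercept (column 0).
results = []
for p in range(1, K):
    for combo in combinations(range(1, K), p):
        subset = [0, *combo]
        results.append((mallows_cp(X, y, subset), subset))

# Print the five lowest-Cp subsets next to their parameter counts P.
# Subsets containing both true predictors land near Cp ~ P; because Cp
# is a random variable, the exact minimizer varies from sample to sample.
for cp, subset in sorted(results)[:5]:
    print(f"P = {len(subset)}  Cp = {cp:6.2f}  columns = {subset}")
```

Inspecting such output, classically by plotting Cp against P, supports the kind of comparison among alternative subset regressions that Mallows intended, rather than a mechanical stopping rule.
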
## References

- Mallows, C. L. (1973), "Some Comments on Cp", *Technometrics*, 15, 661–675.
- Hocking, R. R. (1976), "The Analysis and Selection of Variables in Linear Regression", *Biometrics*, 32, 1–50.
- Daniel, C. and Wood, F. (1980), *Fitting Equations to Data*, Rev. Ed., New York: Wiley & Sons.

Wikipedia, the free encyclopedia © 2001–2006 Wikipedia contributors. This article is licensed under the GNU Free Documentation License.