Many statistical techniques are sensitive to the presence of outliers. For example, simple calculations of the mean and standard deviation may be distorted by a single grossly inaccurate data point.

## Definition

Grubbs' test (also known as the maximum normed residual test) is used to detect outliers in a univariate data set. It is based on the assumption of normality: one should first verify that the data can be reasonably approximated by a normal distribution before applying Grubbs' test.

### Test statistic

Grubbs' test is defined for the following hypotheses:

- $H_0$: There are no outliers in the data set
- $H_a$: There is at least one outlier in the data set

The Grubbs' test statistic is defined as:

$$G = \frac{\max_{i=1,\ldots,N} \left| Y_i - \bar{Y} \right|}{s}$$

with $\bar{Y}$ and $s$ denoting the sample mean and sample standard deviation, respectively. The Grubbs' test statistic is thus the largest absolute deviation from the sample mean in units of the sample standard deviation.

This is the two-sided version of the test. Grubbs' test can also be defined as one of the following one-sided tests.

### Test whether the minimum value is an outlier

$$G = \frac{\bar{Y} - Y_{\min}}{s}$$

with $Y_{\min}$ denoting the minimum value.

### Test whether the maximum value is an outlier

$$G = \frac{Y_{\max} - \bar{Y}}{s}$$

with $Y_{\max}$ denoting the maximum value.

## Critical region

For the two-sided test, the hypothesis of no outliers is rejected at significance level α if

$$G > \frac{N-1}{\sqrt{N}} \sqrt{\frac{t_{\alpha/(2N),\,N-2}^{2}}{N - 2 + t_{\alpha/(2N),\,N-2}^{2}}}$$

with $t_{\alpha/(2N),\,N-2}$ denoting the upper critical value of the t-distribution with N − 2 degrees of freedom and a significance level of α/(2N). For the one-sided tests, replace α/(2N) with α/N.

Grubbs' test detects one outlier at a time. This outlier is expunged from the dataset and the test is iterated until no outliers are detected. However, multiple iterations change the probabilities of detection, and the test should not be used for sample sizes of six or fewer, since it frequently tags most of the points as outliers.

## Related techniques

Several graphical techniques can, and should, be used to detect outliers. A simple run sequence plot, a box plot, or a histogram should show any obviously outlying points. A normal probability plot or lag plot may also be useful.

Checking for outliers should be a routine part of any data analysis. Potential outliers should be examined to see if they are possibly erroneous. If the data point is in error, it should be corrected if possible and deleted if it is not possible. If there is no reason to believe that the outlying point is in error, it should not be deleted without careful consideration. However, the use of more robust techniques may be warranted. Robust techniques will often downweight the effect of outlying points without deleting them.

## References
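As a concrete illustration, the test statistic and the critical-value formula can be sketched in Python. This is a minimal sketch, not a library implementation: the function names are illustrative, and the t-distribution quantile is passed in as an argument (the standard library has no t-distribution; one could obtain it from, e.g., `scipy.stats.t.ppf(1 - alpha / (2 * n), n - 2)`).

```python
import math
from statistics import mean, stdev

def grubbs_statistic(data):
    """Two-sided Grubbs statistic: G = max_i |Y_i - Ybar| / s.

    Returns (G, suspect), where suspect is the observation
    farthest from the sample mean.
    """
    ybar = mean(data)
    s = stdev(data)  # sample standard deviation (N - 1 denominator)
    suspect = max(data, key=lambda y: abs(y - ybar))
    return abs(suspect - ybar) / s, suspect

def grubbs_critical_value(n, t):
    """Two-sided critical value for G, given t = t_{alpha/(2N), N-2},
    the upper critical value of the t-distribution with N - 2 degrees
    of freedom at significance level alpha/(2N)."""
    return (n - 1) / math.sqrt(n) * math.sqrt(t * t / (n - 2 + t * t))

# For [1, 2, 3, 4, 10]: mean = 4, s = sqrt(12.5), so G = 6 / sqrt(12.5)
g, suspect = grubbs_statistic([1.0, 2.0, 3.0, 4.0, 10.0])
```

The hypothesis of no outliers would then be rejected when `g` exceeds `grubbs_critical_value(n, t)` for the chosen α. Note that this five-point data set is used only to show the arithmetic; per the caution above, the test should not actually be applied to samples of six or fewer points.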

- Grubbs, Frank (1969). "Procedures for Detecting Outlying Observations in Samples". *Technometrics* 11 (1): 1-21.
- Stefansky, W. (1972). "Rejecting Outliers in Factorial Designs". *Technometrics*: 469-479.

Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Monday May 19, 2008 at 05:43:59 PDT (GMT -0700)


Copyright © 2015 Dictionary.com, LLC. All rights reserved.