Definitions


Quantile

[kwon-tahyl, -til]
Quantiles are points taken at regular intervals from the cumulative distribution function of a random variable. Dividing ordered data into q essentially equal-sized subsets is the motivation for q-quantiles; the quantiles are the data values marking the boundaries between consecutive subsets. Put another way, the kth q-quantile is the value x such that the probability that the random variable will be less than x is at most k/q and the probability that it will be greater than or equal to x is at least 1 − k/q. There are q − 1 quantiles, with k an integer satisfying 0 < k < q.

Specialized quantiles

Some quantiles have special names:

  • The 2-quantile is the median.
  • The 3-quantiles are called tertiles or terciles.
  • The 4-quantiles are called quartiles.
  • The 5-quantiles are called quintiles.
  • The 10-quantiles are called deciles.
  • The 100-quantiles are called percentiles.

More generally, one can consider the quantile function of any distribution. It is defined for probabilities p between zero and one and is mathematically the inverse of the cumulative distribution function.
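As an illustration, the quantile function of the exponential distribution can be written in closed form by inverting its CDF. A small Python sketch (function names are illustrative):

```python
import math

def exp_cdf(x, lam=1.0):
    """CDF of the exponential distribution with rate lam: F(x) = 1 - exp(-lam*x)."""
    return 1.0 - math.exp(-lam * x)

def exp_quantile(p, lam=1.0):
    """Quantile function: the inverse of exp_cdf, solved in closed form."""
    return -math.log(1.0 - p) / lam

# The quantile function inverts the CDF:
p = 0.5
x = exp_quantile(p)  # median of Exp(1): ln 2 ≈ 0.693
assert abs(exp_cdf(x) - p) < 1e-12
```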

Some software programs (including Microsoft Excel) regard the minimum and maximum as the 0th and 100th percentiles, respectively; such terminology, however, is an extension beyond traditional statistical definitions. For an infinite population, the kth q-quantile is the data value at which the cumulative distribution function equals k/q. For a finite sample of size N, calculate N·k/q. If this is not an integer, round up to the next integer to get the appropriate sample number (assuming the samples are ordered by increasing value); if it is an integer, any value between that sample's value and the next sample's value can be taken as the quantile, and it is conventional (though arbitrary) to take the average of the two (see Estimating the quantiles below).

More formally: the kth q-quantile of a random variable X can be defined as the value x such that:

P(X ≤ x) ≥ p and P(X ≥ x) ≥ 1 − p, where p = k/q

or equivalently

P(X < x) ≤ p and P(X > x) ≤ 1 − p, where p = k/q

If, instead of using integers k and q, the quantile is based on a real number p with 0 < p < 1, then: the p-quantile of the distribution of a random variable X can be defined as the value(s) x such that:

P(X ≤ x) ≥ p and P(X ≥ x) ≥ 1 − p

or equivalently

P(X < x) ≤ p and P(X > x) ≤ 1 − p.
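These inequalities can be checked directly against an empirical distribution. A minimal Python sketch (the helper name and sample data are illustrative, not from the original):

```python
def is_p_quantile(x, data, p):
    """Check the defining inequalities P(X <= x) >= p and P(X >= x) >= 1 - p
    against the empirical distribution of `data`."""
    n = len(data)
    frac_le = sum(1 for v in data if v <= x) / n
    frac_ge = sum(1 for v in data if v >= x) / n
    return frac_le >= p and frac_ge >= 1 - p

data = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]
# Several values can satisfy the definition for the same p (here, the median):
assert is_p_quantile(8, data, 0.5)
assert is_p_quantile(9, data, 0.5)
assert is_p_quantile(10, data, 0.5)
```

This also shows why a convention (such as averaging) is needed to pick a single value when a whole range qualifies.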

An example

Consider the 10 data values {3, 6, 7, 8, 8, 10, 13, 15, 16, 20}.

  • The first quartile is determined by 10·(1/4) = 2.5, which rounds up to 3, so the first quartile is the third sample (counting from least to greatest): approximately a quarter of the samples lie below it. In this case, the third value is 7.
  • The second quartile (the same as the median) is determined by 10·(2/4) = 5, which is an integer, so the average of the fifth and sixth values is taken: (8 + 10)/2 = 9, though any value from 8 through 10 could be taken as the median. If the number of data values is odd, the median (2nd quartile) is the value found at sample (#values + 1)/2.
    So, for this example, if there had also been a value of 9 between 8 and 10, making 11 samples in total, then (11 + 1)/2 = 6, and the sixth sample (the value 9) would be the 2nd quartile: half of the samples have values greater than 9 and half have values less.
  • The third quartile is determined by 10·(3/4) = 7.5, which rounds up to 8; the eighth sample is 15.

The motivation for this method is that the first quartile should divide the data between the bottom quarter and top three-quarters. Ideally, this would mean 2.5 of the samples are below the first quartile and 7.5 are above, which in turn means that the third data sample is "split in two", making the third sample part of both the first and second quarters of data, so the quartile boundary is right at that sample.
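The procedure above can be sketched in Python (the function name is illustrative); it reproduces the three quartiles of the example data:

```python
import math

def q_quantile(sorted_data, k, q):
    """kth q-quantile by the rule above: compute N*k/q; if it is not an
    integer, round up to get the (1-based) sample number; if it is an
    integer, average that sample with the next."""
    n = len(sorted_data)
    pos = n * k / q
    if pos == int(pos):
        i = int(pos)
        return (sorted_data[i - 1] + sorted_data[i]) / 2
    return sorted_data[math.ceil(pos) - 1]

data = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]
print(q_quantile(data, 1, 4))  # first quartile: 7
print(q_quantile(data, 2, 4))  # median: 9.0
print(q_quantile(data, 3, 4))  # third quartile: 15
```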

Discussion

Standardized test results are commonly reported as a student scoring "in the 80th percentile", for example, as if the 80th percentile were an interval to score in, which it is not; one can score "at" some percentile, or between two percentiles, but not "in" one.

If a distribution is symmetric, then the median equals the mean (so long as the latter exists). But, in general, the median and the mean differ. For instance, a random variable with an exponential distribution has roughly a 63% chance of falling below its mean, because the exponential distribution has a long tail of positive values while its density is zero for negative values.
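The 63% figure follows from the exponential CDF: P(X < mean) = F(1/λ) = 1 − e^(−1), regardless of the rate λ. A quick check in Python:

```python
import math

# For Exp(rate lam): mean = 1/lam and CDF F(x) = 1 - exp(-lam*x), so
# P(X < mean) = F(1/lam) = 1 - exp(-1), independent of lam.
p = 1 - math.exp(-1)
print(round(p, 3))  # 0.632, i.e. roughly a 63% chance of falling below the mean
```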

Quantiles are useful measures because they are less sensitive to long-tailed distributions and outliers than moment-based statistics such as the mean.

Empirically, if the data you are analyzing are not actually distributed according to your assumed distribution, or if you have other potential sources for outliers that are far removed from the mean, then quantiles may be more useful descriptive statistics than means and other moment-related statistics.

Closely related is the subject of least absolute deviations, a method of regression that is more robust to outliers than is least squares, in which the sum of the absolute value of the observed errors is used in place of the squared error. The connection is that the mean is the single estimate of a distribution that minimizes expected squared error while the median minimizes expected absolute error. Least absolute deviations shares the ability to be relatively insensitive to large deviations in outlying observations, although even better methods of robust regression are available.

The quantiles of a random variable are generally preserved under increasing transformations, in the sense that, for example, if m is the median of a random variable X, then 2m is the median of 2X, unless an arbitrary choice has been made from a range of values to specify a particular quantile. Quantiles can also be used in cases where only ordinal data is available.
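A quick numerical check of this preservation property, using an odd-length sample so that the median is a single data value and no arbitrary choice from a range arises:

```python
data = sorted([5, 1, 9, 3, 7])   # odd count, so the median is a unique sample value
median = data[len(data) // 2]    # middle element: 5

# An increasing transformation (here, doubling) commutes with the median:
transformed = sorted(2 * x for x in data)
assert transformed[len(transformed) // 2] == 2 * median
```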

Estimating the quantiles

There are several methods for estimating the quantiles. Among common statistical packages, R offers a particularly broad selection, with nine estimation methods built into its quantile function.

Let N be the number of non-missing values of the sample, and let x_1, x_2, …, x_N be the ordered sample values, with x_1 the smallest. For the kth q-quantile, let p = k/q.

  • Empirical distribution function: x_j if g = 0, else x_{j+1}, where j is the integer part of N·p and g is the fractional part.
  • Empirical distribution function with averaging: (x_j + x_{j+1})/2 if g = 0, else x_{j+1}, where j is the integer part of N·p and g is the fractional part.
  • Weighted average: x_{j+1} + g·(x_{j+2} − x_{j+1}), where j is the integer part of (N − 1)·p and g is the fractional part. This method is used, for example, by the PERCENTILE function of Microsoft Excel.
  • Sample number closest to (N − 1)·p + 1: x_j if g < 0.5, else x_{j+1}, where j is the integer part of (N − 1)·p + 1 and g is the fractional part.
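The four estimators can be sketched in a single Python function (the name quantile_estimates is illustrative; note that indices in the text are 1-based while Python lists are 0-based):

```python
def quantile_estimates(x, p):
    """Apply the four estimators above to sorted data x, for 0 < p < 1."""
    n = len(x)
    out = {}

    # Empirical distribution function: x_j if g = 0, else x_{j+1}
    j, g = divmod(n * p, 1)
    j = int(j)
    out["edf"] = x[j - 1] if g == 0 else x[j]

    # Empirical distribution function with averaging (same j and g)
    out["edf_avg"] = (x[j - 1] + x[j]) / 2 if g == 0 else x[j]

    # Weighted average: x_{j+1} + g * (x_{j+2} - x_{j+1}), j from (N - 1)*p
    j, g = divmod((n - 1) * p, 1)
    j = int(j)
    out["weighted"] = x[j] + g * (x[j + 1] - x[j])

    # Sample number closest to (N - 1)*p + 1
    j, g = divmod((n - 1) * p + 1, 1)
    j = int(j)
    out["nearest"] = x[j - 1] if g < 0.5 else x[j]

    return out

data = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]
print(quantile_estimates(data, 0.5))
```

For the example data at p = 0.5 the methods disagree, which is exactly why software packages document which estimator they use.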


