What Is a Disadvantage of Using Range As a Measure of Dispersion?


A major disadvantage of using the range as a measure of dispersion is that it is sensitive to outliers in the data. The range considers only the smallest and largest values in the set.

If a data set contains outliers, so that the lowest or highest value lies far from almost every other value, the range may not be the best way to measure dispersion. For example, suppose one wanted to measure a student’s consistency on quizzes, and the student scored {40, 90, 91, 93, 95, 100} on six quizzes. The range would be 60 points, suggesting considerable inconsistency. Yet five of the six scores fall within 10 points of one another, showing that the student’s performance was actually quite consistent.

Other measures of dispersion, such as the interquartile range (the difference between the 25th and 75th percentiles), provide a better representation of spread when outliers are present. Standard deviation and average deviation are also commonly used measures of dispersion.
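The contrast between the range and the interquartile range can be seen with a short sketch, here using the quiz scores from the example above and only Python's standard library (the `inclusive` quantile method is one of several common conventions for computing percentiles and is assumed here for illustration):

```python
# A minimal sketch contrasting range with the interquartile range (IQR)
# on the quiz scores from the example above.
from statistics import quantiles

scores = [40, 90, 91, 93, 95, 100]

# The range looks only at the two extremes, so the single outlier (40)
# dominates it.
score_range = max(scores) - min(scores)  # 60

# The IQR spans the middle 50% of the data, ignoring the extremes.
# "inclusive" interpolation is one common percentile convention.
q1, _, q3 = quantiles(scores, n=4, method="inclusive")
iqr = q3 - q1

print(score_range)  # 60
print(iqr)          # 4.25
```

The range of 60 is driven entirely by the one outlying score, while the IQR of 4.25 reflects how tightly the remaining scores cluster.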