The level of measurement of a variable in mathematics and statistics is a classification that is used to describe the nature of information contained within numbers assigned to objects and, therefore, within the variable. The levels were proposed by Stanley Smith Stevens in his 1946 article On the theory of scales of measurement. According to Stevens' theory of scales, different mathematical operations on variables are possible, depending on the level at which a variable is measured.
Stevens proposed four levels of measurement, described below:
Interval and ratio variables are also grouped together as continuous variables.
In the paper in which Stevens introduced the classification scheme, he also proposed the definition that is widely cited in texts in some version: "Measurement is the assignment of numbers to objects or events according to a rule". This definition has received criticism on a number of grounds (e.g. Duncan, 1984; Michell, 1986, 1999). However, the scheme is widely used.
| Level | Can define | Relation or operation | Mathematical structure |
|---|---|---|---|
| Nominal | mode | equality (=) | set (no arithmetic operations are meaningful) |
| Ordinal | median | order (<) | totally ordered set |
| Interval | mean, standard deviation | subtraction (−) and weighted average | affine line |
| Ratio | geometric mean, coefficient of variation | addition (+) and multiplication (×) | field |
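The table can be illustrated with a short sketch. The data and the choice of one permissible central statistic per level are hypothetical examples, not from the article, using only Python's standard library:

```python
import statistics

# One permissible central statistic for each of Stevens' four levels,
# applied to hypothetical data (illustrative assumption, not a definitive recipe).

colors = ["red", "blue", "red", "green"]     # nominal: only equality is meaningful
print(statistics.mode(colors))               # mode -> "red"

ranks = [1, 2, 2, 3, 5]                      # ordinal: order is meaningful
print(statistics.median(ranks))              # median -> 2

temps_c = [10.0, 20.0, 30.0]                 # interval: differences are meaningful
print(statistics.mean(temps_c))              # mean -> 20.0

masses_kg = [1.0, 2.0, 4.0]                  # ratio: true zero, ratios meaningful
print(statistics.geometric_mean(masses_kg))  # geometric mean, approximately 2.0
```

Each statistic listed for a lower level remains permissible at every higher level; the table names only the strongest one each level adds.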
At the ordinal level of measurement, the numbers assigned to objects represent the rank order (1st, 2nd, 3rd, etc.) of the entities measured. The numbers are called ordinals, and the variables are called ordinal variables or rank variables. Comparisons of greater and less can be made, in addition to equality and inequality. However, operations such as conventional addition and subtraction are still meaningless. The corresponding variable can also be called an ordered categorical variable.
One can define quantiles, notably quartiles and percentiles, together with maximum and minimum, but no new measures of statistical dispersion beyond the nominal ones can be defined: one cannot define range or interquartile range, since one cannot subtract quantities.
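A small sketch (hypothetical helper and data, not from the article) shows why quantiles are permissible at the ordinal level: they need only sorting, never subtraction of values.

```python
def quantile_by_rank(values, q):
    """Return the element at rank floor(q * (n - 1)).

    Only the ordering of the values is used; no arithmetic is ever
    performed on the values themselves, so this is legitimate for
    ordinal data. (Illustrative rank-based definition, one of several.)
    """
    ordered = sorted(values)
    return ordered[int(q * (len(ordered) - 1))]

# Hypothetical survey responses: 1 = "very unhappy" ... 5 = "very happy"
satisfaction = [1, 2, 2, 3, 3, 3, 4, 5]
print(quantile_by_rank(satisfaction, 0.5))   # median -> 3
print(quantile_by_rank(satisfaction, 0.25))  # lower quartile -> 2
```

Computing the interquartile range, by contrast, would require subtracting the quartiles, which is exactly the operation the ordinal level does not license.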
At the interval level, ratios between numbers on the scale are not meaningful, so operations such as multiplication and division cannot be carried out directly. But ratios of differences can be expressed; for example, one difference can be twice another.
More subtly, while one can define moments about the origin, only central moments are useful, since the choice of origin is arbitrary and not meaningful. One can define standardized moments, since ratios of differences are meaningful, but one cannot define coefficient of variation, since the mean is a moment about the origin, unlike the standard deviation, which is (the square root of) a central moment.
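A minimal sketch of this point, using hypothetical Celsius readings: on an interval scale, ratios of differences survive an affine change of units, but ratios of the raw values do not.

```python
def c_to_f(c):
    """Affine (interval-preserving) conversion from Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

t1, t2, t3 = 10.0, 20.0, 40.0  # hypothetical Celsius readings

# Ratio of differences: identical in both unit systems, hence meaningful.
ratio_diff_c = (t3 - t2) / (t2 - t1)                                  # 2.0
ratio_diff_f = (c_to_f(t3) - c_to_f(t2)) / (c_to_f(t2) - c_to_f(t1))  # 2.0

# Ratio of raw values: depends on the arbitrary zero, hence meaningless.
ratio_raw_c = t2 / t1                  # 2.0 in Celsius...
ratio_raw_f = c_to_f(t2) / c_to_f(t1)  # ...but 1.36 in Fahrenheit
```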
A ratio measurement scale is one in which the ratio between any two measurements is meaningful. To achieve this a ratio scale has to have a non-arbitrary zero value. Then operations such as multiplication and division become meaningful as well. For a ratio scale one can thus say "This value is double this other value".
"If it's twice as cold today as it was yesterday," runs a popular joke, "and it was zero degrees yesterday, how cold is it today?" This illustrates the limitation of interval measurements such as Celsius and Fahrenheit temperature: by setting zero at an arbitrary point, they make it impossible to multiply and divide meaningfully.
Social variables measured at the ratio level include age, length of residence in a given place, number of organizations belonged to, and number of church attendances within a particular period of time.
In addition to the measures of statistical dispersion defined for interval variables, such as range and standard deviation, for ratio variables one can also define measures that require a ratio, such as studentized range or coefficient of variation.
In a ratio variable, unlike in an interval variable, the moments about the origin are meaningful, since the origin is not arbitrary.
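A short sketch (hypothetical data, standard-library only) of why the coefficient of variation belongs to the ratio level: it divides a central moment (the standard deviation) by a moment about the origin (the mean), so it is only meaningful when zero is non-arbitrary.

```python
import math
import statistics

def coefficient_of_variation(xs):
    """Standard deviation divided by the mean (illustrative definition)."""
    return statistics.stdev(xs) / statistics.mean(xs)

masses_kg = [2.0, 4.0, 6.0]  # ratio scale: mass has a true zero
cv_kg = coefficient_of_variation(masses_kg)  # 0.5

# Rescaling units (kg -> g) multiplies mean and stdev alike, so the CV
# is unchanged -- the dimensionless statistic is unit-invariant:
masses_g = [m * 1000 for m in masses_kg]
assert math.isclose(cv_kg, coefficient_of_variation(masses_g))
```

Applying the same statistic to an interval variable such as Celsius temperature would give a different answer in Fahrenheit, which is why the table restricts it to the ratio level.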
Duncan (1986) observed that nominal measurement, as included in Stevens' classification, is contrary to Stevens' own definition of measurement. Stevens (1975) said of his own definition that "the assignment can be any consistent rule. The only rule not allowed would be random assignment, for randomness amounts in effect to a nonrule". However, so-called nominal measurement involves arbitrary assignment, and the "permissible transformation" is the substitution of any number for any other. This is one of the points made in Lord's (1953) satirical paper On the Statistical Treatment of Football Numbers.
Among those who accept the classification scheme, there is also some controversy in the behavioural sciences over whether the mean is meaningful for ordinal measurement. In terms of measurement theory, it is not, because the arithmetic operations are not performed on numbers that are measurements in units, and so the results of the computations do not give numbers in units. However, many behavioural scientists use means for ordinal data anyway. This is often justified on the grounds that ordinal scales in behavioural science are really somewhere between true ordinal and interval scales: although the interval difference between two ordinal ranks is not constant, it is often of the same order of magnitude. For example, applications of measurement models in educational contexts often indicate that total scores have a fairly linear relationship with measurements across the range of an assessment. Thus, some argue that, so long as the unknown interval difference between ordinal scale ranks is not too variable, interval scale statistics such as means can meaningfully be used with ordinal scale variables. Statistical analysis software such as PSPP requires the user to select the appropriate measurement class for each variable, which helps prevent subsequent user errors from inadvertently producing meaningless analyses (for example, a correlation analysis with a variable at the nominal level).
L. L. Thurstone made progress toward developing a justification for obtaining interval-level measurements based on the law of comparative judgment. Further progress was made by Georg Rasch, who developed the probabilistic Rasch model which provides a theoretical basis and justification for obtaining interval-level measurements from counts of observations such as total scores on assessments.