An IEEE 754 single-precision floating-point number has 24 bits of mantissa, including the leading unit digit. The number 1 is represented with an unbiased exponent of 0 and a mantissa of 1.00000000000000000000000₂ in binary. The next largest representable number has an exponent of 0 and a mantissa of 1.00000000000000000000001₂. The difference between these numbers is 0.00000000000000000000001₂, or 2⁻²³. This is the machine epsilon of a floating point unit which uses IEEE single-precision floating-point arithmetic. In general, for a floating-point type with base b and a mantissa of p digits, the epsilon is ε_mach = b¹⁻ᵖ.
The machine epsilon is sometimes defined instead as the smallest positive number which, when added to 1, yields a result other than 1. Unlike the definition above, this value depends on the rounding mode. For IEEE single precision in the most common rounding mode (round to nearest, ties to even), this value is 2⁻²⁴ + 2⁻⁴⁷, or slightly more than half the value by the definition above. For rounding toward +∞ the value is the smallest representable positive number (2⁻¹⁴⁹, a denormal), while for rounding toward −∞ or toward zero it coincides with the earlier definition.
Some documents mistakenly use this definition when the other is intended. For example, both the GNU libc manual and Microsoft Visual C++ documentation define the constant FLT_EPSILON in this way, in conflict with the ISO C standard, which mandates the definition at the head of this article, and with their own implementations, which follow the standard.
Some formats supported by the processor might not be supported by the chosen compiler and operating system. Other formats might be emulated by the runtime library, including arbitrary-precision arithmetic available in some languages and libraries.
In a strict sense the term machine epsilon means the 1 + ε accuracy directly supported by the processor (or coprocessor), not the 1 + ε accuracy supported by a specific compiler for a specific operating system, unless that combination is known to use the best available format.
A trivial example is the machine epsilon for integer arithmetic on processors without floating point formats; it is 1, because 1+1=2 is the smallest integer greater than 1.
The following C program does not actually determine the machine epsilon; rather, it determines a number within a factor of two (one binary order of magnitude) of the true machine epsilon, using a linear search over the exponent.
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        float machEps = 1.0f;

        printf("current Epsilon, 1 + current Epsilon\n");
        do {
            printf("%G\t%.20f\n", machEps, (1.0f + machEps));
            machEps /= 2.0f;
            // If next epsilon yields 1, then break, because current
            // epsilon is the machine epsilon.
        } while ((float)(1.0 + (machEps / 2.0)) != 1.0);

        printf("\nCalculated Machine epsilon: %G\n", machEps);
        return 0;
    }
$ gcc machine_epsilon.c; ./a.out
current Epsilon, 1 + current Epsilon
Calculated Machine epsilon: 1.19209E-07
A similar Java method:
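A sketch of such a method, using the same halving loop as the C program above (the class and method names are illustrative):

```java
public class MachineEpsilon {
    // Halve the candidate until adding half of it to 1.0 is no longer
    // distinguishable from 1.0 in single-precision arithmetic.
    static float calculateMachineEpsilonFloat() {
        float machEps = 1.0f;
        while ((float) (1.0 + (machEps / 2.0)) != 1.0) {
            machEps /= 2.0f;
        }
        return machEps;
    }

    public static void main(String[] args) {
        System.out.println("Calculated machine epsilon: "
                + calculateMachineEpsilonFloat());
    }
}
```

The result agrees with Math.ulp(1.0f), which returns 2⁻²³ for single precision.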