A distinction is sometimes made between global randomness and local randomness. Most philosophical conceptions of randomness are "global": they are based on the idea that "in the long run" a sequence would look truly random, even if certain subsequences would not look random (in a "truly" random sequence of sufficient length, for example, it is probable that there would be long runs of nothing but zeros, though on the whole the sequence might be "random"). "Local" randomness refers to the idea that there can be minimum sequence lengths over which "random" distributions are approximated. Long stretches of the same digits, even those generated by "truly" random processes, would diminish the "local randomness" of a sample (a sample might be locally random only for sequences of 10,000 digits, for example; sequences of fewer than 1,000 digits might not appear "random" at all).
A sequence that exhibits a pattern is not thereby proven to be statistically non-random. According to principles of Ramsey theory, sufficiently large objects must necessarily contain a given structure ("complete disorder is impossible").
Contrast with algorithmic randomness.
Kendall and Smith's original four tests were hypothesis tests, which took as their null hypothesis the idea that each number in a given random sequence had an equal chance of occurring, and that various other patterns in the data should also be distributed equiprobably.
If a given sequence was able to pass all of these tests at a given significance level (generally 5%), then it was judged to be, in their words, "locally random". Kendall and Smith differentiated "local randomness" from "true randomness" in that many sequences generated by truly random methods might not display "local randomness" to a given degree: very large sequences might contain long runs of a single digit. Such runs might be "random" on the scale of the entire sequence, but in a smaller block they would not be "random" (the block would not pass the tests), and would make the sequence useless for a number of statistical applications.
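The style of test described above can be sketched as a chi-squared hypothesis test on digit frequencies. The helper below is a hypothetical illustration, not Kendall and Smith's original procedure: the function name is invented, and the 5% critical value for 9 degrees of freedom (16.919) is a standard table value assumed here.

```python
import random

# Hypothetical sketch of a frequency test: under the null hypothesis each
# digit 0-9 is equally likely, so we compare observed digit counts against
# the expected uniform counts with a chi-squared statistic and reject at
# the 5% significance level.

CHI2_CRIT_9DF_5PCT = 16.919  # 5% critical value, chi-squared, 9 degrees of freedom


def frequency_test(digits):
    """Return True if the digit sequence passes the frequency test at 5%."""
    n = len(digits)
    expected = n / 10
    counts = [0] * 10
    for d in digits:
        counts[d] += 1
    chi2 = sum((c - expected) ** 2 / expected for c in counts)
    return chi2 <= CHI2_CRIT_9DF_5PCT


sample = [random.randrange(10) for _ in range(10000)]
print(frequency_test(sample))        # a uniform sample should usually pass
print(frequency_test([7] * 10000))   # a constant sequence fails badly
```

Note that a "pass" only means the sequence was not rejected at the 5% level; by construction, about one truly random sequence in twenty will fail any single such test.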
As random number sets became more and more common, more tests of increasing sophistication were used. Some modern tests plot random digits as points in three-dimensional space, which can then be rotated to look for hidden patterns. In 1995, the statistician George Marsaglia created a set of tests known as the Diehard tests, which he distributed with a CD-ROM of 5 billion pseudorandom numbers.
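The value of such geometric tests can be illustrated with the classic RANDU generator (a sketch for illustration, not one of the Diehard tests): because RANDU's consecutive triples satisfy an exact linear relation modulo 2^31, points formed from three successive outputs fall on a small number of parallel planes in three dimensions.

```python
# Sketch: the RANDU generator (x <- 65539 * x mod 2**31) looks plausible
# under simple frequency tests, yet every consecutive triple satisfies
# x[k+2] = 6*x[k+1] - 9*x[k] (mod 2**31), so triples plotted as 3-D points
# lie on at most 15 parallel planes in the unit cube.

M = 2 ** 31


def randu(seed, n):
    """Generate n values from the RANDU linear congruential generator."""
    x = seed
    out = []
    for _ in range(n):
        x = (65539 * x) % M
        out.append(x)
    return out


xs = randu(1, 1000)
# The planar relation holds exactly for every consecutive triple:
assert all((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % M == 0
           for k in range(len(xs) - 2))
print("every RANDU triple satisfies the planar relation")
```

The relation follows from 65539 = 2^16 + 3: squaring gives 65539^2 = 6 * 65539 - 9 (mod 2^31), which is exactly the structure a rotated three-dimensional plot makes visible.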
Because pseudorandom numbers are produced not by "truly random" processes but by deterministic algorithms, statistical tests are the only means of verifying their "randomness". Over the history of random number generation, many sources of numbers thought to appear "random" under testing have later been discovered to be very non-random when subjected to certain types of tests. The notion of quasi-random numbers was developed to circumvent some of these problems, though pseudorandom number generators are still extensively used in many applications (even ones known to be extremely "non-random"), as they are "good enough" for most purposes.
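A minimal sketch of a quasi-random (low-discrepancy) sequence is the van der Corput sequence, which fills the unit interval far more evenly than independent random draws; this even coverage, rather than statistical unpredictability, is the property quasi-random numbers aim for. The function below is an illustrative implementation, not taken from any particular library.

```python
# The base-2 van der Corput sequence: reverse the binary digits of n
# about the radix point.  Successive values repeatedly bisect the unit
# interval, giving very even (low-discrepancy) coverage of [0, 1).

def van_der_corput(n, base=2):
    """Return the n-th van der Corput value in [0, 1)."""
    value, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        value += digit / denom
    return value


points = [van_der_corput(i) for i in range(1, 9)]
print(points)  # 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625
```

Note how the first values land at 1/2, then the quarters, then the eighths: the sequence is completely deterministic and would fail tests of statistical randomness, yet its evenness makes it valuable for tasks such as numerical integration.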