Large numbers are numbers that are significantly larger than those ordinarily used in everyday life, for instance in simple counting or in monetary transactions. The term typically refers to large positive integers, or more generally, large positive real numbers, but it may also be used in other contexts.
Very large numbers often occur in fields such as mathematics, cosmology and cryptography. Sometimes people refer to numbers as being "astronomically large". However, it is easy to mathematically define numbers that are much larger even than those used in astronomy.
Using scientific notation to handle large and small numbers
Scientific notation was created to handle the wide range of values that occur in scientific study. 1.0 × 10⁹, for example, means one billion, a 1 followed by nine zeros: 1 000 000 000, and 1.0 × 10⁻⁹ means one billionth, or 0.000 000 001. Writing 10⁹ instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is.
Adding a 0 at the end of a number multiplies it by 10; 100 is 10 times 10. In scientific notation, however, the exponent only increases by one, from 10¹ to 10².
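The same notation is built into most programming languages; a quick illustration in Python:

```python
# Scientific notation in Python: 1e9 and 1e-9 are float literals.
billion = 1.0e9
billionth = 1.0e-9
print(f"{billion:.1e}")    # 1.0e+09
print(f"{billionth:.1e}")  # 1.0e-09

# Appending a zero multiplies by ten, but the exponent grows by only one:
print(f"{10:.0e}")         # 1e+01
print(f"{100:.0e}")        # 1e+02
```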
Large numbers in the everyday world
Examples of large numbers describing everyday real-world objects are:
- the number of bits on a computer hard disk (as of 2008, typically about 10¹³, i.e. 500–1000 GB)
- the number of cells in the human body (more than 10¹⁴)
- the number of neuronal connections in the human brain (estimated at 10¹⁴)
- Avogadro's number (i.e. the number of atoms in 12 grams of carbon-12, approximately 6.022 × 10²³)
Astronomically large numbers
Other large numbers, as regards length and time, are found in astronomy. For example, the current Big Bang model suggests that the Universe is 13.7 billion years (4.3 × 10¹⁷ seconds) old, that the observable universe is 93 billion light years across (about 8.8 × 10²⁶ metres), and that it contains about 5 × 10²² stars, organized into around 125 billion (1.25 × 10¹¹) galaxies, according to Hubble Space Telescope observations. There are about 10⁸⁰ fundamental particles in the observable universe, by rough estimation.
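These figures can be sanity-checked with a few lines of arithmetic; the constants below (a Julian year of 365.25 days, 9.4607 × 10¹⁵ metres per light year) are standard approximations:

```python
# Sanity-checking the quoted astronomical figures; results are
# order-of-magnitude checks, not precision cosmology.
year_s = 365.25 * 24 * 3600       # seconds in a Julian year
light_year_m = 9.4607e15          # metres per light year

age_s = 13.7e9 * year_s           # age of the universe in seconds
diameter_m = 93e9 * light_year_m  # diameter of the observable universe

print(f"{age_s:.2e} s")           # ~4.3e17 seconds
print(f"{diameter_m:.2e} m")      # ~8.8e26 metres
```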
Combinatorial processes rapidly generate even larger numbers. The factorial function, which defines the number of permutations on a set of fixed objects, grows very rapidly with the number of objects. Stirling's formula gives a precise asymptotic expression for this rate of growth.
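A small sketch comparing Stirling's formula, n! ≈ √(2πn)·(n/e)ⁿ, with exact factorials shows how quickly the approximation becomes accurate:

```python
import math

# Stirling's formula: n! ~ sqrt(2*pi*n) * (n/e)**n.
def stirling(n: int) -> float:
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20):
    ratio = stirling(n) / math.factorial(n)
    print(n, f"ratio = {ratio:.5f}")   # climbs toward 1 as n grows
```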
Combinatorial processes generate very large numbers in statistical mechanics. These numbers are so large that they are typically only referred to using their logarithms.
Gödel numbers, and similar numbers used to represent bit-strings in algorithmic information theory, are very large, even for mathematical statements of reasonable length. However, some pathological numbers are even larger than the Gödel numbers of typical mathematical propositions.
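As a toy illustration (this is the common prime-power style of encoding, not Gödel's own scheme), the following sketch assigns every character string a unique integer; even a five-symbol statement receives a number with well over a hundred digits:

```python
import math

# A toy Goedel-style numbering: the i-th symbol, with character code c_i,
# contributes the factor p_i ** c_i, where p_i is the i-th prime.
# Unique factorization makes the encoding reversible, and the numbers
# grow enormous even for very short statements.

def first_primes(k: int) -> list[int]:
    """First k primes by trial division (fine for tiny k)."""
    found: list[int] = []
    n = 2
    while len(found) < k:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

def goedel_number(text: str) -> int:
    codes = [ord(ch) for ch in text]
    return math.prod(p ** c for p, c in zip(first_primes(len(codes)), codes))

print(len(str(goedel_number("1+1=2"))))   # well over 100 decimal digits
```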
Computers and computational complexity
Moore's Law, generally speaking, estimates that computers double in speed about every 18 months. This sometimes leads people to believe that eventually computers will be able to solve any mathematical problem, no matter how complicated. This is not the case: computers are fundamentally limited by the constraints of physics, and reasonable upper bounds can be placed on what they will ever achieve. Moreover, there are theoretical results showing that some problems are inherently beyond the reach of complete computational solution, no matter how powerful or fast the computation; see, for example, the halting problem.
Between 1980 and 2000, hard disk sizes increased from about 10 megabytes (10⁷ bytes) to over 100 gigabytes (10¹¹ bytes). A 100-gigabyte disk could store the given names of all of Earth's six billion inhabitants without using data compression. But what about a dictionary-on-disk storing all possible passwords containing up to 40 characters? Assuming each character equals one byte, there are about 2³²⁰ such passwords, which is about 2 × 10⁹⁶. In his paper Computational capacity of the universe, Seth Lloyd points out that if every particle in the universe could be used as part of a huge computer, it could store only about 10⁹⁰ bits, less than one millionth of the size such a dictionary would require.
Still, computers can easily be programmed to start creating and displaying all possible 40-character passwords one at a time. Such a program could be left to run indefinitely. Assuming a modern PC could output one billion strings per second, it would take one billionth of 2 × 10⁹⁶ seconds, or 2 × 10⁸⁷ seconds, to complete its task, which is about 6 × 10⁷⁹ years. By contrast, the universe is estimated to be 13.7 billion (1.37 × 10¹⁰) years old. Computers will presumably continue to get faster, but the same paper mentioned before estimates that the entire universe functioning as a giant computer could have performed no more than 10¹²⁰ operations since the Big Bang. That is about 10²⁴ times the computation required to display all 40-character passwords, but computing all 50-character passwords would outstrip the estimated computational potential of the entire universe.
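The arithmetic above is easy to reproduce exactly, since Python integers are unbounded; the 10⁹ strings-per-second rate is the text's assumption:

```python
# Reproducing the password arithmetic with exact integers.
count = 256 ** 40              # one byte per character, 40 characters
assert count == 2 ** 320       # 256 = 2**8, so 256**40 = 2**320
print(f"{count:.3e}")          # about 2e+96

rate = 1e9                     # strings per second (the text's assumption)
seconds = count / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.1e} s ~ {years:.1e} years")   # ~2e87 s, ~7e79 years
```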
Problems like this grow exponentially in the number of computations they require, and are one reason why exponentially difficult problems are called "intractable" in computer science: for even small sizes like the 40 or 50 characters described earlier, the number of computations required exceeds even theoretical limits on mankind's computing power. The traditional division between "easy" and "hard" problems is thus drawn between problems that do and do not require exponentially increasing resources to solve.
Such limits are an advantage in cryptography, since any cipher-breaking technique that requires more than, say, the 10¹²⁰ operations mentioned before will never be feasible. Such ciphers must instead be broken by finding efficient techniques unknown to the cipher's designer. Likewise, much of the research throughout all branches of computer science focuses on finding efficient solutions to problems that work with far fewer resources than are required by a naïve solution. For example, one way of finding the greatest common divisor of two 1000-digit numbers is to compute all their factors by trial division; this would take up to about 2 × 10⁵⁰⁰ division operations, far too many to contemplate. But the Euclidean algorithm, using a much more efficient technique, takes only a fraction of a second to compute the GCD of even huge numbers such as these.
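A minimal sketch of the Euclidean algorithm; on 1000-digit (≈3322-bit) random inputs it finishes essentially instantly:

```python
import math
import random

# Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
# Each pair of steps at least halves b, so 1000-digit inputs need only
# a few thousand division steps, not ~10**500 trial divisions.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

a = random.getrandbits(3322)   # about 1000 decimal digits
b = random.getrandbits(3322)
print(gcd(a, b) == math.gcd(a, b))   # True: matches the stdlib result
```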
As a general rule, then, PCs in 2005 can perform 2⁴⁰ calculations in a few minutes. A few thousand PCs working for a few years could solve a problem requiring 2⁶⁴ calculations, but no amount of traditional computing power will solve a problem requiring 2¹²⁸ operations (which is about what would be required to break the 128-bit SSL commonly used in web browsers, assuming the underlying ciphers remain secure). Limits on computer storage are comparable. Quantum computers may allow certain problems to become feasible, but have practical and theoretical challenges which may never be overcome.
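The gulf between 2⁴⁰, 2⁶⁴ and 2¹²⁸ is easy to quantify, assuming a nominal (and assumed, round) 10⁹ operations per second:

```python
# Time scales for 2**40, 2**64 and 2**128 operations at an assumed
# rate of 10**9 operations per second.
ops_per_second = 1e9
year_s = 365.25 * 24 * 3600
for k in (40, 64, 128):
    seconds = 2 ** k / ops_per_second
    print(f"2**{k}: {seconds:9.2e} s = {seconds / year_s:9.2e} years")
# 2**40 takes minutes; 2**64 takes centuries; 2**128 takes ~10**22 years.
```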
Examples of numbers, in numerical order
- 10¹⁰ (10,000,000,000), called "10 billion" (or traditionally, but now rarely, "10,000 million" in the U.K. - see Long and short scales)
- googol = 10¹⁰⁰, i.e. 1 followed by 100 zeros
- googolplex = 10^googol = 10^(10^100). It is the number of states a system can be in that consists of 10⁹⁸ particles, each of which can be in a googol states. Alternatively, it is the number of states a system can be in that consists of a googol particles, each of which can be in 10 states, or a system of 3.32 × 10¹⁰⁰ particles, each of which has 2 possible states.
- centillion = 10³⁰³ or 10⁶⁰⁰, depending on number naming system (short or long scale)
- Skewes' numbers: the first is approximately 10^(10^(10^34)), the second 10^(10^(10^964))
The total amount of printed material in the world is roughly 1.6 × 10¹⁸ bits; therefore the contents can be represented by a number which is roughly 2^(1.6 × 10¹⁸) ≈ 10^(4.8 × 10¹⁷).
The first number is much larger than the second, due to the greater height of the power tower, and in spite of the small entries 1.1 (note, however, that if these entries are made 1 or less, the result changes drastically). Comparing the magnitude of each successive exponent in such a number shows a clear hierarchy of effects on the final exponent: in a number of the form a^(b^c), the top exponent c dominates the scale of the final exponent, the next exponent b contributes only through its logarithm, and the base a contributes only a tiny factor. This clearly shows the importance of the highest exponent in the stack.
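Power towers of this size cannot be evaluated directly, but they can be compared by peeling off logarithms; the specific towers below are chosen for illustration, not taken from the text:

```python
# Compare two power towers by taking log10 twice instead of evaluating
# them (they are far too large to evaluate). Illustrative values:
#   T1 = 10**10**10**10**1.1   (taller tower, small top entry 1.1)
#   T2 = 10**10**1000          (shorter tower, large top entry)
# log10(log10(T1)) = 10**10**1.1, a 13-digit number;
# log10(log10(T2)) = 1000.
t1_loglog = 10 ** 10 ** 1.1   # about 3.9e12
t2_loglog = 1000.0
print(t1_loglog > t2_loglog)  # True: the taller tower wins by far
```

This mirrors the point above: height of the tower matters far more than the size of its entries, as long as each entry exceeds 1.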
Systematically creating ever faster increasing sequences
Let a strictly increasing integer sequence f(n) = f_0(n) (n ≥ 1) be given (written as a function, for conveniently writing the functional powers f^m). Then a sequence of sequences can be found by f_k(n) = f_{k-1}^n(n) (the n-th functional power of f_{k-1}, applied to n), from which we can select a "cross-sequence", e.g. by taking the 10th element f_k(10) of each.
Together this is a process of creating a new sequence from a given one. This can be repeated (i.e., we can apply the same construction to the new sequence), and again we can select from the resulting matrix of numbers a single sequence, by taking the 10th element of each row. We can apply the same whole process again and again.
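A tiny sketch of this process, using the modest starting sequence f(n) = 2n (anything faster overflows immediately); the names are illustrative, not from the text:

```python
# From a sequence f, form f_k(n) = f_{k-1}^n(n) (n-fold functional
# iteration, applied to n). Even with f(n) = 2*n the values explode,
# so only tiny arguments are printed.

def iterate(f, times):
    """Return f composed with itself `times` times."""
    def g(x):
        for _ in range(times):
            x = f(x)
        return x
    return g

def next_row(f):
    """From f_{k-1} build f_k(n) = f_{k-1}^n(n)."""
    return lambda n: iterate(f, n)(n)

f0 = lambda n: 2 * n          # modest starting sequence
f1 = next_row(f0)             # f1(n) = 2**n * n
f2 = next_row(f1)             # already grows explosively
print(f0(10), f1(3), f2(2))   # 20 24 2048
```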
Starting from f(n) = 10ⁿ = (10→n), this process corresponds to inserting an element 10 in the Conway chain before the variable n, which is at the end of the chain: we get f_k(n) = (10→n→k), and the new sequence selected from the matrix is that of which the kth element is (10→10→k). (See also Knuth's up-arrow notation and Conway chained arrow notation.)
Repeating this process we get the sequences (10→10→n→k) for successive values of k, and again selecting the 10th element of each we get a single sequence whose kth element is (10→10→10→k).
Repeating this whole process we get ever longer chains. Selecting n = 10 we get the sequence of (10→10), (10→10→10), (10→10→10→10), ... This can be used as a starting sequence to apply the process again, etc. Even very early values here are enormous: (10→10) is 10 billion, so applying this sequence to its own first element already yields a Conway chain of length 10 billion plus one.
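For three-element chains, (a→n→k) coincides with Knuth's up-arrow a ↑ᵏ n, which can be evaluated directly for tiny arguments; a minimal recursive sketch:

```python
# Knuth's up-arrow evaluated recursively; for three-element chains,
# (a -> n -> k) = a followed by k up-arrows and n. Only tiny arguments
# are feasible: the values explode beyond anything a computer can hold.
def arrow(a: int, n: int, k: int) -> int:
    if k == 1:          # a with one arrow is ordinary exponentiation
        return a ** n
    if n == 1:          # a ^...^ 1 = a for any number of arrows
        return a
    return arrow(a, arrow(a, n - 1, k), k - 1)

print(arrow(10, 2, 2))  # (10 -> 2 -> 2) = 10^^2 = 10**10
print(arrow(2, 3, 3))   # (2 -> 3 -> 3) = 2^^4 = 65536
```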
Each sequence in this whole process can be identified by its order type in the process:
- (10→n→k) is the sequence with index n with order type k - 1
- (10→10→n→k) is the sequence with index n with order type ω + k - 1
- (10→10→10→n→k) is the sequence with index n with order type 2ω + k - 1
- (10→10), (10→10→10), (10→10→10→10),... is the sequence with order type ω²