# Gambler's fallacy

The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the false belief that if deviations from expected behaviour are observed in repeated independent trials of some random process then these deviations are likely to be evened out by opposite deviations in the future. For example, if a fair coin is tossed repeatedly and tails comes up a larger number of times than is expected, a gambler may incorrectly believe that this means that heads is more likely in future tosses. Such an event is often referred to as being "due". This is an informal fallacy.

The gambler's fallacy implicitly involves an assertion of negative correlation between trials of the random process and therefore involves a denial of the exchangeability of outcomes of the random process.

The inverse gambler's fallacy is the belief that an unlikely outcome of a random process (such as rolling double sixes on a pair of dice) implies that the process is likely to have occurred many times before reaching that outcome.

## An example: coin-tossing

The gambler's fallacy can be illustrated by considering the repeated toss of a fair coin. With a fair coin, the outcomes in different tosses are statistically independent and the probability of getting heads on a single toss is exactly $1/2$ (one in two). It follows that the probability of getting two heads in two tosses is $1/4$ (one in four) and the probability of getting three heads in three tosses is $1/8$ (one in eight). In general, if we let $A_i$ be the event that toss $i$ of a fair coin comes up heads, then we have,

$\Pr\left(\bigcap_{i=1}^n A_i\right)=\prod_{i=1}^n \Pr\left(A_i\right)=\frac{1}{2^n}$.
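This product rule can be checked with exact arithmetic; the following is a small sketch using Python's `fractions` module (the function name is illustrative, not from the text):

```python
from fractions import Fraction

def prob_all_heads(n):
    """Probability that n independent tosses of a fair coin all come up
    heads: the product of n independent 1/2 probabilities, i.e. 1/2**n."""
    p_heads = Fraction(1, 2)  # Pr(A_i) for a fair coin
    result = Fraction(1)
    for _ in range(n):
        result *= p_heads  # independence: multiply the per-toss probabilities
    return result

# Matches the values quoted in the text.
print(prob_all_heads(2))  # 1/4
print(prob_all_heads(3))  # 1/8
print(prob_all_heads(5))  # 1/32
```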

Now suppose that we have just tossed four heads in a row, so that if the next coin toss were also to come up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is only $1/32$ (one in thirty-two), a believer in the gambler's fallacy might believe that this next flip is less likely to be heads than to be tails. However, this is not correct, and is a manifestation of the gambler's fallacy. The probability that the next toss is a head is in fact,

$\Pr\left(A_5 \mid A_1 \cap A_2 \cap A_3 \cap A_4\right)=\Pr\left(A_5\right)=1/2$.

While a run of five heads has probability 1 in 32 (0.03125), that probability applies before the coin is first tossed. After the first four tosses the results are no longer unknown, so they no longer contribute any uncertainty. Reasoning that the next toss is more likely to be a tail than a head because of the past tosses, as if a run of luck in the past somehow influences the odds in the future, is the fallacy.
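The claim that four heads in a row leave the next toss at exactly 1/2 can also be illustrated empirically. Below is a quick Monte Carlo sketch (the function name and parameters are mine, not from the text): it generates fair-coin sequences, conditions on the first four tosses being heads, and estimates the probability that the fifth is also heads.

```python
import random

def next_toss_after_streak(trials=50_000, streak=4, seed=0):
    """Among simulated sequences that begin with `streak` heads,
    estimate the probability that the following toss is also heads."""
    rng = random.Random(seed)
    hits = total = 0
    while total < trials:
        tosses = [rng.random() < 0.5 for _ in range(streak + 1)]
        if all(tosses[:streak]):    # condition on four heads in a row
            total += 1
            hits += tosses[streak]  # was the fifth toss also heads?
    return hits / total

# The estimate converges to 0.5, not to something below it.
print(next_toss_after_streak())
```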

## Other examples

We can see from the above that, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152. However, the probability of flipping a head after having already flipped 20 heads in a row is simply 1 in 2. This follows directly from the independence of the tosses, and can also be derived as an application of Bayes' theorem.

Some lottery players will choose the same numbers every time, while others intentionally change their numbers, but both strategies are equally likely to win any individual lottery draw. Copying the numbers that won the previous draw gives the same probability, although a rational gambler might attempt to predict other players' choices and then deliberately avoid those numbers. Low numbers (below 31, and especially below 12) are popular because people play birthdays as their "lucky numbers"; hence a win in which these numbers are over-represented is more likely to result in a shared payout.

A joke told among mathematicians demonstrates the nature of the fallacy. When flying on an aircraft, a man decides always to bring a bomb with him. "The chances of an aircraft having a bomb on it are very small," he reasons, "and certainly the chances of having two are almost none!".

A similar example is in the film The World According to Garp when the hero Garp decides to buy a house a moment after a small plane crashes into it, reasoning that the chances of another plane hitting the house have just dropped to zero.

## Non-examples of the fallacy

There are many scenarios where the gambler's fallacy might superficially seem to apply but does not. When the probabilities of different events are not independent, the probability of future events can change based on the outcome of past events (see sampling without replacement). Formally, the system is said to have memory. An example is drawing cards without replacement: once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be of another rank. Assuming a deck with no jokers and that a jack was the first card drawn, the probability of drawing a jack decreases from 4/52 (7.69%) to 3/51 (5.88%), while the probability of drawing any other given rank increases from 4/52 (7.69%) to 4/51 (7.84%). This effect is the basis of card counting in blackjack.
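The without-replacement arithmetic can be verified exactly; here is a small sketch using Python's `fractions` module for the 52-card, no-joker deck described above (variable names are illustrative):

```python
from fractions import Fraction

DECK_SIZE = 52
PER_RANK = 4  # four suits per rank

# Before any card is drawn: probability the first card is a jack.
p_jack_first = Fraction(PER_RANK, DECK_SIZE)             # 4/52, about 7.69%

# After a jack is removed, 51 cards remain, only 3 of them jacks.
p_jack_second = Fraction(PER_RANK - 1, DECK_SIZE - 1)    # 3/51, about 5.88%

# Any other given rank still has all four cards among the remaining 51.
p_other_rank_second = Fraction(PER_RANK, DECK_SIZE - 1)  # 4/51, about 7.84%

print(float(p_jack_first), float(p_jack_second), float(p_other_rank_second))
```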

The outcome of future events can also be affected if external factors are allowed to change the probability of the events (e.g. changes in the rules of a game affecting a sports team's performance levels). Additionally, an inexperienced player's success may decrease after opposing teams discover their weaknesses and exploit them; the player must then attempt to compensate and randomize their strategy (see game theory).

Many riddles trick the reader into believing that they are examples of the gambler's fallacy, such as the Monty Hall problem.

## Non-example: unknown probability of event

When the probability of repeated events is not known, such as when one tosses a coin that may not be fair (i.e. it may be biased), outcomes will not be statistically independent. In the case of coin tossing, as a run of heads gets longer and longer, the likelihood that the coin is biased increases. For example, if one flips a coin 21 times in a row and obtains 21 heads, one might rationally conclude that the coin is very likely biased, and hence that future flips of this coin are also highly likely to be heads. In fact, Bayesian inference can be used to show that when the long-run proportions of the different outcomes are unknown but exchangeable (meaning that the random process which generates them may be biased, but is equally likely to be biased in any direction), previous observations indicate the likely direction of the bias, so the outcome that has occurred most often in the observed data is the most likely to occur again.
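This Bayesian point can be made concrete with a Beta-Bernoulli sketch. The uniform Beta(1, 1) prior on the coin's heads probability is my assumption (the text specifies no prior); under it, the posterior predictive probability of heads is given by Laplace's rule of succession:

```python
from fractions import Fraction

def posterior_predictive_heads(heads, tails, a=1, b=1):
    """Pr(next toss is heads) after observing the given counts, under a
    Beta(a, b) prior on the coin's unknown heads probability.
    With a = b = 1 this is Laplace's rule of succession."""
    return Fraction(heads + a, heads + tails + a + b)

# With no data, the prediction is the uninformed 1/2.
print(posterior_predictive_heads(0, 0))   # 1/2

# After 21 heads and no tails, heads is strongly favoured: 22/23.
print(posterior_predictive_heads(21, 0))  # 22/23
```

Note the contrast with the fair-coin case: once the bias itself is uncertain, a long run of heads makes heads *more* likely on the next toss, not less.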

## Psychology behind the fallacy

Amos Tversky and Daniel Kahneman proposed that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic. According to this view, "after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red", so people expect that a short run of random outcomes should share the properties of a longer run, specifically in that deviations from the average should balance out. When people are asked to make up a random-looking sequence of coin tosses, they tend to produce sequences in which the proportion of heads to tails stays closer to 0.5 in any short segment than would be predicted by chance; Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones.

The representativeness heuristic is also cited behind the related phenomenon of the clustering illusion, according to which people see streaks of random events as being non-random when such streaks are actually much more likely to occur in small samples than people expect.