Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. However, it was first analyzed in a philosophy paper by Robert Nozick in 1969, which spread it through the philosophical community, and it appeared in Martin Gardner's Scientific American column in 1974. Today it is a much-debated problem in the philosophical branch of decision theory but has received little attention from the mathematical side.
The player of the game is presented with two opaque boxes, labeled A and B. The player is permitted to take the contents of both boxes, or of box B alone. (The option of taking only box A is ignored, for reasons that will soon be obvious.) Box A contains $1,000. The contents of box B, however, are determined as follows: at some point before the start of the game, the Predictor makes a prediction as to whether the player will take just box B, or both boxes. If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000.
By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined. That is, box B contains either $0 or $1,000,000 before the game begins, and once the game begins even the Predictor is powerless to change the contents of the boxes. Before the game begins, the player is aware of all the rules of the game, including the two possible contents of box B, the fact that its contents are based on the Predictor's prediction, and the Predictor's near-infallible accuracy. The only information withheld from the player is what prediction the Predictor made, and thus what the contents of box B are.
| Predicted choice | Actual choice | Payout |
|---|---|---|
| A and B | A and B | $1,000 |
| A and B | B only | $0 |
| B only | A and B | $1,001,000 |
| B only | B only | $1,000,000 |
The first strategy argues for taking both boxes: the contents of box B are already fixed, so whatever the Predictor has done, taking both boxes yields $1,000 more than taking only box B. The second strategy suggests taking only box B. By this strategy, we can ignore the possibilities that return $0 and $1,001,000, as they both require that the Predictor has made an incorrect prediction, and the problem states that the Predictor is almost never wrong. Thus the choice becomes whether to receive $1,000 (both boxes) or $1,000,000 (only box B), so taking only box B is better.
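To make the expected-value comparison concrete, here is a minimal sketch in Python. The payoffs restate the table above; the 99% accuracy figure is an illustrative assumption (the problem says only that the Predictor is almost never wrong), and the variable and function names are not part of the problem statement.

```python
# Minimal sketch of the expected-value argument for taking only box B.
# ACCURACY is an assumed figure; the payoffs restate the table above.

ACCURACY = 0.99  # assumed probability that the Predictor is correct

# payout[(predicted choice, actual choice)] in dollars
payout = {
    ("both", "both"): 1_000,
    ("both", "B only"): 0,
    ("B only", "both"): 1_001_000,
    ("B only", "B only"): 1_000_000,
}

def expected_payout(choice):
    """Expected payout of a choice when the Predictor foresees it with
    probability ACCURACY and mispredicts otherwise."""
    other = "B only" if choice == "both" else "both"
    return ACCURACY * payout[(choice, choice)] + (1 - ACCURACY) * payout[(other, choice)]

print(expected_payout("B only"))  # ~990,000 -- take only box B
print(expected_payout("both"))    # ~11,000  -- take both boxes
```

Any accuracy close to 1 gives the same ordering: taking only box B has the far higher expected payout.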
In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."
Many argue that the paradox is primarily a matter of conflicting decision-making models. Using the expected utility hypothesis will lead one to believe that one should expect the most utility (or money) from taking only box B. However, if one uses the dominance principle, one would expect to benefit most from taking both boxes.
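Put side by side, the two principles disagree: the expected-utility calculation sketched above favours one-boxing, while the dominance comparison below, again an illustrative sketch using the stated payoffs, favours two-boxing, since for either possible content of box B taking both boxes pays exactly $1,000 more.

```python
# Minimal sketch of the dominance argument: hold the (already fixed)
# contents of box B constant and compare the two available choices.

for box_b in (0, 1_000_000):      # the two possible contents of box B
    take_both = box_b + 1_000     # box A always holds $1,000
    take_b_only = box_b
    print(box_b, take_both - take_b_only)  # the difference is $1,000 either way
```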
Some argue that Newcomb's Problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem and therefore logically there can be no free will. However, free will is also defined in the problem; otherwise the chooser is not really making a choice.
Other philosophers have proposed many solutions to the problem, many eliminating its seemingly paradoxical nature:
Some suggest a rational person will choose both boxes and an irrational person will choose just the one; rational people therefore fare better, since the Predictor cannot actually exist. Others have suggested that an irrational person will do better than a rational one, and interpret the paradox as showing how people can be punished for making rational decisions.
It is possible to create a predictor similar to the one proposed in the problem through the use of memory-blocking drugs such as Versed. Under such drugs, subjects are unable to lay down new memories, so it would be possible to run a subject through the problem a large number of times, producing for many subjects a highly accurate prediction of what they will do on the next iteration, though today's drugs would leave the subject in a different mental state during the drugged trials than during the non-drugged "real" experiment. This technique would fail with subjects who decide to deliberately act randomly in their response.
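A rough simulation illustrates the point. In the sketch below, the predictor simply reports the majority choice observed over many rehearsal trials; the trial counts, the two example choosers, and the function names are illustrative assumptions rather than part of the thought experiment. Such a predictor is essentially perfect against a consistent chooser and no better than chance against one who randomizes.

```python
import random
from collections import Counter

def predict(choose, n_trials=100):
    """Predict the 'real' choice as the majority choice seen across
    n_trials rehearsals (the drugged, unremembered runs)."""
    observed = Counter(choose() for _ in range(n_trials))
    return observed.most_common(1)[0][0]

def accuracy(choose, n_experiments=1_000):
    """Fraction of experiments in which the rehearsal-based prediction
    matches the choice made in the final, remembered run."""
    hits = sum(predict(choose) == choose() for _ in range(n_experiments))
    return hits / n_experiments

consistent_chooser = lambda: "B only"                          # always one-boxes
random_chooser = lambda: random.choice(["B only", "A and B"])  # flips a coin

print(accuracy(consistent_chooser))  # 1.0 -- rehearsals predict a consistent chooser
print(accuracy(random_chooser))      # ~0.5 -- deliberate randomness defeats the predictor
```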
Others have suggested that in a world with perfect predictors (or time machines, since a time machine could be the mechanism for making the prediction) causation can go backwards. If a person truly knows the future, and that knowledge affects his actions, then events in the future will be causing effects in the past. The Chooser's choice will have already caused the Predictor's action. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will and the Chooser will do whatever he is fated to do. Others conclude that the paradox shows that it is impossible to ever know the future. On this reading, the paradox is a restatement of the old contention that free will and determinism are incompatible, since determinism enables the existence of perfect predictors. Some philosophers argue this paradox is equivalent to the grandfather paradox. Put another way, the paradox presupposes a perfect predictor, implying the "chooser" is not free to choose, yet simultaneously presumes a choice can be debated and decided. This suggests to some that the paradox is an artifact of these contradictory assumptions.
Note, however, that Nozick's exposition specifically excludes backward causation (such as time travel) and requires only that the predictions be of high accuracy, not that they are absolutely certain to be correct. So the considerations just discussed are irrelevant to the paradox as seen by Nozick, which focuses on two principles of choice, one probabilistic and the other causal; assuming backward causation removes any conflict between these two principles.
Newcomb's paradox can also be related to the question of machine consciousness, specifically whether a perfect simulation of a person's brain will generate the consciousness of that person. Suppose the Predictor is a machine that arrives at its prediction by simulating the brain of the Chooser as he confronts the problem of which box to choose. If that simulation generates the consciousness of the Chooser, then the Chooser cannot tell whether he is standing in front of the boxes in the real world or in the virtual world generated by the simulation. The "virtual" Chooser would thus tell the Predictor which choice the "real" Chooser is going to make.