This is different from the current, technologically achievable concept of virtual reality. Virtual reality is easily distinguished from the experience of "true" reality; participants are never in doubt about the nature of what they experience. Simulated reality, by contrast, would be hard or impossible to distinguish from "true" reality.
The idea of a simulated reality raises several questions:
This category subdivides into two further types:
Also worth mentioning is the possibility of a completely virtual person (born in the simulation) somehow becoming self-aware (after "waking up") and seeking to escape the simulation, eventually succeeding in being transferred into an outer-reality person (transcendent to the simulated world). This option can be related to Gurdjieff's teaching in the Fourth Way that "humans are not born with a soul. Rather, a man must create a soul through the course of his life".
This "creation of a soul" for a virtual person (by nature soulless, being part of the Program) would ultimately mean exiting (emigrating from) the simulation and being transformed on exit into a real, outer-reality person, assuming the outer reality is a realm of Spirit. The (right) "course of life" in the simulation would then be only the preparation for that final act of emigration (the transfer and its associated transformation).
In this case, since the emigrating inhabitant of the simulation did not have an associated outer-reality person (a user with a "real body"), this virtual person would be transferred either into a new outer-reality person (assuming that is possible) or into an already existing one, who may or may not be a player of the simulation. If that outer-reality person is a player, then as a user he would previously have been associated with some other inhabitant of the simulated world. In "taking over" (or merging with) the special emigrating inhabitant, he could choose to destroy that other, old inhabitant, or abandon him (leaving him in the simulated world without a user, temporarily or permanently). Alternatively, if he neither destroys nor abandons the old inhabitant but chooses to keep playing the simulation through that same inhabitant (the one that did not emigrate), he would now do so as a 'transformed' user: 'enriched' with the emigrated virtual person, or even completely being that previously virtual person (if that was chosen and is possible), and as such continuing to play the simulation through what is, to him, a 'new' virtual person.
And the outer-reality person (whose self is transcendent to the simulated world) may be 'something' completely indescribable from the point of view of the simulated world, but as a self (a soul) it essentially emanates from the Spirit, with a 'personality' that manifests the Spirit.
An intermingled simulation supports both types of consciousness: "players" from the outer reality who are visiting (as a brain-computer interface simulation) or emigrating, and virtual-people who are natives of the simulation and hence lack any physical body in the outer reality.
The Matrix movies feature an intermingled type of simulation: they contain not only human minds (with their physical bodies remaining outside), but also sentient software programs that govern various aspects of the computed realm.
Then the ultimate question is: if one accepts that theses 1, 2, and 3 are at least possible, which of the following is more likely?
In greater detail, his argument attempts to prove the trichotomy, that:
Bostrom's argument uses the premise that, given sufficiently advanced technology, it is possible to simulate on a computer entire inhabited planets, larger habitats, or even entire universes as quantum simulations in time/space pockets, including all the people on them, and that simulated people can be fully conscious and are as much persons as non-simulated people.
A particular case provided in the original paper poses the scenario where we assume that the human race could reach such a technological level without destroying themselves in the process (i.e. we deny the first hypothesis); and that once we reached such a level we would still be interested in history, the past, and our ancestors, and that there would be no legal or moral strictures on running such simulations (we deny the second hypothesis)—then
Assumptions as to whether the human race (or another intelligent species) could reach such a technological level without destroying themselves depend greatly on the value of the Drake equation, which gives the number of intelligent technological species communicating via radio in a galaxy at any given point in time. The expanded equation looks to the number of posthuman civilizations that ever would exist in any given universe. If the average for all universes, real or simulated, is greater than or equal to one such civilization existing in each universe's entire history, then odds are rather overwhelmingly in favor of the proposition that the average civilization is in a simulation, assuming that such simulated universes are possible and such civilizations would want to run such simulations.
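The Drake equation referred to above multiplies a chain of factors. A minimal sketch in Python, with purely illustrative parameter values (none of the numbers below are estimates endorsed by this article):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: expected number of detectable civilizations.

    R_star: average rate of star formation (stars per year)
    f_p:    fraction of stars with planets
    n_e:    habitable planets per star that has planets
    f_l:    fraction of habitable planets that develop life
    f_i:    fraction of those that develop intelligence
    f_c:    fraction of those that emit detectable signals
    L:      years such civilizations remain detectable
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hypothetical inputs chosen only to show the multiplication at work.
n = drake(R_star=1.0, f_p=0.5, n_e=2.0, f_l=0.5, f_i=0.25, f_c=0.25, L=1000)
print(n)  # 31.25 civilizations with these made-up values
```

Bostrom's expansion replaces the radio-communication factors with the probability of a civilization ever reaching a posthuman, simulation-running stage, but the multiplicative structure is the same.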
Some papers have analyzed what they characterize as "serious mathematical and logical errors" in the Simulation Argument.
Physicist Prof. Frank J. Tipler envisages a scenario similar to Nick Bostrom's argument, one that Tipler maintains is a physically required cosmological scenario in the far future of the universe: as the universe comes to an end in a solitary singularity during the Big Crunch, the computational capacity of the universe increases exponentially faster than the remaining time runs out. In principle, a simulation run on this universe-computer can thus continue forever in its own terms, even though proper time lasts only a finite duration.
Prof. Tipler identifies this final singularity and its state of infinite information capacity with God. According to Prof. Tipler and Prof. David Deutsch, the implication of this theory for present-day humans is that this ultimate cosmic computer will essentially be able to resurrect everyone who has ever lived, by recreating all possible quantum brain states within the master simulation, somewhat reminiscent of the resurrection ideas of Nikolai Fyodorovich Fyodorov. This would manifest as a simulated reality. From the perspective of the inhabitant, the Omega Point represents an infinite-duration afterlife, which could take any imaginable form due to its virtual nature. At first glance, Tipler's hypothesis requires some means by which the inhabitants of the far future can recover historical information in order to reincarnate their ancestors into a simulated afterlife. However, if they really have access to infinite computing power, that is no problem at all—they can just simulate "all possible worlds". (This line of thought is continued in Platonic simulation theories). Tipler's argument can also be intertwined with Nick Bostrom's aforementioned argument from probability. If the Omega Point simulates an infinite number of virtual worlds, then it is infinitely more likely that our reality is one of those simulated worlds rather than the lone real world that created the Omega Point.
Prof. Tipler's Omega Point Theory is predicated on an eventual Big Crunch, thought by some to be an unlikely scenario by virtue of a number of recent astronomical observations. Tipler has recently amended his views to accommodate an accelerating universe due to a positive cosmological constant. He proposes baryon tunneling as a means of propelling interstellar spacecraft. He states that if the baryons in the universe were to be annihilated by this process, then this would force the Higgs field toward its absolute vacuum, cancelling the positive cosmological constant, stopping the acceleration, and allowing the universe to collapse into the Omega Point.
Some theorists have argued that if the "consciousness-is-computation" version of computationalism and mathematical realism (also known as mathematical Platonism) are both true our consciousnesses must be inside a simulation. This argument states that a "Plato's heaven" or ultimate ensemble would contain every algorithm, including those which implement consciousness. Platonic simulation theories are also subsets of the multiverse theories and theories of everything.
A dream could be considered a type of simulation capable of fooling someone who is asleep. As a result the "dream hypothesis" cannot be ruled out, although it has been argued that common sense and considerations of simplicity rule against it. One of the first philosophers to question the distinction between reality and dreams was Zhuangzi, a Chinese philosopher from the 4th Century BC. He phrased the problem as the well-known "Butterfly Dream," which went as follows:
Once Zhuangzi dreamt he was a butterfly, a butterfly flitting and fluttering around, happy with himself and doing as he pleased. He didn't know he was Zhuangzi. Suddenly he woke up and there he was, solid and unmistakable Zhuangzi. But he didn't know if he was Zhuangzi who had dreamt he was a butterfly, or a butterfly dreaming he was Zhuangzi. Between Zhuangzi and a butterfly there must be some distinction! This is called the Transformation of Things. (2, tr. Burton Watson 1968:49)

The philosophical underpinnings of this argument are also brought up by Descartes, who was one of the first Western philosophers to do so. In Meditations on First Philosophy, he states "... there are no certain indications by which we may clearly distinguish wakefulness from sleep", and goes on to conclude that "It is possible that I am dreaming right now and that all of my perceptions are false".
Chalmers (2003) discusses the dream hypothesis, and notes that this comes in two distinct forms:
Both the dream argument and the Simulation hypothesis can be regarded as skeptical hypotheses; however in raising these doubts, just as Descartes noted that his own thinking led him to be convinced of his own existence, the existence of the argument itself is testament to the possibility of its own truth.
Another state of mind in which an individual's perceptions have no physical basis in the real world is called psychosis.
A decisive refutation of any claim that our reality is computer-simulated would be the discovery of some uncomputable physics, because if reality is doing something no computer can do, it cannot be a computer simulation. However, known physics is held to be computable.
The objection could be made that the simulation does not have to run in "real time". But this misses an important point: the shortfall is not linear; rather, it is a matter of performing an infinite number of computational steps in a finite time. This objection does not apply if the hypothetical simulation is being run on a hypercomputer, a machine more powerful than a Turing machine. Unfortunately, there is no way of working out whether the computers running a simulation are capable of doing things that computers in the simulation cannot do. No one has shown that the laws of physics inside a simulation and those outside it have to be the same, and simulations of different physical laws have been constructed. The problem is that there is no evidence that can conceivably be produced to show that the universe is not any kind of computer, making the Simulation Hypothesis unfalsifiable and therefore scientifically unacceptable, at least by Popperian standards.
[Yet if all possible virtual reality initial conditions have been simulated and still it is possible to create a reality that plays out differently to those already created (despite starting at an initial condition common to one of those already in existence) then that extra environment must obey slightly different cause and effect laws of reality, or else it would simply play out in the same way as one of those already simulated. This implies that the argument by Deutsch is only valid if the laws that govern each virtual reality may be different: i.e. they would have to allow inconsistencies such as objects suddenly disappearing or appearing out of nowhere for every time an environment transitions from one time slot to another. If instead one simply assumes that there are infinitely many possible initial conditions, since they vary by infinitesimally small amounts, then (even if all follow the same laws) there will be infinitely many possible virtual realities that could be generated, which leads to the same conclusion as Deutsch.]
However, later on in the book, Deutsch goes on to argue for a very strong version of the Turing principle, namely: "It is possible to build a virtual reality generator whose repertoire includes every physically possible environment."
However, in order to include every physically possible environment, the computer would have to be able to include a full simulation of the environment containing itself. Even so, a computer running a simulation need not have to run every possible physical moment to be plausible to its inhabitants.
As of 2007, the computational requirements for molecular dynamics are such that it takes several months of computing time on the world's fastest computers to simulate 1/10th of one second of the folding of a single protein molecule.
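The scale of that mismatch can be made explicit with back-of-the-envelope arithmetic; a sketch in Python, taking "several months" as three 30-day months for illustration (an assumption, not a figure from the source):

```python
# Source figure: as of 2007, ~several months of supercomputer time to
# simulate 0.1 s of a single protein folding. Assume three 30-day months.
seconds_per_month = 30 * 24 * 3600      # 2,592,000 s in a 30-day month
host_time = 3 * seconds_per_month       # wall-clock seconds of computing
simulated_time = 0.1                    # seconds of physics simulated

slowdown = host_time / simulated_time
print(f"{slowdown:.1e}")  # 7.8e+07 -- the host runs ~80 million times slower
```

And that is one molecule; the ratio only worsens as the simulated system grows.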
To simulate an entire galaxy would require more computing power than can presently be envisioned, assuming that no shortcuts are taken when simulating areas that nobody is observing.
In answer to this objection, Bostrom calculated that simulating the brain functions of all humans who have ever lived would require roughly 10^33 to 10^36 calculations. He further calculated that a planet-sized computer built using known nanotechnological methods would perform about 10^42 calculations per second — and a planet-sized computer is not inherently impossible to build (although the speed of light could severely constrain the speed at which its subprocessors share data). In any case, a simulation need not compute every single molecular event that occurs inside it; it may only process events that its participants can actively perceive. This is particularly the case if the simulation contained only a handful of people; far less processing power would be needed to make them believe they were in a "world" much larger than was actually the case.

Brain-computer interface
Some have argued that a dream is a reality being simulated for certain parts of the dreamer's brain by other parts of the dreamer's brain — possibly showing that a 'computer' less powerful than a whole human brain can simulate often-believable realities for the senses. Similar arguments would apply to vivid recollections, imaginings, and especially hallucinations. However, all of these things are usually less vivid and do not have to consistently obey the laws of physics, which our world does and which constraint presumably requires more computational power. (Another point some have made about hallucinations is that a hallucination cannot be interacted with in a rich, vivid way requiring simulation of multiple senses, possibly because the brain knows it does not have the computing power to support such interaction.)
Additionally, it's possible that the parts of our brains that question the validity of a situation are impaired when we sleep. The believability of a simulation is an important influence on the results it generates.

Validity of the arguments

In any case, it is perhaps erroneous to apply our current sense of feasibility to projects undertaken in an outer reality, where resources and physical laws may be very different. It also assumes designers would need to simulate reality beyond our natural senses.
Also, a simulated reality need not run in real time. The inhabitants of a simulated universe would have no way of knowing that one day of subjective time actually required much longer to calculate in their host computer, or vice versa. Isaac Asimov pushed the limits of this by claiming that, unbeknownst to the inhabitants, the simulation could even run backwards, or in pieces on different computers, or with a million generations of monks working weekends on abacuses — all without the simulation missing a beat 'in simulation time'.
All other things being equal, the solution with the fewest assumptions is preferable.
It has been noted that there is no definitive way to tell whether one is in a simulation. It is generally the case that any number of hypotheses can explain the same evidence. This situation often prompts the use of a heuristic rule called Occam's razor, which prefers simpler explanations over more complex ones, and is often implicated in skeptical criticisms of far-fetched hypotheses.
Since it is a heuristic rule and not a natural law, Occam's razor is not an infallible guide to what is ultimately the truth, but only to what is usually best to believe, all other things being equal. If we assume Occam's razor applies, then it would tell us to reject simulated reality as too complex an explanation, in favor of reality being what it appears to be.
In fact, bugs could be very common. An interesting question is whether knowledge of bugs or loopholes in a sufficiently powerful simulation would be instantly erased the moment it is observed, since presumably all thoughts and experiences in a simulated world could be carefully monitored and altered. This would, however, require enormous processing capability in order to monitor billions of people simultaneously. Of course, if this were the case, we would never be able to act on the discovery of bugs. In fact, any simulation sufficiently determined to protect its existence could erase any proof that it was a simulation whenever such proof arose, provided it had the enormous capacity necessary to do so.
To take this argument to an even greater extreme, a sufficiently powerful simulation could make its inhabitants think that erasing proof of its existence is difficult. This would mean that the computer actually has an easy time of erasing glitches, but we all think that changing reality requires great power.
However, such messages have not been made public if they have been found, and the argument relies on the messages being truthful. As usual, other hypotheses could explain the same evidence. In any case, if such constants are in fact infinite, then at some point an apparently meaningful message will appear in them (this is known as the infinite monkey theorem), not necessarily because it was placed there.
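The infinite-monkey point admits an exact calculation. A sketch, assuming the digits behave like independent uniform random digits (a modeling assumption — this property, called normality, is unproven for constants such as pi) and using the illustrative two-digit "message" 42:

```python
def prob_42_absent(n):
    """Exact probability that the string "42" never occurs in a run of n
    independent uniform random decimal digits. Tracked with a two-state
    automaton: state 0 = no partial match, state 1 = last digit was '4'.
    Probability mass that completes the pattern is simply dropped."""
    p0, p1 = 1.0, 0.0
    for _ in range(n):
        # state 0: digit '4' (prob 0.1) -> state 1; any other digit stays.
        # state 1: digit '2' (0.1) -> pattern found (dropped); '4' (0.1)
        #          stays in state 1; the other eight digits (0.8) -> state 0.
        p0, p1 = 0.9 * p0 + 0.8 * p1, 0.1 * p0 + 0.1 * p1
    return p0 + p1

print(prob_42_absent(0))            # 1.0 -- an empty stream contains no message
print(prob_42_absent(1000) < 0.01)  # True -- "42" almost surely appears
```

As n grows without bound, the probability of absence tends to zero, which is the precise sense in which an apparently meaningful message must eventually appear in an infinite random digit stream.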
The Easter Egg Theory also assumes that a simulation would want to inform its inhabitants of its real nature; it may not. Conversely, if we consider that the human race will eventually be capable of creating intelligent programs (i.e. machines) living inside a virtual subspace of our "real" world, then an interesting question is whether we would be capable of preventing our sentient robots from discovering their own artificial nature (see the movie Blade Runner).
However, this argument, like many others, assumes that accurate judgments about the simulating computer can be made from within the simulation. If we are being simulated, we might be misled about the nature of computers.
Taken one step further, the "fine-grained" elements of our world could themselves be simulated, since we never see sub-atomic particles directly due to our inherent physical limitations. In order to see such particles we rely on instruments that appear to magnify or translate that information into a format our limited senses are able to view: a computer printout, the lens of a microscope, and so on. Therefore, we essentially take it on faith that these are an accurate portrayal of a fine-grained world which appears to exist in a realm beyond our natural senses. Assuming the sub-atomic level could also be simulated, the processing power required to generate a realistic world would be greatly reduced.
This "disturbance" interpretation of the uncertainty principle is similar to how scenes are sometimes rendered in video games, where computational resources are limited. Some areas of the simulation may not be rendered until a participant looks at them. This might resemble "observer effect" to a participant.
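The render-on-demand idea can be sketched directly: a toy world whose regions acquire definite content only when first observed, generated deterministically from a seed so that repeated observations stay consistent (the class and hashing scheme below are invented for illustration):

```python
import hashlib

class LazyWorld:
    """Toy world: regions exist as data only once someone looks at them."""

    def __init__(self, seed):
        self.seed = seed
        self.rendered = {}  # region -> content, filled only on demand

    def observe(self, region):
        # Render on first observation, deterministically from the seed,
        # so repeated observations agree (no visible 'seams' for observers).
        if region not in self.rendered:
            digest = hashlib.sha256(f"{self.seed}:{region}".encode()).hexdigest()
            self.rendered[region] = digest[:8]
        return self.rendered[region]

world = LazyWorld(seed=42)
print(len(world.rendered))                   # 0: nothing computed yet
a = world.observe("alpha-quadrant")
print(len(world.rendered))                   # 1: only the observed region exists
print(a == world.observe("alpha-quadrant"))  # True: consistent on re-observation
```

The participant sees a consistent world either way; only the host's bookkeeping reveals that unobserved regions were never computed.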
It can be argued that the use of continua in physics constitutes a possible argument against the simulation of a physical universe. Removing the real numbers and uncountable infinities from physics would counter some of the objections noted above, and at least make computer simulation a possibility. However, digital physics must overcome these objections. For instance, cellular automata would appear to be a poor model for the non-locality of quantum mechanics.
Bostrom further elaborated on the idea of bots:
In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or "shadow-people" – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much [computationally] cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience.
The idea of "zombies" has a well-known corollary in the video game industry, where computer-generated characters are known as Non-Player Characters ("NPCs"). The term 'bots' (short for 'robots') originated as the name given to the simple AI opponents of modern video games.
It is possible that time passes slower or quicker for brains in a dream state (i.e., in a brain-computer interface trance); however, the point is that they still function at a finite, biological speed, and the simulation must track with them, unless those interacting with the simulation are augmented and capable of processing information at the same rate as the simulation itself.
A virtual-people or emigration simulated reality, on the other hand, need not. This is because its inhabitants are using the simulation's own physics in order to experience, think, and react. If the simulation were slowed down or sped up, so also would the inhabitants' own senses, brains, and muscles, as well as every other molecule inside. The inhabitants would perceive no change in the passage of time, simply because their method of measuring time is dependent on the cosmic clock that they are seeking to measure. (They could perform the measurement only if they had some access to data from the outer reality.)
For that matter, they could not even detect whether the simulation had been completely halted: a pause in the simulation would pause every life and mind within it. When the simulation was later resumed, the inhabitants would continue exactly as they were before the pause, completely unaware that (for example) their cosmos had been paused and archived for a billion years before being resumed by a completely different director. A simulation could also be created with its inhabitants already possessing memories as though they had already lived part of their lives before; said inhabitants would not be able to tell the difference unless informed of it by the simulation. (Compare with the five minute hypothesis and Last Thursdayism).
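The pause-and-resume claim holds for any deterministic state machine, which can be checked in a few lines (the toy update rule standing in for 'the laws of physics' is invented for illustration):

```python
import copy

def step(state):
    # Arbitrary deterministic update rule standing in for physical law.
    return {"t": state["t"] + 1, "x": (state["x"] * 31 + 7) % 1000}

def run(state, n):
    # Advance the world n steps.
    for _ in range(n):
        state = step(state)
    return state

initial = {"t": 0, "x": 1}

# Uninterrupted run of 100 steps.
direct = run(copy.deepcopy(initial), 100)

# Run 40 steps, checkpoint ('archive for a billion years'), resume for 60.
checkpoint = run(copy.deepcopy(initial), 40)
resumed = run(copy.deepcopy(checkpoint), 60)

print(direct == resumed)  # True: the pause leaves no trace in the final state
```

Because the final states match exactly, no observation made inside the simulated world can distinguish the interrupted history from the uninterrupted one.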
One practical implication of this is that a virtual-people or a hybrid simulation does not require a computer powerful enough to model its entire cosmos at full speed. Because the host computer is a universal machine, the simulation can progress at whatever speed the host can manage; it would be constrained by available memory but not by computation rate.
This recursion could continue to infinitely many levels — a simulation containing a computer running a simulation containing a computer running a simulation and so on. The recursion is subject only to one constraint: each 'nested' simulation must be:
The latter is the basis of the idea that quantum uncertainties are circumstantial evidence that our own reality is a simulation. However, this assumes that there is a finite limitation somewhere in the chain. Assuming an infinite number of simulations within simulations, there need not be any noticeable difference between any of the subsets.