The Chinese Room argument comprises a thought experiment and associated arguments by John Searle, who attempts to show that a symbol-processing machine like a computer can never be properly described as having a "mind" or "understanding", regardless of how intelligently it may behave.
Searle then asks the reader to suppose that he is in a room in which he receives Chinese characters, consults a book containing an English version of the aforementioned computer program, and processes the Chinese characters according to its instructions. He does not understand a word of Chinese; he simply manipulates what, to him, are meaningless symbols, using the book and whatever other equipment, like paper, pencils, erasers and filing cabinets, is available to him. After manipulating the symbols, he responds to a given Chinese question in the same language. Since the computer passed the Turing test this way, it is fair, says Searle, to deduce that he would pass it too, simply by running the program manually. "Nobody just looking at my answers can tell that I don't speak a word of Chinese," he writes.
This lack of understanding, according to Searle, proves that computers do not understand Chinese either, because they are in the same position as he is: nothing but mindless manipulators of symbols. They do not have conscious mental states like an "understanding" of what they are saying, so they cannot fairly and properly be said to have minds.
Most of the discussion of the argument consists of attempts to refute it. "The overwhelming majority," notes BBS editor Stevan Harnad, "still think that the Chinese Room Argument is dead wrong." The sheer volume of the literature that has grown up around it inspired Pat Hayes to quip that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false."
Despite the controversy (or perhaps because of it) the paper has become "something of a classic in cognitive science," according to Harnad. Varol Akman agrees, and has described Searle's paper as "an exemplar of philosophical clarity and purity".
John Haugeland summarises the philosophical position of early AI researchers thus:
AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves.
Statements like these assume a philosophical position that Searle calls "strong AI":
The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.
Searle also ascribes the following positions to advocates of strong AI:
Each of the following, according to Harnad, is a "tenet" of computationalism:
Searle accuses strong AI of dualism, the idea that the mind and the body are made up of different "substances". He writes that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter." He rejects any form of dualism, writing that "brains cause minds" and that "actual human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains", a position called "biological naturalism" (as opposed to alternatives like behaviourism, functionalism, identity theory and dualism).
Searle's argument centers on "understanding" — that is, mental states with what philosophers call "intentionality" — and does not directly address other closely related ideas, such as "intelligence" or "consciousness". David Chalmers has argued that, to the contrary, "it is fairly clear that consciousness is at the root of the matter".
Since the primary mission of AI research is only to create useful systems that act intelligently, Searle's arguments are not usually considered an issue for AI research. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the strong AI hypothesis—as long as the program works, they don't care whether you call it a simulation of intelligence or real intelligence."
Searle's "strong AI" should not be confused with "strong AI" as defined by Ray Kurzweil and other futurists, who use the term to describe machine intelligence that rivals human intelligence. Kurzweil is concerned primarily with the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this, as long as it is understood that it is merely a simulation and not the real thing.
Some of the arguments (robot and brain simulation, for example) fall into multiple categories.
Systems reply. The "systems reply" argues that it is the whole system that understands Chinese, consisting of the room, the book, the man, the paper, the pencil and the filing cabinets. While the man by himself can only understand English, the complete system can understand Chinese. The man is part of the system, just as the hippocampus is a part of the brain. The fact that the man understands nothing is irrelevant and is no more surprising than the fact that the hippocampus understands nothing by itself.
Searle's response is to consider what happens if the man memorizes the rules and keeps track of everything in his head. Then the only component of the system is the man himself—in this sense, the man is the system. Searle argues that if the man doesn't understand Chinese then the system (which consists of just the man) doesn't understand Chinese either and the fact that the man appears to understand Chinese proves nothing.
Virtual mind reply. A more precise response is that there is a Chinese speaking mind in Searle's room, but that it is virtual. A fundamental property of computing machinery is that one machine can "implement" another: any (Turing complete) computer can do a step-by-step simulation of any other machine. In this way, a machine can be two machines at once: for example, it can be a Macintosh and a word processor at the same time. A virtual machine depends on the hardware (in that if you turn off the Macintosh, you turn off the word processor as well), yet is different from the hardware. (This is how the position resists dualism: there can be two machines in the same place, both made of the same substance, if one of them is virtual.) A virtual machine is also "implementation independent" in that it doesn't matter what sort of hardware it runs on: a PC, a Macintosh, a supercomputer, a brain or Searle in his Chinese room.
To clarify the distinction between the systems reply and virtual mind reply, David Cole notes that a program could be written that implements two minds at once – for example, one speaking Chinese and the other Korean. While there is only one system and only one man in the room, there may be an unlimited number of "virtual minds".
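Cole's point can be sketched in code. The toy Python sketch below (all rule tables and replies are invented placeholders; no real language processing is involved) shows one host machine implementing two independent lookup-table "agents" side by side: one physical system, multiple virtual rule-followers.

```python
def make_agent(rules, default):
    """Return a symbol manipulator backed by a lookup table.

    The host running this function follows the table blindly;
    any 'meaning' belongs to whoever wrote the table.
    """
    def agent(symbol):
        return rules.get(symbol, default)
    return agent

# Two "virtual minds" hosted by the same machine (this one process):
# a Chinese-replying table and a Korean-replying table.
chinese = make_agent({"你好": "你好！", "再见": "再见！"}, "？")
korean = make_agent({"안녕": "안녕하세요!"}, "?")

print(chinese("你好"))   # 你好！
print(korean("안녕"))    # 안녕하세요!
```

Shutting down the host "turns off" both virtual agents at once, yet neither agent is identical to the host — the relationship the virtual mind reply uses to resist dualism.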
Searle would respond that such a mind is only a simulation. He writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched." Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that 'it isn't really a calculator', because the physical attributes of the device do not matter." The question is, is the human mind like the pocket calculator, essentially composed of information? Or is it like the rainstorm, which can't be duplicated using digital information alone? (The issue of simulation is also discussed in the article synthetic intelligence.)
What they do and don't prove. These replies provide an explanation of exactly who it is that understands Chinese. If there is something besides the man in the room that can understand Chinese, Searle can't argue that (1) the man doesn't understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.
However, the replies, by themselves, do not prove that strong AI is true, either: they provide no evidence that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing Test. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese."
Robot reply. Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. This would allow a "causal connection" between the symbols and things they represent. Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."
Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs he was receiving came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes." (See Mary's Room for a similar thought experiment.)
Derived meaning. Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols he manipulates are already meaningful; they're just not meaningful to him.
Searle complains that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, according to Searle, has no understanding of its own.
Commonsense knowledge / contextualist reply. Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.
Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.
What they do and don't prove. To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."
However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle, but what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.
Brain simulator reply. Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker. This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.
Searle replies that such a simulation will not have reproduced the important features of the brain — its causal and intentional states. Searle is adamant that "human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains."
Two variations on the brain simulator reply are:
Connectionist replies. Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding.
Combination reply. This response combines the robot reply with the brain simulator reply, arguing that a brain simulation connected to the world through a robot body could have a mind.
What they do and don't prove. Arguments such as these (and the robot and commonsense knowledge replies above) recommend that Searle's room be redesigned. Searle's replies all point out that, however the program is written or however it is connected to the world, it is still being simulated by a simple step by step Turing complete machine (or machines). Every one of these machines is still, at the ground level, just like the man in the room: it understands nothing and doesn't speak Chinese.
Searle also argues that, if features like a robot body or a connectionist architecture are required, then strong AI (as he understands it) has been abandoned. Either (1) Searle's room can't pass the Turing test, because formal symbol manipulation (by a Turing complete machine) is not enough, or (2) Searle's room could pass the Turing test, but the Turing test is not sufficient to determine if the room has a "mind." Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument.
The brain arguments also suggest that computation can't provide an explanation of the human mind (another aspect of what Searle thinks of as "strong AI"). They assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works."
Other critics don't argue that these improvements are necessary for the Chinese room to pass the Turing test or to have a mind. They accept the premise that the room as Searle describes it does, in fact, have a mind, but they argue that it is difficult to see—Searle's description is correct, but misleading. By redesigning the room more realistically they hope to make this more obvious. In this case, these arguments are being used as appeals to intuition (see next section). Searle's intuition, however, is never shaken. He writes: "I can have any formal program you like, but I still understand nothing."
In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's "blockhead" argument suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form "if the user writes S, reply with P and goto X". Any program can be rewritten (or "refactored") into this form, even a brain simulation. It is hard for most to imagine that such a program would give rise to a "mind" or have "understanding". In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a memory address—a number associated with the next rule. It is hard to visualize that an instant of our conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims.
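Block's rule format can be made concrete with a minimal sketch (the rule tables and replies below are invented for illustration). Each rule has exactly the form "if the user writes S, reply with P and goto X", and the number X is the machine's entire state:

```python
# Hypothetical blockhead tables: state -> {input S: (reply P, goto X)}.
TABLES = {
    0: {"hello": ("hi there", 1)},
    1: {"how are you?": ("fine, thanks", 0)},
}

def blockhead(state, user_input):
    """Apply one rule: look up the reply and the next state. No other
    mechanism exists -- no parsing, no model of the conversation."""
    reply, next_state = TABLES[state][user_input]
    return reply, next_state

reply, state = blockhead(0, "hello")             # ("hi there", 1)
reply, state = blockhead(state, "how are you?")  # ("fine, thanks", 0)
```

On this construction, the whole "mental state" of the conversation is the single integer `state` — the memory address X of Block's rules.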
The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions they support other positions, such as the systems and robot replies.
Speed and complexity replies. The speed at which our brains process information is (by some estimates) 100,000,000,000 operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
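The arithmetic behind this reply can be sketched directly from the figure cited above, assuming (hypothetically) that the man executes one rule per second by hand:

```python
BRAIN_OPS_PER_SEC = 1e11           # the estimate cited above
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# Hand-simulating one second of brain activity at one rule per second:
years_per_brain_second = BRAIN_OPS_PER_SEC / SECONDS_PER_YEAR
print(round(years_per_brain_second))   # ~3171 years per simulated second

# A reply that takes the speaker ten minutes of brain time:
years_for_reply = 600 * years_per_brain_second
print(f"{years_for_reply:.2e}")        # ~1.9 million years
```

Even under these generous assumptions, a single conversational reply would take the man in the room millions of years.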
Several of the replies above address the issue of complexity. The connectionist reply emphasizes that a working artificial system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge," as Daniel Dennett explains.
Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes: "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"
Other minds reply. The problem of other minds is that there is no way we can determine if other people's subjective experience is the same as our own. Searle's argument is a version of the problem of other minds, applied to machines. We can only judge if other people have minds by studying their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.
Alan Turing (writing 30 years before Searle presented his argument) noted that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." The Turing test simply extends this "polite convention" to machines. He doesn't intend to solve the problem of other minds (for machines or people) and he doesn't think we need to. (One of Turing's motivations for devising the test is to avoid precisely the kind of philosophical problems that Searle is interested in. He writes: "I do not wish to give the impression that I think there is no mystery ... [but] I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.")
According to those who make this reply, Searle's position implies that we can't prove that the Chinese room or the man in it has a mind. Searle believes that there are "causal properties" in our neurons that give rise to the mind. These causal properties can't be detected by anyone outside the mind, otherwise the Chinese Room couldn't pass the Turing test—the people outside would be able to tell there wasn't a Chinese speaker in the room by detecting their causal properties. Since they can't detect causal properties, they can't detect the existence of the mental in either case. Critics argue that this implies the human mind, as Searle describes it, is epiphenomenal: that it "casts no shadow." To make this point clear, Daniel Dennett suggests this version of the "other minds" reply:
Searle disagrees with this analysis and insists that we must "presuppose the reality and knowability of the mental", and that "The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't ... what we wanted to know is what distinguishes the mind from thermostats and livers." He takes it as obvious that we can detect the presence of other minds and dismisses this reply as being off the point.
What they do and don't prove. These arguments apply only to our intuitions. (As do the arguments above which are intended to make it seem more plausible that the Chinese room contains a mind, which can include the robot, commonsense knowledge, brain simulation and connectionist replies.) They do not directly prove that a machine can or can't have a mind.
However, some critics believe that Searle's argument relies entirely on intuitions. Ned Block writes "Searle's argument depends for its force on intuitions that certain entities do not think." Daniel Dennett describes the Chinese room argument as an "intuition pump" and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the 'obvious' conclusion from it."
These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious by undermining the intuitions that his certainty requires.
The only premise or conclusion in the argument which should be controversial is A3, and it is this point that the Chinese room thought experiment is intended to prove.
He begins with three axioms:

(A1) "Programs are formal (syntactic)."
(A2) "Minds have mental contents (semantics)."
(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."
Searle posits that these lead directly to this conclusion:

(C1) "Programs are neither constitutive of nor sufficient for minds."
This much of the argument is intended to show that artificial intelligence will never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue. Is the human brain running a program? In other words, is the computational theory of mind correct? He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds:

(A4) "Brains cause minds."
Searle claims that we can derive "immediately" and "trivially" that:

(C2) "Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains."
And from this he derives the further conclusions:

(C3) "Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program."
(C4) "The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program."