Chinese room

The Chinese Room argument comprises a thought experiment and associated arguments by John Searle, who attempts to show that a symbol-processing machine like a computer can never be properly described as having a "mind" or "understanding", regardless of how intelligently it may behave.

Chinese room thought experiment

Searle requests that his reader imagine that, many years from now, people have constructed a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, using a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All of the questions that the human asks it receive appropriate responses, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human being. Most proponents of artificial intelligence would draw the conclusion that the computer understands Chinese, just as the Chinese-speaking human does.

Searle then asks the reader to suppose that he is in a room in which he receives Chinese characters, consults a book containing an English version of the aforementioned computer program and processes the Chinese characters according to its instructions. He does not understand a word of Chinese; he simply manipulates what, to him, are meaningless symbols, using the book and whatever other equipment, like paper, pencils, erasers and filing cabinets, is available to him. After manipulating the symbols, he responds to a given Chinese question in the same language. As the computer passed the Turing test this way, it is fair, says Searle, to deduce that he has done so, too, simply by running the program manually. "Nobody just looking at my answers can tell that I don't speak a word of Chinese," he writes.
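The kind of procedure Searle describes can be sketched in a few lines of code. The following is only an illustrative toy, with invented placeholder rules, not anyone's actual program: it maps input character strings to output character strings by shape alone, which is all the man in the room is doing.

```python
# A toy illustration of purely formal symbol manipulation, in the spirit of
# Searle's rule book. The rule entries are invented placeholders; the point is
# only that matching and copying character shapes never requires knowing what
# the characters mean.

RULE_BOOK = {
    # "if you receive this squiggle-sequence, send back that squoggle-sequence"
    "你好吗？": "我很好，谢谢。",
    "你会说中文吗？": "当然会。",
}

def operate(input_symbols: str) -> str:
    """Apply the rule book by pattern matching alone."""
    # Fallback rule: send back the symbols for "please say that again".
    return RULE_BOOK.get(input_symbols, "请再说一遍。")

print(operate("你好吗？"))  # the operator need not know what this exchange means
```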

This lack of understanding, according to Searle, proves that computers do not understand Chinese either, because they are in the same position as he — nothing but mindless manipulators of symbols: they do not have conscious mental states like an "understanding" of what they are saying, so they cannot fairly and properly be said to have minds.

History

Searle's argument first appeared in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It eventually became the journal's "most influential target article", generating an enormous number of commentaries and responses in the ensuing decades.

Most of the discussion consists of attempts to refute it. "The overwhelming majority," notes BBS editor Stevan Harnad, "still think that the Chinese Room Argument is dead wrong." The sheer volume of the literature that has grown up around it inspired Pat Hayes to quip that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false."

Despite the controversy (or perhaps because of it) the paper has become "something of a classic in cognitive science," according to Harnad. Varol Akman agrees, and has described Searle's paper as "an exemplar of philosophical clarity and purity".

Searle's targets: "strong AI" and computationalism

Although the Chinese Room argument was originally presented in reaction to the statements of AI researchers, philosophers have come to view it as an important part of the philosophy of mind — a challenge to functionalism and the computational theory of mind, and related to such questions as the mind-body problem, the problem of other minds, the symbol-grounding problem and the hard problem of consciousness.

Strong AI

In 1955, AI founder Herbert Simon declared that "there are now in the world machines that think, that learn and create", and claimed that they had "solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind."

John Haugeland summarises the philosophical position of early AI researchers thus:

AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves.

Statements like these assume a philosophical position that Searle calls "strong AI":

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

Searle also ascribes the following positions to advocates of strong AI:

  • AI systems can be used to explain the mind;
  • The brain is irrelevant to the mind; and
  • The Turing test is adequate for establishing the existence of mental states.

Strong AI as philosophy

Stevan Harnad argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of computationalism, a position (unlike 'strong AI') that is actually held by many thinkers, and hence one worth refuting." Computationalism is the position in the philosophy of mind which argues that the mind can be accurately described as an information-processing system.

Each of the following, according to Harnad, is a "tenet" of computationalism:

  • Mental states are computational states (which is why computers can have mental states and help to explain the mind);
  • Computational states are implementation-independent — in other words, it is the software that determines the computational state, not the hardware (which is why the brain, being hardware, is irrelevant); and that
  • Since implementation is unimportant, the only empirical data that matters is how the system functions; hence the Turing test is definitive. This last point is a version of functionalism.

Searle accuses strong AI of dualism, the idea that the mind and the body are made up of different "substances". He writes that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter." He rejects any form of dualism, writing that "brains cause minds" and that "actual human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains", a position called "biological naturalism" (as opposed to alternatives like behaviourism, functionalism, identity theory and dualism).

Searle's argument centers on "understanding" — that is, mental states with what philosophers call "intentionality" — and does not directly address other closely related ideas, such as "intelligence" or "consciousness". David Chalmers has argued that, to the contrary, "it is fairly clear that consciousness is at the root of the matter".

Strong AI v. AI research

Searle's argument does not limit the intelligence with which machines can behave or act; indeed, it fails to address this issue directly, leaving open the possibility that a machine could be built that acts intelligently but does not have a mind or intentionality in the same way that brains do.

Since the primary mission of AI research is only to create useful systems that act intelligently, Searle's arguments are not usually considered an issue for AI research. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the strong AI hypothesis—as long as the program works, they don't care whether you call it a simulation of intelligence or real intelligence."

Searle's "strong AI" should not be confused with "strong AI" as defined by Ray Kurzweil and other futurists, who use the term to describe machine intelligence that rivals human intelligence. Kurzweil is concerned primarily with the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this, as long as it understood that it is merely a simulation and not the real thing.

Replies

Replies to Searle's argument may be classified according to what they claim to show:

  • Those which identify who speaks Chinese;
  • Those which demonstrate how meaningless symbols can become meaningful;
  • Those which suggest that the Chinese room should be redesigned more along the lines of a brain; and
  • Those which demonstrate the ways in which Searle's argument is misleading.

Some of the arguments (robot and brain simulation, for example) fall into multiple categories.

System and virtual mind replies: finding the mind

These two replies attempt to answer the question: since the man in the room doesn't speak Chinese, where is the "mind" that does?

These replies address the key ontological issues of mind vs. body and simulation vs. reality.

Systems reply. The "systems reply" argues that it is the whole system that understands Chinese, consisting of the room, the book, the man, the paper, the pencil and the filing cabinets. While the man by himself can only understand English, the complete system can understand Chinese. The man is part of the system, just as the hippocampus is a part of the brain. The fact that the man understands nothing is irrelevant and is no more surprising than the fact that the hippocampus understands nothing by itself.

Searle's response is to consider what happens if the man memorizes the rules and keeps track of everything in his head. Then the only component of the system is the man himself—in this sense, the man is the system. Searle argues that if the man doesn't understand Chinese then the system (which consists of just the man) doesn't understand Chinese either and the fact that the man appears to understand Chinese proves nothing.

Virtual mind reply. A more precise response is that there is a Chinese speaking mind in Searle's room, but that it is virtual. A fundamental property of computing machinery is that one machine can "implement" another: any (Turing complete) computer can do a step-by-step simulation of any other machine. In this way, a machine can be two machines at once: for example, it can be a Macintosh and a word processor at the same time. A virtual machine depends on the hardware (in that if you turn off the Macintosh, you turn off the word processor as well), yet is different from the hardware. (This is how the position resists dualism: there can be two machines in the same place, both made of the same substance, if one of them is virtual.) A virtual machine is also "implementation independent" in that it doesn't matter what sort of hardware it runs on: a PC, a Macintosh, a supercomputer, a brain or Searle in his Chinese room.

To clarify the distinction between the systems reply and virtual mind reply, David Cole notes that a program could be written that implements two minds at once – for example, one speaking Chinese and the other Korean. While there is only one system and only one man in the room, there may be an unlimited number of "virtual minds".
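The virtual-machine picture, and Cole's point about multiple minds, can be made concrete with a small sketch. The rule tables below are invented placeholders, and nothing in the example bears on whether such machines understand anything; it only shows that one host process can implement several table-driven machines at once, on any hardware.

```python
# A sketch of the virtual-machine point: one host process can "implement" any
# number of table-driven machines at once, on any hardware. The rule tables
# are invented placeholders.

class TableMachine:
    """A trivial machine defined entirely by its rule table."""

    def __init__(self, rules: dict, default: str):
        self.rules = rules
        self.default = default

    def step(self, symbols: str) -> str:
        return self.rules.get(symbols, self.default)

# Two "virtual minds" hosted by one and the same system (a PC, a supercomputer,
# or Searle working through the rules with pencil and paper).
chinese = TableMachine({"你好": "你好！"}, default="请再说一遍。")
korean = TableMachine({"안녕": "안녕하세요!"}, default="다시 말해 주세요.")

for machine, utterance in [(chinese, "你好"), (korean, "안녕")]:
    print(machine.step(utterance))
```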

Searle would respond that such a mind is only a simulation. He writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched." Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that 'it isn't really a calculator', because the physical attributes of the device do not matter." The question is, is the human mind like the pocket calculator, essentially composed of information? Or is it like the rainstorm, which can't be duplicated using digital information alone? (The issue of simulation is also discussed in the article synthetic intelligence.)

What they do and don't prove. These replies provide an explanation of exactly who it is that understands Chinese. If there is something besides the man in the room that can understand Chinese, Searle can't argue that (1) the man doesn't understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.

However, the replies, by themselves, do not prove that strong AI is true, either: they provide no evidence that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing Test. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese."

Robot and semantics replies: finding the meaning

As far as the man in the room is concerned, the symbols he writes are just meaningless "squiggles." But if the Chinese room really "understands" what it's saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize.

These replies address Searle's concerns about intentionality, symbol grounding and syntax vs. semantics.

Robot reply. Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. This would allow a "causal connection" between the symbols and things they represent. Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."

Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs he was receiving came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes." (See Mary's Room for a similar thought experiment.)

Derived meaning. Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols he manipulates are already meaningful, they're just not meaningful to him.

Searle complains that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, according to Searle, has no understanding of its own.

Commonsense knowledge / contextualist reply. Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.

Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.

What they do and don't prove. To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."

However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle, but what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.

Brain simulation and connectionist replies: redesigning the room

These arguments are all versions of the systems reply that identify a particular kind of system as being important. They try to outline what kind of a system would be able to pass the Turing test and give rise to conscious awareness in a machine. (Note that the "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being important.)

Brain simulator reply. Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker. This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.
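What "simulating the action of every neuron" might mean as a computation can be gestured at with a toy update rule. The following is a drastically simplified, invented illustration (a discrete leaky integrate-and-fire step over a handful of units), not a claim about how an adequate brain simulation would actually be built.

```python
# A drastically simplified, invented stand-in for "simulating the action of
# every neuron": a discrete leaky integrate-and-fire update over a handful of
# units. A real brain simulation would involve on the order of 1e11 neurons
# and far richer dynamics.

import random

N = 5                                    # illustrative network size
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
potential = [0.0] * N
THRESHOLD, LEAK = 1.0, 0.9

def step(spikes_in):
    """One update: leak the membrane potential, integrate inputs, fire."""
    spikes_out = []
    for i in range(N):
        potential[i] = LEAK * potential[i] + sum(
            weights[i][j] * spikes_in[j] for j in range(N)
        )
        if potential[i] >= THRESHOLD:
            spikes_out.append(1)
            potential[i] = 0.0           # reset after a spike
        else:
            spikes_out.append(0)
    return spikes_out

spikes = [1, 0, 1, 0, 0]
for _ in range(3):
    spikes = step(spikes)
print(spikes)
```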

Searle replies that such a simulation will not have reproduced the important features of the brain — its causal and intentional states. Searle is adamant that "human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains."

Two variations on the brain simulator reply are:

Chinese nation. What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between axons and dendrites? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying.

Brain replacement scenario. In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins.

Connectionist replies. Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding.

Combination reply. This response combines the robot reply with the brain simulation reply, arguing that a brain simulation connected to the world through a robot body could have a mind.

What they do and don't prove. Arguments such as these (and the robot and commonsense knowledge replies above) recommend that Searle's room be redesigned. Searle's replies all point out that, however the program is written or however it is connected to the world, it is still being simulated by a simple step by step Turing complete machine (or machines). Every one of these machines is still, at the ground level, just like the man in the room: it understands nothing and doesn't speak Chinese.

Searle also argues that, if features like a robot body or a connectionist architecture are required, then strong AI (as he understands it) has been abandoned. Either (1) Searle's room can't pass the Turing test, because formal symbol manipulation (by a Turing complete machine) is not enough, or (2) Searle's room could pass the Turing test, but the Turing test is not sufficient to determine if the room has a "mind." Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument.

The brain arguments also suggest that computation can't provide an explanation of the human mind (another aspect of what Searle thinks of as "strong AI"). They assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain itself. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works."

Other critics don't argue that these improvements are necessary for the Chinese room to pass the Turing test or to have a mind. They accept the premise that the room as Searle describes it does, in fact, have a mind, but they argue that it is difficult to see—Searle's description is correct, but misleading. By redesigning the room more realistically they hope to make this more obvious. In this case, these arguments are being used as appeals to intuition (see next section). Searle's intuition, however, is never shaken. He writes: "I can have any formal program you like, but I still understand nothing."

In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's "blockhead" argument suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form "if the user writes S, reply with P and goto X". Any program can be rewritten (or "refactored") into this form, even a brain simulation. It is hard for most to imagine that such a program would give rise to a "mind" or have "understanding". In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a memory address—a number associated with the next rule. It is hard to visualize that an instant of our conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims.
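A minimal sketch of Block's construction, with invented placeholder entries, shows how trivially mechanical such a program is; the entire "mental state" between turns is a single integer.

```python
# A sketch of Block's "blockhead" construction: the whole program is a lookup
# table of entries "if the user writes S in state X, reply P and go to state
# X'". The entries are invented placeholders; a table that actually passed the
# Turing test would be astronomically large.

TABLE = {
    # (current state, user input) -> (reply, next state)
    (0, "你好"): ("你好！", 1),
    (1, "你是谁？"): ("我是一个程序。", 2),
}

def blockhead(state, user_input):
    """Pure table lookup: the entire 'mental state' between turns is `state`."""
    return TABLE.get((state, user_input), ("……", state))

state = 0
for utterance in ["你好", "你是谁？"]:
    reply, state = blockhead(state, utterance)
    print(reply, "-> state", state)
```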

Speed, complexity and other minds: appeals to intuition

The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions they support other positions, such as the system and robot replies.

Speed and complexity replies. The speed at which our brains process information is (by some estimates) 100,000,000,000 operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
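The arithmetic behind the "millions of years" estimate is easy to reproduce under loudly stated assumptions; the figures below are illustrative guesses, not measurements.

```python
# Back-of-the-envelope arithmetic for the speed reply, under invented
# assumptions: the brain performs ~1e11 elementary operations per second, a
# short reply takes ~10 seconds of "brain time", and the man needs ~100
# seconds per hand-executed rule (fetching instructions from the filing
# cabinets).

BRAIN_OPS_PER_SEC = 1e11
REPLY_SECONDS_OF_BRAIN_TIME = 10
SECONDS_PER_MANUAL_OP = 100

ops_needed = BRAIN_OPS_PER_SEC * REPLY_SECONDS_OF_BRAIN_TIME   # 1e12 operations
manual_seconds = ops_needed * SECONDS_PER_MANUAL_OP            # 1e14 seconds
manual_years = manual_seconds / (3600 * 24 * 365)

print(f"{manual_years:,.0f} years per reply")                  # roughly 3 million years
```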

An especially vivid version of the speed and complexity reply is from Paul and Patricia Churchland. They propose this analogous thought experiment:

Churchland's luminous room. Suppose a philosopher finds it inconceivable that light is caused by waves of electromagnetism. He could go into a dark room and wave a magnet up and down. He would see no light, of course, and he could claim that he had proved light is not a magnetic wave and that he has refuted Maxwell's equations. The problem is that he would have to wave the magnet up and down something like 450 trillion times a second in order to see anything.
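The order of magnitude can be checked directly from the frequency of visible light, taking red light at roughly 650 nm as an illustrative wavelength:

\[
\nu = \frac{c}{\lambda} \approx \frac{3 \times 10^{8}\ \text{m/s}}{650 \times 10^{-9}\ \text{m}} \approx 4.6 \times 10^{14}\ \text{Hz},
\]

that is, several hundred trillion oscillations per second, far beyond anything achievable by waving a magnet by hand.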

Several of the replies above address the issue of complexity. The connectionist reply emphasizes that a working artificial system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge," as Daniel Dennett explains.

Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"

Other minds reply. The problem of other minds is that there is no way we can determine if other people's subjective experience is the same as our own. Searle's argument is a version of the problem of other minds, applied to machines. We can only judge if other people have minds by studying their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.

Alan Turing (writing 30 years before Searle presented his argument) noted that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." The Turing test simply extends this "polite convention" to machines. He doesn't intend to solve the problem of other minds (for machines or people) and he doesn't think we need to. (One of Turing's motivations for devising the test was to avoid precisely the kind of philosophical problems that Searle is interested in. He writes "I do not wish to give the impression that I think there is no mystery ... [but] I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.")

According to those who make this reply, Searle's position implies that we can't prove that the Chinese room or the man in it has a mind. Searle believes that there are "causal properties" in our neurons that give rise to the mind. These causal properties can't be detected by anyone outside the mind, otherwise the Chinese Room couldn't pass the Turing test—the people outside would be able to tell there wasn't a Chinese speaker in the room by detecting their causal properties. Since they can't detect causal properties, they can't detect the existence of the mental in either case. Critics argue that this implies the human mind, as Searle describes it, is epiphenomenal: that it "casts no shadow." To make this point clear, Daniel Dennett suggests this version of the "other minds" reply:

Dennett's reply from natural selection. Suppose that, by some mutation, a human being is born that does not have Searle's "causal properties" but nevertheless acts exactly like a human being. (This sort of animal is called a "zombie" in thought experiments in the philosophy of mind). This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. Therefore, if Searle is right, it's most likely that human beings (as we see them today) are actually "zombies," who nevertheless insist they are conscious. This suggests it's unlikely that Searle's "causal properties" would have ever evolved in the first place. Nature has no incentive to create them.

Searle disagrees with this analysis and insists that we must "presuppose the reality and knowability of the mental" and that "The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't ... what we wanted to know is what distinguishes the mind from thermostats and livers." He takes it as obvious that we can detect the presence of other minds and dismisses this reply as being off the point.

What they do and don't prove. These arguments apply only to our intuitions. (As do the arguments above which are intended to make it seem more plausible that the Chinese room contains a mind, which can include the robot, commonsense knowledge, brain simulation and connectionist replies.) They do not directly prove that a machine can or can't have a mind.

However, some critics believe that Searle's argument relies entirely on intuitions. Ned Block writes "Searle's argument depends for its force on intuitions that certain entities do not think." Daniel Dennett describes the Chinese room argument as an "intuition pump" and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the 'obvious' conclusion from it."

These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious by undermining the intuitions that his certainty requires.

Formal arguments

Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He presented the first "excessively crude" version in 1984. The version given below is from 1990.

The only premise or conclusion in the argument which should be controversial is A3 and it is this point which the Chinese room thought experiment is intended to prove.

He begins with three axioms:

(A1) "Programs are formal (syntactic)."
A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it doesn't know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.

(A2) "Minds have mental contents (semantics)."
Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.

(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."
This is what the Chinese room argument is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.

Searle posits that these lead directly to this conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds.
This should follow without controversy from the first three: Programs don't have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore programs are not minds.
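One way to see why the inference goes through is to read "suffices for" as a relation between properties; the following is an interpretive sketch of the derivation, not Searle's own notation:

\[
\begin{aligned}
\text{A1:}\quad & \text{Program} \Rightarrow \text{Syntax (and nothing beyond syntax)} \\
\text{A2:}\quad & \text{Mind} \Rightarrow \text{Semantics} \\
\text{A3:}\quad & \neg\,(\text{Syntax} \Rightarrow \text{Semantics})
\end{aligned}
\]

If being a program sufficed for being a mind, then by A2 it would also suffice for having semantics; but by A1 a program supplies nothing beyond syntax, so syntax alone would have to suffice for semantics, contradicting A3. Hence being a program does not suffice for being a mind, which is C1.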

This much of the argument is intended to show that artificial intelligence will never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue. Is the human brain running a program? In other words, is the computational theory of mind correct? He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds:

(A4) Brains cause minds.

Searle claims that we can derive "immediately" and "trivially" that:

(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
Brains must have something that causes a mind to exist. Science has yet to determine exactly what it is, but it must exist, because minds exist. Searle calls it "causal powers". "Causal powers" is whatever the brain uses to create a mind. If anything else can cause a mind to exist, it must have "equivalent causal powers". "Equivalent causal powers" is whatever else could be used to make a mind.

And from this he derives the further conclusions:

(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
This follows from C1 and C2: Since no program can produce a mind, and "equivalent causal powers" produce minds, it follows that programs do not have "equivalent causal powers."

(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
Since programs do not have "equivalent causal powers", "equivalent causal powers" produce minds, and brains produce minds, it follows that brains do not use programs to produce minds.
