In philosophy, the computational theory of mind is the view that the human mind is best conceived as an information processing system and that thought is a form of computation. The theory was proposed in its modern form by Hilary Putnam in 1961 and developed by Jerry Fodor in the 1960s and 1970s. This view is common in modern cognitive psychology and is presumed by theorists of evolutionary psychology.
The computational theory of mind holds that the mind functions as a computer or symbol manipulator: the mind computes input from the natural world to create outputs in the form of further mental or physical states. A computation is the process of taking input and following a step-by-step algorithm to get a specific output. The computational theory of mind claims that certain aspects of the mind follow step-by-step processes to compute representations of the world; however, the theory does not claim that computation is sufficient for thought.
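The notion of computation used here can be illustrated with a toy sketch (a hypothetical example, not drawn from the theory's literature): a fixed rule is applied mechanically, one step at a time, to transform an input string of symbols into an output string.

```python
def compute(tape):
    """Toy computation: apply one fixed rule per step.
    Here the rule doubles each symbol of a unary string,
    so '111' (three) becomes '111111' (six)."""
    output = ""
    for symbol in tape:       # one step per input symbol
        output += symbol * 2  # the rule, applied mechanically
    return output

print(compute("111"))  # -> 111111
```

Nothing in the procedure depends on what the symbols mean; the output is fixed entirely by the input and the rule, which is the sense of "computation" the theory relies on.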
The computational theory of mind requires representation because 'input' into a computation comes in the form of symbols or representations of other objects. A computer cannot compute an actual object; it must interpret and represent the object in some form and then compute the representation. The computational theory of mind is related to the representational theory of mind in that both require that mental states are representations. However, the two theories differ: the representational theory claims that all mental states are representations, while the computational theory leaves open the possibility that certain mental states, such as pain or depression, may not be representational and therefore may not be suitable for a computational treatment. These non-representational mental states are known as qualia. The computational theory of mind is also related to the language of thought. The language of thought theory allows the mind to process more complex representations with the help of semantics. (See below in semantics of mental states.)
'Computer' here is not meant to mean a modern-day electronic computer. Rather, a computer is a symbol manipulator that follows step-by-step functions to compute input and form output. Alan Turing described this type of computer in his concept of a Turing machine.
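The abstract computer Turing described can be sketched in a few lines. The following is a minimal, illustrative simulator (the rule encoding and the bit-flipping example are assumptions chosen for brevity, not a standard implementation): the machine reads one symbol at a time, rewrites it, moves left or right, and changes state, with all behavior fixed by a table of rules.

```python
def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Minimal one-tape Turing machine.
    rules maps (state, symbol) -> (new_symbol, move, new_state);
    the machine halts when it enters the 'halt' state."""
    cells = dict(enumerate(tape))  # tape as a sparse dict of cells
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example rule table: flip every bit of a binary string, then halt on blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("1011", rules))  # -> 0100
```

The point of the sketch is that the machine's entire "behavior" is exhausted by mechanical rule-following over symbols, which is exactly the kind of device the computational theory takes as its model of mind.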
There are arguments against the computational theory of mind. Some of the most compelling concern the physical constraints on a computational process. Gallistel writes in Learning and Representation about some of the implications of a truly computational system of the mind. Essentially, Gallistel is concerned with the thermodynamic limits within the circuits of the brain: given the high volume of information processed and the minimal loss such processing would tolerate, we have to ask where the energy comes from and how the heat would be dissipated.
It can also be argued that not all thoughts are actually computable. The idea that a thought (or thought process) can be broken down into a number-based system (such as would be used in a computer, i.e. a Turing machine) is necessary to the computational theory of mind. If it is not true that all thoughts can be reduced to numbers, then the computational theory of mind cannot be entirely true. This raises a further objection: can a computational model even constitute learning?
John Searle has offered a thought experiment known as the Chinese Room that illustrates this problem. Imagine a man in a room with no way of communicating with anyone or anything outside of the room except for a piece of paper passed under the door. With the paper, he is to use a series of books provided to decode and "answer" what is on the paper. The symbols are all in Chinese, and all the man knows is where to look in the books, which then tell him what to write in response. It just so happens that this generates a conversation that a Chinese speaker outside the room can actually understand, but can the man in the room really be said to understand it? This is essentially what the computational theory of mind presents us with: a model in which the mind simply decodes symbols and outputs more symbols. It is argued that perhaps this is not real learning or thinking at all.
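The rule books in Searle's scenario can be caricatured as a lookup table (a deliberately simplified sketch; the phrases and replies below are invented placeholders, not part of Searle's argument): the operator produces fluent-looking answers while attaching no meaning to any symbol.

```python
# The "rule books": each incoming string of symbols maps to a reply.
# The operator consults the table mechanically, understanding nothing.
rule_books = {
    "你好吗": "我很好",            # "How are you?" -> "I am fine"
    "你叫什么名字": "我没有名字",   # "What is your name?" -> "I have no name"
}

def room(message):
    """Return the scripted reply; unknown input gets a stock fallback."""
    return rule_books.get(message, "请再说一遍")  # "Please say that again"

print(room("你好吗"))  # -> 我很好
```

To the Chinese speaker outside, the replies look like understanding; inside, there is only symbol manipulation. Searle's claim is that no amount of such manipulation adds up to understanding.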
Additionally, Roger Penrose has noted that the human mind seems to have a capacity to understand, use, and discover mathematical intricacies that computers have yet to accomplish, though one could argue that it is only a matter of time until computers reach this level.