The term was first used in the 1960s in the context of theories that likened the mind to a computer. Before then, what we now call working memory was referred to as short-term memory, sometimes also as primary memory, immediate memory, operant memory, or provisional memory. Short-term memory is the ability to remember information over a brief period of time (in the order of seconds). Most theorists today use the concept of working memory to replace or include the older concept of short-term memory, thereby marking a stronger emphasis on the notion of manipulation of information instead of passive maintenance.
The earliest mention of experiments on the neural basis of working memory can be traced back more than 100 years, to when Hitzig and Ferrier described ablation experiments on the prefrontal cortex (PFC). They concluded that the frontal cortex was important for cognitive processes rather than sensory ones. In 1935 and 1936, Jacobsen and colleagues were the first to show that the cognitive role of the PFC was most evident in delay-dependent tasks; in other words, animals with PFC lesions suffered from short-term memory loss.
There have been numerous models proposed regarding how working memory functions, both anatomically and cognitively. Of these, three have gained wide acceptance.
Working memory capacity can be tested by a variety of tasks. A commonly used measure is a dual-task paradigm combining a memory span measure with a concurrent processing task. For example, Daneman and Carpenter (1980) used the "reading span" task: subjects read a number of sentences (usually between two and six) and try to remember the last word of each sentence. At the end of the list of sentences, they repeat back the words in their correct order. Other tasks that do not have this dual-task nature have also been shown to be good measures of working memory capacity. The question of what features a task must have to qualify as a good measure of working memory capacity is a topic of ongoing research.
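The scoring logic of such a span task can be sketched in a few lines of code. The following is a toy illustration, not Daneman and Carpenter's actual procedure; the function name and the example sentences are invented for illustration.

```python
# Toy scorer for a reading-span trial (hypothetical sketch, not the
# original Daneman & Carpenter procedure): the participant reads a set
# of sentences, keeps the final word of each, and recalls those words
# in their original order at the end of the set.

def score_reading_span_trial(sentences, recalled_words):
    """Count sentence-final words recalled in the correct serial position."""
    targets = [s.split()[-1].strip(".!?").lower() for s in sentences]
    recalled = [w.lower() for w in recalled_words]
    return sum(1 for i, t in enumerate(targets)
               if i < len(recalled) and recalled[i] == t)

sentences = [
    "The sailor sold the parrot.",
    "The teacher graded the essay.",
    "The chef burned the toast.",
]
# Correct recall of the three final words, in order:
print(score_reading_span_trial(sentences, ["parrot", "essay", "toast"]))  # 3
# Transposing the first two words scores only the last position:
print(score_reading_span_trial(sentences, ["essay", "parrot", "toast"]))  # 1
```

Strict serial-position scoring is only one convention; some studies also credit words recalled in the wrong position.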
Measures of working-memory capacity are strongly related to performance in other complex cognitive tasks, such as reading comprehension and problem solving, and to measures of intelligence quotient. Some researchers have argued that working memory capacity reflects the efficiency of executive functions, most notably the ability to maintain a few task-relevant representations in the face of distracting irrelevant information. The tasks seem to reflect individual differences in the ability to focus and maintain attention, particularly when other events are serving to capture attention. These effects seem to be a function of frontal brain areas.
Others have argued that the capacity of working memory is better characterized as the ability to mentally form relations between elements, or to grasp relations in given information. This idea has been advanced, among others, by Graeme Halford, who illustrated it by our limited ability to understand statistical interactions between variables. These authors asked people to compare written statements about the relations between several variables to graphs illustrating the same or a different relation, for example "If the cake is from France then it has more sugar if it is made with chocolate than if it is made with cream but if the cake is from Italy then it has more sugar if it is made with cream than if it is made of chocolate". This statement describes a relation between three variables (country, ingredient, and amount of sugar), which is the maximum most of us can understand. The capacity limit apparent here is obviously not a memory limit - all relevant information can be seen continuously - but a limit on how many relationships we can discern simultaneously.
It has been suggested that working memory capacity can be measured as the capacity C of short-term memory (measured in bits of information), defined as the product of the individual mental speed Ck of information processing (in bit/s; see the external link below to the paper by Lehrl and Fischer (1990)) and the duration D (in s) of information in working memory, meaning the duration of the memory span. Hence:

C = Ck × D
Lehrl and Fischer measured speed by reading rate. They claimed that C is closely related to general intelligence. Roberts, Pallier, and Stankov have shown, however, that C measures little more than reading speed.
The idea that working memory capacity underlies general intelligence and can be measured in terms of bits was inspired by the work of Miller (1956), who demonstrated that working memory capacity depends on the number of chunks, because any overlearned monosyllabic chunk can be processed as one single bit of information. If this were not the case, any attempt to correlate C with general intelligence would be in vain.
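The definition above is just a multiplication, which can be made concrete with a minimal sketch. The numeric values below are illustrative placeholders, not measured data from Lehrl and Fischer.

```python
# Sketch of Lehrl and Fischer's capacity definition C = Ck * D:
# capacity C (in bits) is the product of mental speed Ck (bit/s)
# and the duration D (s) of information in working memory.
# The values below are hypothetical, chosen only for illustration.

def short_term_capacity(ck_bits_per_s, duration_s):
    """Return capacity C in bits, given speed Ck (bit/s) and duration D (s)."""
    return ck_bits_per_s * duration_s

Ck = 15.0  # hypothetical processing speed, bit/s
D = 5.0    # hypothetical duration of the memory span, s
print(short_term_capacity(Ck, D))  # 75.0 bits
```

On this view, two people can reach the same capacity C by different routes: one through faster processing (higher Ck), the other through longer-lasting traces (higher D).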
Why is working memory capacity limited at all? If we knew the answer to this question, we would understand much better why our cognitive abilities are as limited as they are. There are several hypotheses about the nature of the capacity limit. One is that a limited pool of cognitive resources is needed to keep representations active, and thereby available for processing, and to carry out processes. Another hypothesis is that memory traces in working memory decay within a few seconds unless refreshed through rehearsal, and because the speed of rehearsal is limited, we can maintain only a limited amount of information. Yet another idea is that representations held in working memory interfere with each other.

Theorists have discussed several forms of interference. One of the oldest ideas is that new items simply replace older ones in working memory. Another form of interference is retrieval competition. For example, when the task is to remember a list of 7 words in their order, we need to start recall with the first word. While trying to retrieve the first word, the second word, which is represented in close proximity, is accidentally retrieved as well, and the two compete for being recalled. Errors in serial recall tasks are often confusions of neighboring items on a memory list (so-called transpositions), showing that retrieval competition plays a role in limiting our ability to recall lists in order, and probably also in other working memory tasks. A third form of interference assumed by some authors is feature overwriting. The idea is that each word, digit, or other item in working memory is represented as a bundle of features, and when two items share some features, one of them steals the features from the other. The more items are held in working memory, and the more their features overlap, the more each of them will be degraded by the loss of some features.
None of these hypotheses can explain the experimental data entirely. The resource hypothesis, for example, was meant to explain the trade-off between maintenance and processing: the more information must be maintained in working memory, the slower and more error-prone concurrent processes become, and with a higher demand on concurrent processing, memory suffers. This trade-off has been investigated with tasks like the reading-span task described above. It has been found that the amount of trade-off depends on the similarity of the information to be remembered and the information to be processed. For example, remembering numbers while processing spatial information, or remembering spatial information while processing numbers, impairs performance much less than when material of the same kind must be remembered and processed. Also, remembering words and processing digits, or remembering digits and processing words, is easier than remembering and processing materials of the same category. These findings are also difficult to explain for the decay hypothesis, because decay of memory representations should depend only on how long the processing task delays rehearsal or recall, not on the content of the processing task. A further problem for the decay hypothesis comes from experiments in which the recall of a list of letters was delayed, either by instructing participants to recall at a slower pace or by instructing them to say an irrelevant word once or three times between the recall of each letter. Delaying recall had virtually no effect on recall accuracy. The interference hypothesis fares best at explaining why the similarity between memory contents and the contents of concurrent processing tasks affects how much they impair each other: more similar materials are more likely to be confused, leading to retrieval competition, and they have more overlapping features, leading to more feature overwriting.
One experiment directly manipulated the amount of overlap of phonological features between words to be remembered and other words to be processed. Those to-be-remembered words that had a high degree of overlap with the processed words were recalled worse, lending some support to the idea of interference through feature overwriting.
The theory most successful so far in explaining experimental data on the interaction of maintenance and processing in working memory is the "time-based resource-sharing model". This theory assumes that representations in working memory decay unless they are refreshed. Refreshing them requires an attentional mechanism that is also needed for any concurrent processing task. When there are small time intervals in which the processing task does not require attention, this time can be used to refresh memory traces. The theory therefore predicts that the amount of forgetting depends on the temporal density of attentional demands of the processing task - this density is called "cognitive load". The cognitive load depends on two variables: the rate at which the processing task requires individual steps to be carried out, and the duration of each step. For example, if the processing task consists of adding digits, then having to add another digit every half second places a higher cognitive load on the system than having to add another digit every two seconds. Adding larger digits takes more time than adding smaller digits, and therefore cognitive load is higher when larger digits must be added. In a series of experiments, Barrouillet and colleagues have shown that memory for lists of letters depends on cognitive load, but not on the number of processing steps (a finding that is difficult to explain by an interference hypothesis) and not on the total time of processing (a finding difficult to explain by a simple decay hypothesis). One difficulty for the time-based resource-sharing model, however, is that the similarity between memory materials and materials processed also affects memory accuracy.
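The notion of cognitive load described above can be sketched as the proportion of available time that the processing task occupies attention. The step durations below are assumed values for illustration, not measurements from Barrouillet and colleagues.

```python
# Sketch of "cognitive load" in the time-based resource-sharing model:
# the fraction of each inter-step interval during which the processing
# task captures attention, leaving no time to refresh memory traces.
# Step durations here are hypothetical illustrative values.

def cognitive_load(step_duration_s, interval_between_steps_s):
    """Fraction of time occupied by one processing step per interval."""
    return step_duration_s / interval_between_steps_s

# Adding a small digit (assumed to take ~0.3 s) every two seconds:
print(cognitive_load(0.3, 2.0))  # 0.15 -> much free time for refreshing
# The same operation required every half second:
print(cognitive_load(0.3, 0.5))  # 0.6  -> little free time, more forgetting
```

The model's prediction is that forgetting tracks this ratio, not the raw number of processing steps and not the total processing time.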
The first insights into the neuronal basis of working memory came from animal research. Fuster recorded the electrical activity of neurons in the prefrontal cortex (PFC) of monkeys while they were doing a delayed matching task. In that task, the monkey sees how the experimenter places a bit of food under one of two identical-looking cups. A shutter is then lowered for a variable delay period, screening off the cups from the monkey’s view. After the delay, the shutter opens and the monkey is allowed to retrieve the food from under the cups. Successful retrieval on the first attempt – something the animal can achieve after some training on the task – requires holding the location of the food in memory over the delay period. Fuster found neurons in the PFC that fired mostly during the delay period, suggesting that they were involved in representing the food location while it was invisible. Later research has shown similar delay-active neurons also in the posterior parietal cortex, the thalamus, the caudate, and the globus pallidus.
Localization of brain functions in humans has become much easier with the advent of brain imaging methods (PET and fMRI). This research has confirmed that areas in the PFC are involved in working memory functions. During the 1990s, much debate centered on the different functions of the ventrolateral (i.e., lower) and dorsolateral (higher) areas of the PFC. One view was that the dorsolateral areas are responsible for spatial working memory and the ventrolateral areas for non-spatial working memory. Another view proposed a functional distinction, arguing that ventrolateral areas are mostly involved in pure maintenance of information, whereas dorsolateral areas are more involved in tasks requiring some processing of the memorized material. The debate is not entirely resolved, but most of the evidence supports the functional distinction.
Brain imaging has also revealed that working memory functions are by no means limited to the PFC. A review of numerous studies shows areas of activation during working memory tasks scattered over a large part of the cortex. There is a tendency for spatial tasks to recruit more right-hemisphere areas, and for verbal and object working memory to recruit more left-hemisphere areas. The activation during verbal working memory tasks can be broken down into one component reflecting maintenance, in the left posterior parietal cortex, and a component reflecting subvocal rehearsal, in the left frontal cortex (Broca’s area, known to be involved in speech production).
There is an emerging consensus that most working memory tasks recruit a network of PFC and parietal areas. One study has shown that during a working memory task the connectivity between these areas increases. Other studies have demonstrated that these areas are necessary for working memory, and not just accidentally activated during working memory tasks, by temporarily blocking them through transcranial magnetic stimulation (TMS), thereby producing an impairment in task performance.
A current debate concerns the function of these brain areas. The PFC has been found to be active in a variety of tasks that require executive functions. This has led some researchers to argue that the role of the PFC in working memory is in controlling attention, selecting strategies, and manipulating information in working memory, but not in the maintenance of information. The maintenance function is attributed to more posterior areas of the brain, including the parietal cortex. Other authors interpret the activity in the parietal cortex as reflecting executive functions, because the same area is also activated in other tasks requiring executive attention but no memory.
Most brain imaging studies of working memory have used recognition tasks, such as delayed recognition of one or several stimuli, or the n-back task, in which each new stimulus in a long series must be compared to the one presented n steps back in the series. The advantage of recognition tasks is that they require minimal movement (just pressing one of two keys), making fixation of the head in the scanner easier. Experimental research and research on individual differences in working memory, however, have largely used recall tasks (e.g., the reading span task, see above). It is not clear to what degree recognition and recall tasks reflect the same processes and the same capacity limitations.
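The comparison rule of the n-back task is simple enough to state in code. The following is a minimal sketch of that rule with an invented stimulus series; real n-back experiments add timing, response collection, and controlled target rates.

```python
# Minimal sketch of the n-back comparison rule: each stimulus from
# position n onward is compared with the one presented n steps earlier.
# The letter series below is an invented example.

def n_back_targets(stimuli, n):
    """For each position from n onward, report whether the stimulus
    matches the one n steps back (i.e., whether it is a 'target')."""
    return [stimuli[i] == stimuli[i - n] for i in range(n, len(stimuli))]

series = ["A", "B", "A", "C", "A", "C"]
# 2-back: positions 2..5 are compared with positions 0..3.
print(n_back_targets(series, 2))  # [True, False, True, True]
```

Raising n increases the number of items that must be simultaneously maintained and updated, which is why higher-n versions are used to load working memory more heavily.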
A few brain imaging studies have been conducted with the reading span task or related tasks. Increased activation during these tasks was found in the PFC and, in several studies, also in the anterior cingulate cortex (ACC). People performing better on the task showed a larger increase of activation in these areas, and their activation was more strongly correlated over time, suggesting that neural activity in these two areas was better coordinated, possibly due to stronger connectivity.
Much has been learned over the last two decades about where in the brain working memory functions are carried out. Much less is known about how the brain accomplishes short-term maintenance and goal-directed manipulation of information. The persistent firing of certain neurons in the delay period of working memory tasks shows that the brain has a mechanism for keeping representations active without external input.
Keeping representations active, however, is not enough if the task demands maintaining more than one chunk of information. In addition, the components and features of each chunk must be bound together to prevent them from being mixed up. For example, if a red triangle and a green square must be remembered at the same time, one must make sure that “red” is bound to “triangle” and “green” is bound to “square”. One way of establishing such bindings is by having the neurons that represent features of the same chunk fire in synchrony, and those that represent features belonging to different chunks fire out of sync. In the example, neurons representing redness would fire in synchrony with neurons representing the triangular shape, but out of sync with those representing the square shape. So far, there is no direct evidence that working memory uses this binding mechanism, and other mechanisms have been proposed as well. It has been speculated that neurons firing synchronously for working memory oscillate at frequencies in the theta band (4 to 8 Hz). Indeed, the power of theta frequency in the EEG increases with working memory load, and oscillations in the theta band measured over different parts of the skull become more coordinated when the person tries to remember the binding between two components of information.
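The red-triangle/green-square example can be sketched as a toy phase-coding scheme, in which features belonging to the same chunk are assigned the same firing phase. This is a speculative illustration of the synchrony idea only, not a claim about how neurons actually implement binding.

```python
# Toy illustration of binding by synchrony (a speculative sketch, not
# evidence): features of the same chunk are tagged with the same firing
# phase, so which features belong together can be read off by grouping
# features that share a phase.

def bind_by_phase(chunks):
    """Assign a distinct phase index to each chunk; every feature of a
    chunk shares that chunk's phase."""
    phase_of = {}
    for phase, features in enumerate(chunks):
        for feature in features:
            phase_of[feature] = phase
    return phase_of

phases = bind_by_phase([{"red", "triangle"}, {"green", "square"}])
# "red" fires in phase with "triangle" but out of phase with "square":
print(phases["red"] == phases["triangle"])  # True
print(phases["red"] == phases["square"])    # False
```

The sketch also shows why such a code is capacity-limited: only a small number of distinct phases fit within one oscillation cycle, limiting how many chunks can be kept apart at once.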