Cognitive Psychology and Its Implications, Ch. 6

6

Human Memory: Encoding and Storage

Past chapters have discussed how we perceive and encode what is in our present.

Now we turn to discussing memory, which is the means by which we can

perceive our past. People who lose the ability to create new memories become

effectively blind to their past. I would recommend the movie Memento as providing

a striking characterization of what it would be like to have no memory. The protagonist

of the film, Leonard, has anterograde amnesia, a condition that prevents him from

forming new memories. He can remember his past up to the point of a terrible

crime that left him with amnesia, and he can keep track of what is in the immediate

present, but as soon as his attention is drawn to something else, he forgets what

has just happened. So, for instance, he is constantly meeting people he has met

before, who have often manipulated him, but he does not remember them, nor can

he protect himself from being manipulated further. Although Leonard incorrectly

labels his condition as having no short-term memory, this movie is an accurate

portrayal of anterograde amnesia—the inability to form new long-term memories. It

focuses on the amazing ways Leonard tries to connect the past with the immediate

present.

This chapter and the next can be thought of as being about what worked and

did not work for Leonard. This chapter will answer the following questions:

• How do we maintain a short-term or working memory of what just happened? This is what still worked for Leonard.

• How does the information we are currently maintaining in working memory prime knowledge in our long-term memory?

• How do we create permanent memories of our experiences? This is what did not work any more for Leonard.

• What factors influence our success in creating new memories?


•Memory and the Brain

Throughout the brain, neurons are capable of changing in response to experience.

This neural plasticity provides the basis for memory. Although all of the

brain plays a role in memory, there are two regions, illustrated in Figure 6.1,

that have played the most prominent role in research on human memory. First,

there is a region within the temporal cortex that includes the hippocampus,

whose role in memory was already discussed in Chapter 1 (see Figure 1.7). The

hippocampus and surrounding structures play an important role in the storage

of new memories. This is where Leonard had his difficulties. Second, research

has found that prefrontal brain regions are strongly associated with both the

encoding of new memories and the retrieval of old memories. These are the same

regions that were discussed in Chapter 5 with respect to the meaningful encoding

of pictures and sentences. This area also includes the prefrontal region from

Figure 1.15 that was important in retrieval of arithmetic and algebraic facts.

With respect to the prefrontal regions shown in Figure 6.1, note that memory

research has found laterality effects similar to those noted at the beginning

of Chapter 5 (Gabrielli, 2001). Specifically, study of verbal material tends to involve

mostly the left hemisphere. Study of pictorial material, in contrast, tends

to involve sometimes the right hemisphere and sometimes both hemispheres.

Gabrielli notes that the left hemisphere tends to be associated with pictorial

material that is linked to verbal knowledge, such as pictures of famous people

or common objects. It is as if people are naming these objects to themselves.

Human memory depends heavily on frontal structures of the brain for the

creation and retrieval of memories and on temporal structures for the

permanent storage of these memories.

FIGURE 6.1 The brain structures involved in the creation and storage of memories. Prefrontal regions are responsible for the creation of memories. The hippocampus and surrounding structures in the temporal cortex are responsible for the permanent storage of these memories. (Figure labels: prefrontal regions that process information for storage in posterior regions; internal hippocampal regions that store memories permanently.)


•Sensory Memory Holds Information Briefly

Before beginning the discussion of more permanent memories, it is worth

noting the visual and auditory sensory memories that hold information briefly

when it first comes in.

Visual Sensory Memory

Many studies of visual sensory memory have used a procedure in which participants

are presented with a visual array of items, such as the letters shown in

Figure 6.2, for a brief period of time (e.g., 50 ms). When asked to recall the

items, participants are able to report three, four, five, or at most six items. One

might think that only this much material can be held in visual memory—yet

participants report that they were aware of more items but the items faded

away before they could attend to them and report them.

An important methodological variation on this task was introduced by

Sperling (1960). He presented arrays consisting of three rows of four letters.

Immediately after this stimulus was turned off, participants were cued to attend

to just one row of the display and to report only the letters in that row. The cues

were in the form of different tones (high for top row, medium for middle, and

low for bottom). Sperling’s method was called the partial-report procedure, in

contrast to the whole-report procedure, which was what had been used until

then. Participants were able to recall all or most of the items from a row of four.

Because participants did not know beforehand which row would be cued, Sperling

argued that they must have had most or all of the items stored in some sort

of short-term visual memory. Given the cue right after the visual display was

turned off, they could attend to that row in their short-term visual memory and

report the letters in that row. The reason participants could not report more

items in the full-report procedure was that these items had faded from this

memory before the participants could attend to them.

In the procedure just described, the tone cue was presented immediately

after the display was turned off. Sperling also varied the length of the delay

between the removal of the display and the tone. The results he obtained, in

terms of number of letters recalled, are presented in Figure 6.3. As the delay

increased to 1 s, the participants’ performances decayed back to what would be

expected from the original whole-report level of four or five items. That is,

participants were reporting about one-third as many items from the cued row

as they could report from three rows in the whole-report procedure.

Thus, it appears that the memory of the actual display decays very

rapidly and is essentially gone by the end of 1 s. All that is left after

that is what the participant has had time to attend to and convert to

a more permanent form.
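The logic behind Sperling's inference can be made concrete with a small calculation. The Python sketch below is only illustrative (the particular numbers are assumptions in the spirit of Figure 6.3, not Sperling's exact data): it scales performance on the single cued row up to the whole display to estimate how many letters were available in the sensory store.

```python
def estimate_available_letters(recalled_in_cued_row, row_size=4, n_rows=3):
    """Partial-report logic: the cue arrives only after the display is gone,
    so performance on the cued row should reflect what was available for any
    row; scaling it up estimates availability for the whole display."""
    return (recalled_in_cued_row / row_size) * (row_size * n_rows)

# Illustrative values: most of a cued row right after offset,
# but only about a third of it after a 1-s delay.
print(estimate_available_letters(3.3))  # ~9.9 letters available immediately
print(estimate_available_letters(1.5))  # ~4.5 letters after a 1-s delay
```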

Sperling’s experiments indicate the existence of a brief visual sensory

store—a memory system that can effectively hold all the information

in the visual display. While information is being held in this

store, a participant can attend to it and report it. This sensory store

appears to be particularly visual in character. In one experiment that


FIGURE 6.2 An example of the kind of display used in a visual-report experiment: an array of letters (X, C, V, N, K, P, F, L, B, M, R, J). The display is presented briefly to participants, who are then asked to report the letters it contains.


demonstrated the visual character of the sensory store, Sperling (1967) varied

the postexposure field (the visual field after the display). He found that when the

postexposure field was light, the sensory information remained for only 1 s, but

when the field was dark, it remained for a full 5 s. Thus, a bright postexposure

field tends to “wash out’’ memory for the display. Moreover, following a display

with another display of characters effectively “overwrites’’ the first display and so

destroys the memory for the first set of letters. The brief visual memory revealed

in these experiments is sometimes called iconic memory. Unless information

in the display is attended to and processed further, it will be lost.

Auditory Sensory Memory

There is similar evidence for a brief auditory sensory store, which is sometimes

called echoic memory. There are behavioral demonstrations (e.g., Moray,

Bates, & Barnett, 1965; Darwin, Turvey, & Crowder, 1972; Glucksberg & Cowan,

1972) of an auditory sensory memory, similar to Sperling’s visual sensory

memory, by which people can report an auditory stimulus with considerable

accuracy if probed for it soon after onset. One of the more interesting measures

of auditory sensory memory involves an ERP measure called the mismatch

negativity. When a sound is presented that is different from recently heard

sounds in such features as pitch or magnitude (or is a different phoneme), there

is an increase in the negativity of the ERP recording 150 to 200 ms after the

discrepant sound (for a review, read Näätänen, 1992). In one study, Sams,

Hari, Rif, and Knuutila (1993) presented one tone followed by another at various

intervals. When the second tone was different from the first, it produced a


FIGURE 6.3 Results from Sperling's experiment demonstrating the existence of a brief visual sensory store. Participants were shown arrays consisting of three rows of four letters. After the display was turned off, they were cued by a tone, either immediately or after a delay of up to 1.0 s, to recall a particular one of the three rows. The results show that the mean number of letters reported decreased as the delay of the cuing tone increased. (After Sperling, 1960. Adapted by permission of the publisher. © 1960 by Psychological Monographs.)


mismatch negativity as long as the delay between the two tones was less than 10 s.

This indicates that an auditory sensory memory can last up to 10 s, consistent

with other behavioral measures. It appears that the source of this neural

response in the brain is at or near the primary auditory cortex. Similarly, it

appears that the information held in visual sensory memory is in or near the

primary visual cortex. Thus, these basic perceptual regions of the cortex hold

a brief representation of sensory information for further processing.

Sensory information is held briefly in cortical sensory memories so that

we can process it.

•A Theory of Short-Term Memory

A very important event in the history of cognitive psychology was the development

of a theory of short-term memory in the 1960s. It clearly illustrated the

power of the new cognitive methodology to account for a great deal of data in a

way that had not been possible with previous behaviorist theories. Broadbent

(1958) had anticipated the theory of short-term memory, and Waugh and

Norman (1965) gave an influential formulation of the theory. However, it was

Atkinson and Shiffrin (1968) who gave the theory its most systematic development.

It has had an enormous influence on psychology, and although few

researchers still accept the original formulation, similar ideas play a crucial role

in some of the modern theories that we will be discussing.

Figure 6.4 illustrates the basic theory. As we have just seen, information coming

in from the environment tends to be held in transient sensory stores from

which it is lost unless attended to. The theory of short-term memory proposed

that attended information went into an intermediate short-term memory where

it had to be rehearsed before it could go into a relatively permanent long-term

memory. Short-term memory had a limited capacity to hold information. At

one time, its capacity was identified with the memory span. Memory span

refers to the number of elements one can immediately repeat back. Ask a friend

to test your memory span. Have that friend make up lists of digits of various

lengths and read them to you. See how many digits you can repeat back. You

will probably find that you are able to remember no more than around seven or

eight perfectly. The size of the memory span was considered convenient in

those days when American phone numbers were seven digits. One view was

that short-term memory has room for about seven elements, although other

theorists (e.g., Broadbent, 1975) proposed that its capacity is smaller and that

memory span depends on other stores as well as short-term memory.

In a typical memory experiment, it was assumed that participants rehearsed

the contents of short-term memory. For instance, in a study of memory span,

participants might rehearse the digits by saying them over and over again to

themselves. It was assumed that every time an item was rehearsed, there was a

probability that the information would be transferred to a relatively permanent

long-term memory. If the item left short-term memory before a permanent long-term

memory representation was developed, however, it would be lost forever.


FIGURE 6.4 A model of memory that includes an intermediate short-term memory (diagram boxes: sensory store, short-term memory, and long-term memory, with attention and rehearsal as the transfer processes). Information coming in from the environment is held in a transient sensory store from which it is lost unless attended to. Attended information goes into an intermediate short-term memory with a limited capacity to hold information. The information must be rehearsed before it can move into a relatively permanent long-term memory.


One could not keep information in short-term

memory indefinitely because new information

would always be coming in and pushing out

old information from the limited short-term

memory.

An experiment by Shepard and Teghtsoonian

(1961) is a good illustration of these ideas.

They presented participants with a long sequence

of 200 three-digit numbers. The task

was to identify when a number was repeated.

The investigators were interested in how participants’

ability to recognize a repeated number

changed as more numbers intervened between

the first appearance of the number and its repetition.

The number of intervening items is referred

to as the lag. If the participant tended to

keep only the most recent numbers in short-term

memory, memory for the last few numbers

would be good but would get progressively worse as the numbers were pushed

out of short-term memory. The results are presented in Figure 6.5. Note that

recognition memory drops off rapidly over the first few numbers, but then the

drop-off slows to the point where it appears to be reaching some sort of asymptote

at about 60%. The rapid drop-off can be interpreted as reflecting the decreasing

likelihood that the numbers are being held in short-term memory. The

60% level of recall for the later numbers reflects the amount of information

that got into long-term memory.1

A critical assumption in this theory was that the amount of rehearsal controls

the amount of information transferred to long-term memory. For instance,

Rundus (1971) asked participants to rehearse out loud and showed that

the more participants rehearsed an item, the more likely they were to remember

it. Data of this sort were perhaps most critical to the theory of short-term memory

because they reflected the fundamental property of short-term memory: It is

a necessary halfway station to long-term memory. Information has to “do time”

in short-term memory to get into long-term memory, and the more time done,

the more likely it is to be remembered.

In an influential article, Craik and Lockhart (1972) argued that what was critical

was not how long information is rehearsed, but rather the depth to which it is

processed. This theory, called depth of processing, held that rehearsal improves

memory only if the material is rehearsed in a deep and meaningful way. Passive

rehearsal does not result in better memory. A number of experiments have shown

that passive rehearsal results in little improvement in memory performance. For

instance, Glenberg, Smith, and Green (1977) had participants study a four-digit

number for 2 s, then rehearse a word for 2, 6, or 18 s, and then recall the four

digits. Participants thought that their task was to recall the digits and that they


FIGURE 6.5 Results from Shepard and Teghtsoonian's experiment demonstrating that information cannot be kept in short-term memory indefinitely because new information will always be coming in and pushing out old information. The probability of an "old" response to old items, p("old" | old), is plotted as a function of the number of intervening presentations (the lag, from 0 to 60) since the last presentation of a stimulus. (From Shepard & Teghtsoonian, 1961. Reprinted by permission of the publisher. © 1961 by the American Psychological Association.)

1 This level of memory is not really 60% because participants were incorrectly accepting more than 20% of

new items as repeated. The level of memory is really the difference between this 60% hit rate and the 20%

false alarm rate.


were just rehearsing the word to fill the time. However, they were given a final

surprise test for the words. On average, participants recalled 11%, 7%, and 13%

of the words they had rehearsed for 2, 6, and 18 s. Their recall was poor and

showed little relationship to the amount of rehearsal.2 On the other hand, as we

saw in Chapter 5, participants’ memories can be greatly improved if they process

material in a deep and meaningful way. Thus, it seems that there may be no short-term,

halfway station to long-term memory. Rather, it is critical that we process

information in a way that is conducive to setting up a long-term memory trace.

Information may go directly from sensory stores to long-term memory.

Kapur et al. (1994) did a PET study of the difference between brain correlates

of the deep and shallow processing of words. In the shallow processing

task, participants judged whether the words contained a particular letter;

in the deep processing task, they judged whether the words described living

things. Even though the study time was the same, participants remembered

75% of the deeply processed words and 57% of the shallowly processed words.

Kapur et al. (1994) found that there was greater activation during deep processing

in the left prefrontal regions indicated in Figure 6.1. A number of subsequent

studies have also shown that this region of the brain is more active

during deep processing (for a review, see Wagner, Bunge, & Badre, 2004).

Atkinson and Shiffrin’s theory of short-term memory postulated that as

information is rehearsed in a limited-capacity short-term memory, it is

deposited in long-term memory; but what turned out to be important

was how deeply the material is processed.

•Working Memory Holds the Information

Needed to Perform a Task

Baddeley’s Theory of Working Memory

Baddeley (1986) proposed a theory of the rehearsal processes that did not tie

them to storage in long-term memory. He hypothesized that there are two systems,

a visuospatial sketchpad and a phonological loop, that are what he

called “slave systems” for maintaining information, and he speculated that there

might be more such systems. These systems compose part of what he calls

working memory, which is a system for holding information that we need to

perform a task. For instance, try multiplying 35 by 23 in your head. You may

find yourself developing a visual image of part of a written multiplication

problem (visuospatial sketchpad) and you may find yourself rehearsing partial

products like 105 (phonological loop). Figure 6.6 illustrates Baddeley’s overall

conception of how these various slave systems interact. A central executive

controls how the slave systems are used. The central executive can put information

into any of the slave systems or retrieve information from them. It can


2 Although recall memory tends not to be improved by the amount of passive rehearsal, Glenberg et al. (1977)

did show that recognition memory is improved by rehearsal. Recognition memory may depend on a kind

of familiarity judgment that does not require creation of new memory traces.


also translate information from one system to

another. Baddeley claimed that the central executive

needs its own temporary store of information

to make decisions about how to control

the slave systems.

The phonological loop has received much

more extensive investigation than the visuospatial

sketchpad. Baddeley proposed that the

phonological loop consists of multiple components.

One is an articulatory loop by which an

“inner voice” rehearses verbal information. A classic example of how we use the

articulatory loop is in remembering a phone number. When one is told a phone

number, one rehearses the number over and over again to oneself until one dials

it. Many brain-imaging studies (see E. E. Smith & Jonides, 1995, for a review)

have found activation in Broca’s area (the region labeled “J” in the frontal portion

of the Figure 4.1 brain illustration) when participants are trying to remember a

list of items. This activation occurs even if the participants are not actually talking

to themselves. Patients with damage to this region show deficits in tests of short-term memory (Vallar, Di Betta, & Silveri, 1997).

Another component of the phonological loop is the phonological store.

Baddeley proposed that this store is in effect an “inner ear” that hears the inner

voice and stores the information in a phonological form. It has been proposed

that this region is associated with the parietal-temporal region of the brain (the

region labeled “J” in the parietal-temporal region of the Figure 4.1 brain illustration).

A number of imaging studies (Henson, Burgess, & Frith, 2000; Jonides

et al., 1998) have found activation of this region during the storage of verbal

information. Also, patients with lesions in this region suffer deficits of short-term

memory (Vallar et al., 1997).

One of the most compelling pieces of evidence for the existence of the articulatory

loop is the word-length effect (Baddeley, Thomson, & Buchanan, 1975).

Read the five words below and then try to repeat them back without looking

at the page:

• wit, sum, harm, bay, top

Most people can do this. Baddeley et al. found that participants were able to

repeat back an average of 4.5 words out of 5 such one-syllable words. Now read

and try to repeat back the following five words:

• university, opportunity, aluminum, constitutional, auditorium

Participants were able to recall only an average of 2.6 words out of 5 such five-syllable

words. The crucial factor appears to be how long it takes to say the word.

Vallar and Baddeley (1982) looked at recall for words that varied from one to

five syllables. They also measured how many words of the various lengths participants

could say in a second. Figure 6.7 shows the results. Note that the percentage

of sequences correctly recalled almost exactly matches the reading rate.

Trying to maintain information in working memory is analogous to the

circus act that involves spinning plates on a stick. The circus performer will get

one plate spinning on one stick, then another on another stick, then another,


Central

executive Phonological

loop

Visuospatial

sketchpad

FIGURE 6.6 Baddeley’s theory

of working memory in which a

central executive coordinates

a set of slave systems. (From

Baddeley, 1986. Reprinted by permission

of the publisher. © 1986 by Oxford

University Press.)


and so on. Then he runs back to the first before it slows

down and falls off. He respins it and then respins the rest.

He can keep only so many plates spinning at the same time.

Baddeley proposed that it is the same situation with respect

to working memory. If we try to keep too many items in

working memory, by the time we get back to rehearse the

first one, it will have decayed to the point that it takes too

long to retrieve and re-rehearse. Baddeley proposed that

we can keep about 1.5 to 2.0 seconds’ worth of material

rehearsed in the articulatory loop.
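The plate-spinning idea can be expressed as a back-of-the-envelope calculation. The sketch below is a simplification under the assumption of a fixed rehearsal window of roughly 2 seconds (Baddeley's 1.5- to 2.0-s estimate); the reading rates used are illustrative values in the range of Figure 6.7.

```python
def predicted_span(reading_rate_words_per_s, rehearsal_window_s=2.0):
    """Only as many words as can be re-articulated within the rehearsal
    window can be kept refreshed in the articulatory loop."""
    return reading_rate_words_per_s * rehearsal_window_s

print(predicted_span(2.3))  # ~4.6 short (one-syllable) words maintained
print(predicted_span(1.3))  # ~2.6 long (five-syllable) words maintained
```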

There is considerable evidence that this articulatory

loop truly involves speech. For instance, the research of

R. Conrad (1964) showed that participants suffered more

confusion when they tried to remember spans that had a

high proportion of rhyming letters (such as BCTHVZ) than

when they tried to remember spans that did not (such as

HBKLMW). Also, as we just discussed, there is evidence for

activation in Broca’s area, part of the left prefrontal cortex,

during the rehearsal of such memories.

One might wonder what the difference is between

short-term memory and Baddeley’s articulatory loop. The crucial difference is

that processing information in the phonological loop is not critical to getting

it into long-term memory. Rather, the phonological loop is just an auxiliary

system for keeping information available.

Baddeley proposed that we have an articulatory loop and a visuospatial

sketchpad, both of which are controlled by a central executive, which are

systems for holding information and are part of working memory.

The Frontal Cortex and Primate Working Memory

The frontal cortex gets larger in the progression from lower mammals, such as

the rat, to higher mammals, such as the monkey; and it shows a proportionately

greater development between the monkey and the human. It has been

known for some time that the frontal cortex plays an important role in tasks

that can be thought of as working-memory tasks in primates. The task that has

been most studied in this respect is the delayed match-to-sample task, which

is illustrated in Figure 6.8. The monkey is shown an item of food that is placed

in one of two identical wells (Figure 6.8a). Then the wells are covered, and the

monkey is prevented from looking at the scene for a delay period—typically 10 s

(Figure 6.8b). Finally, the monkey is given the opportunity to retrieve the

food, but it must remember in which well it was hidden (Figure 6.8c). Monkeys

with lesions in the frontal cortex cannot perform this task (Jacobsen,

1935, 1936). A human infant cannot perform similar tasks until its frontal cortex

has matured somewhat, usually at about 1 year of age (Diamond, 1991).

When a monkey must remember where a food item has been placed, a

particular area of the frontal cortex is involved (Goldman-Rakic, 1988). This


FIGURE 6.7 Results of Vallar and Baddeley's (1982) experiment showing the existence of the articulatory loop. Mean reading rate (words/s) and percentage of correct recall of sequences of five words are plotted as a function of word length (one to five syllables). (From Baddeley, 1986. Reprinted by permission of the publisher. © 1986 by Oxford University Press.)


small region, called area 46 (Figure 6.9), is found on the side of the frontal

cortex. Lesions in this specific area produce deficits in this task. It has been

shown that neurons in this region fire only during the delay period of the task,

as if they are keeping information active during that interval. They are inactive

before and after the delay. Moreover, different neurons in that region seem tuned

to remembering objects in different portions of the visual field (Funahashi,

Bruce, & Goldman-Rakic, 1991).

Goldman-Rakic (1992) examined monkey performance on other tasks that

require maintaining different types of information over the delay interval. In

one task, monkeys had to remember different objects. For example, the animal

would have to remember to select a red circle, and not a green square, after a

delay interval. It appears that a different region of the prefrontal

cortex is involved in this task. Different neurons in

this area fire when a red circle is being remembered than

when a green square is being remembered. Goldman-Rakic

speculated that the prefrontal cortex is parceled into many

small regions, each of which is responsible for remembering

a different kind of information.

Like many neuroscience studies, these experiments are

correlational—they show a relationship between neural activity

and memory function, but they do not show that the

neural activity is essential for the memory function. In an

effort to show a causal role, Funahashi, Bruce, and Goldman-

Rakic (1993) trained monkeys to remember the location of

objects in their visual field and move their eyes to these

locations after a delay—an oculomotor equivalent of the

task described in Figure 6.8. They selectively lesioned this


FIGURE 6.8 An illustration of the delayed match-to-sample task. (a) Cue: food is placed in the well on the right and covered. (b) Delay: a curtain is drawn for the delay period. (c) Response: the curtain is raised, and the monkey can lift the cover from one of the wells. (From Goldman-Rakic, 1987. Reprinted by permission of the publisher. © 1987 by the American Physiological Society.)

FIGURE 6.9 Lateral views of the cerebral cortex of a human (top) and of a monkey (bottom), with Brodmann areas labeled. Area 46 is the region shown in darker color. (From Goldman-Rakic, 1987. Reprinted by permission of the publisher. © 1987 by the American Physiological Society.)


area of the prefrontal cortex in the left hemisphere. They found that monkeys

were no longer able to remember the locations in the right visual field (recall

from Chapter 2 that the left visual field projects to the right hemisphere; see

Figure 2.5), but their ability to remember objects in the left visual field was

unimpaired. When they lesioned the right hemisphere region, their ability to

remember the location of objects in the left visual field was also impacted.

Thus, it does seem that activity in these prefrontal regions is critical to the

ability to maintain these memories over delays.

E. E. Smith and Jonides (1995) used PET scans to see whether there are similar

areas of activation in humans. When participants held visual information

in working memory, there was activation in right prefrontal area 47, which is

adjacent to area 46. The monkey brain and the human brain are not identical

(see Figure 6.9), and we would not necessarily expect a direct correspondence

between regions of their brains. Smith and Jonides also looked at a task in

which participants rehearsed verbal labels; they found that left prefrontal area 6

was active in this task. This region of the prefrontal cortex is associated with

linguistic processing (Petrides, Alvisatos, Evans, & Meyer, 1993). One might

view these two tasks as invoking Baddeley’s two slave systems, with the visuospatial

sketchpad associated with right prefrontal regions and the phonological

loop associated with left prefrontal regions.

Different areas of the frontal cortex appear to be responsible for maintaining

different types of information in working memory.

•Activation and Long-Term Memory

So far, we have discussed how information from the environment comes into

working memory and is maintained by rehearsal. There is another source of

information besides the environment, however: long-term memory. For instance,

rather than reading a phone number and holding it in working memory,

we can retrieve a familiar number and hold it in working memory. A number

of theories have assumed that different pieces of information in long-term

memory can vary from moment to moment in terms of how easy it is to retrieve

them into working memory. Various theories use different words to describe the

same basic idea. The language I use in this chapter is similar to that used in my

ACT (Adaptive Control of Thought) theory (J. R. Anderson, 1983; J. R. Anderson

& Lebiere, 1998). In ACT, one speaks of memory traces as varying in their

activation. Another well-known theory, SAM (Search of Associative Memory)

(Gillund & Shiffrin, 1984; Raaijmakers & Shiffrin, 1981), speaks of images

(memory traces) as varying in their familiarity (activation).

An Example of Activation Calculations

Activation determines both the probability and the speed of access to memory.

The free-association technique is sometimes used to get at levels of activation

in memory. Whatever ideas come to mind as you are free-associating can be


taken as reflecting the things that are currently most active in your long-term

memory. What do you think of when you read the three words below?

Bible

animals

flood

If you are like the students in my classes, you will think of the story of Noah.

The curious fact is that when I ask students to associate to just the word Bible,

they come up with such terms as Moses and Jesus—almost never Noah. When

I ask them to associate to just animals, they come up with farm and zoo, but

almost never Noah; and when I ask them to associate to just flood, they come up

with Mississippi and Johnstown (the latter being perhaps a Pittsburgh-specific

association), but almost never Noah. So why do they come up with Noah when

given all three terms together? Figure 6.10 represents this phenomenon in terms

of activation computations and shows three kinds of things:

1. Various words that might come to mind, such as Jesus, Moses, and

Mississippi.

2. Various terms that might be used to prime memory, such as Bible,

animals, and flood.

3. Associations between the primes and the responses. In this illustration,

these associations are indicated by the triangular connections where the

input line touches the output line.


FIGURE 6.10 A representation of how activation accumulates in a neural network such as that assumed in the ACT theory. Activation coming from various stimulus words—such as Bible, animals, and flood—spreads to associated concepts, such as Noah, Moses, and farm. (Diagram labels: input weights Wj on the stimulus words, strengths of association Sji on the connections, and baseline activations Bi on the response words Noah, Moses, Jesus, farm, zoo, Mississippi, and Johnstown.)


The ACT theory has an equation to represent how the activation, Ai, of any particular element i reflects the structure and circumstance of this network:

Ai = Bi + Σj Wj Sji

This equation relates the activation Ai to the three components in the network:

1. Bi reflects the base-level activation of the words that might be recalled.

Some concepts, such as Jesus and Mississippi, are more common than

others, such as Noah, and so would have greater base-level activation.

Just to be concrete, we might assign the base-level activations for Jesus

and Mississippi to be 3 and for Noah to be 1.

2. Wj reflects the weight given to various terms j that might prime the

memory. For instance, we might assume that the Wj for any word we

present is 1 and the Wj for words we do not present is 0. The Σ indicates

that we are summing over all of the potential primes j.

3. Sji reflects the strength of associations between potential primes j in item

2 above and potential responses i in item 1 above. To keep things simple,

we might assume that the strength of association is 2 in the case of

related pairs such as Bible-Jesus and flood-Mississippi and 0 in the case

of unrelated pairs such as Bible-Mississippi and flood-Jesus.

With this equation, these concepts, and these numbers, we can explain why the

students in my class associate Noah when prompted with all three words but

never do so when presented with any word individually. Consider what happens

when I present just the word Bible. There is only one prime with a positive

Wj, and this is Bible. In this case, the activation of Noah is

ANoah = 1 + (1 × 2) = 3

where the first 1 is its base-level activation Bi, the second 1 is its weight Wj of

Bible, and the 2 is the strength of association Sji between Bible and Noah. The

activation for Jesus is different because it has a higher base-level activation,

reflecting its greater frequency:

AJesus = 3 + (1 × 2) = 5

The reason Jesus and not Noah comes to mind in this case is that Jesus has

higher activation. Now let’s consider what happens when I present all three

words. The activation of Noah will be

ANoah = 1 + (1 × 2) + (1 × 2) + (1 × 2) = 7

where there are three (1 × 2)'s because all three of the terms—Bible, animals, and flood—have associations to Noah. The activation equation for Jesus remains

AJesus = 3 + (1 × 2) = 5

because only Bible has the association with Jesus. Thus, the extra associations to

Noah have raised the current activation of Noah to be greater than the activation

of Jesus, despite the fact that it has lower base-level activation.
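For readers who want to check the arithmetic, here is a minimal Python sketch of the activation equation Ai = Bi + Σj Wj Sji using the toy values assumed above (base-level activations of 1 and 3, prime weights of 1, and association strengths of 2); it is an illustration of the calculation, not the full ACT model.

```python
# Toy ACT-style activation: A_i = B_i + sum over primes j of W_j * S_ji
base = {"Noah": 1, "Jesus": 3, "Mississippi": 3}             # B_i
strength = {                                                  # S_ji; unlisted pairs are 0
    ("Bible", "Noah"): 2, ("animals", "Noah"): 2, ("flood", "Noah"): 2,
    ("Bible", "Jesus"): 2, ("flood", "Mississippi"): 2,
}

def activation(item, primes):
    """Base-level activation plus activation spread from each presented prime (W_j = 1)."""
    return base[item] + sum(strength.get((prime, item), 0) for prime in primes)

print(activation("Noah", ["Bible"]))                      # 1 + 2 = 3
print(activation("Jesus", ["Bible"]))                     # 3 + 2 = 5 -> Jesus comes to mind
print(activation("Noah", ["Bible", "animals", "flood"]))  # 1 + 2 + 2 + 2 = 7 -> Noah wins
print(activation("Jesus", ["Bible", "animals", "flood"])) # 3 + 2 = 5
```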


There are two critical factors in this activation equation: the base-level activation,

which sets a starting activation for the idea, and the activation received

through the associations, which adjusts this activation to reflect the current

context. The next section will explore this associative activation, and the section

after that will discuss the base-level activation.

The speed and probability of accessing a memory are determined by the

memory’s level of activation, which in turn is determined by its base-level

activation and the activation it receives from associated concepts.

Spreading Activation

Spreading activation is the term often used to refer to the process by which currently

attended items can make associated memories more available. Many studies

have examined how terms to which we currently attend can prime memories.

One of the earliest was a study by Meyer and Schvaneveldt (1971) in which participants

were asked to judge whether or not pairs of items were words. Table 6.1

shows examples of the materials used in their experiments, along with participants’

judgment times. The items were presented one above the other. If either

item in a pair was not a word, participants were to respond no. It appears from

examining the negative pairs that participants judged first the top item and then

the bottom item.When the top item was not a word, participants were faster to

reject the pair than when only the bottom item was not a word. (When the top

item was not a word, participants did not have to judge the bottom item and so

could respond sooner.) The major interest in this study was in the positive

pairs. There were unrelated items, such as nurse and butter, and pairs with an

associative relation, such as bread and butter. Participants were 85 ms faster on

the related pairs. This result can be explained by a spreading-activation analysis.

When the participant read the first word in the related pair, activation would

spread from it to the second word. This would make information about the

spelling of the second word more active and make that word easier to judge.

The implication of this result is that the associative spreading of activation through memory can facilitate the rate at which words are read.


TABLE 6.1
Examples of the Pairs Used to Demonstrate Associative Priming

                 Positive Pairs           Negative Pairs
                 Unrelated   Related      Nonword First   Nonword Second   Both Nonwords
Top item         Nurse       Bread        Plame           Wine             Plame
Bottom item      Butter      Butter       Wine            Plame            Reab
Judgment time    940 ms      855 ms       904 ms          1,087 ms         884 ms

From Meyer and Schvaneveldt, 1971. Reprinted by permission of the publisher. © 1971 by the Journal of Experimental Psychology.


Thus, we can read material that has a strong associative coherence more rapidly

than we can read incoherent material in which the words are unrelated.

Kaplan (1989), in his dissertation research, reported an effect of associative

priming at a very different time scale of information processing. The “participants”

in the study were members of his dissertation committee. I was one of

these participants, and it was a rather memorable and somewhat embarrassing

experience. He gave us riddles to solve, and each of us was able to solve about

half of them. One of the riddles that I was able to solve was

What goes up a chimney down but can’t come down a chimney up?

The answer is umbrella. Another faculty member was not able to solve this

one, and he has his own embarrassing story to tell about it—much like the one

I have to tell about the following riddle that I could not get:

On this hill there was a green house. And inside the green house there was a

white house. And inside the white house, there was a red house. And inside

the red house there were a lot of little blacks and whites sitting there. What

place is this?

More or less randomly, different faculty members were able to solve various

riddles.

Then Kaplan gave us each a microphone and tape recorder and told us that

we would be beeped at various times over the next week and that we should then

record what we had thought about the unsolved riddles and whether we had

solved any new ones. He said that he was interested in the steps by which we

came to solve these problems. That was essentially a lie to cover the true purpose

of the experiment, but it did keep us thinking about the riddles over the week.

What Kaplan had done was to split the riddles each of us could not solve

randomly into two groups. For half of these unsolved problems, he seeded our

environment with clues to the solution. He was quite creative in how he did

this: In the case of the riddle above that I could not solve, he drew a picture of a

watermelon as graffiti in the men’s restroom. Sure enough, shortly after seeing

this graffiti I thought again about this riddle and came up with the answer—

watermelon! I congratulated myself on my great insight, and when I was next

beeped I proudly recorded how I had solved the problem—quite unaware of

the role the bathroom graffiti had played in my solution.

Of course, that might just be one problem and one foolish participant.

Averaged over all the problems and all the participants (which included a

Nobel Laureate), however, we were twice as likely to solve a problem that had

been primed in the environment as one that had not been. Basically, activation

from the picture of the watermelon was spreading to my knowledge about

watermelons and priming it when I thought about this problem. We were all

unaware of the manipulation that was taking place. This example illustrates the

importance of priming to issues of insight (a topic we will consider at length in

Chapter 8) and also shows that one is not aware of the associative priming that

is taking place, even when one is trained to spot such things, as I am.

Activation spreads from presented items through a network to memories

related to that prime item.


•Practice and Memory Strength

Spreading activation concerns how the context can make some memories more

available. However, some memories are just more available because they are

used frequently in all contexts. So, for instance, you can recall the names of

close friends almost immediately, anywhere and anytime. The quantity that

determines this inherent availability of a memory is referred to as its strength.

In contrast to the activation level of a trace, which can have rapid fluctuations

depending on whether associated items are being focused upon, the strength of

a trace changes more gradually. Each time we use a memory trace, it increases a

little in strength. The strength of a trace determines in part how active it can

become and hence how accessible it will be. The strength of a trace can be gradually

increased by repeated practice.

The Power Law of Learning

The effects of practice on memory retrieval are extremely regular and very

large. In one study, Pirolli and Anderson (1985) taught participants a set of

facts and had them practice the facts for 25 days; then they looked at the speed

with which the participants could recognize these facts. Figure 6.11a plots how

participants’ time to recognize a fact decreased with practice. As can be seen, participants

sped up from about 1.6 s to 0.7 s, cutting their retrieval time by more

than 50%. The illustration also shows that the rate of improvement decreases


FIGURE 6.11 Results of Pirolli and Anderson's study to determine the effects of practice on recognition time. (a) The time (in seconds) required to recognize sentences is plotted as a function of the number of days of practice. (b) The data in (a) are log–log transformed to reveal a power function. The data points are average times for individual days, and the curves are the best-fitting power functions. (From Pirolli & Anderson, 1985. Reprinted by permission of the publisher. © 1985 by the Journal of Experimental Psychology: Learning, Memory, and Cognition.)


with more practice. Increasing practice has diminishing returns. The data are

nicely fit by a power function of the form

T = 1.40 P^−0.24

where T is the recognition time and P is the number of days of practice. This

is called a power function because the amount of practice P is being raised

to a power. This power relationship between performance (measured in terms

of response time and several other variables) and amount of practice is a ubiquitous

phenomenon in learning. One way to see that data correspond to a

power function is to plot logarithm of time (the y-axis) against the logarithm of

practice (the x-axis). If we have a power function in normal coordinates, we

should get a linear function in log–log coordinates:

ln T = 0.34 − 0.24 ln P

Figure 6.11b shows the data so transformed. As can be seen, the relationship

is quite close to a linear function (straight line).
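The fitted power function and its log–log form can be verified with a few lines of Python; this is just a check of the algebra using the parameter values reported above, not a reanalysis of the data.

```python
import math

def recognition_time(days_of_practice):
    """Pirolli and Anderson's fitted power function: T = 1.40 * P**(-0.24)."""
    return 1.40 * days_of_practice ** -0.24

for p in (1, 5, 25):
    t = recognition_time(p)
    # In log-log coordinates the curve is a straight line:
    # ln T = ln 1.40 - 0.24 ln P, i.e., about 0.34 - 0.24 ln P.
    print(p, round(t, 2), round(math.log(t), 3), round(0.34 - 0.24 * math.log(p), 3))
```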

Newell and Rosenbloom (1981) refer to the way that memory performance

improves as a function of practice as the power law of learning. Figure 6.12 shows

some data from Blackburn (1936), who looked at the effects of practicing addition

problems for 10,000 trials by two participants. The data are plotted in log–log

terms, and there is a linear relationship. On this graph and on some others in this

book, the original numbers (i.e., those given in parentheses in Figure 6.11b) are

plotted on the logarithmic scale rather than being expressed as logarithms. Blackburn’s

data show that the power law of learning extends to amounts of practice

far beyond that shown in Figure 6.11. Figures 6.11 and 6.12 reflect the gradual


FIGURE 6.12 Data from Blackburn's study on the effects of practicing addition problems for 10,000 trials. The results are presented as improvement with practice in the time taken to add two numbers. Data are plotted separately for two participants. Both the time required to solve the problem and the problem number are plotted on logarithmic scales. (Plot by Crossman, 1959, of data from Blackburn, 1936. Adapted by permission of the publisher. © 1959 by Ergonomics.)


increase in memory trace strength with practice. As memory traces become

stronger, they can reach higher levels of activation and so can be retrieved more

rapidly.

As a memory is practiced, it is strengthened according to a power function.

Neural Correlates of the Power Law

One might wonder what really underlies the power law of practice. Some evidence

suggests that the law may be related to basic neural changes involved in

learning. One kind of neural learning that has attracted much attention is called

long-term potentiation (LTP), which occurs in the hippocampus and cortical

areas. This form of neural learning seems to be related to behavioral measures

of learning like those in Figures 6.11 and 6.12. When a pathway is stimulated

with a high-frequency electric current, cells along that pathway show increased

sensitivity to further stimulation. Barnes (1979) looked at this phenomenon in

rats by measuring the percentage increase in excitatory postsynaptic potential

(EPSP) over its initial value.3 Barnes stimulated the hippocampus of her rats

each day for 11 successive days and measured the growth in LTP as indicated by

the percentage increase in EPSP. Figure 6.13a displays her results in a plot of percentage

change in LTP versus days of practice. There appears to be a diminishing

increase with amount of practice. Figure 6.13b, which plots log percentage

change in LTP against log days of practice, shows that the relationship is approximately

linear, and therefore the relationship in Figure 6.13 is approximately a

power function. Thus, it does seem that neural activation changes with practice,

just as behavioral measures do.

Note that the activation measure shown in Figure 6.13a increases more

and more slowly, whereas recognition time (see Figure 6.11a) decreases more and

more slowly. In other words, a performance measure such as recognition time

is an inverse reflection of the growth of strength that is happening internally.

As the strength of the memory increases, the performance measures improve

(which means shorter recognition times and fewer errors). You remember something

faster after you’ve thought about it more often.

The hippocampal region being observed here is the area that was damaged

in the fictional Leonard character discussed at the beginning of the chapter.

Damage to this region often results in amnesia. Studies of the effects of practice

on participants without brain damage have found that activation in the hippocampus

and the prefrontal regions decreases as participants become more

practiced at retrieving memories (Kahn & Wagner, 2002).4


3 As discussed in Chapter 1, the difference in electric potential between the outside and inside of the cell

decreases as the dendrite and cell body of a neuron become more excited. EPSP is described as increasing

when this difference decreases.

4 Note that neural activation decreases with practice because it takes less effort to retrieve the memory. This

can be a bit confusing—greater trace activation resulting from practice results in lower brain activation. This

happens because trace activation reflects the availability of the memory, whereas brain activation reflects

the hemodynamic expenditure required to retrieve the memory. Trace activation and brain activation refer

to different concepts.


The relationship between the hippocampus and regions of the prefrontal

cortex is interesting. In normal participants, these regions are often active at the

same time, as they were in the Kahn and Wagner study. It is generally thought

(e.g., Paller & Wagner, 2002) that processing activity in prefrontal regions

regulates input to hippocampal regions that store the memories. Patients with

hippocampal damage show the same prefrontal activation as normal people do;

but because of the hippocampal damage, they fail to store these memories

(R. L. Buckner, personal communication, 1998).

Two studies illustrating the role of the prefrontal cortex in forming new

memories appeared back to back in the same issue of Science magazine. One

study (Wagner et al., 1998) investigated memory for words; the other (J. B. Brewer

et al., 1998) studied memory for pictures. In both cases, participants remembered

some of the items and forgot others. Using fMRI measures of the hemodynamic

response, the researchers contrasted the brain activation at the time of study for

those words that were subsequently remembered and those that were subsequently

forgotten. Wagner et al. found that left prefrontal regions were predictive

of memory for words, whereas Brewer et al. found that right prefrontal regions

were predictive of memory for pictures. Figure 6.14a shows the results for words;

Figure 6.14b, the results for pictures. In both cases, the rise in the hemodynamic

response is plotted as a function of time from stimulus presentation. As discussed

in Chapter 1, the hemodynamic response lags, so that it is at maximum about 5 s

after the actual neural activity. The correspondence between the results from the

two laboratories is striking. In both cases, remembered items received greater


FIGURE 6.13 Results from Barnes's study of long-term potentiation (LTP) demonstrating that when a neural pathway is stimulated, cells along that pathway show increased sensitivity to further stimulation. The growth in LTP (percentage change in EPSP) is plotted as a function of the number of days of practice (a) in normal scale and (b) in log–log scale. (From Barnes, 1979. Reprinted by permission of the publisher. © 1979 by the Journal of Comparative Physiology.)


activation from the prefrontal regions, supporting the conclusion that prefrontal

activation is indeed critical for storing a memory successfully.5 Also, note that

these studies are a good example of the lateralization of prefrontal processing,

with verbal material involving the left hemisphere to a greater extent and visual

material involving the right hemisphere to a greater extent.

Activation in prefrontal regions appears to drive long-term potentiation in

the hippocampus. This activation results in the creation and strengthening

of memories.

FIGURE 6.14 Results from two studies illustrating the role of the prefrontal cortex in forming new memories. In both panels, the hemodynamic response (% of maximum) is plotted against time from stimulus (s), separately for items that were subsequently remembered and items that were subsequently forgotten. (a) Data from the study by Wagner et al. show the rise in the hemodynamic response in the left prefrontal cortex while participants studied words. (After Wagner et al., 1998. Adapted by permission of the publisher. © 1998 by Science.) (b) Data from the study by Brewer et al. show the rise in the hemodynamic response in the right prefrontal cortex while participants studied pictures. (After J. B. Brewer et al., 1998. Adapted by permission of the publisher. © 1998 by Science.)

5 Greater hemodynamic activation at study results in a stronger memory—which, as we noted, can lead

to reduced hemodynamic activation at test.


•Factors Influencing Memory

A reasonable inference from the preceding discussion might be that the only thing

determining memory performance is how much we study and practice the material

to be remembered. However, we earlier reviewed some of the evidence that

mere study of material will not lead to better recall. How one processes the material

while studying it is important. We saw in Chapter 5 that more meaningful

processing of material results in better recall. Earlier in this chapter, with respect

to Craik and Lockhart’s (1972) depth-of-processing proposal, we reviewed the

evidence that shallow study results in little memory improvement. As a different

demonstration of the same point, D. L. Nelson (1979) had participants read

paired associates that were either semantic associates (e.g., tulip-flower) or rhymes

(e.g., tower-flower). Better memory (81% recall) was obtained for the semantic

associates than for the rhymes (70% recall). Presumably, participants tended to

process the semantic associates more meaningfully than the rhymes. In Chapter 5,

we also saw that people retain more meaningful information better. In this section,

we will review some other factors, besides depth of processing and meaningfulness

of the material, that determine our level of memory.

Elaborative Processing

There is evidence that more elaborative processing results in better memory.

Elaborative processing involves creating additional information that relates

and expands on what it is that needs to be remembered. J. R. Anderson and

Bower (1973) did an experiment to demonstrate the importance of elaboration.

They had participants try to remember simple sentences such as The doctor

hated the lawyer. In one condition, participants just studied the sentence; in the

other, they were asked to generate an elaboration of their choosing—such as

because of the malpractice suit. Later, participants were presented with the subject

and verb of the original sentence (e.g., The doctor hated) and were asked to

recall the object (e.g., the lawyer). Participants who just studied the original sentences

were able to recall 57% of the objects, but those who generated the elaborations

recalled 72%. The investigators proposed that this advantage resulted

from the redundancy created by the elaboration. If the participants could not

originally recall lawyer but could recall the elaboration because of the malpractice

suit, they might then be able to recall lawyer.

A series of experiments by B. S. Stein and Bransford (1979) showed why self-generated

elaborations are often better than experimenter-provided elaborations.

In one of these experiments, participants were asked to remember 10 sentences,

such as The fat man read the sign. There were four conditions of study.

• In the base condition, participants studied just the sentence.
• In the self-generated elaboration condition, participants were asked to create an elaboration of their own.
• In the imprecise elaboration condition, participants were given a continuation of the sentence poorly related to the meaning of the sentence, such as that was two feet tall.
• In the precise elaboration condition, they were given a continuation that gave context to the sentence, such as warning about the ice.


After studying the material, participants in all conditions were presented with

such sentence frames as The _______ man read the sign, and they had to recall

the missing adjective. Participants recalled 4.2 of the 10 adjectives in the base

condition and 5.8 of the 10 when they generated their own elaborations.

Obviously, the self-generated elaborations had helped. They could recall only

2.2 of the adjectives in the imprecise elaboration condition, replicating the

typical inferiority found for experimenter-provided elaborations relative to self-generated

ones. However, participants recalled the most (7.8 of 10 adjectives) in

the precise elaboration condition. So, by careful choice of words, experimenter

elaborations can be made better than those of participants. (For further research

on this topic, read Pressley, McDaniel, Turnure, Wood, & Ahmad, 1987.)

It appears that the critical factor is not whether the participant or the experimenter

generates the elaborations but whether the elaborations prompt

the material to be recalled. Participant-generated elaborations are effective because

they reflect the idiosyncratic constraints of each particular participant’s

knowledge. As Stein and Bransford demonstrated, however, it is possible for

the experimenter to construct elaborations that facilitate even better recall.

Memory for material improves when it is processed with more meaningful

elaborations.

•Techniques for Studying Textual Material

Frase (1975) found evidence of the benefit of elaborative processing with text material.

He compared how participants in two groups remembered text: One group

was given questions to think about before reading the text—sometimes called

advance organizers (Ausubel, 1968)—whereas a control group simply studied the

text without advance questions. Participants in the first group were asked to find

answers to the advance questions as they read the text. This requirement should

have forced them to process the text more carefully and to think about its implications.

In a subsequent test, the advance-organizer group answered 64% of the

questions correctly, whereas the control group answered only 57% correctly. The

questions in the test were either relevant or irrelevant to the advance organizers.

For instance, a test question about an event that precipitated America’s entry into

World War II would be considered relevant if the advance questions directed participants

to learn why America entered the war. A test question would be considered

irrelevant if the advance questions directed participants to learn about the

economic consequences of World War II. The advance-organizer group correctly

answered 76% of the relevant questions and 52% of the irrelevant ones.

Thus, they did only slightly worse than the control group on topics for which they

had been given only irrelevant advance questions but did much better on topics

for which they had been given relevant advance questions.

Many college study-skills departments, as well as private firms, offer courses

designed to improve students’ memory for text material. These courses teach

study techniques mainly for texts such as those used in the social sciences, not

for the denser texts used in the physical sciences and mathematics or for literary


materials such as novels. The study techniques from different programs are

rather similar, and their success has been fairly well documented. One example

of such a study technique is the PQ4R method (Thomas & Robinson, 1972).

The Implications box in Chapter 1 described a slight variation on this technique

as a method for studying this book.

The PQ4R method derives its name from the six phases it advocates for

studying a chapter in a textbook:

1. Preview: Survey the chapter to determine the general topics being

discussed. Identify the sections to be read as units. Apply the next

four steps to each section.

2. Questions: Make up questions about each section. Often, simply

transforming section headings results in adequate questions.

3. Read: Read each section carefully, trying to answer the questions you

have made up about it.

4. Reflect: Reflect on the text as you are reading it. Try to understand it, to

think of examples, and to relate the material to your prior knowledge.

5. Recite: After finishing a section, try to recall the information contained

in it. Try to answer the questions you made up for the section. If you

cannot recall enough, reread the portions you had trouble remembering.

6. Review: After you have finished the chapter, go through it mentally,

recalling its main points. Again try to answer the questions you made up.
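One way to see the structure of the method is that Preview (1) and Review (6) apply once per chapter, while Questions (2), Read (3), Reflect (4), and Recite (5) repeat for every section. The sketch below is only an illustration of that control structure; the function, the example sections, and the generated questions are hypothetical stand-ins for the actual mental activities, not a study tool.

def pq4r(chapter):
    # chapter: a list of (heading, body) pairs, one per section.
    # 1. Preview: survey the chapter and identify the sections to be read as units.
    questions = []
    for heading, _body in chapter:
        # 2. Questions: transform the section heading into a question.
        question = f"What are the main points about {heading.lower()}?"
        questions.append(question)
        # 3. Read: read the section, trying to answer the question.
        # 4. Reflect: think of examples and relate the material to prior knowledge.
        # 5. Recite: recall the section; reread whatever could not be recalled.
        print("Study section:", heading, "->", question)
    # 6. Review: after finishing the chapter, go back over every question.
    for question in questions:
        print("Review:", question)

pq4r([("Elaborative processing", "..."),
      ("Incidental versus intentional learning", "...")])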

The central features of the PQ4R technique are the generation and answering of

questions. There is reason to think that the most important aspect of these features

is that they encourage deeper and more elaborative processing of the text material.

At the beginning of this section, we reviewed the Frase (1975) experiment that

demonstrated the benefit of reading a text with a set of advance questions in mind.

It seems that the benefit was specific to test items related to the questions.

Another experiment by Frase (1975) compared the effects of making up

questions with the effects of answering them. He asked pairs of participants to

study a text passage that was divided into halves. For one half, one participant in

the pair read the passage and made up study questions during the process. The

second participant then tried to answer these questions while reading the text.

The participants switched roles for the second half of the text. All participants

answered a final set of test questions about the passage. A control group, which

just read the text without doing anything special, correctly answered 50% of the

final set of test questions. Experimental participants who made up questions

while they read the text correctly answered 70% of the final test items that were

relevant to their questions and 52% of the items that were irrelevant. Experimental

participants who answered questions while they read the text correctly

answered 67% of the relevant test items and 49% of the irrelevant items. Thus, it

seems that both question generation and question answering contribute to good

memory. If anything, the creation of questions contributes the greater benefit.

Reviewing the text with the questions in mind is another important component

of the PQ4R technique. Rothkopf (1966) compared the benefit of reading a

text with questions in mind with the benefit of considering a set of questions after

reading the text. Rothkopf instructed participants to read a long text passage with


questions interspersed every three pages. The questions were relevant to the three

pages either following or preceding the questions. In the former condition, participants

were supposed to read the subsequent text with these questions in mind.

In the latter condition, they were to review what they had just read and answer

the questions. The two experimental groups were compared with a control group,

which read the text without any special questions. The control group answered

30% of the questions correctly in a final test of the whole text. The experimental

group whose questions previewed the text correctly answered 72% of the test

items relevant to their questions and 32% of the irrelevant items—basically the

same results as those Frase (1975) obtained in comparing the effectiveness of relevant

and irrelevant test items. The experimental group whose questions reviewed

the text correctly answered 72% of the relevant items and 42% of the irrelevant

items. Thus, it seems that using questions to review the text just read may be more

beneficial than reading the text with the questions in mind.

Study techniques that involve generating and answering questions lead

to better memory for text material.

Meaningful versus Nonmeaningful Elaborations

Although the research just reviewed indicates that meaningful processing leads

to better memory, other research demonstrates that other sorts of elaborative

processing also result in better memory. For instance, Kolers (1979) looked at

participants’ memory for sentences that were read in normal form versus sentences

that were printed upside down and found that participants remembered

more about the upside-down sentences. He argued that the extra processing

involved in reading the typography of upside-down sentences provides the

basis for the improved memory. It is not a case of more meaningful processing,

but one of more extensive processing.

A study by Slamecka and Graf (1978) demonstrated separate

effects on memory of elaborative and meaningful processing.

They contrasted two conditions. In the generate condition, participants

were given a word and had to generate either a synonym

that began with a particular letter (e.g., What is a synonym of sea

that begins with the letter o? Answer: ocean) or a rhyme that began

with a particular letter (e.g., What is a rhyme of save that begins

with the letter c? Answer: cave). In the read condition, they just

read the rhyme pair or the synonym pair and then were tested for

their recognition of the second word. Figure 6.15 shows the

results. Participants performed better with synonyms and better

when they had to generate either a synonym or a rhyme. Thus, it

appears that there are beneficial effects on memory of both

semantic processing and elaborative processing.

FIGURE 6.15 Results of a study by Slamecka and Graf demonstrating separate effects on memory
of elaborative and meaningful processing. In the generate condition, participants were presented
with a word and had to generate either a synonym or a rhyme that began with a particular letter.
In the read condition, participants read the synonym pair or rhyme pair, then were tested for their
recognition of the second word. The proportion of words recognized is shown as a function of the
type of elaboration (synonym or rhyme) and whether it was generated or read. (From Slamecka & Graf, 1978.
Reprinted by permission of the publisher. © 1978 by the Journal of Experimental Psychology: Human Learning and Memory.)

Otten, Henson, and Rugg (2001) noted that the prefrontal
and hippocampal regions involved in memory for material that
is processed meaningfully and elaborately seem to be the same
regions that are involved in memory for material that is
processed shallowly. High activity in these regions is predictive of

subsequent recall (see Figure 6.14). Elaborative, more meaningful processing
tends to evoke higher levels of activation than shallow processing (Wagner et

al., 1998). Thus, it appears that meaningful, elaborate processing is effective

because it is better at driving the brain processes that result in successful recall.

Implications

How does the method of loci help us organize recall?

Mental imagery is an effective method for developing meaningful elaborations. A classic mnemonic technique, the method of loci, depends heavily on visual imagery and the use of spatial knowledge to organize recall. This technique, used extensively in ancient times when speeches were given without written notes or teleprompters, is still used today. Cicero (in De Oratore) credits the method to a Greek poet, Simonides, who had recited a lyric poem at a banquet. After his delivery, he was called from the banquet hall by the gods Castor and Pollux, whom he had praised in his poem. While he was absent, the roof fell in, killing all the people at the banquet. The corpses were so mangled that relatives could not identify them. Simonides was able to identify each corpse, however, according to where each person had been sitting in the banquet hall. This feat of total recall convinced Simonides of the usefulness of an orderly arrangement of locations into which a person could place objects to be remembered. This story may be rather fanciful, but whatever its true origin, the method of loci is well documented (e.g., Christen & Bjork, 1976; Ross & Lawrence, 1968) as a useful technique for remembering an ordered sequence of items, such as the points a person wants to make in a speech.

To use the method of loci, one imagines a specific path through a familiar area with some fixed locations along the path. For instance, if there were such a path on campus from the bookstore to the library, we might use it. To remember a series of objects, we simply walk along the path mentally, associating the objects with the fixed locations. As an example, consider a grocery list of six items—milk, hot dogs, dog food, tomatoes, bananas, and bread. To associate the milk with the bookstore, we might imagine books lying in a puddle of milk in front of the bookstore. To associate hot dogs with the record shop (the next location on the path from the bookstore), we might imagine a package of hot dogs spinning on a phonograph turntable. The pizza shop is next, and to associate it with dog food, we might imagine a dog-food pizza (well, some people even like anchovies). Then we come to an intersection; to associate it with tomatoes, we can imagine an overturned vegetable truck with tomatoes splattered everywhere. Next we come to the administration building and create an image of the president coming out, wearing only a hula skirt made of bananas. Finally, we reach the library and associate it with bread by imagining a huge loaf of bread serving as a canopy under which we must pass to enter. To re-create the list, we need only take an imaginary walk down this path, reviving the association for each location. This technique works well even with very much longer lists; all we need is more locations. There is considerable evidence (e.g., Christen & Bjork, 1976) that the same loci can be used over and over again in the learning of different lists.

Two important principles underlie this method's effectiveness. First, the technique imposes organization on an otherwise unorganized list. We are guaranteed that if we follow the mental path at the time of recall, we will pass all the locations for which we created associations. The second principle is that imagining connections between the locations and the items forces us to process the material meaningfully, elaboratively, and by use of visual imagery.

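The organizational principle behind the method can be stated precisely: a fixed, ordered sequence of locations is paired one to one with the items, and recall is a walk through the locations in order. The sketch below illustrates only that pairing-and-walking structure, using the loci and grocery items from the example above; it obviously cannot supply the vivid interactive images that do the real mnemonic work.

# Fixed loci along a familiar path, in order, and the items to be remembered.
loci = ["bookstore", "record shop", "pizza shop", "intersection",
        "administration building", "library"]
items = ["milk", "hot dogs", "dog food", "tomatoes", "bananas", "bread"]

# Encoding: associate each item with the next location along the path.
associations = dict(zip(loci, items))

# Recall: take an imaginary walk down the path, reviving each association in order.
for location in loci:
    print(f"At the {location}, the image brings back: {associations[location]}")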

More elaborate processing results in better memory, even if that processing

is not focused on the meaning of the material.

Incidental versus Intentional Learning

So far, we have talked about factors that affect memory. Now we will turn to a

factor that does not affect memory, despite people’s intuitions to the contrary:

It does not seem to matter whether people intend to learn the material; what is

important is how they process it. This fact is illustrated in an experiment by

Hyde and Jenkins (1973). Participants saw groups of 24 words presented at the

rate of 3 s per word. One group of participants was asked to check whether each

word had a letter e or a letter g. The other group was asked to rate the pleasantness

of the words. These two tasks were called orienting tasks. It is reasonable to

assume that the pleasantness rating involved more meaningful and deeper processing

than the letter-verification task. Another variable was whether participants

were told that the true purpose of the experiment was to learn the words.

Half the participants in each group were told the true purpose of the experiment

(the intentional-learning condition). The other half of participants in

each group thought the true purpose of the experiment was to rate the words

or check for letters (the incidental-learning condition). Thus, there were four

conditions: pleasantness-intentional, pleasantness-incidental, letter checking-intentional,

and letter checking-incidental.

After studying the list, all participants were asked to recall as many words as

they could. Table 6.2 presents the results from this experiment in terms of percentage

of the 24 words recalled. Two results are noteworthy. First, participants’

knowledge of the true purpose of studying the words had relatively little effect

on performance. Second, a large depth-of-processing effect was demonstrated;

participants showed much better recall in the pleasantness rating condition,

independent of whether they expected to be tested on the material later. In

rating a word for pleasantness, participants had to think about its meaning,

which gave them an opportunity to elaborate upon the word.

The Hyde and Jenkins (1973) experiment illustrates

an important finding that has been proved over

and over again in the research on intentional versus

incidental learning: Whether a person intends to learn

or not really does not matter (see Postman, 1964, for a

review). What matters is how the person processes the

material during its presentation. If one engages in

identical mental activities when processing the material,

one gets identical memory performance whether

one is intending to learn the material or not. People

typically show better memory when they intend to

learn because they are likely to engage in activities

more conducive to good memory, such as rehearsal

and elaborative processing.

TABLE 6.2  Words Recalled as a Function of Orienting Task and Participant Awareness of Learning Task

                               Words Recalled (%)
                               Orienting Task
Learning-Purpose Conditions    Rate Pleasantness     Check Letters
Incidental                     68                    39
Intentional                    69                    43

After Hyde and Jenkins, 1973. Adapted by permission of the publisher. © 1973 by the Journal of Verbal Learning and Verbal Behavior.

The small advantage for participants in the intentional-learning condition of the Hyde and Jenkins experiment

may reflect some small variation in processing. Experiments in which great

care is taken to control processing find that intention to learn or amount of motivation

to learn has no effect (see T. O. Nelson, 1976).

There is an interesting everyday example of the relationship between intention

to learn and type of processing. Many students claim they find it easier to

remember material from a novel, which they are not trying to remember, than

from a textbook, which they are trying to remember. The reason is that students

find a typical novel much easier to elaborate, and a good novel invites

such elaborations (e.g., Why did the suspect deny knowing the victim?).

Level of processing, and not whether one intends to learn, determines the

amount of material remembered.

Flashbulb Memories

Although it does not appear that intention to learn affects memory, various sets

of data support the conclusion that people display better memory for events that

are important to them. One class of research involves flashbulb memories:
events so important that they seem to burn themselves into memory forever

(R. Brown & Kulik, 1977). The event these researchers used as an example was

the assassination of President Kennedy in 1963, which was a particularly traumatic

event for Americans of their generation. They found that most people had

vivid memories of the event 13 years later. They proposed that we have a special

biological mechanism to guarantee that we will remember those things that are

particularly important to us. The interpretation of this result is problematic,

however, because R. Brown and Kulik did not really have any way to assess the

accuracy of the reported memories.

Since the Brown and Kulik proposal, a number of studies have been done to

determine what participants remembered about a traumatic event immediately

after it occurred and what they remembered later. For instance, McCloskey,

Wible, and Cohen (1988) did a study involving the 1986 space shuttle Challenger

explosion. At that time, many people felt that this was a particularly traumatic

event they had watched with horror on television. McCloskey et al. interviewed

participants 1 week after the incident and then again 9 months later. Nine months

after the accident, one participant reported:

When I first heard about the explosion I was sitting in my freshman dorm

room with my roommate and we were watching TV. It came on a news flash

and we were both totally shocked. I was really upset and I went upstairs to talk

to a friend of mine and then I called my parents. (Neisser & Harsch, 1992, p. 9)

McCloskey et al. found that although participants reported vivid memories

9 months after the event, their reports were in fact inaccurate. For instance, the

participant just quoted had actually learned about the Challenger explosion in

class a day after it happened and then watched it on television.

Palmer, Schreiber, and Fox (1991) came to a somewhat different conclusion

in a study of memories of the 1989 San Francisco earthquake. They compared


participants who had actually experienced the earthquake firsthand with those

who had only watched it on TV. Those who had experienced it in person

showed much superior long-term memory of the event. Conway et al. (1994)

argued that McCloskey et al. (1988) failed to find a memory advantage in the

Challenger study because their participants did not have true flashbulb memories.

They contended that it is critical for the event to have been consequential

to the individual remembering it. Hence, only people who actually experienced

the San Francisco earthquake, and not those who saw it on TV, had flashbulb

memories of the event. Conway et al. studied memory for Margaret Thatcher’s

resignation as prime minister of the United Kingdom in 1990. They compared

participants from the United Kingdom, the United States, and Denmark, all

of whom had followed news reports of the resignation. It turned out that

11 months later, 60% of the participants from the United Kingdom showed

perfect memory for the events surrounding the resignation, whereas only 20%

of those who did not live in the United Kingdom showed perfect memory.

Conway et al. argued that they obtained this result because the Thatcher resignation

was really consequential only for the U.K. participants.

On September 11, 2001, Americans suffered a particularly traumatic event.

The terrorist attacks of that day have come to be known simply as “9/11.”

A number of studies were undertaken to study the effects of these events on

memory. Talarico and Rubin (2003) report a study of the memories of students

at Duke University for details of the terrorist attacks (flashbulb memories) versus

details of ordinary events that happened that day. They were contacted and

tested for their memories the morning after the attacks. They were then tested

again either 1 week later, 6 weeks later, or 42 weeks later. Figure 6.16 shows the

results. The figure shows both recall of details that are consistent with what

they said the morning after and recall of details that were inconsistent (presumably

false memories). By neither measure is there any evidence that the flashbulb

memories were better retained than the everyday memories. Of course,

these were students at Duke and not people who were experiencing the terrorist

attacks firsthand.

FIGURE 6.16 The mean number of consistent and inconsistent details for the flashbulb and
everyday memories (from Talarico & Rubin, 2003).

Sharot, Martorella, Delgado, and Phelps (2007) reported a study of people
who were in Manhattan when the Twin Towers were struck. The study was
performed 3 years after the attack, and participants were asked to recall events
from the attack as well as events from the preceding summer. Because so much
time had passed, Sharot et al. could not verify the accuracy of the memories,
but they could record participants' brain responses while the events were being
recalled. They also interviewed the participants to find out where in Manhattan
they had been when the towers were struck, and on that basis divided them into
two groups: a downtown group who had been approximately 2 miles away and
a midtown group who had been approximately 5 miles away. The researchers
focused on activity in the amygdala, a brain structure known to reflect emotional
response. They found greater amygdala activation in the downtown group when

they were recalling events from September 11 than in the midtown group. This is

significant because there is evidence that amygdala activity enhances retention

(Phelps, 2004). In a state of arousal, the amygdala releases hormones that influence

the processing in the hippocampus that is critical in forming memories

(McGaugh & Roozendaal, 2002).

Behaviorally, there is considerable evidence that memories learned in high

arousal states are better retained (J. R. Anderson, 2000). This evidence can be

interpreted in two alternative ways. The first is that we have some special biological

mechanism that reinforces memory for events that are important to us.

The second is that people simply rehearse the material that is more important

to them more often (as argued by Neisser et al., 1996). Both factors probably

play a role in better retention of information learned in a high arousal state.

People certainly do tend to rehearse important memories, but evidence about

the amygdala's involvement in memory suggests that there is an effect of arousal

over and above rehearsal.

People report better memories for particularly traumatic events, and there is

evidence that memories formed in high arousal states are better retained.

•Conclusions

This chapter has focused on the processes involved in getting information into

memory. We saw that a great deal of information gets registered in sensory

memory, but relatively little can be maintained in working memory and even

less survives for long periods of time. However, an analysis of what actually gets

stored in long-term memory really needs to consider how that information is

retained and retrieved—which is the topic of the next chapter. For instance, the

effects of arousal on memory that we have just reviewed seem to be more about

retention than encoding. Likewise, many of the issues considered in this chapter

are complicated by retrieval issues. This is certainly true for the effects of elaborative

processing that we have just discussed. There are important interactions

between how a memory is processed at study and how it is processed at test.

Even in this chapter, we were not able to discuss the effects of such factors as

practice without discussing the activation-based retrieval processes that are

facilitated by these factors. Chapter 7 will also have more to say about the activation

of memory traces.

Questions for Thought

1. Many people write notes on their bodies to remember

things like phone numbers. In the movie Memento,

Leonard tattoos information that he wants to remember

on his body. Describe instances where storing information

on the body works like sensory memory, where it is like
working memory, and where it is like long-term memory.

2. The chapter mentions a colleague of mine who was

stuck solving the riddle “What goes up a chimney down

but can’t come down a chimney up?” How would you

have seeded the environment to subconsciously prime

a solution to the riddle? To see what Kaplan did, read

J. R. Anderson (2007, pp. 93–94).


3. Figures 6.11 and 6.12 describe situations in which the
experimenter arranges for a participant to study material
many times to strengthen the resulting memories. Can you
describe situations in your schooling where this sort of
practice improved your memory for facts?

4. Think of the most traumatic events you have experienced.

How have you rehearsed and elaborated upon

these events? What influence might such rehearsal and

elaboration have on these memories? Could it cause

you to remember things that did not happen?

Key Terms

activation
ACT (Adaptive Control of Thought)
anterograde amnesia
articulatory loop
associative spreading
auditory sensory store
central executive
depth of processing
echoic memory
elaborative processing
flashbulb memories
iconic memory
long-term potentiation (LTP)
memory span
method of loci
partial-report procedure
phonological loop
power function
power law of learning
SAM (Search of Associative Memory)
short-term memory
spreading activation
strength
visual sensory store
visuospatial sketchpad
whole-report procedure
working memory