Keywords:

  • Dual-code theory;
  • Mental imagery;
  • Perceptual simulation;
  • Spatial associations

Abstract

Spatial aspects of words are associated with their referents' canonical locations in the real world. Yet little research has tested whether the spatial associations evoked during language comprehension generalize to the corresponding images. We directly tested the spatial aspects of mental imagery in picture and word processing (Experiment 1). We also tested whether spatial representations of motion words produce perceptual-interference effects similar to those demonstrated with object words (Experiment 2). Findings revealed that words denoting an upward spatial location produced slower responses to targets appearing at the top of the display, whereas words denoting a downward spatial location produced slower responses to targets appearing at the bottom of the display. Perceptual-interference effects did not obtain for pictures or for words lacking a spatial relation. These findings provide further empirical support for perceptual symbol systems theory (Barsalou, 1999, 2008).

The moon was a sharply defined crescent and the sky was perfectly clear. The stars shone with such fierce, contained brilliance that it seemed absurd to call the night dark. The sea lay quietly, bathed in a shy, light-footed light, a dancing play of black and silver that extended without limits all about me. — Yann Martel, Life of Pi

In prose, we embark on voyages and delight in adventures without forsaking the comfort of our homes. To illustrate, in Life of Pi, Martel (2001) transports the reader to imagine viewing the moon, sky, and sea while surviving alone on the Pacific Ocean. Indeed, writers depend on words to convey time, space, and emotions not directly experienced by the reader (e.g., Isen, 1984; Morrow, Greenspan, & Bower, 1987). The power of language thus partly rests on the assumption that words elicit vivid representations in the brain (e.g., Pulvermüller, 2001). In this paper, we examine whether spatial aspects of words and pictures evoke mental imagery in language comprehension.

1. Amodal and perceptual symbol systems

How do humans use symbols to represent the external world? Two views provide disparate accounts of the relationship between symbols and referents. According to amodal symbol systems theory, direct experiences are transduced into logical propositions (e.g., Anderson & Bower, 1972; Fodor, 1975; Kintsch, 1974; Pylyshyn, 1973, 1984). That is, initial processing of stimuli involves translating those forms into abstract, conceptual nodes for representing meaning (Glaser, 1992). Consequently, amodal systems do not retain modality-specific characteristics of objects. The relationship between a symbol and its referent is therefore arbitrary and governed by linguistic convention.

In contrast, perceptual symbol systems theory posits an analog representation between symbols and referents (Barsalou, 1999, 2008). These representations reenact, or simulate, perceptual, motor, and introspective processes directly experienced in the real world. Hence, perceptual simulations integrate constituent features (e.g., shape, color) and orientations (e.g., up, down) to form a single, multimodal representation. The relationship between a symbol and referent is thus modality specific and grounded in the real world.

Evidence for amodal and perceptual representations has been drawn from several findings in cognitive science. The amodal symbol systems view has received broad support from artificial intelligence, connectionist models, and schema-based representations (e.g., Anderson, Conrad, & Corbett, 1989; Kintsch, 1974; Norman & Rumelhart, 1975). The perceptual symbol systems view has received growing support from embodied cognition, situation models, evolutionary and embodied robotics, as well as neuroscience (e.g., Barsalou, 2003; Bower & Morrow, 1990; Cangelosi & Riga, 2006; Kosslyn, Thompson, & Ganis, 2006). Over the last 10 years, the balance of evidence has increasingly favored perceptual symbols over amodal representations (Barsalou, 2008).

2. Spatial imagery in language comprehension

Mental imagery simulates perceptual-motor experiences that occur in the absence of external stimuli. In language comprehension, spatial imagery is evoked when reading sentences that use directional terms (e.g., Glenberg & Kaschak, 2002) and sentences that imply orientation (e.g., Stanfield & Zwaan, 2001). Across several paradigms, facilitation is produced when there is a match between a sentence and its implied direction in mental imagery (Borghi, Glenberg, & Kaschak, 2004; Glenberg & Kaschak, 2002; Zwaan, Madden, Yaxley, & Aveyard, 2004). Facilitation is therefore due to integrating linguistic cues with simulations that have similar perceptual representations.

Interference effects may also occur in spatial imagery. To illustrate, Richardson, Spivey, Barsalou, and McRae (2003) found that targets (circles and squares) located on the vertical axis were responded to more quickly following sentences that implied a horizontal cue (e.g., “The mechanic pulls the chain.”). Similarly, targets located on the horizontal axis were responded to more quickly following sentences that implied a vertical cue (e.g., “The balloon floats through the cloud.”). Evidence revealed that both abstract and literal verbs activated spatial imagery in sentence processing. These findings suggest that forming a mental image interfered with perceiving an object located on the same axis as the simulation.

More recently, Bergen, Lindsay, Matlock, and Narayanan (2007) extended prior findings of perceptual-interference effects in language comprehension (Richardson et al., 2003). Across five experiments, verbs and nouns that depicted an upward or downward spatial representation were used to specify the exact location of sentence primes. Perceptual-interference effects were obtained for concrete verbs (Experiment 1) and nouns (Experiment 2) used in literal sentences. However, interference effects were not observed for abstract verbs or nouns used in metaphorical sentences (Experiments 3–5). Bergen and colleagues therefore concluded that perceptual simulations produce holistic representations when concrete verbs and nouns are used in literal, but not metaphorical, sentences.

Perceptual-interference effects, however, have been found using nouns presented in isolation from sentences (Estes, Verges, & Barsalou, 2008). In those experiments, target letters (X or O) appeared at the top or bottom of the visual field following the presentation of a noun that implied an upward or downward spatial association. Targets presented at the top of the visual field were responded to faster following a noun that implied a downward spatial cue (e.g., boot). In addition, targets presented at the bottom of the visual field were responded to faster following a noun that implied an upward spatial cue (e.g., hat). These interference effects were attenuated, but not eliminated, in a masking paradigm (Experiment 3). These findings suggest that object words may automatically evoke spatial imagery, and that interference effects occur when linguistic cues and simulations activate different perceptual representations.

3. Processing words and pictures in mental imagery

Words index and manipulate perceptual simulations in language and thought (Barsalou, Santos, Simmons, & Wilson, 2008). Word activation occurs quickly, usually within 200 ms of word onset (Pulvermüller, Shtyrov, & Ilmoniemi, 2005). Words also elicit perceptual simulations tailored to situated action (Barsalou, 2003). For instance, simulating the inside perspective of driving a car produces faster responses to car parts (e.g., horn) relevant to the simulation (Borghi et al., 2004). Word processing is therefore a critical component in activating perceptual simulations because words function as cues for assembling contextualized representations.

Less is known, however, about the role of perceptual simulations in mental imagery for pictures (Barsalou et al., 2008). According to dual-coding theory (Paivio, 1971, 1986, 2007), words and pictures activate separate but interconnected verbal and nonverbal symbolic systems, respectively. These two systems have different structural and functional characteristics: Whereas the verbal system is directly accessed by linguistic stimuli, the nonverbal system is directly accessed by pictures and other nonverbal stimuli. In addition, the verbal system processes abstract, conceptual information (e.g., categorical domain) that is not directly represented by the physical stimulus, relying more on top-down mechanisms (Barsalou, 2003; Paivio, 2007). In contrast, the nonverbal system processes concrete attributes (e.g., size) directly represented by the physical stimulus, relying more on bottom-up mechanisms (Barsalou, 2003; Paivio, 2007). Thus, processing words and pictures involves separate but interrelated mechanisms that preserve qualitative distinctions in their representations (Barsalou, 1999, 2008; Paivio, 1971, 1986, 2007).

Taken together, pictures and words engage similar, but not identical, representations and processes because their pathways and activation mechanisms are distinct (Barsalou, 2003; Paivio, 2007). Indeed, differences between the processing of words and pictures have been shown across several tasks, including picture naming (Glaser, 1992), speeded categorization (Smith & Magee, 1980), and recognition memory (Paivio, 1991). In conjunction with sentences, pictures have also been used to produce facilitation and interference effects in mental imagery (Kaschak et al., 2005; Richardson et al., 2003; Zwaan et al., 2004). To date, however, research has not examined whether pictures shown in isolation from linguistic stimuli activate perceptual simulations in mental imagery.

4. Current investigation

Surprisingly few studies have tested whether pictorial stimuli evoke perceptual simulations in mental imagery. At worst, this limitation may lead to distorted views on cognition (Barsalou et al., 2008). More optimistically, using pictures and words in this investigation may lead to an improved understanding of the relationship between symbols and referents in mental imagery. Whereas the amodal view implies that words and pictures are transduced into arbitrary and abstract nodes, the perceptual view posits that words and pictures activate multimodal representations directly grounded in the brain’s modality-specific systems. We therefore explore these contrasting views by using object images and their corresponding names in Experiment 1.

In addition, we sought to extend prior findings on the spatial representations of verbs in Experiment 2 (Bergen et al., 2007; Estes et al., 2008; Richardson et al., 2003). Although Estes and colleagues found interference effects for nouns, they did not examine whether verbs would produce similar effects when presented in isolation from sentences. Moreover, it is unclear whether spatial imagery of verbs only occurs during sentential processing. Whereas Richardson and colleagues speculated that spatial imagery of verbs was not due to sentential processing, Bergen and colleagues argued that spatial imagery of verbs occurs only when processing sentences in literal contexts. Thus, the purpose of Experiment 2 was to test whether spatial imagery of verbs would produce similar interference effects during word processing.

5. Experiment 1

The purpose of Experiment 1 was to test whether pictorial stimuli elicit perceptual-interference effects in mental imagery as found with linguistic stimuli (Bergen et al., 2007; Estes et al., 2008; Richardson et al., 2003). If mental imagery is due to the activation of an abstract, propositional code, as implied by the amodal view (Pylyshyn, 1981), perceptual interference should occur for both pictures and their corresponding names. If, however, mental imagery entails the processing of distinct, multimodal representations, as predicted by the perceptual systems view (Barsalou, 1999, 2008), perceptual interference should occur for words, but not pictures.

5.1. Method

5.1.1. Participants

Twenty-eight undergraduates from Rutgers University received partial course credit for participating in the experiment.

5.1.2. Materials

Stimulus items consisted of 32 object words that denoted an upward (e.g., hat) or downward (e.g., puddle) spatial relation and 32 object words that denoted no particular spatial relation (e.g., cake). In addition, 64 line drawings that corresponded to the object words were selected from Snodgrass and Vanderwart (1980). Picture size was approximately 4.9° of visual angle, which is within the optimal range for object recognition (Biederman & Cooper, 1992). Words were matched on word length, Kucera-Francis frequency, and imageability (Table 1).1 See Table 2 for a complete list of stimulus items used in Experiments 1 and 2.

Table 1. Means (with standard errors in parentheses) for nouns and verbs by word length, frequency, concreteness, imageability, and familiarity, Experiments 1 and 2

                   Up              Down            None            F ratio
Nouns
  Length           4.75 (0.30)     4.75 (0.23)     4.97 (0.22)     < 1
  K-F frequency    30.93 (7.84)    29.79 (9.41)    30.75 (6.75)    < 1
  Concreteness     590.60 (8.33)   606.75 (5.98)   601.07 (3.86)   F(2, 56) = 1.70, p = .19
  Familiarity      548.00 (8.64)   536.75 (12.84)  550.24 (9.66)   < 1
  Imageability     594.80 (6.36)   599.88 (6.68)   593.14 (5.77)   < 1
Verbs
  Length           4.44 (0.18)     4.00 (0.16)     4.47 (0.15)     F(2, 61) = 2.15, p = .12
  K-F frequency    42.87 (10.94)   35.94 (11.62)   45.81 (8.33)    < 1
  Concreteness     411.08 (15.01)  438.70 (27.17)  397.35 (15.40)  F(2, 42) = 1.16, p = .32
  Familiarity      542.42 (5.52)   541.18 (11.68)  541.75 (11.90)  < 1
  Imageability     460.08 (16.80)  513.27 (18.45)  443.46 (15.85)  F(2, 44) = 3.82, p = .03

Table 2. Stimulus items used in Experiments 1 and 2, by spatial prime condition

Nouns
  Up: balloon, bell, bird, branch, cloud, crown, flag, hanger, hat, helmet, fly, kite, moon, plane, rocket, tree
  Down: anchor, bed, carrot, crab, fish, flower, frog, foot, mouse, pants, shoe, snake, sock, turtle, whale, wheel
  None: bag, banana, barn, barrel, bowl, bread, bus, cake, candle, carriage, chain, clock, clown, comb, dress, faucet, fence, glass, guitar, harp, horn, jacket, kettle, lemon, lock, mirror, pan, pen, piano, scissors, table, train

Verbs
  Up: build, climb, fill, float, fly, grow, hurl, leap, lift, raise, rise, soar, splash, stand, throw, toss
  Down: catch, chop, crush, dig, dive, drip, drop, dunk, fall, hang, lay, pour, sink, sit, slip, spill
  None: bake, beg, brag, buy, choose, claim, cling, cook, draw, eat, freeze, guess, hear, knit, pluck, rest, serve, shine, sing, sketch, snap, split, stay, teach, tickle, thaw, trot, wait, wash, wring, write, yelp

5.1.3. Procedure

The procedure in these experiments was identical to the protocol used in prior research (Estes et al., 2008). As shown in Fig. 1, each trial was initiated by pressing the spacebar, which signaled a centrally located fixation cross for 250 ms.

Figure 1. Illustration of word and picture conditions, Experiment 1.

A prime word (or, in the picture block, its corresponding line drawing) then appeared at the center of the computer screen for 100 ms. After a 50 ms inter-stimulus interval, a target letter subtending approximately 1° of visual angle appeared centered at the top or bottom of the computer screen. Participants were asked to identify the target letter as quickly and as accurately as possible by pressing the appropriate key. Spatial Prime (up, down, none), Target Location (top, bottom), and Target Letter (X, O) were fully crossed and balanced to ensure that each target letter was equiprobable across conditions. Picture and word conditions were organized into two blocks, which were separated by a filler task unrelated to the current experiment. Order of picture and word blocks was counterbalanced across participants, and items were randomly presented within each block. Ten practice trials preceded 128 experimental trials.
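To make the design concrete, the sketch below (in Python) builds one fully crossed, balanced trial block of this kind. The timing constants come from the procedure above; the item names, data structures, and function names are our own illustrative assumptions rather than the experiment software actually used.

  import itertools
  import random
  from dataclasses import dataclass

  # Timing constants taken from the procedure described above (in ms).
  FIXATION_MS, PRIME_MS, ISI_MS = 250, 100, 50

  @dataclass
  class Trial:
      prime: str            # word (or picture file) shown for PRIME_MS
      spatial_prime: str    # "up", "down", or "none"
      target_location: str  # "top" or "bottom"
      target_letter: str    # "X" or "O"

  def build_block(items_by_condition, seed=0):
      """Fully cross Spatial Prime x Target Location x Target Letter.

      Target location and letter are cycled within each spatial-prime
      condition so that every letter is equiprobable across conditions.
      """
      rng = random.Random(seed)
      trials = []
      for condition, items in items_by_condition.items():
          combos = itertools.cycle(itertools.product(["top", "bottom"], ["X", "O"]))
          for item, (location, letter) in zip(items, combos):
              trials.append(Trial(item, condition, location, letter))
      rng.shuffle(trials)  # randomize presentation order within the block
      return trials

  # Hypothetical mini stimulus set, for illustration only.
  block = build_block({
      "up": ["hat", "balloon", "cloud", "moon"],
      "down": ["anchor", "carrot", "shoe", "sock"],
      "none": ["cake", "lemon", "mirror", "piano"],
  })
  print(block[0])

In the full experiment the same construction would simply be applied to the 16 items per spatial condition (plus fillers) listed in Table 2, once for the word block and once for the picture block.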

5.2. Results and discussion

Mean response times and the proportion of correct responses served as the dependent measures. Response times on incorrect trials were removed prior to analyses, and responses greater than 2.5 SD from each participant's mean were eliminated, which resulted in the exclusion of 2% of trials. Three participants were removed because their response latencies or accuracy measures were greater than 2.5 SD from the group means. We first submitted the data to a 2 (Stimulus type: pictures, words) × 3 (Spatial prime: up, down, none) × 2 (Target location: top, bottom) repeated-measures ANOVA and found a marginally significant three-way interaction in mean response times (Fp(2, 48) = 3.08, p = .06, η² = 0.11; Fi(2, 58) = 2.77, p = .07, η² = 0.09). To elucidate these results, we separated the picture and word conditions and conducted two 3 (Spatial prime: up, down, none) × 2 (Target location: top, bottom) repeated-measures ANOVAs across participants (Fp) and items (Fi).
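As a rough sketch of this analysis pipeline (not the authors' code), the snippet below applies the 2.5 SD trimming per participant and then runs the by-participants repeated-measures ANOVA on per-cell means. It assumes a long-format trial table with hypothetical column names (subject, stimulus_type, spatial_prime, target_location, rt, accuracy).

  import pandas as pd
  from statsmodels.stats.anova import AnovaRM

  def trim_rts(trials: pd.DataFrame, sd_cutoff: float = 2.5) -> pd.DataFrame:
      """Drop error trials, then drop RTs more than sd_cutoff SDs from
      each participant's own mean (the trimming criterion above)."""
      correct = trials[trials["accuracy"] == 1].copy()
      stats = correct.groupby("subject")["rt"].agg(["mean", "std"])
      correct = correct.join(stats, on="subject")
      keep = (correct["rt"] - correct["mean"]).abs() <= sd_cutoff * correct["std"]
      return correct.loc[keep].drop(columns=["mean", "std"])

  def rm_anova(trimmed: pd.DataFrame):
      """2 x 3 x 2 repeated-measures ANOVA across participants (Fp),
      computed on each participant's mean RT per design cell."""
      cells = (trimmed
               .groupby(["subject", "stimulus_type", "spatial_prime", "target_location"],
                        as_index=False)["rt"].mean())
      return AnovaRM(cells, depvar="rt", subject="subject",
                     within=["stimulus_type", "spatial_prime", "target_location"]).fit()

The item analysis (Fi) would follow the same pattern, with items in place of subjects as the random factor.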

5.2.1. Words

In mean reaction times, the spatial prime by target location interaction was significant (Fp(2, 48) = 5.54, p = .01, η² = 0.19; Fi(2, 58) = 6.26, p = .003, η² = 0.18). In mean accuracy, the interaction was unreliable (Fig. 2B; Fp(2, 48) = 2.56, p = .12; Fi(2, 58) = 1.08, p < .35). Targets in the lower visual field were identified more quickly following upward spatial primes (Fig. 2A). Similarly, targets in the upper visual field were identified more quickly following downward spatial primes. No differences in mean response times or accuracy obtained for target responses in the nonspatial prime condition. Because the spatial prime by target location interaction was disordinal, the main effects cannot be interpreted independently of the interaction (Keppel, 1991). Consequently, the main effects of spatial prime and target location are not reported in the paper.2

Figure 2. Mean response times and proportion of correct responses for picture and word conditions with standard errors, Experiment 1.

5.2.2. Pictures

In mean reaction times, the spatial prime by target location interaction was not reliable, both Fs < 1. The main effect of spatial prime was significant (Fp(2, 48) = 3.97, p = .03, η² = 0.14; Fi(2, 58) = 4.06, p = .02, η² = 0.12). Post hoc tests using a Bonferroni correction indicated that responses to targets were slowest following upward primes (Mup = 481.36, SEup = 17.15) relative to downward (Mdown = 459.16, SEdown = 12.72) and nonspatial primes (Mnone = 456.23, SEnone = 16.20). The main effect of target location was not reliable (Fp(1, 24) = 2.84, p = .11, η² = 0.11; Fi(1, 58) = 2.58, p = .11, η² = 0.04). In mean accuracy, the spatial prime by target location interaction was not reliable, both Fs < 1. The main effects of spatial prime and target location were also unreliable, all Fs < 1.
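For completeness, here is a hedged sketch of Bonferroni-corrected post hoc comparisons of this kind, again assuming the hypothetical per-subject cell means from the earlier sketch rather than the authors' actual analysis code.

  from itertools import combinations
  import pandas as pd
  from scipy.stats import ttest_rel
  from statsmodels.stats.multitest import multipletests

  def bonferroni_posthoc(cells: pd.DataFrame, factor: str = "spatial_prime") -> pd.DataFrame:
      """Pairwise paired t-tests between prime conditions on per-subject
      mean RTs, with Bonferroni-adjusted p-values."""
      wide = cells.pivot_table(index="subject", columns=factor, values="rt")
      pairs, tvals, pvals = [], [], []
      for a, b in combinations(wide.columns, 2):
          t, p = ttest_rel(wide[a], wide[b])
          pairs.append(f"{a} vs. {b}")
          tvals.append(t)
          pvals.append(p)
      reject, p_adj, _, _ = multipletests(pvals, method="bonferroni")
      return pd.DataFrame({"pair": pairs, "t": tvals,
                           "p_bonferroni": p_adj, "significant": reject})

For the picture analysis above, the cell means would first be restricted to picture trials and collapsed over target location before the pairwise comparisons.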

5.2.3. Discussion

Results from Experiment 1 replicate prior findings of a perceptual-interference effect for object words (Estes et al., 2008). Moreover, this interference effect may be attributable to spatial representations activated by words during language processing, which subsequently interfered with perception. These findings suggest that perceptual simulations occur automatically with words, but not with pictures. In short, these findings lend support to perceptual symbol systems theory (Barsalou, 1999) and dual-code processing (Paivio, 1971, 1986, 2007). One concern, however, was that the crossover interaction for words did not obtain in the accuracy data. This limitation may be due to half of the participants viewing pictures prior to viewing words. To examine this issue more closely, we used the same nouns in Experiment 2. We also examined whether perceptual simulations activate mental imagery for verbs presented in isolation from sentences. To date, it is largely unknown whether sentential processing is required for eliciting perceptual-interference effects with verbs (Bergen et al., 2007; Estes et al., 2008; Richardson et al., 2003).

6. Experiment 2

Prior research suggests that interference effects with verbs require the holistic representations found in literal sentences (Bergen et al., 2007). Yet perceptual-interference effects have been observed using individually presented nouns (Estes et al., 2008). Therefore, the purpose of Experiment 2 was to examine whether verbs denoting an upward or downward spatial location would produce similar perceptual-interference effects when presented in isolation from sentences. If perceptual simulations of verbs require holistic, sentential processing, then perceptual-interference effects should not obtain in the verb condition. If, however, perceptual simulations of verbs do not require sentential processing, then perceptual-interference effects should obtain for both verbs and nouns (Estes et al., 2008).

6.1. Method

6.1.1. Participants

Forty-eight undergraduates from Indiana University, South Bend, received partial course credit for their participation.

6.1.2. Materials

Motion words were generated by the authors and taken from prior investigations of the spatial aspects of mental imagery in sentence processing (Bergen et al., 2007; Richardson et al., 2003). Object words were taken from Experiment 1. This yielded a total of 32 verbs and 32 nouns that denoted an upward or downward spatial relation for the critical trials. Up-down verbs were selected because prior research suggests that people experience greater confusion when making left-right judgments, owing to the bilateral symmetry of the body (Franklin & Tversky, 1990). In addition, 64 nonspatial verbs and nouns served as fillers in the experiment. All verbs were presented in the present tense.

6.1.3. Procedure

The procedure was identical to that of Experiment 1.

6.2. Results and discussion

Mean response times and the proportion of correct responses served as the dependent measures. Response times on incorrect trials were removed prior to analyses, and responses greater than 2.5 SD from each participant's mean were eliminated, which resulted in the exclusion of 2% of trials. Data were submitted to a 2 (Word class: noun, verb) × 3 (Spatial prime: up, down, none) × 2 (Target location: top, bottom) repeated-measures ANOVA. The three-way interaction was not significant in mean response times (Fp(2, 94) = 1.03, p < .36, η² = 0.02; Fi < 1) or accuracy (both Fs < 1). Given the theoretical motivation of this paper, namely to examine the spatial aspects of verbs in mental imagery, we analyzed nouns and verbs separately.

6.2.1. Nouns

The spatial prime by target location interaction was significant in mean response times (Fp(2, 94) = 25.72, p < .0001, η² = 0.35; Fi(2, 58) = 6.40, p < .003, η² = 0.18) and accuracy (Fp(2, 94) = 20.35, p < .0001, η² = 0.30; Fi(2, 58) = 7.80, p < .001, η² = 0.21). Targets in the lower visual field were identified more quickly (Fig. 3A) and more accurately (Fig. 3B) following upward spatial primes. Similarly, targets in the upper visual field were identified more quickly and more accurately following downward spatial primes. No differences were observed for targets in the nonspatial prime condition (p = .99).

Figure 3. Mean response times and proportion of correct responses for noun and verb conditions with standard errors, Experiment 2.

6.2.2. Verbs

The spatial prime by target location interaction was significant in mean response times (Fp(2, 94) = 39.63, p < .0001, η² = 0.46; Fi(2, 58) = 14.79, p < .0001, η² = 0.34) and accuracy (Fp(2, 94) = 21.74, p < .0001, η² = 0.32; Fi(2, 58) = 10.74, p < .0001, η² = 0.27). Targets in the upper visual field were identified more quickly (Fig. 3C) and more accurately (Fig. 3D) following downward spatial primes. Similarly, targets in the lower visual field were identified more quickly and more accurately following upward spatial primes. No differences were observed for targets in the nonspatial prime condition (p < .32).

6.2.3. Discussion

Results from Experiment 2 replicate the findings obtained in Experiment 1 and prior research concerning perceptual-interference effects with nouns (Estes et al., 2008). In addition, findings from Experiment 2 revealed a perceptual-interference effect with verbs. This finding obtained despite the fact that we did not present the verbs in sentences (cf. Bergen et al., 2007; Richardson et al., 2003). In sum, these results support perceptual symbol systems theory (Barsalou, 1999, 2008), which claims that perceptual simulations activate the mental imagery of words. In this experiment, perceptual simulations activated by nouns and verbs interfered with the actual perception of targets in a letter-identification task.

7. General discussion

Like vivid acts of imagination, perceptual simulations are dynamic representations that allow meaning to be created and inferences to be drawn through the medium of language (Barsalou, 1999, 2008). In this investigation, we examined whether pictures would elicit perceptual-interference effects as demonstrated by their linguistic counterparts. We also sought to extend prior findings on the spatial aspects of language processing in mental imagery (Bergen et al., 2007; Estes et al., 2008; Richardson et al., 2003).

Most critical in the target-identification paradigm was observing an interference effect between spatial primes and target responses: Words that denoted objects and actions located in high places (e.g., cloud, soar) produced slower responses to targets located at the top of the visual field, whereas words that denoted objects and actions located in low places (e.g., root, dig) produced slower responses to targets located at the bottom of the visual field. In two experiments, we found perceptual-interference effects for nouns and verbs that denoted spatial associations only; interference effects were not found with pictures or words lacking an obvious spatial association. These findings are interpreted to support the perceptual symbol system view of representation, as discussed below.

In Experiment 1, perceptual-interference effects were obtained with object words but not with their corresponding pictures. This finding replicates prior research on spatial attention and perceptual simulation of object words in mental imagery (Estes et al., 2008). Object words denoting an upward or downward spatial association automatically orient one’s attention to the object’s typical location, and object words elicit perceptual simulations of the denoted object. Consequently, object words followed by a visual target located at the top or bottom of the display produced interference because the simulated object (e.g., hat) shared few, if any, perceptual features with the visual target (e.g., X).

Moreover, the perceptual-interference effect observed in the word condition was eliminated in the picture condition. This qualitatively distinct pattern suggests that words and pictures engage different processing mechanisms, which is indicative of dual-code processing in mental imagery (Paivio, 2007). In terms of activation pathways, words are first encoded in the verbal system, whereas pictures are first encoded in the nonverbal system. Because the verbal system is closely integrated with the perceptual simulation system (Barsalou et al., 2008), linguistic stimuli have privileged access over pictorial stimuli in activating simulations during the online processing of mental imagery. By activating the nonverbal system first, pictures are instead more likely to engage in bottom-up, perceptual processing prior to activating the simulation system. Findings from Experiment 1 therefore support the claim of two independent subsystems used in mental imagery (Barsalou et al., 2008; Glaser, 1992; Paivio, 2007).

Yet dual-code theory also posits that nonverbal transformations (e.g., mental rotation) can operate on spatial properties of objects and events through imagery (Paivio, 2007). In a classic study, Cooper and Shepard (1973) found that reaction times in mental rotation increased with the angular departure of rotated stimuli from their upright orientation. Indeed, prior work suggests that pictorial stimuli may activate mental imagery under deliberate processing (e.g., Brooks, 1968; Shepard & Metzler, 1971). Of course, the paradigm in the current investigation used extremely short presentation times, which more likely reflect automatic rather than deliberate processing. Future research may consider varying the presentation rates of pictorial and linguistic stimuli to reveal possible time-course effects in mental imagery (Borreggine & Kaschak, 2006; Verges & Duffy, 2009).

In Experiment 2, perceptual-interference effects were found for both nouns and verbs. This finding extends prior research on the spatial aspects of mental imagery in sentential processing (Bergen et al., 2007; Richardson et al., 2003). Consistent with Bergen and colleagues, we replicated the finding that spatial imagery is location specific with respect to up-down spatial representations. In contrast to Bergen and colleagues, however, findings from Experiment 2 also suggest that interference effects are not limited to the holistic representations found in sentences. On the contrary, we found that individually presented words activate perceptual simulations in mental imagery. This finding applies to literal representations of word meaning, as most items used in this investigation were concrete nouns and verbs. In prior studies, Bergen and colleagues did not obtain interference effects using figurative verbs in metaphorical sentences (e.g., “Oil prices climbed above $51 per barrel”). Richardson and colleagues, however, did observe perceptual interference using both abstract and concrete verbs. Consequently, whether mental imagery occurs for both abstract and concrete linguistic stimuli remains a fruitful avenue for future research.

In sum, results from this investigation provide further evidence in support of perceptual symbol systems theory (Barsalou, 1999, 2008). These findings also reveal that object and motion words activate perceptual simulations in language comprehension, and that these simulations are not constrained by holistic, sentential processing (Meteyard, Bahrami, & Vigliocco, 2007; see also van Dantzig, Pecher, Zeelenberg, & Barsalou, 2008). More broadly, these results illuminate the role of perceptual simulations in mental imagery and language comprehension. In The Shadow of the Wind, Zafón (2004) captures the integration of perceptual simulation and mental imagery in literature: “Under the warm light cast by the reading lamp, I was plunged into a new world of images and sensations, peopled by characters who seemed as real to me as my room” (p. 7). Thus, spatial representations associated with linguistic elements exert a powerful influence on the interplay between language and perception.

Footnotes
  • 1

    A one-way ANOVA treating spatial prime as a factor in the verb condition revealed an overall effect of imageability, as shown in Table 1. Post hoc comparisons using Tukey’s HSD revealed no significant difference in imageability between upward and downward spatial primes (p = .17), nor between upward and nonspatial primes (p = .78). Downward spatial primes were more imageable than nonspatial primes (p = .02). Critical to this investigation, these findings indicate that upward and downward spatial primes were equally imageable. Although these results raise the possibility that perceptual interference partly reflects the greater imageability of spatial primes, our theoretical interest lay in comparing the spatial conditions to each other, not in differences between downward and nonspatial primes.

  • 2

    Full analyses from Experiments 1 and 2. Experiment 1: Words. In mean response times, the main effect of spatial prime was significant in the item analysis only (Fp(2, 48) = 2.25, p = .12; Fi(2, 58) = 5.92, p < .01, η² = 0.17). The main effect of target location was unreliable, both Fs < 1. In mean accuracy, the main effect of spatial prime was unreliable (Fp(2, 54) = 1.18, p = .32; Fi(2, 58) = 1.22, p < .30), as was the main effect of target location, both Fs < 1. Experiment 2: Nouns. A main effect of spatial prime obtained for mean response times (Fp(2, 94) = 15.32, p < .0001, η² = 0.25; Fi(2, 58) = 8.02, p = .001, η² = 0.22). Post hoc tests using a Bonferroni correction revealed slower responses following an upward (Mup = 555.10, SEup = 6.98) than a downward (Mdown = 515.90, SEdown = 6.98) or a nonspatial prime (Mnone = 531.71, SEnone = 4.94). The main effect of spatial prime was unreliable in the accuracy data (Fp(2, 94) = 1.10, p = .34, η² = 0.02; Fi(2, 58) = 1.22, p = .30). Nor was the main effect of target location reliable for mean response times (both Fs < 1) or accuracy (Fp(1, 47) = 3.23, p = .08, η² = 0.06; Fi(1, 58) = 1.35, p = .25). Verbs. A main effect of spatial prime obtained for mean response times (Fp(2, 94) = 16.27, p < .0001, η² = 0.26; Fi(2, 58) = 9.44, p < .0001, η² = 0.25). Post hoc tests using a Bonferroni correction revealed slower responses following an upward (Mup = 557.86, SEup = 17.76) than a downward (Mdown = 521.88, SEdown = 14.53) or a nonspatial prime (Mnone = 528.87, SEnone = 15.87). The main effect of spatial prime was unreliable in the accuracy data (Fp(2, 94) = 2.54, p = .09, η² = 0.05; Fi(2, 58) = 1.35, p = .27). The main effects of target location were nonsignificant, all Fs < 1.

References

  • Anderson, J. R., & Bower, G. H. (1972). Configural properties in sentence memory. Journal of Verbal Learning and Verbal Behavior, 11, 594–605.
  • Anderson, J. R., Conrad, F. G., & Corbett, A. T. (1989). Skill acquisition and the LISP Tutor. Cognitive Science, 13, 467–506.
  • Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22(4), 577–660.
  • Barsalou, L. W. (2003). Situated simulation in the human conceptual system. Language and Cognitive Processes, 18, 513–562.
  • Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
  • Barsalou, L. W., Santos, A., Simmons, W. K., & Wilson, C. D. (2008). Language and simulation in conceptual processing. In M. de Vega, A. M. Glenberg, & A. C. Graesser (Eds.), Symbols, embodiment, and meaning. Oxford, England: Oxford University Press.
  • Bergen, B., Lindsay, S., Matlock, T., & Narayanan, S. (2007). Spatial and linguistic aspects of visual imagery in sentence comprehension. Cognitive Science: A Multidisciplinary Journal, 31, 733–764.
  • Biederman, I., & Cooper, E. (1992). Size invariance in visual object priming. Journal of Experimental Psychology: Human Perception and Performance, 18, 121–133.
  • Borghi, A. M., Glenberg, A. M., & Kaschak, M. P. (2004). Putting words in perspective. Memory & Cognition, 32, 863–873.
  • Borreggine, K. L., & Kaschak, M. P. (2006). The action-sentence compatibility effect: It’s all in the timing. Cognitive Science, 30, 1097–1112.
  • Bower, G. H., & Morrow, D. G. (1990). Mental models in narrative comprehension. Science, 247, 44–48.
  • Brooks, L. R. (1968). Spatial and verbal components of the act of recall. Canadian Journal of Psychology, 22, 349–368.
  • Cangelosi, A., & Riga, T. (2006). An embodied model for sensorimotor grounding and grounding transfer: Experiments with epigenetic robots. Cognitive Science: A Multidisciplinary Journal, 30, 673–689.
  • Cooper, L. A., & Shepard, R. N. (1973). Chronometric studies of the rotation of mental images. In W. G. Chase (Ed.), Visual information processing (pp. 75–176). New York: Academic Press.
  • van Dantzig, S., Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2008). Perceptual processing affects conceptual processing. Cognitive Science, 32, 579–590.
  • Estes, Z., Verges, M., & Barsalou, L. (2008). Head up, foot down: Object words orient attention to the objects’ typical location. Psychological Science, 19, 93–97.
  • Fodor, J. (1975). The language of thought. Cambridge, MA: Harvard University Press.
  • Franklin, N., & Tversky, B. (1990). Searching imagined environments. Journal of Experimental Psychology: General, 119, 63–76.
  • Glaser, W. R. (1992). Picture naming. Cognition, 42, 61–105.
  • Glenberg, A., & Kaschak, M. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9, 558–565.
  • Isen, A. M. (1984). Toward understanding the role of affect in cognition. In R. S. Wyer Jr. & T. K. Srull (Eds.), Handbook of social cognition (pp. 179–236). Hillsdale, NJ: Erlbaum.
  • Kaschak, M., Madden, C., Therriault, D., Yaxley, R., Aveyard, M., Blanchard, A., & Zwaan, R. A. (2005). Perception of motion affects language processing. Cognition, 94, B79–B89.
  • Keppel, G. (1991). Design and analysis: A researcher’s handbook (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.
  • Kintsch, W. (1974). The representation of meaning in memory. Hillsdale, NJ: Erlbaum.
  • Kosslyn, S. M., Thompson, W. L., & Ganis, G. (2006). The case for mental imagery. Oxford, England: Oxford University Press.
  • Martel, Y. (2001). Life of Pi: A novel. New York: Harcourt Press.
  • Meteyard, L., Bahrami, B., & Vigliocco, G. (2007). Comprehension of motion verbs affects the detection of visual motion: Evidence from psychophysics. Psychological Science, 18, 1007–1013.
  • Morrow, D. G., Greenspan, S. L., & Bower, G. H. (1987). Accessibility and situation models in narrative comprehension. Journal of Memory and Language, 26, 165–187.
  • Norman, D. A., & Rumelhart, D. E. (1975). Explorations in cognition. New York: W. H. Freeman.
  • Paivio, A. (1971). Imagery and verbal processes. New York: Holt, Rinehart, & Winston.
  • Paivio, A. (1986). Mental representations: A dual coding approach. New York: Oxford University Press.
  • Paivio, A. (1991). Images in mind: The evolution of a theory. New York: Harvester Wheatsheaf.
  • Paivio, A. (2007). Mind and its evolution: A dual coding theoretical approach. Mahwah, NJ: Erlbaum.
  • Pulvermüller, F. (2001). Brain reflections of words and their meaning. Trends in Cognitive Sciences, 5, 517–524.
  • Pulvermüller, F., Shtyrov, Y., & Ilmoniemi, R. J. (2005). Brain signatures of meaning access in action word recognition. Journal of Cognitive Neuroscience, 17, 884–892.
  • Pylyshyn, Z. W. (1973). What the mind’s eye tells the mind’s brain: A critique of mental imagery. Psychological Bulletin, 80, 1–24.
  • Pylyshyn, Z. (1981). The imagery debate: Analogue media versus tacit knowledge. Psychological Review, 88, 16–45.
  • Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: Bradford Books.
  • Richardson, D. C., Spivey, M. J., Barsalou, L. W., & McRae, K. (2003). Spatial representations activated during real-time comprehension of verbs. Cognitive Science, 27, 767–780.
  • Shepard, R. N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701–703.
  • Smith, M. C., & Magee, L. E. (1980). Tracing the time course of picture-word processing. Journal of Experimental Psychology: General, 109, 373–392.
  • Snodgrass, J., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6, 174–215.
  • Stanfield, R., & Zwaan, R. (2001). The effect of implied orientation derived from verbal context on picture recognition. Psychological Science, 12(2), 153–156.
  • Verges, M., & Duffy, S. (2009). Time-course effects of linguistic and pictorial stimuli in mental imagery. Unpublished manuscript, Indiana University, South Bend, IN.
  • Zafón, C. R. (2004). The shadow of the wind. New York: Penguin Press.
  • Zwaan, R. A., Madden, C. J., Yaxley, R. H., & Aveyard, M. E. (2004). Moving words: Dynamic mental representations in language comprehension. Cognitive Science, 28, 611–619.