The ability to understand written or spoken language not only requires comprehension of the individual words comprising a text or utterance but also involves the development of rich multidimensional situation models that can incorporate spatial, causal, emotional, temporal, event, and expectation-related information (Ditman, Holcomb, & Kuperberg, 2008; Gernsbacher, 1990; Gernsbacher & Foertsch, 1999; Graesser, Gernsbacher, & Goldman, 2003; Zwaan, 1999; Zwaan, Langston, & Graesser, 1995; Zwaan & Radvansky, 1998). In developing these situation models, readers appear to spontaneously activate mental representations grounded at least in the visual, auditory, and motoric modalities, and the neural mechanisms responsible for such activations tend to overlap with those involved during real-world experiences (Brunyé, Ditman, Mahoney, Augustyn, & Taylor, 2009; Brunyé, Ditman, Mahoney, Walters, & Taylor, 2010; Ditman, Brunyé, Mahoney, & Taylor, 2010; Glenberg & Kaschak, 2002; Kaschak, Zwaan, Aveyard, & Yaxley, 2006; Pulvermüller, 2005; Richardson, Spivey, Barsalou, & McRae, 2003; Ruby & Decety, 2001; Scorolli, Borghi, & Glenberg, 2008). Whereas a growing number of studies demonstrate that language can prime visual, auditory, motoric, kinematic, and affective representations, it is unknown whether similar effects will emerge when examining the tactile sensory modality, given that this particular modality is thought to receive relatively few endogenous attentional resources (Turatto, Galfano, Bridgeman, & Umiltà, 2004). The present research thus examined whether tactile properties implied during language comprehension might prime tactile representations and, in turn, influence direct tactile perception. A second goal was to examine the specificity of these effects: Does the degree of relatedness between described and directly perceived tactile properties modulate priming effects?
We begin by reviewing the literature examining the nature of representations activated during reading, and then motivate our study by considering the extant literature specifically examining the nature of tactile representations and how they may manifest during language comprehension.
The present studies examined whether implied tactile properties during language comprehension influence subsequent direct tactile perception, and the specificity of any such effects. Participants read sentences that implicitly conveyed information regarding tactile properties (e.g., Grace tried on a pair of thick corduroy pants while shopping) that were either related or unrelated to fabrics and varied in implied texture (smooth, medium, rough). After reading each sentence, participants then performed an unrelated rating task during which they felt and rated the texture of a presented fabric. Results demonstrated that the texture properties implied in sentences influence direct tactile perception. Specifically, after reading about a smooth or rough texture, subsequent fabric ratings became notably smoother or rougher, respectively. However, we also show that there was some specificity to these effects: Fabric-related sentences elicited more specific and interactive effects on subsequent ratings. Together, we demonstrate that under certain circumstances, language comprehension can prime tactile representations and affect direct tactile perception. Results are discussed with regard to the nature and scope of multimodal mental simulation during reading.
1.1. Sensorimotor simulation during reading
Traditional theories of language comprehension propose that readers develop and manipulate amodal propositional representations that are more reliant upon linguistic regularities as opposed to sensorimotor experiences (Collins & Quillian, 1969; Fodor, 1975; Fodor & Pylyshyn, 1988; Kintsch, 1974; for review, see Murphy, 2002). Under these theories, word representations are translated into syntactic structures and semantic propositions that are both amodal in nature; these syntactic and semantic representations are combined to form an integrated archival (i.e., Barsalou, 1992) representation of a text. Critically, according to traditional theories of cognition, the comprehension of a previously learned word is removed from bodily experience, and instead depends on abstracted, amodal symbols. Although the word sandpaper may be learned in a multimodal context (e.g., in a garage, touching and seeing the sandpaper), the resulting concept is stored amodally. Some contemporary theories also express the importance of amodal symbols in language comprehension (e.g., Landauer, 2002; Landauer, McNamara, Dennis, & Kintsch, 2007), and there is some convincing evidence that amodal computational models (i.e., HAL, LSA) can account for a range of human performance (Burgess & Lund, 1997; Landauer, Foltz, & Laham, 1998).
Although such amodal propositional theories are pervasive throughout the language comprehension literature, they are challenged by other theories proposing that readers construct modal experiential simulations that help ground linguistic meaning in bodily senses and activities (Barsalou, 1999; Fincher-Kiefer, 2001; Glenberg, 1997; Lakoff, 1987; Zwaan, 2004). The modal view of language comprehension has increasingly been supported by a number of behavioral and neuroscientific findings (e.g., Zwaan, 2004; Willems, Labruna, D’Esposito, Ivry, & Casasanto, 2011). First, accumulating evidence indicates that language can prime visual representations corresponding to described perceptual features such as shape, visibility, and orientation (Horton & Rapp, 2003; Stanfield & Zwaan, 2001; Yaxley & Zwaan, 2007; Zwaan, Stanfield, & Yaxley, 2002; Zwaan & Yaxley, 2003). Second, language can prime motoric representations that guide and constrain subsequent motor activity, such that readers encounter difficulty attempting to perform a motor response incongruent with an action implied in a text (i.e., the Action Compatibility Effect; Glenberg & Kaschak, 2002; Tucker & Ellis, 2004; Zwaan & Taylor, 2006). In general, links between language and movement extend beyond gross motor movements to studies involving eye movements (Spivey, Tyler, Richardson, & Young, 2000) and hand aperture (Glover, Rosenbaum, Graham, & Dixon, 2004). Language can also prime aural representations that affect direct sound perception; for instance, language implying specific sounds can prime the recognition of similar directly perceived sounds (Brunyé et al., 2010; Kaschak et al., 2006).
Finally, the array of behavioral findings is bolstered by a growing body of neuroimaging data, suggesting that readers use functionally linked brain areas for understanding language and processing direct perception and action (Martin & Chao, 2001; Pulvermüller, Shtyrov, & Ilmoniemi, 2005; Tettamanti et al., 2005). For instance, Pulvermüller et al. (2005) used magnetoencephalography (MEG) to examine brain activations while participants listened to words describing actions involving the face (e.g., eat) or the leg (e.g., kick). Words describing facial and leg actions elicited early activation of somatotopically organized areas of the premotor and motor cortices, suggesting that comprehending action words automatically triggers activation in brain areas responsible for action preparation and execution.
If simulated sensorimotor experiences share overlapping neural mechanisms with those activated during actual sensation and perception, one might expect the relatedness of language stimuli to a motor or perceptual task to differentially modulate direct action or perception. For instance, hand-based Action Compatibility Effects might be strongest following language describing hand movements (i.e., close the drawer) relative to foot movements (i.e., kick the ball), and vice versa. Some work suggests that this may be the case; Tettamanti et al. (2005) presented participants with sentences describing an action involving the foot, hand, or mouth while measuring brain activity with functional magnetic resonance imaging (fMRI). The authors found activation of the motor cortex during reading that largely overlapped with activations found when moving the actual body part (e.g., the hand area of the motor cortex while reading about grasping a knife). Importantly, the greatest neural overlap was found between cued body parts and the areas of the motor cortex specifically related to them; less overlap was found in brain areas associated with areas of the body relatively unrelated to the described action (see also, Hauk, Johnsrude, & Pulvermüller, 2004; Masson, Bub, & Warren, 2008). This recent work raises the possibility that there may be a certain degree of both generality and specificity of mental simulations during language comprehension. In the case of Hauk et al. (2004), effects were found in somewhat distributed areas of the motor cortex, but the strongest effects were found in specifically related areas; thus, mental simulations do not appear to be evoked only in a direct and simple relationship to literal sentence meaning (cf., Masson et al., 2008), and the magnitude of influence on direct perception might be modulated by the relatedness of primed concepts to a behavioral criterion task.
Thus, there is converging behavioral and neuroscience evidence that language comprehension can prime representations grounded in sensory and motoric modalities. However, much of the literature is limited to examining the visual and motoric properties of mental representations developed during language comprehension, and as reviewed below only a single study has examined whether language can prime tactile representations. More convincing evidence for the breadth of mental simulations in language comprehension would come from studies examining a broader range of possible human sensory activations (e.g., gustatory, olfactory, tactile). Indeed, some recent investigations extend theories of grounded language comprehension to the auditory domain and between multiple modalities (Brunyé et al., 2010; Kaschak et al., 2006; Pecher, Zeelenberg, & Barsalou, 2003). Given the proposed importance of mental simulation as a component of language comprehension (i.e., Glenberg, 2010), such simulations might also be expected to code for an even broader range of sensory experience.
1.2. Tactile mental simulations
To our knowledge, only one study to date has directly assessed whether people activate tactile mental simulations during reading (Connell & Lynott, 2010). In that study, participants were asked to categorize single words (e.g., fluffy or shiny) according to whether they could be experienced in a target modality (e.g., touch or vision). The authors found that participants showed a tactile disadvantage during language processing: People were less accurate at detecting tactile-related properties than properties related to any of the other four sensory modalities. The authors interpret these findings as modality-specific modulation of attentional control. In other words, the conceptual system relies on perceptual representations to categorize words into concrete modalities. The lower accuracy for the tactile modality suggests that this modality is relatively deprived of endogenous attention and is at a specific disadvantage relative to attention allocated to the other four sensory modalities (see also Turatto et al., 2004).
Other support for a link between conceptual and perceptual processing within the tactile modality comes from neuroimaging work investigating tactile imagery and illusory tactile experiences. First, when participants are asked to imagine being touched on certain areas of the hand, they show largely overlapping neural activations (via fMRI) relative to actual tactile stimulation (Yoo, Freeman, McCarthy, & Jolesz, 2003). Second, other studies show activations in the tactile-relevant areas of the somatosensory cortices while watching another person (Blakemore, Bristow, Bird, Frith, & Ward, 2005) or an object (Keysers et al., 2004) being touched. These findings suggest that the brain may use a visuotactile mirroring mechanism that allows for the mental simulation of touch without any actual tactile input (Ebisch et al., 2008; Gallese, 2005). Third, there is some evidence that people can enact illusory tactile simulations that are both phenomenologically and neurally similar to actual tactile experiences (Johnson, Burton, & Ro, 2006; McKenzie, Poliakoff, Brown, & Lloyd, 2010). The question remains whether people might enact such tactile simulations during reading, and if so, whether they would carry over to performance of a seemingly unrelated tactile task.
1.3. The present studies
To determine whether language can prime tactile representations and, in turn, influence direct tactile perception, we asked participants to read sentences describing tactile or nontactile experiences and then perform a fabric-rating task. Participants read sentences that implied (e.g., Grace tried on a pair of thick corduroy pants while shopping) or did not imply (e.g., Sophie’s friends thought she owned way too many pairs of pants) texture-specific tactile properties. Half of the tactile and nontactile sentences were related to fabric (i.e., describing a cloth), and half unrelated (i.e., describing a nonfabric object). The tactile sentences were divided into thirds, describing either a smooth, medium, or rough texture. After reading a sentence, participants performed an ostensibly unrelated task in which they touched and rated the texture of a presented fabric that ranged from smooth (i.e., silk) to rough (i.e., canvas).
As reviewed above, there is some evidence that language may prime tactile representations. If so, then the tactile properties implied during a reading experience might alter subsequently perceived tactile properties, similar to effects found with visual, auditory, and motoric descriptions. Alternatively, because past research suggests that touch, unlike vision, is a nondominant sensory modality (Posner, Nissen, & Klein, 1976) and that the tactile modality may be deprived of endogenous attention during reading (Turatto et al., 2004), implied tactile content may not induce tactile simulations that would modulate subsequent fabric ratings. If implied tactile properties do influence subsequent tactile experiences, the directionality of any effect is more difficult to hypothesize. Some evidence leads us to expect that readers would impose implied tactile properties on subsequent perceptual experiences; for instance, reading about a silky dress would cause readers to subsequently rate a rough fabric as smoother than they would after reading about a burlap sack. We will refer to this as a congruence effect. This type of relationship would be consistent with previous work suggesting that color perception can be biased toward people’s preconceived notions of a described object’s properties (Hansen, Olkkonen, Walter, & Gegenfurtner, 2006). On the other hand, some postulate that competition for processing resources within a given modality may produce seemingly paradoxical effects of language on direct perception (Kaschak et al., 2006); for instance, if language primes a soft tactile representation, competition for similar processing resources within that single modality could make a rough tactile property relatively easy to perceive (e.g., Tellinghuisen & Nowak, 2003). We will refer to this possibility as an incongruence effect.
Our second set of hypotheses was related to the specificity of any language-driven tactile priming effects; in other words, might direct tactile perception of fabrics be affected by language both related and unrelated to fabrics? Some literature suggests that mental simulations appear to be generally activated with language of both high and low relatedness to the demands imposed by a criterion task (Glenberg & Kaschak, 2002; Masson et al., 2008; Tettamanti et al., 2005). In line with this work, we might expect both fabric-related and fabric-unrelated tactile mental simulations to similarly affect the direct perception of tactile properties. On the other hand, if the tactile modality is generally deprived of attentional resources (i.e., Connell & Lynott, 2010; Turatto et al., 2004), then highly related language and perceptual tasks might be necessary to elicit priming effects.
Below, we first describe two pilot studies aimed at validating and standardizing the tactile properties implied by our stimulus sentences (Pilot Study 1) and the tactile perceptions elicited by our fabrics (Pilot Study 2). We then describe our main experiment, examining the effect of described tactile information on subsequent perceptions of actual tactile properties.
2. Pilot study 1: Sentence development
Our first pilot study was designed to gather ratings of objects described in 54 tactile-related sentences. Our intention was to confirm that our 27 fabric-related and 27 fabric-unrelated tactile sentences effectively represented each of three textures: smooth (18), medium (18), or rough (18). To do so, we asked 10 Tufts University undergraduates (six male, four female, age M = 18.9, SD = 1.37) to read and rate 54 sentences, each designed to describe either a smooth (e.g., silk ribbon), medium (e.g., cotton sheets), or rough (e.g., burlap sack) object. Half of the 54 sentences were related to fabrics (e.g., burlap sack) and the other half were unrelated but involved concrete objects (e.g., sandpaper). For example, a smooth sentence in the fabric-related category read, “Candice tied a long silk ribbon onto each of her wrapped gifts.” A rough sentence in the fabric-unrelated category read, “Karen touched the grainy sandpaper.” Each sentence was presented in the center of a 17” computer monitor for 3 s, immediately after which a scale was presented in the center of the screen and the participant rated “How rough or smooth is the object in the previous sentence?” with the scale endpoints being “roughest” and “smoothest.” The rating scale was adapted from textile research (Sular & Okur, 2007) and was 860 pixels wide, with three anchors: one at each end (labeled roughest, smoothest) and a vertical line at center. Participants made each rating by clicking the computer mouse at a desired position along the line; software automatically recorded the mouse click position in coordinate space along the x-axis. Each participant read and rated the sentences twice in blocks of 54 randomly ordered trials.
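For concreteness, the click-to-rating mapping can be sketched as follows. This is our own minimal illustration, not the authors' software: we assume the “smoothest” anchor sits at the left edge of the scale and that scores are centered at 0, so negative values mean smoother, matching the sign convention of the reported means. The function name is ours.

```python
def click_to_rating(click_x, scale_width=860):
    """Convert a mouse click's x position on the rating scale
    (0 = left, assumed 'smoothest' end; scale_width = right,
    assumed 'roughest' end) into a score centered at 0, so that
    negative values indicate smoother ratings."""
    return click_x - scale_width / 2

# A click at the scale's center vertical line is neutral;
# a click at the left edge is maximally smooth.
print(click_to_rating(430))  # -> 0.0
print(click_to_rating(0))    # -> -430.0
```

Under this assumption, the scale limits of roughly −430 to +430 pixels reported below fall directly out of the centering.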
Sentence ratings varied widely and reached the scale limits (in pixels), from smoothest (−430) to roughest (+426) (overall M = −43.7, SD = 223.4). Results confirmed that participants differentially rated the three texture categories of fabric-related sentences (smooth: M = −307.6, SD = 116.6; medium: M = −129.4, SD = 132.4; rough: M = 126.7, SD = 152.6) in a repeated-measures ANOVA, F(2, 18) = 81.66, p < .01, η2 = .90 (all three paired t tests, p < .01). The same was found for the fabric-unrelated sentences (smooth: M = −257, SD = 119.2; medium: M = −55.4, SD = 174.2; rough: M = 243.9, SD = 152.9), F(2, 18) = 111.67, p < .01, η2 = .93 (all three paired t tests, p < .01). Sample stimuli are provided in Table 1.
Table 1. Sample stimuli

|        | Fabric Related | Fabric Unrelated |
|--------|----------------|------------------|
| Smooth | silk ribbon    | fluffy kitten    |
| Medium | cotton sheets  | stucco wall      |
| Rough  | burlap sack    | grainy sandpaper |

Nontactile examples: reupholster a couch; visit tailor shop.
3. Pilot study 2: Fabric selection
Our second pilot study was designed to gather tactile ratings of 17 fabric samples. Our intention was to determine the nine fabric samples that best represented the smooth (3), medium (3), and rough (3) texture categories, to be used in the main experiment. To do so, we asked 10 Tufts University undergraduates (six male, four female, age M = 19.2, SD = .60) to rate a pool of 17 fabrics, five of which were chosen to potentially represent the smooth (e.g., silk, satin), seven the medium (e.g., cotton, polyester), and five the rough (e.g., canvas, corduroy) tactile categories. Each 6 × 6-cm square fabric swatch was affixed to an individual cardboard backing. Participants sat with their arms inside a box workstation that prevented them from seeing either their own or the experimenter’s hands. The experimenter sat on the opposite side of the workstation and watched a participant’s progress via a cloned computer monitor, presenting each fabric swatch to the participant in succession. The participant could take as much time as necessary to feel the fabric with his or her dominant hand and then provide a rating on the scale described above. Participants responded using a small USB track-pad mouse positioned in the center of the workstation between their hands. At no time could the participant see the fabric. Each participant rated the pool of fabrics three times in blocks of 17 randomly ordered trials.
Fabric ratings varied widely and reached the scale limits (overall M = −46.6, SD = 213.9). On the basis of these mean fabric ratings, we rank ordered the 17 fabrics from smoothest to roughest. We then selected nine, including the three smoothest and three roughest rated fabrics, as well as the median fabric and the fabrics immediately preceding and following it. Analyses of these nine fabrics confirmed that they were rated differently across the three texture categories (smooth: M = −296.4, SD = 141.9; medium: M = −55.1, SD = 104.6; rough: M = 203.7, SD = 194.5) in a repeated-measures ANOVA, F(2, 18) = 25.74, p < .01, η2 = .67 (all three paired t tests, p < .01).
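The selection rule just described can be sketched in a few lines. This is our own illustrative implementation of the rank-order procedure (lower mean rating = smoother, per the sign convention above); the function name and example ratings are ours.

```python
def select_fabrics(mean_ratings):
    """Rank fabrics from smoothest (most negative mean rating) to
    roughest, then keep the 3 smoothest, the median fabric plus its
    two immediate neighbors, and the 3 roughest: 9 fabrics total."""
    ranked = sorted(mean_ratings, key=mean_ratings.get)
    mid = len(ranked) // 2  # index of the median for an odd-length list
    return ranked[:3] + ranked[mid - 1:mid + 2] + ranked[-3:]

# With 17 hypothetical fabrics rated evenly from -400 (smoothest) to +400:
ratings = {f"fabric{i:02d}": -400 + i * 50 for i in range(17)}
print(select_fabrics(ratings))  # 9 fabrics: ranks 1-3, 8-10, and 15-17
```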
4. Main experiment: Tactile simulation
Our main experiment used the pool of 54 tactile and 54 nontactile concrete (see Table 1) sentences and the nine selected fabrics. In this study, participants read either nontactile filler sentences, or sentences describing a smooth, medium, or rough object that was either fabric related or fabric unrelated. After each sentence, they felt and rated a fabric for smoothness/roughness. We hypothesized that if readers mentally simulate tactile experiences by activating sensory representations of described objects, then such simulations should affect subsequent fabric ratings. We expected, based on some limited evidence, that mental simulations evoked during language comprehension would affect subsequent tactile perception by biasing fabric ratings in the direction of the tactile property evoked by the preceding language.
Forty-eight Tufts University undergraduates (29 female, 19 male; age M = 19.6, SD = 1.9) participated for monetary compensation ($20 USD). None of these participants had taken part in either of the pilot studies. Participants visited the laboratory on two separate days, first to provide baseline tactile fabric ratings and second to perform the main portion of the experiment.
All language and fabric materials were those selected through the pilot studies. We also developed 54 nontactile filler sentences in addition to the 54 validated in the pilot study. As with our tactile sentences, these nontactile sentences were either generally related to fabric (27 sentences; e.g., Danny accompanied his mother to pick out new bed linens) or unrelated (27 sentences; e.g., Manny went grocery shopping on Tuesday to buy produce).
The 54 tactile sentences were evenly divided into smooth, medium, and rough categories. Within each set of 27 tactile sentences, the three sentence types (smooth, medium, rough) were pseudorandomly paired with each fabric type (smooth, medium, rough) to ensure an evenly crossed (sentence roughness × fabric roughness) design. The 27 nontactile sentences were treated the same way but without regard to sentence roughness. This process resulted in 108 sentence–fabric pairings. We randomly generated two fixed orders of these 108 pairings to be used in this experiment (orders A & B).
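One way to generate such an evenly crossed pairing is sketched below. This is a hypothetical reconstruction, not the authors' script: it assumes 9 sentences per texture type and 3 fabrics per texture type within one relatedness set, so that each of the 9 (sentence type × fabric type) cells receives exactly 3 pairings (27 trials).

```python
import itertools
import random

def crossed_pairings(sentences_by_type, fabrics_by_type, seed=0):
    """Pair sentences with fabrics so every (sentence type, fabric type)
    cell occurs equally often, then shuffle into one fixed order."""
    rng = random.Random(seed)
    # Shuffled working copies, so assignment of sentences to cells is
    # pseudorandom but reproducible for a given seed.
    pools = {t: rng.sample(s, len(s)) for t, s in sentences_by_type.items()}
    pairings = []
    for s_type, f_type in itertools.product(sorted(sentences_by_type),
                                            sorted(fabrics_by_type)):
        for fabric in fabrics_by_type[f_type]:
            pairings.append((pools[s_type].pop(), fabric))
    rng.shuffle(pairings)  # one fixed presentation order
    return pairings
```

Calling the function twice with different seeds would yield the two fixed orders (A and B) described above.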
A recognition test was developed to ensure that participants were reading for comprehension. This test included 48 tactile-related sentences, 24 of which were originally presented (six from each category) and 24 were modified from their originally presented versions. Of the 24 modified sentences, 12 were modified in terms of their tactile properties (e.g., grainy sandpaper became smooth sandpaper), and 12 on nontactile dimensions (e.g., event order, described location, character name).
Participants visited the laboratory on two separate occasions, each separated by at least 24 h. During the first session, they provided baseline fabric ratings three times in succession, after feeling the nine fabrics presented in one of the two random orders described above. Baseline fabric ratings were provided one at a time using the response scale described in Pilot Study 2. No sentences were presented during the first session.
During the second session, participants sat at the box workstation described in Pilot Study 2 and received instructions to read and memorize presented sentences and to feel and rate each fabric; they were also told that they would be tested on their memory of the sentences upon completion of the experiment. The sentence–fabric pairing order was whichever one was not used during the first session; half of the participants received fabric pairing order A and B during the first and second sessions, respectively, and half received the opposite order. Each sentence was presented in the center of the computer monitor with a presentation time corresponding to 375 ms/word. While the participant read a sentence, the experimenter directed a fabric toward the participant’s hands; when the participant finished reading, he or she felt the fabric with his or her fingertips and rated its tactile properties using the scale described in Pilot Study 2. The participant could take as much time as needed to rate the fabric; response times averaged 4.76 s (SD = 3.35) and exploratory analyses revealed they did not vary as a function of sentence or fabric type (all p’s > .05). This read-then-rate succession repeated 108 times, and then the memory test began.
The memory test involved responding old/new to 48 sentences; participants were instructed to respond old if they recognized the sentence as previously presented, and new if they did not. As described above, 24 of the trials included sentences matching those originally presented, and 24 included sentences modified from their original version. Of the 24 modified sentences, 12 were modified in terms of tactile qualities and 12 on other nontactile dimensions. Upon completion of the memory test, participants were compensated and thanked for their time.
Overall memory test performance was moderately high (M = .80, SD = .09), confirming that participants read for comprehension.
Participants showed overall high consistency in their baseline ratings of each individual fabric, with mean Pearson’s r = .56 (SD = .14, all p’s < .01). Fabric-rating data were analyzed by examining how individual participants’ ratings differed from their own first-session baselines: We calculated difference scores comparing each session-2 fabric rating to the averaged baseline rating for that fabric. In this manner, we were able to assess, at the individual level, how sentences implying varied tactile properties affected perceptions of tactile properties.
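The difference-score computation can be sketched as follows (a minimal illustration with our own data layout and names; negative differences mean the fabric was rated smoother than at baseline, positive rougher, following the sign convention used throughout):

```python
def rating_difference_scores(session2, baselines):
    """Subtract each participant's mean first-session baseline rating
    for a fabric from that participant's session-2 rating of it.

    session2:  list of (participant, fabric, rating) trials.
    baselines: dict mapping (participant, fabric) -> list of the
               three baseline ratings for that fabric.
    Returns (participant, fabric, difference) tuples.
    """
    diffs = []
    for pid, fabric, rating in session2:
        base = baselines[(pid, fabric)]
        diffs.append((pid, fabric, rating - sum(base) / len(base)))
    return diffs

# A participant whose baseline for silk averaged -300 and who rated it
# -350 after a sentence shows a -50 (smoother-than-baseline) shift.
print(rating_difference_scores([(1, "silk", -350)],
                               {(1, "silk"): [-300, -310, -290]}))
```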
6.1. Fabric-rating data
Two separate repeated-measures analyses of variance (ANOVAs) were conducted: one examining ratings following nontactile sentences and one examining ratings following tactile sentences. Data for all conditions are detailed in Table 2. Analyses by items are denoted as F2.
Table 2. Fabric-rating difference scores as a function of Sentence Type and Fabric Type.
6.1.1. Nontactile sentences
First, we examined ratings following nontactile sentences as a function of Relatedness (2: fabric related, fabric unrelated) and Fabric Type (3: smooth, medium, rough). This analysis revealed only a main effect of Fabric Type, F(2, 94) = 20.23, p < .01, η2 = .26 (F2 = 6.82, p < .05; all other p’s > .05). Overall, one-sample t tests (comparing rating difference scores to zero) revealed that participants’ ratings of smooth fabrics were generally smoother during the second session, t(47) = 2.59, p = .01, d = .37, and medium fabrics were generally rougher during the second session, t(47) = 4.89, p < .01, d = .71 (rough fabrics did not differ from baseline, p > .05).
6.1.2. Tactile sentences
Second, we examined ratings following tactile sentences as a function of Sentence Type (3: smooth, medium, rough), Relatedness (2: fabric-related, fabric-unrelated), and Fabric Type (3: smooth, medium, rough). This analysis revealed main effects of Sentence Type, F(2, 94) = 9.66, p < .01, η2 = .01 (F2 = 4.69, p < .05), and Fabric Type, F(2, 94) = 24.8, p < .01, η2 = .20 (F2 = 5.89, p < .05). These effects were qualified by a Sentence Type × Relatedness × Fabric Type interaction, F(4, 188) = 4.23, p < .01, η2 < .01 (F2 = 1.69, p = .22).
To further examine this three-way interaction, we conducted two separate ANOVAs examining Sentence Type (3: smooth, medium, rough) and Fabric Type (3: smooth, medium, rough), one each for the fabric-unrelated and the fabric-related conditions.
6.1.3. Tactile, fabric unrelated
Analysis demonstrated only main effects of Sentence Type, F(2, 94) = 7.19, p < .01, η2 = .02 (F2 = 2.84, p = .09), and Fabric Type, F(2, 94) = 20.14, p < .01, η2 = .21 (F2 = 4.49, p = .06), but no interaction (p > .05). As depicted in Fig. 1, participants overall provided rougher ratings following rough sentences and smoother ratings following smooth sentences, with ratings following medium sentences patterning between these two conditions. Paired t tests using a Bonferroni-corrected threshold (α = .017) compared the three sentence types within each fabric condition. Within the smooth-fabric condition, ratings were rougher following rough relative to smooth sentences, t(47) = 2.55, p = .01, d = .37 (all other p’s > .05). The same effect was found within the medium-fabric condition, t(47) = 3.19, p < .01, d = .46; although results patterned identically in the rough-fabric condition, differences between sentence types did not reach significance (p’s > .05).
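The corrected threshold used above is simply the familywise α divided by the number of pairwise comparisons; a one-line sketch (function name ours):

```python
def bonferroni_alpha(family_alpha=0.05, n_comparisons=3):
    """Per-comparison significance threshold under a Bonferroni
    correction for a family of n_comparisons tests."""
    return family_alpha / n_comparisons

# Three pairwise sentence-type comparisons within a fabric condition:
print(round(bonferroni_alpha(), 3))  # -> 0.017
```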
6.1.4. Tactile, fabric related
Analysis demonstrated main effects of Sentence Type, F(2, 94) = 3.34, p < .05, η2 < .01 (F2 = 2.18, p = .16), and Fabric Type, F(2, 94) = 21.67, p < .01, η2 = .20 (F2 = 6.08, p < .05). These factors also interacted, F(2, 94) = 4.05, p < .01, η2 = .02 (F2 = 1.91, p = .17), as depicted in Fig. 2. In the smooth-fabric condition, participants gave smoother ratings following smooth sentences relative to medium, t(47) = 3.95, p < .01, d = .57, and rough, t(47) = 4.23, p < .01, d = .61, sentences. In the medium-fabric condition, there were no differences as a function of Sentence Type (all p’s > .05). In the rough-fabric condition, participants gave rougher ratings following rough sentences relative to medium, t(47) = 2.36, p = .02, d = .34, and smooth, t(47) = 2.43, p < .017, d = .35, sentences.
7. Main experiment discussion
Our main experiment was designed to test whether the comprehension of sentences implying nontactile or tactile properties that were either related or unrelated to fabric would affect the direct perception of fabric textures. As would be expected, reading nontactile sentences did not affect fabric ratings, regardless of whether the sentences were related or unrelated to fabrics. After reading sentences conveying tactile information, however, a different pattern emerged. When the tactile sentences described objects not directly related to fabric, there was an overall congruence effect: Reading smooth, medium, or rough sentences promoted fabric ratings that were congruent with these implied properties, regardless of the type of fabric. In general, all fabric ratings became smoother after reading a sentence implying a smooth tactile property, and rougher after reading a sentence implying a rough tactile property. These effects were generally consistent across participants, with the majority (38/48) showing overall rougher fabric ratings following rough relative to smooth sentences.
However, when the tactile sentences described a fabric, more specific congruence effects emerged: Reading smooth, medium, or rough sentences promoted congruent fabric ratings only under certain conditions. Specifically, reading smooth sentences promoted smoother fabric ratings only when touching a smooth fabric; similarly, reading rough sentences promoted rougher fabric ratings only when touching a rough fabric. These effects were generally consistent across participants, with only 4 of the 48 participants failing to show either (1) rougher fabric ratings following rough relative to smooth sentences in the rough-fabric condition or (2) smoother fabric ratings following smooth relative to rough sentences in the smooth-fabric condition.
As discussed below, results of our main experiment speak to the specificity of interactions between language comprehension and direct tactile perception.
8. General discussion
The present studies assessed whether sentences implying tactile properties would prime tactile representations that affect direct perception, and whether any such effect would be modulated by the relatedness between the described and directly perceived stimuli. Overall, results demonstrate that comprehending implied tactile properties during reading can alter real-world tactile perception. This is particularly compelling given that touch is a nondominant sensory modality (Posner et al., 1976) and is sometimes thought to be deprived of endogenous attention (Turatto et al., 2004). These results extend research demonstrating that visual, action-based, and auditory simulation during language comprehension can alter the way people directly perceive the world (Brunyé et al., 2010; Glenberg & Kaschak, 2002; Glenberg et al., 2008; Stanfield & Zwaan, 2001; Yaxley & Zwaan, 2007). In this case, reading appears to prime tactile mental representations, and these activations influence subsequent direct perception. Notably, the scope of these influences was modulated by the relatedness of the language stimuli to the rating experience.
In general, when examining sentences describing tactile properties unrelated to fabrics, implied tactile properties elicited tactile ratings that were more broadly biased in congruence with the described property. This finding suggests that activating category-general tactile representations biases subsequent tactile perception. This result lends support to the Perceptual Symbol Systems Theory (Barsalou, 1999), which proposes that conceptual representations integrate the activation of a wide range of perceptual symbols involving the full range of human sensory capabilities. In other words, readers perform multimodal mental simulations that involve the tacit reactivation of perceptual experiences and the reinstatement of neural patterns similar to those activated during direct perception and action (i.e., experiential traces; Zwaan, 2004). Broadly speaking, Barsalou’s Perceptual Symbol Systems Theory thus predicts that readers would spontaneously activate tactile simulations that conceptually represent the act of touching what is being described. The finding that reading about tactile properties affects direct tactile perception might be similar to enacting illusory tactile simulations (e.g., McKenzie et al., 2010). The body of literature examining illusory tactile experiences suggests that somatic experience is shaped by not only input from sensory modalities and events on and within the body but also by cognitive factors. In some cases, simulated tactile experiences can be so rich as to drive illusions such as “phantom limbs” or “rubber hands” (e.g., Botvinick & Cohen, 1998; Ramachandran & Hirstein, 1998). 
Our results also support earlier work demonstrating that activating perceptual properties can bias direct perception in the direction of the activated property; for instance, presenting an object word can activate particular color and shape properties that bias subsequent direct color perception in congruence with the implied color (Hansen et al., 2006; Richter & Zwaan, 2010). In a more general sense, representing concepts in a grounded manner facilitates direct interaction with what is being described; in other words, grounding meaning in perception and action is in the service of doing (Prinz, 2005). The present work provides some support for this stance.
In contrast, sentences describing tactile properties directly related to fabrics elicited tactile ratings that interacted with the implied texture of the sentence. Only when there was a direct overlap between implied and actual tactile properties were ratings biased in congruence with the implied tactile property. Specifically, only smooth fabrics were rated smoother following a smooth fabric-related sentence, and only rough fabrics rougher following a rough fabric-related sentence. On the other hand, reading a smooth fabric-related sentence did not influence subsequent ratings of nonsmooth (i.e., medium and rough) fabrics, and vice versa. It could be the case that when there is a large degree of overlap between implied and actual experience, simulations become highly specified. Qualitatively specific simulations elicited during reading may only influence tactile perception under situations with high congruence between simulated and actual properties. The interactive effect may reflect findings that property-specific areas of the somatosensory and primary motor cortices are highly tuned to the nature and properties of tactile experiences (Servos, Lederman, Wilson, & Gati, 2001; Talati, Valero-Cuevas, & Hirsch, 2005). Moreover, these qualitatively specific, task-related simulations (e.g., touching a silk ribbon) might only activate a subset of specifically related brain areas, whereas more general, task-unrelated simulations (e.g., touching a fluffy kitten) might activate a broader range of brain areas not specific to fabric-related experiences. With regard to this experiment, the repeated touching of fabrics may prime certain cortical regions to instantiate somewhat specific simulations while reading highly related sentences, and in turn, these precise simulations might only influence tactile perception during a relatively narrow range of experiences.
By contrast, reading relatively task-unrelated sentences may elicit more general tactile simulations, and subsequently influence ratings for a wider range of experiences.
The differential specificity of tactile simulations evoked during language comprehension speaks to theoretical discussions regarding the automaticity versus flexibility of activating perceptual and motoric representations during reading. At least two competing theoretical perspectives can be distinguished. First, some posit that language comprehension involves an automatic activation of the perceptual and motor information that constitutes semantic representations (Boulenger, Hauk, & Pulvermüller, 2009; Pulvermüller, 2005). In this view, sensorimotor areas of the brain are responsible for storing meaning-based representations. Evidence for this strong embodied perspective remains sparse and equivocal (e.g., Dove, 2009, 2011; Kranjec & Chatterjee, 2010; Pecher & Boot, 2011). Second, an opposing position states that words flexibly activate meanings in a context-dependent manner (van Dam, van Dijk, Bekkering, & Rueschemeyer, 2011). For instance, there is conflicting evidence concerning whether action verbs presented in idiomatic contexts activate motor-related areas of the brain, with some supporting such a view (Boulenger et al., 2009) and others refuting it (Raposo, Moss, Stamatakis, & Tyler, 2009). These and other (e.g., Rueschemeyer, Brass, & Friederici, 2007) findings suggest that contextual meaning may flexibly define the nature and extent of perceptual and motoric activations during reading. In the present work, the context-dependent generality and specificity of tactile-related activations during language comprehension seem to support a flexible embodied perspective.
The involvement of simulation in linguistic comprehension may be partially dependent on recent experiences with a given modality, in that a recent experience emphasizing one modality may encourage subsequent simulation in that modality when reading about related (and perhaps even normally unrelated) concepts. For instance, people might be more likely to simulate gustatory/olfactory concepts immediately after eating a meal or to simulate auditory concepts after listening to a song. Because the present experiment contained a task involving the tactile modality, it is possible that tactile simulations would not occur when reading text implying tactile properties in a different context. Likewise, depriving the visual modality and thus promoting reliance on the tactile modality may have prompted participants to activate tactile simulations when reading to a greater extent than they would otherwise. Yet recent research indicates that brain areas involved in specific sensorimotor processes (e.g., human action) are modulated by implicit simulation when reading related words while performing a basic lexical decision task (e.g., Willems, Hagoort, & Casasanto, 2010), indicating that a modality-related task is not necessary to elicit simulation in that modality. Thus, it seems unlikely that simulation of tactile words is solely dependent on the present experiment's task demands. However, it remains a possibility that the fabric-rating task promoted a more specialized simulation of fabric-related tactile properties, and outside of this context, mental simulation of tactile properties while reading may be more generalized.
Furthermore, our results with fabric-related sentences may also reflect awareness of task demands among participants; during tactile fabric-related sentences, participants may have been consciously aware of the overlap between the reading and fabric-rating tasks (see Machery, 2007). This awareness may have led participants to explicitly recognize the correspondence between described and perceived fabrics. We find this explanation somewhat unlikely, however, given that debriefings revealed that participants largely suspected that we were interested in how feeling fabrics affects memory for sentences (rather than how the sentences affected fabric perception). We also note that related research effectively separating the reading and criterion tasks tends to show evidence for embodied language comprehension even under these reduced task demands (e.g., Ditman et al., 2010; Van Dantzig, Pecher, Zeelenberg, & Barsalou, 2008). Furthermore, a task-demands account makes no clear predictions regarding the interactive effects of relatedness on activating tactile properties during language comprehension. Thus, we find it unlikely that some transparency in our design elicited performance changes that can account for our results.
Future research might examine several factors that could modulate the activation of tactile representations during language comprehension, including extending the study–test interval, varying the stimulus modality (e.g., aural vs. visual), and minimizing task demands by administering the rating task on only a limited number of trials.
8.2. Concluding remarks
To our knowledge, we provide the first results demonstrating that readers may mentally simulate described tactile properties, adding to a growing body of literature examining the scope and nature of mental simulations during reading. Overall, we support and extend the notion that language comprehension and sensorimotor experiences are closely related, and that mentally simulating described information likely involves the entire range of human sensory capabilities. We also support the notion that there is some apparent flexibility in the activation of perceptual representations during reading; the specificity of mental simulations can be modulated by the relatedness of language stimuli to a direct-perception criterion task. What functional purpose might simulating implied perceptual properties of described objects serve? Some propose that simulation may be necessary to draw abstract associations between concepts, increase elaboration about described scenarios, augment memory, promote inference generation, and prepare readers for action (e.g., Bergen & Chang, 2005; Fincher-Kiefer, 2001).
Acknowledgments
This work was completed as partial fulfillment of EKW’s undergraduate honors thesis requirements. TD was supported by a NARSAD Young Investigator Award (with the Sidney Baer Trust).