Thinking ahead: The role and roots of prediction in language comprehension


  • Kara D. Federmeier

    1. Department of Psychology, Program in Neuroscience, and the Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana-Champaign, Champaign, Illinois, USA

  • This article is based on an address presented upon receipt of the Award for Distinguished Early Career Contributions to Psychophysiology at the 46th annual meeting of the Society for Psychophysiological Research, Vancouver, BC, Canada, October 2006.

  • Thanks to Marta Kutas for many years of collaboration and mentorship and to the members of the Cognition and Brain Laboratory at the University of Illinois (especially Karen Evans, Padmapriya Kandhadai, Sarah Laszlo, Chia-lin Lee, Aaron Meyer, and Edward Wlotko) for hard work and helpful feedback. Support from NIA grant AG026308 is gratefully acknowledged.

Address reprint requests to: Kara Federmeier, Department of Psychology, University of Illinois, Urbana-Champaign, 603 E. Daniel, Champaign, IL 61820, USA.


Reviewed are studies using event-related potentials to examine when and how sentence context information is used during language comprehension. Results suggest that, when it can, the brain uses context to predict features of likely upcoming items. However, although prediction seems important for comprehension, it also appears susceptible to age-related deterioration and can be associated with processing costs. The brain may address this trade-off by employing multiple processing strategies, distributed across the two cerebral hemispheres. In particular, left hemisphere language processing seems to be oriented toward prediction and the use of top-down cues, whereas right hemisphere comprehension is more bottom-up, biased toward the veridical maintenance of information. Such asymmetries may arise, in turn, because language comprehension mechanisms are integrated with language production mechanisms only in the left hemisphere (the PARLO framework).

The brain is an amazing pattern recognition device, and this is perhaps nowhere more apparent than in the domain of language comprehension. In only hundreds of milliseconds, the brain of an experienced language user can analyze a complex, often ambiguous perceptual signal—that is, a spoken, written, or signed word—and link it to meaning. In doing so, the brain is mapping between that stimulus and information stored in long-term memory that has been derived from multiple sensory modalities, across a variety of contexts, over the lifespan. Furthermore, as the brain derives the meaning of a word, it also uses that information to build and update its representation of the larger language context, for example, the sentence, discourse, or text that is currently unfolding. To date, no artificial or biological device other than the human brain has been found or created that is even capable of this feat, let alone able to do it with the speed, accuracy, and seeming effortlessness that adult humans routinely exhibit for comprehension. An understanding of the functional and neurobiological processes that make this quintessentially human ability possible lies at the very heart of an understanding of human cognition.

How does the human brain so effectively move from perception to meaning? One clear part of the answer seems to be that the brain supplements the information available from the bottom-up sensory signal with information from the linguistic and communicative context in which the word is experienced. A substantial body of literature attests to the impact of context information on word processing. Naming and lexical decision (word/nonword judgment) times are decreased and perception enhanced for words in supportive contexts (Fischler & Bloom, 1979; Hess, Foss, & Carroll, 1995; Jordan & Thomas, 2002; Kleiman, 1980; McClelland & O'Regan, 1981; Schuberth, Spoehr, & Lane, 1981; Stanovich & West, 1979). During natural reading, such words are more likely to be skipped and less likely to be regressed to, and, when fixated, are viewed for less time (Ehrlich & Rayner, 1981; McDonald & Shillcock, 2003; Morris, 1994). Context information also changes the brain's scalp-recorded electrophysiological response to words, in the form of amplitude reductions of an event-related potential (ERP) component elicited by words and other meaningful stimuli: the N400.

The N400 is a negative-going voltage deflection peaking around 400 ms after stimulus onset (Kutas & Hillyard, 1980). All potentially meaningful items elicit this response, regardless of modality or task (for a review, see Kutas & Federmeier, 2001). N400 amplitude is reduced by factors related to the ease of accessing information from memory, such as word frequency and repetition, and by the presence of supportive context information. Indeed, N400 amplitudes show a strong, inverse correlation with the predictability of the eliciting word within a given context (e.g., word list, sentence, discourse; Bentin et al., 1985; Kutas & Hillyard, 1984; van Berkum, Hagoort, & Brown, 1999). In her own “Distinguished Early Career Contribution to Psychophysiology Award Address,” Cyma Van Petten reviewed the growing literature demonstrating the influence of context on factors such as lexical ambiguity resolution, semantic access, and even on the processing of basic lexical parameters, such as word frequency (Van Petten, 1995). Work since that time has continued to solidify and extend those results, such that the question now has become not whether context affects word processing, but when and how it does so.

Integration and Prediction

Consider two accounts of how semantic context might influence the processing of a particular word. In many prominent comprehension models/frameworks (e.g., Forster, 1989; Marslen-Wilson, 1989; Norris, 1994), early stages of word recognition are taken to proceed in a strictly bottom-up fashion or with bottom-up priority—that is, based wholly or primarily on the available sensory input. On such accounts, it is only after word recognition, or after a set of lexical candidates has been initially identified, that context can exert its effects, for example, by easing the integration of the lexical and semantic information that has been accessed during word recognition with the on-going sentence- and/or discourse-level representation. In contrast to such “integrative” accounts of context effects, other accounts emphasize the early and continual use of all available information sources during word processing (e.g., McClelland & Elman, 1986; McClelland & Rumelhart, 1981). Importantly, such accounts allow context information to be used in an anticipatory or “predictive” manner. Thus, for example, processing a sentence may lead to the activation of semantic, lexical, or even perceptual features of items that are likely to appear; the ease of processing a word, even prior to lexical activation or word selection, will then be influenced by the extent to which its properties have been preactivated by the available context information.

To begin to adjudicate between these accounts, we recorded ERPs while participants read (Federmeier & Kutas, 1999b) or listened to (Federmeier, McLennan, De Ochoa, & Kutas, 2002) pairs of sentences leading to an expectation for a particular item in a particular semantic category, for example:

They wanted to make the hotel look more like a tropical resort.

So, along the driveway, they planted rows of …

Sentence pairs were completed with one of three types of endings: (1) the expected item, as established by cloze probability norming (Taylor, 1953; “expected exemplars,” e.g., palms), (2) an unexpected item from the expected category (“within category violations,” e.g., pines), or (3) an unexpected item from a different semantic category (“between category violations,” e.g., tulips). Norming studies established that both violation types were unexpected and implausible completions for the sentence pairs. Thus, on an integration account of context effects, within- and between-category violations should be equivalently difficult to process, as both contain features that do not cohere well with the context. For example, unlike palms, neither pines nor tulips is associated with “tropical.” Critically, however, there is more out-of-context featural overlap between pines and the expected word palms than between tulips and palms. On a predictive account of context effects, this similarity between a given word and a contextually based expectation can affect the ease with which that word is processed (see also Schwanenflugel & LaCount, 1988). For example, a prediction for palms might lead to the preactivation of features such as “tree,” “evergreen,” and “tropical.” On this account, then, within-category violations (pines) should be easier to process (as indexed by smaller N400 amplitudes) than between-category violations because of their greater similarity to the predicted—although never actually presented—sentence completion.
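Cloze probability, the measure underlying this norming, is simply the proportion of norming participants who complete a sentence frame with a given word. A minimal sketch of the computation follows; the response counts are invented for illustration, not taken from the actual norming data:

```python
from collections import Counter

def cloze_probabilities(completions):
    """Return the proportion of norming participants who produced
    each completion for a single sentence frame."""
    counts = Counter(word.lower() for word in completions)
    total = len(completions)
    return {word: n / total for word, n in counts.items()}

# Hypothetical norming responses for "...they planted rows of ___"
responses = ["palms"] * 74 + ["pines"] * 3 + ["flowers"] * 13 + ["hedges"] * 10
probs = cloze_probabilities(responses)
print(probs["palms"])  # 0.74: high-cloze expected exemplar
print(probs["pines"])  # 0.03: low-cloze within-category item
```

A word's cloze probability is thus a direct, sample-based estimate of its predictability in that context, which is why it serves as the benchmark against which N400 amplitudes are compared.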

The top part of Figure 1 shows the pattern of results (for auditory presentation; the pattern for visual presentation was the same). Consistent with their higher cloze probabilities, expected exemplars elicited smaller N400s than either violation type. Critically, despite equivalent cloze probabilities, within-category violations also elicited smaller N400s than the between-category violations, in line with the prediction-based account of context effects. The evidence for prediction is strengthened further by considering the impact of contextual constraint on the N400 pattern (as shown in the bottom part of Figure 1). Strongly constraining sentences led to a stronger expectation (as indexed by cloze probability) for the expected completions than did more weakly constraining sentences. Correspondingly, norming data showed that both violation types were rated as less expected and less plausible in the strongly than in the weakly constraining contexts. Yet, facilitation (amount of N400 amplitude reduction) for the within-category violations was actually greater in the strongly constraining sentences; that is, the pattern went counter to plausibility. In other words, N400 amplitude to the within-category violations was not a monotonic function of these items' bottom-up fit to the sentence contexts, as would be expected on an integration account. Instead, facilitation was a function of both the featural similarity between these items and the words most expected in the contexts and the strength of the prediction for those expected (but not presented) words. Such data strongly suggest that the processing of context information leads to the preactivation of semantic features associated with likely upcoming words/concepts.

Figure 1.

 Grand average ERPs (N=21 young adults) shown at the middle central electrode site (Cz). Negative is plotted up in this and all subsequent plots. At the top is plotted the overall response to auditory sentence-final words that were either (1) most expected in the context (expected exemplars; solid line), (2) unexpected in the context but from the same semantic category as the expected completion (within-category violations; dashed line), or (3) unexpected and from a different semantic category (between-category violations; dotted line). At the bottom is plotted the response to the same three ending types in strongly (left) and weakly (right) constrained sentence contexts. Within-category violations were facilitated relative to between-category violations, especially when presented in strongly constrained contexts, despite their lower plausibility in those sentences. Data are from Federmeier et al. (2002).

Further evidence for prediction in sentence processing has come from a set of studies showing that readers' or listeners' ERPs to articles or adjectives differ according to whether they match or mismatch the gender of an expected upcoming noun (written, spoken, or in picture form; Van Berkum, Brown, Zwitserlood, Kooijman, & Hagoort, 2005; Wicha, Bates, Moreno, & Kutas, 2003; Wicha, Moreno, & Kutas, 2003, 2004). Thus, prediction extends not only to semantic/conceptual but also to morpho-syntactic features of likely forthcoming words. To further examine whether prediction extends to individual lexical items, DeLong, Urbach, and Kutas (2005) took advantage of the fact that English indefinite articles differ as a function of whether the nouns they modify begin with a consonant (in which case “a” is used) or a vowel (in which case “an” is used). They found larger N400-like responses to indefinite articles that mismatched the expected upcoming noun (e.g., “The day was breezy, so the boy went outside to fly an …” where the word “kite” is most expected). In fact, the size of the N400 to the article was graded by the expected (but as yet unseen) noun's cloze probability. Thus the language comprehension system seems to be able to make detailed, multifaceted predictions about fairly specific aspects of upcoming words.

In sum, a number of electrophysiological studies, as well as some eye-tracking studies (e.g., Altmann & Kamide, 1999; Kamide, Altmann, & Haywood, 2003), have now demonstrated that context information can be used to anticipate features of likely upcoming words, at least when young adults are processing language under conditions in which contextual support is fairly strong. In fact, these might be the ideal circumstances in which to observe predictive processing. A priori arguments against prediction in language comprehension (e.g., Jackendoff, 2002; Marslen-Wilson & Welsh, 1978) have pointed to the fact that context information may often be insufficient to narrow down the possible inputs further than broad syntactic categories (e.g., that a noun is likely to follow shortly after a sentence-initial “the”), raising the question of whether anticipatory mechanisms would be an effective use of limited processing resources—although, of course, even broad constraints could arguably be helpful for handling rapid, complex input in a timely fashion. Furthermore, it has been suggested that prediction is likely to be error prone and, again, revision of errors could be resource intensive.

In fact, under some circumstances, prediction-related costs do seem to be evident in the electrophysiological response. Federmeier et al. (2007) recorded ERPs as participants read weakly and strongly constraining sentence contexts; the latter context type was expected to lead to strong predictions for a particular sentence completion. Both types of sentences ended with either the best completion for that context, as determined by cloze probability norming, or with a plausible but unexpected ending that was not semantically related to the best completion. For example:

Strongly constraining: The child was born with a rare disease/gift.

Weakly constraining: Mary went into her room to look at her clothes/gift.

The same unexpected words were used in both strongly and weakly constraining contexts, and their cloze probabilities were carefully matched. Thus, on an integrative account, no constraint-based difference would be expected for the processing of the unexpected words. However, if the strongly constraining contexts lead to a prediction for the best completion, then the presentation of an unexpected word in those contexts might elicit activity associated with the suppression and/or revision of that prediction.

Two different effects were evident in the results. N400 responses to the four word types were graded by cloze probability: smallest to the best completions in the strongly constraining contexts, intermediate to the best completions in the weakly constraining contexts (whose cloze probability was, by definition, lower), and largest (and identical across constraint) to the unexpected completions. Thus, the N400 seemed to index the processing benefit provided by the context, which, in this case, was identical for the two types of unexpected completions, because neither shared features with the best completion. However, a very different pattern emerged later in time, in the form of a slow, frontally distributed positivity that was selectively enhanced in response to unexpected words embedded in strongly constraining contexts. Though the precise cognitive and neurobiological nature of this response is not yet known, similar ERPs have been observed in other studies examining the processing of unexpected words (Coulson & Wu, 2005; Moreno, Federmeier, & Kutas, 2002). Thus, this response may be associated with processes involved in inhibiting and/or revising a strong prediction when unexpected input is encountered.

Given the theoretical and empirical indications that prediction may, at least in some cases, entail additional resources or costs, it becomes important to ask whether predictive processing occurs under all circumstances. The following sections of this review discuss experiments that have examined the scope of prediction in language comprehension by considering, first, the consequences for processing that are entailed by the kind of resource changes that accompany normal aging and, second, differences in the language comprehension mechanisms instantiated in the two cerebral hemispheres.

Aging Effects

Normal aging causes widespread neural changes, including grey and white matter atrophy and neurochemical and blood flow reductions or alterations, many of which could be expected to impact cognitive processing (for a review, see Cabeza, Nyberg, & Park, 2004). Indeed, aging is associated with change along many dimensions of cognitive functioning, including attention, memory, and executive control. However, a number of language functions stand out for their relative resistance to age-related decline (e.g., Park et al., 2002). Older adults seem to retain, and sometimes even increase, their store of word-related knowledge (Salthouse, 1993), and the organization of this information, in both off-line and online assessments, seems to remain relatively unchanged across the adult years (Burke & Peters, 1986; Burke, White, & Diaz, 1987; Laver & Burke, 1993; Lovelace & Cooley, 1982; Scialfa & Margolis, 1986). Furthermore, the presence of lexical or sentence-level context information shapes word processing in older adults as in younger adults (Balota & Duchek, 1988; Burke & Harrold, 1993; Hopkins, Kellas, & Paul, 1995; Wingfield, Alexander, & Cavigelli, 1994), and the pattern of contextual facilitation on behavioral language measures can be quite similar across age groups, aside from main effects of factors such as response speed (e.g., Stine-Morrow, Miller, & Nevin, 1999).

Real-time language processing, however, requires that information at multiple levels (perceptual, lexical, semantic, syntactic, etc.) be rapidly accessed and integrated, and normal aging does seem to affect the efficacy with which this can be accomplished. At the most basic level, sensory thresholds increase with age, such that even the initial analysis of the language stream may be delayed and/or “noisier” for older adults (Kline & Schieber, 1985; Olsho, Harkins, & Lenhardt, 1985). A few studies suggest that word recognition may also be slowed or become less accurate with age (Balota & Duchek, 1988; Bowles & Poon, 1985), though results on this point conflict (Bowles & Poon, 1981; Stern, Prather, Swinney, & Zurif, 1991). Perhaps related to these changes, some studies find that older adults' word processing seems to rely more on context information, particularly when input is speeded or degraded or when processing resources are taxed (Laver & Burke, 1993; Madden, 1988; Myerson, Ferraro, Hale, & Lima, 1992; Stine & Wingfield, 1994; Stine-Morrow, Loveless, & Soederberg, 1996). Indeed, several lines of research suggest that increased cognitive loads (of various kinds) may often disproportionately affect older adults' language comprehension. For example, links have been drawn between aging effects on language and reductions in working memory capacity (Gunter, Jackson, & Mulder, 1995; Kemper, 1986; Kemtes & Kemper, 1997; Kynette & Kemper, 1986; Light & Albertson, 1993; Light & Capps, 1986) as well as changes in the ability to inhibit competing and/or irrelevant information (Hamm & Hasher, 1992; Hartman & Hasher, 1991; Hasher & Zacks, 1988; Phillips & Lesperance, 2003).

However, even under relatively non-taxing language processing circumstances, when older and younger adults' language comprehension behavior may be largely or entirely the same, electrophysiological measures consistently reveal age-related differences in the pattern of response to words. N400 responses have been found to decrease in amplitude and increase in latency with increasing age for both visual (Gunter, Jackson, & Mulder, 1992) and auditory (Woodward, Ford, & Hammett, 1993) language processing. In a systematic study of N400 responses across a range of six decades of age (from the 20s through the 70s), Kutas and Iragui (1998) found a linear decrease in N400 amplitude (just under 0.1 μV per year) and a linear increase in N400 peak latency (between 1 and 2 ms per year) with increasing age. Supportive contextual information seems to be able to mitigate the age-related N400 delays in the auditory modality (e.g., Federmeier et al., 2002), though not the amplitude changes. In addition to these changes on the N400 component itself, there are also changes in the size or timing of effects of language variables on the N400, and, critically, these effects are not uniform across language processing stages. For example, we have found that normal aging leaves the appreciation of lexical relatedness (i.e., facilitation for processing the word “shoots” after previously seeing “gun”) intact in terms of the size, timing, and distribution of the electrophysiological response (Federmeier, Van Petten, Schwartz, & Kutas, 2003; though see Gunter, Jackson, & Mulder, 1998, for a case in which effects of semantic primes on the N400 were delayed for older adults). However, in the same sentences, effects of message-level context information (i.e., facilitation for “rushed” in the congruous sentence, “The mill worker caught his hand in a piece of machinery and was rushed to the hospital,” as compared with the response in an anomalous sentence such as, “The young shoes took their promotion in a discussion of machinery and were rushed to the aliens.”) were delayed by more than 200 ms in the older adult group (Federmeier et al., 2003).
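The linear rates reported by Kutas and Iragui (1998) can be made concrete with a back-of-the-envelope projection. The slopes below come from the text (~0.1 μV amplitude reduction and 1–2 ms latency increase per year); the baseline amplitude and latency values are hypothetical, chosen only to illustrate the arithmetic:

```python
def n400_projection(age, baseline_age=20.0,
                    baseline_amp_uv=-8.0, baseline_lat_ms=400.0,
                    amp_slope_uv_per_yr=0.1, lat_slope_ms_per_yr=1.5):
    """Project N400 amplitude (uV; negative-going, so adding the slope
    shrinks it toward zero) and peak latency (ms) at a given age, using
    the linear trends reported by Kutas & Iragui (1998). Baseline values
    are invented for illustration."""
    years = age - baseline_age
    amplitude = baseline_amp_uv + amp_slope_uv_per_yr * years
    latency = baseline_lat_ms + lat_slope_ms_per_yr * years
    return amplitude, latency

amp70, lat70 = n400_projection(70)
# Over 50 years, amplitude shrinks by ~5 uV and the peak arrives ~75 ms later.
```

On these assumptions, a 70-year-old's N400 would be roughly 5 μV smaller and about 75 ms later than a 20-year-old's, which conveys the cumulative size of per-year changes that are individually tiny.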

Older adults' relative difficulty in making use of message-level information seems to arise specifically in how effectively they can take advantage of the richer cues available in strongly constraining contexts. In a study comparing responses to identical words embedded in strongly constraining (e.g., “The cold drink was served with a slice of lemon.”) and weakly constraining (e.g., “The only food left in the barren refrigerator was a moldy lemon.”) sentences, we found no significant age-related differences in the N400 response to the plausible but weakly constrained sentence endings (Federmeier & Kutas, 2005). Responses to the strongly constrained endings, however, were larger (and, thus, less facilitated) for older adults, and N400 effects of constraint (response differences between words in weakly and strongly constrained sentences) were delayed by more than 100 ms. These changes were correlated with measures of the availability of working memory resources in the older adults. Thus, older adults—perhaps especially those with relatively lower working memory capacities—seem to be less successful at exploiting the information available in a constraining sentence context.

Evidence that predictive processing becomes less likely or less efficacious with age can also be seen in the results of an experiment using the stimuli from Federmeier and Kutas (1999b) (sentence pairs ending with expected exemplars, within-category violations, and between-category violations), in this case presented to younger and older adults as natural speech (Federmeier et al., 2002). As already described, for auditory processing, as for word-by-word reading, young adults' brainwaves reveal evidence that they are predicting—that is, anticipating features of words they have not yet been exposed to. Young adults' N400 responses are facilitated for incongruent words that are semantically related to the word expected, but never actually presented, in the context (within-category violations), and this facilitation is greater for high than for low constraint contexts (Figure 1). In contrast, as can be seen in the top half of Figure 2, older adults as a group show a clear effect of contextual fit (i.e., an N400 reduction to expected exemplars relative to violations), but little facilitation for the within-category violations. Furthermore, strikingly, even the small degree of facilitation for the within-category violations is driven by responses in the low, rather than the high, constraint contexts—consistent with the rated plausibility of these items (see the bottom half of Figure 2). In other words, older adults' N400 response pattern is well predicted by the plausibility of the words in their contexts, consistent with an integrative, as opposed to a prediction-based, account of sentence context effects.

Figure 2.

 Grand average ERPs (N=24 older adults) shown at the middle central electrode site (Cz). Analogous to Figure 1, at the top is plotted the overall response to expected exemplars (solid line), within-category violations (dashed line), and between-category violations (dotted line). At the bottom is plotted the response to the same three ending types in strongly (left) and weakly (right) constraining sentence contexts. Older adults exhibited a different pattern from that seen for young adults, showing little facilitation for within-category violations, most of which derived from the weakly constrained sentences, consistent with plausibility. Data are from Federmeier et al. (2002).

Overall, then, older adults seem less able or less likely to use context information to generate information about upcoming words. However, as is often the case, there was notable individual variation in the older adults' responses (Federmeier et al., 2002), with a subset of older adults continuing to show younglike response patterns. The tendency for older adults to continue to show ERP patterns indicative of expectancy-based language comprehension was well predicted by neuropsychological measures, and, in particular, by higher vocabulary scores and higher verbal fluency. Verbal fluency was actually the best predictor of the pattern; in this test, individuals are asked to generate as many words from a particular semantic category or beginning with a particular letter of the alphabet as they can in 1 min. The fact that it was a test of rapid, cued production that explained the pattern of effects that would be seen in a language comprehension task further supports the contention that the younglike effect pattern reflects covert generation processes in the form of predictions about features of likely upcoming words—perhaps reflecting a link between language comprehension and language production mechanisms (an idea that will be explored in more detail in a subsequent section of this review).

Thus, language comprehension processes, even for simple sentences, appear to change qualitatively with age, although there are factors that seem to be able to protect against and/or compensate for such changes. In particular, whether due to changes in resource availability or other factors, the ability to use contextual information to rapidly build a message-level representation and anticipate features of upcoming words apparently becomes more difficult for older adults. In turn, this suggests that predictive processing may not be the best—or even a viable—strategy for all individuals at all phases of the lifespan and/or in all processing situations. Given the information processing trade-offs between predictive and integrative mechanisms for comprehension, it is perhaps not surprising that the brain seems to be able to use either strategy, depending on factors such as age and resource availability. Perhaps more intriguing is the hypothesis, examined in the next section of this review, that the brain may routinely implement both mechanisms in parallel, distributed across the two cerebral hemispheres.

Hemispheric Differences in Language Comprehension

Since Paul Broca's discovery in 1861 of an association between fluent, articulate speech and the left frontal operculum, research into the neural bases of language has uncovered a complex brain network subserving the phonological, lexical, semantic, and syntactic functions of normal language comprehension and production (e.g., review by Martin, 2003). A particularly salient feature of this network is its apparent strong left laterality: whereas damage to left hemisphere (LH) areas can cause profound, permanent deficits in many language functions, damage to right hemisphere (RH) homologues of these areas generally leaves basic word and sentence processing relatively intact. This dissociation is one of the most striking and oft-cited examples of hemispheric specialization in humans.

Such specialization is remarkable, given the anatomical, morphological, physiological, and chemical similarity of the LH and RH. Documented neural asymmetries are generally subtle, including neuronal size (Hayes & Lewis, 1993), dendritic branching patterns (Scheibel et al., 1985), and neurotransmitter distributions (Tucker, 1981) in some brain areas. Yet, language is one of a number of replicable functional asymmetries that span from basic sensory processing to object recognition, emotion, and higher level cognition (e.g., Hellige, 1993). This disparity between the neural similarity and the functional distinctiveness of the two hemispheres highlights the limits of our current understanding of the mapping between brain and behavior. Whereas there is now a better sense of which brain areas may be involved in various language functions, the computations that these areas perform are not yet understood well enough to afford an explanation of why a LH area might be crucially involved in a given function whereas the RH homologue, with essentially the same basic cell types, neurochemistry, and inputs and outputs, is not. Developing that understanding has to be a central goal for research into the neural underpinnings of cognition.

The asymmetries observed for language are most notable for production. The well-known dissociation in the impact of LH and RH damage on language output is corroborated by studies of commissurotomized (“split-brain”) patients, which rarely find evidence for RH control of speech (Gazzaniga, 1983). On the other hand, some isolated RHs seem to understand language, and recent evidence suggests that both hemispheres may be making critical, though different, contributions to comprehension. For example, data from noninvasive spatial neuroimaging techniques have revealed language-related activation in brain areas outside the regions classically associated with aphasia, including a number of RH areas (e.g., Ni et al., 2000). In fact, some imaging studies, particularly those looking at the comprehension of complex narratives (Robertson et al., 2000; St George, Kutas, Martinez, & Sereno, 1999) or nonliteral language (Bottini et al., 1994), find not just bilateral activation patterns, but a predominance of RH activity. One must then ask not why the RH is unable to process language but what language functions the RH supports, how those differ from LH functions, and what role(s) they play in normal language processing. In particular, a number of lines of research now suggest that a critical difference between the hemispheres may lie in how each makes use of context information (at various levels) to shape comprehension.

One important source of information about RH language capacities has been studies of patients with unilateral brain damage. Although language deficits are most profound after LH injury, patients with RH damage also show a number of changes in language comprehension, including difficulties extracting the main points from narratives and conversations (Gardner, Brownell, Wapner, & Michelow, 1983), appreciating discourse structure (Delis, Wapner, Gardner, & Moses, 1983; Wapner, Hamby, & Gardner, 1981), and drawing and revising inferences (Beeman, 1993; Brownell, Potter, Bihrle, & Gardner, 1986). RH damage is also associated with problems processing various types of nonliteral language, such as jokes (Brownell, Michel, Powelson, & Gardner, 1983) and indirect requests (Weylman, Brownell, Roman, & Gardner, 1989). These data thus suggest that although the RH may not be necessary to support the basic functions essential for the processing of individual words or simple sentences, it may make important contributions to higher level meaning processing and pragmatics—linking words and sentences across larger language units and using language information in a flexible and context-sensitive manner.

RH contributions to comprehension have also been examined in brain-intact individuals using visual half-field (VF) presentation methods. This technique takes advantage of the fact that information presented peripherally (>0.5° from fixation) is received exclusively by the contralateral hemisphere and processed unilaterally through at least area V2 (Clarke & Miklossy, 1990), encompassing processing stages involved in the extraction of basic visual features. After this, information can be transferred via the corpus callosum, albeit with a delay of about 10–15 ms (Hoptman & Davidson, 1994). However, various sources of evidence suggest that callosal transfer is incomplete and results in a loss of information fidelity. For example, ERP studies have found that visual information transfer is restricted to a range of relatively low spatial and temporal frequencies (for a review, see Berardi & Fiorentini, 1997), and functional magnetic resonance imaging (fMRI) studies have shown that after lateralized presentation, hemodynamic responses in higher level visual areas are reduced in the ipsilateral hemisphere (Tootell, Mendola, Hadjikhani, Liu, & Dale, 1998). These findings are substantiated by animal studies showing that cells in the inferior temporal lobe that respond to both VFs respond less strongly to ipsilateral stimulation (Gross, Rocha-Miranda, & Bender, 1972) and that cFOS levels (a marker of neural activity) associated with lateralized stimuli are contralaterally biased even in perirhinal and hippocampal areas (Wan, Aggleton, & Brown, 1999). Indeed, hemispheric processing resources seem to be dynamically regulated, such that information is not always transferred, even when it could be (e.g., Weissman & Banich, 2000). VF methods thus provide a powerful tool for studying functional asymmetries in the intact brain. 
Indeed, decades of studies using this technique have revealed a number of consistent performance differences for stimuli presented to the right (RVF) and left visual fields (LVF), many of which are corroborated by evidence from patients with unilateral brain damage, studies of commissurotomized patients (for a review, see Hellige, 1993), and recent functional imaging data (e.g., Martinez, Moses, Frank, & Buxton, 1997).
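The lateralized-presentation logic described above depends on simple viewing geometry: a stimulus counts as unilaterally projected only if its inner edge lies beyond roughly 0.5° from fixation. The sketch below converts visual angle to screen pixels for a hypothetical display; the viewing distance (85 cm), pixel density (38 px/cm), and 2° eccentricity are illustrative assumptions, not parameters from the studies reviewed here.

```python
import math

def deg_to_px(deg, viewing_distance_cm, px_per_cm):
    """Convert a visual angle (in degrees) to a distance on the
    screen (in pixels) for a viewer at the given distance."""
    size_cm = 2 * viewing_distance_cm * math.tan(math.radians(deg) / 2)
    return size_cm * px_per_cm

# Hypothetical setup: 85 cm viewing distance, 38 pixels per cm.
# The inner edge of a lateralized word must fall beyond the ~0.5 deg
# unilateral-projection boundary; here it is placed at 2 deg.
inner_edge_px = deg_to_px(2.0, viewing_distance_cm=85, px_per_cm=38)
```

In practice the offset is applied symmetrically to the left or right of a central fixation marker, and fixation is verified (e.g., via EOG, as noted below for the ERP studies).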

The majority of VF language studies in healthy adults have focused on word-level processing, often assessed through measures of priming—that is, speeded response times (RTs), usually in lexical decision tasks, to a target preceded by a related prime word. A number of different asymmetries have been uncovered with these methods, leading, in turn, to several proposals about the nature of the underlying processing differences. For example, although priming for closely related word pairs (e.g., cat–dog) is equivalent in the two VFs, priming for more distantly related words (e.g., goat–dog) is greater with LVF/RH presentation (Chiarello, Burgess, Richards, & Pollock, 1990). Greater facilitation from “summation primes” (three words weakly related to a target) also has been observed in the LVF/RH (Beeman, Friedman, Grafman, & Perez, 1994), and, in some studies, the RH shows more priming for the subordinate meanings of ambiguous words (e.g., bridal vs. freight TRAIN; Burgess & Simpson, 1988). Results like these have been taken to suggest that the hemispheres may differ in the “coarseness” of their semantic coding, with narrow, focused meaning activation in the LH and weak, diffuse activation (“coarse coding”) in the RH (Beeman, 1998; Jung-Beeman, 2005). However, the observed patterns change depending on the timing of the stimuli (i.e., the stimulus-onset asynchrony, or SOA, between prime and target). At short SOAs, priming for distantly related words and subordinate word meanings can be observed in the RVF/LH, and, indeed, in some cases is actually greater than the priming observed for the LVF/RH (e.g., Koivisto, 1997). Thus, some have suggested that it is not the breadth of activation per se that differs between the hemispheres, but rather the time course of activation, with the RH showing a slower and more prolonged activation pattern than the LH (Burgess & Simpson, 1988; Koivisto, 1997). 
It has been posited that this time-course difference may arise because the LH is more able or likely than the RH to use controlled processes to actively select word meanings (e.g., Burgess & Simpson, 1988) and/or that the RH is more likely than the LH to rely on postlexical integration processes for its facilitation (Koivisto, 1998; Koivisto & Hamalainen, 2002; Koivisto & Laine, 2000).

Fewer studies have used VF methods to examine sentence processing in brain-intact individuals. The results of behavioral studies have suggested that the RH's language competence might largely be limited to word-level information and pairwise relationships between words (e.g., Chiarello, Liu, & Faust, 2001; Faust, 1998). For example, with LVF/RH presentation, lexical decision (word/nonword judgment) latencies for sentence-final words were unaffected by scrambling the order of the context words (e.g., “The saddled the rider HORSE”; Faust, Babkoff, & Kravetz, 1995) or by increasing the amount of context from one to six words (Faust, Kravetz, & Babkoff, 1993). In some cases the RH has even seemed insensitive to overt message-level anomalies (e.g., “The patient parked the MEDICINE”; Faust et al., 1995). In other cases, the RH has shown some sensitivity to congruency, but at a level that could be predicted from word-level associations alone (Chiarello et al., 2001; Faust, Bar-lev, & Chiarello, 2003). Based on these findings, it has been argued that only the LH can integrate lexical, syntactic, and semantic information to derive the message-level meaning of a multiword utterance. The RH, instead, has been supposed to process each word as it comes, without the benefit of higher level information to guide meaning access and integration (the “message-blind RH” view; e.g., Chiarello et al., 2001). However, it is difficult to reconcile these conclusions with the neuropsychological findings already discussed, which have tended to suggest that intact RH language functions are particularly crucial for processing complex message-level meaning information. One possibility is that the tasks used in behavioral studies, such as lexical decision, may underestimate RH comprehension abilities (Baynes & Eliassen, 1998).

Combining visual half-field methods with ERP techniques can help mitigate this problem, as responses can be measured as participants read for comprehension, without the need for additional tasks that may be associated with their own hemispheric biases. Furthermore, whereas behavioral studies of lateralization examine only the extent to which stimuli are preferentially or predominantly processed by one hemisphere over another, studies that involve concurrent measurements of electrical brain activity make it possible to decompose the lateralized contribution of the processes that underlie performance on that task as it unfolds over time. The use of ERPs also addresses some concerns associated with VF techniques in a behavioral context, because one can (1) ascertain that participants maintain fixation during lateralized presentation, through concurrent measurement of the electrooculogram (EOG) signal, (2) demonstrate early unilateral apprehension of the critical stimuli, as indexed by asymmetric sensory ERP responses, (3) test for onset differences that would be present in cases where processing was mediated by callosal transfer, and (4) use topographic information to make inferences about the relative contribution of each hemisphere to specific subprocesses under lateralized and normal (central) processing conditions and as a function of the participant's task.

To address the apparent discrepancy between behavioral findings using lexical decision measures suggesting that the RH is unable to construct message-level representations and neuropsychological data pointing to important RH contributions to higher order meaning processing, we conducted a series of visual half-field ERP experiments that jointly manipulated word- and sentence-level context information (Coulson, Federmeier, Van Petten, & Kutas, 2005). In the first experiment, young adults viewed lexically associated (e.g., LIFE, DEATH) and unassociated (e.g., LIFE, PRISON) word pairs. Primes were presented centrally and targets were lateralized to the LVF and RVF; participants named the targets after a delayed cue. In the second experiment, these same word pairs were then embedded in sentence contexts, wherein the targets formed plausible and implausible completions, as in the following examples:

Congruent Associated: When someone has a heart attack, a few minutes can make the difference between LIFE and DEATH.

Congruent Unassociated: The gory details of what he had done convinced everyone that he deserved LIFE in PRISON.

Incongruent Associated: The gory details of what he had done convinced everyone that he deserved LIFE in DEATH.

Incongruent Unassociated: When someone has a heart attack, a few minutes can make the difference between LIFE and PRISON.

Congruent completions were matched for cloze probability, so that word-level factors were unconfounded with message-level plausibility. Young adults read these sentences one word at a time, named (after a delay) the lateralized final word, and then answered a comprehension question.
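Matching conditions on cloze probability, as in the design above, requires estimating each completion's cloze value from norming data: the proportion of participants who produce that word when asked to complete the sentence frame. A minimal sketch of that computation follows; the norming responses shown are invented for illustration, not data from the study.

```python
from collections import Counter

def cloze_probability(responses, target):
    """Cloze probability of `target`: the proportion of norming
    participants who completed the sentence frame with that word."""
    normalized = [r.strip().lower() for r in responses]
    return Counter(normalized)[target.strip().lower()] / len(normalized)

# Hypothetical norming responses for one sentence frame:
responses = ["death"] * 18 + ["dying", "surgery"]
p = cloze_probability(responses, "DEATH")  # 18 of 20 responses -> 0.9
```

Matched conditions are then built by selecting congruent endings whose cloze values are statistically equivalent, so that any N400 differences can be attributed to the word- versus message-level manipulations rather than to expectancy per se.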

There were three major findings from this study. In the first experiment, robust effects of word association were found on N400 responses to words in both VFs, with reduced N400 amplitudes to lateralized targets preceded by a related prime. Effect topographies differed slightly, characterized by a contralateral skew, whereas the timing did not, suggesting that lateralized stimulus presentation succeeded in preferentially stimulating the contralateral hemisphere (had these effects been mediated by callosal transfer, there would have been effects that were topographically identical but delayed for one VF relative to the other). However, association effects were larger for RVF/LH presentation, suggesting that the LH makes greater use of word-level information when that is the only context available. In contrast, in the second experiment, when these same word pairs appeared in sentence contexts, association effects were superseded by sentential plausibility for targets in both visual fields. Congruent sentence endings yielded smaller N400 responses than incongruent sentence endings, irrespective of whether words were or were not associated; these N400 congruency effects were again contralaterally skewed and similar in timing. Thus, the RH not only seems to be able to make use of message-level context information, it, like the LH, seems to preferentially use sentence-level over word-level information when both are available. Although effects of association were minimal in Experiment 2, there was a small effect of lexical relatedness on the N400 to words in incongruent sentences for both VFs. Association effects in congruent sentences, however, were apparent only for LVF/RH targets. Thus, although the overall pattern of results argues against qualitative distinctions in the types of information available to each hemisphere, the data do suggest that, in the LH, higher level processing stages may exert greater influence over lower levels of analysis.

If both hemispheres make use of sentence-level information—but do so somewhat differently—then what is the nature of that difference? To examine each hemisphere's ability to exploit predictive sentence context information, we used the Federmeier and Kutas (1999b) paradigm (with expected exemplars and within- and between-category violations) in a visual half-field design, such that the critical target words were lateralized to the right or left visual field (Federmeier & Kutas, 1999a). Note that if, as suggested by behavioral data patterns, the RH is less sensitive to sentence-level information (e.g., Faust, 1998) and more broadly sensitive to semantic relations (e.g., Beeman et al., 1994), then, for LVF as compared with RVF presentation, one should observe reduced effects of contextual congruency (i.e., smaller N400 differences between expected words and violations) and more facilitation (smaller N400 amplitudes) for the semantically related violations. However, although N400 responses were indeed qualitatively different in the two VFs, as can be seen in Figure 3, the pattern failed to support the predictions of the coarse coding and message-blind-RH views. First, N400 responses in both VFs were affected by congruency (difference between expected items and between-category violations), and there was no difference in the size or timing of this effect. This result again suggests that RH word processing is as sensitive to message-level information as is LH word processing. Second, in contradiction to predictions based on coarse coding, priming for the semantically related but contextually implausible endings was greater following RVF/LH presentation; indeed, in the LVF/RH, responses to the two violation types were identical. In other words, when assessed with a functionally specific brain measure, RH-biased responses showed more semantic selectivity and more adherence to message-level plausibility. 
In a follow-up study (Federmeier & Kutas, 2002), we replicated this pattern with sentence-final pictures, showing that it indexes something general about how each hemisphere integrates semantic information with context, as opposed to something particular about word reading.

Figure 3.

 Grand average ERPs (N=18 young adults) shown at the middle central electrode site (Cz). Responses to expected exemplars (solid line), within-category violations (dashed line), and between-category violations (dotted line) are plotted at left for presentation in the right visual field (initially apprehended by the left hemisphere) and at right for presentation in the left visual field (initially apprehended by the right hemisphere). Within-category violations were facilitated only for presentation to the RVF/LH. Data are from Federmeier and Kutas (1999a).

The data suggest instead that the hemispheres differ in their ability and/or tendency to use the available message-level information to predict semantic features of upcoming words. The pattern observed for initial presentation to the RVF/LH is similar to that observed for young adults with central visual (Federmeier & Kutas, 1999b) and auditory (Federmeier et al., 2002) presentation of these stimuli, and suggests that the LH uses the context to actively prepare for the processing of likely upcoming stimuli. Indeed, this preparation may also extend to visual features of the predicted stimulus. For pictures (Federmeier & Kutas, 2002), there were also congruency effects on the P2 component, a positive-going potential peaking around 200 ms. P2 amplitude modulations have been linked to the detection and analysis of visual features in selective attention tasks (Hillyard & Muente, 1984; Luck & Hillyard, 1994), with larger P2s to stimuli containing target features. With RVF presentation, P2 amplitudes were larger for expected than for unexpected endings, suggesting that LH sentence processing provides top-down information that allows enhanced visual feature extraction from expected targets. With LVF presentation, in contrast, there were no P2 congruency effects for the picture study and no semantic similarity effects for either sentence-final words or pictures. Thus, the RH, although sensitive to semantic similarity when it is between a target word and another context word, as already described (e.g., Chiarello et al., 1990), appears insensitive to such similarity between a target word and a word that was predicted, but never actually presented. The RH thus manifests the pattern expected under an integration strategy, in which the fit or plausibility of a word is assessed in a more “bottom-up” fashion; note that this was also the pattern observed for most older adults in this same paradigm (Federmeier et al., 2002).

Evidence for a LH role in predictive processing was also seen in a study that manipulated contextual constraint in a VF paradigm (Federmeier, Mai, & Kutas, 2005). Young adults read for comprehension a set of congruent sentences (the same as those used to examine aging effects in Federmeier & Kutas, 2005), which differed only in the extent to which the sentence-final target word was constrained by the message-level meaning; for example, strongly constrained: “She was suddenly called back to New York and had to take a cab to the AIRPORT” versus weakly constrained: “She was glad she had brought a book since there was nothing to read at the AIRPORT.” Lexical-associative information was minimized, and norming studies were used to establish the target words' plausibility and the contexts' constraint. The conditions thus differed only in how strongly the lateralized target words were predicted by sentence-level information. N400 responses were facilitated by constraint in both VFs; these effects were of similar size and timing, although there was a contralaterally skewed effect topography. This pattern makes sense given that increasing the constraint of a context provides more scaffolding for semantic analysis and integration and also renders words more predictable. Again, then, there was no evidence that RH semantic processing is insensitive to message-level information. However, as can be seen in Figure 4, there were VF-based differences in the effect of constraint on higher level visual processing as indexed by the frontal P2. P2 responses to strongly constrained targets were enhanced only with RVF/LH presentation. These findings support the hypothesis that the LH is able to make use of the greater predictive information available from rich contexts to prepare for the processing of upcoming stimuli, here at high-level perceptual stages; such top-down information seems to be less available for stimuli projected initially to the LVF/RH.

Figure 4.

 Grand average ERPs (N=32 young adults) shown at a representative frontal electrode site. Responses to sentence-final words in strongly (solid line) and weakly (dotted line) constrained contexts are plotted for presentation to the RVF/LH (at the left) and the LVF/RH (at the right). P2 responses were larger for words in strongly than in weakly constrained contexts only with RVF/LH presentation. Data are from Federmeier et al. (2005).

The kind of predictive processing that we have observed during normal comprehension for young adults (and a subset of older adults) thus seems to be more associated with LH than with RH processing mechanisms. As discussed, prediction would seem to afford a number of advantages for comprehension, including the potential for increased processing efficiency and reduced susceptibility to noise. Recent data (Wlotko & Federmeier, in press) further suggest that a predictive approach to comprehension may afford particular advantages for processing expected words that are embedded in relatively weak contexts. We used the strongly and weakly constraining sentences from Federmeier et al. (2007) in a VF design and found that, in the RVF/LH, N400 facilitation for weakly constrained, expected completions was nearly as great as that for strongly constrained, expected completions, despite a large difference in cloze probability between these two ending types. Thus, predictive processing may have the effect of “ramping up” activation for expected information that has less contextual support, resulting in processing advantages greater than would be predicted by cloze probability alone.

On the other hand, the RH's more bottom-up approach to processing should also provide advantages, under different circumstances. In particular, the RH's integrative processing strategy might allow it the flexibility to more easily accommodate words/concepts that are plausible but not particularly predictable. To test this prediction, ERPs were recorded from young adults as they read centrally presented category cues (e.g., “A type of bird”) followed by the lateralized presentation of one of three types of target words: high typicality members of the category (e.g., ROBIN), low typicality members of the category (e.g., CHICKEN), and incongruent nonmember targets (e.g., RACCOON). Participants made category membership judgments after each pair. We hypothesized that LH processing would involve the generation and selection of likely targets in response to the category cue (prior to target presentation), whereas the RH would assess the bottom-up fit between the presented target and the prior cue. If true, this predicts a selective LVF/RH advantage for processing the low typicality category members, because these items fit the category but are more unexpected (less likely to be generated) than high typicality items.

The data (K. Federmeier, unpublished data) support the hypothesis (Figure 5). Presentation to either VF resulted in reduced N400 amplitudes for high typicality compared with incongruent targets, showing that both hemispheres can assess the fit of clear category members and nonmembers. However, N400 responses to low typicality targets differed by VF, with greater facilitation (more N400 reduction) for LVF/RH than for RVF/LH presentation. This pattern is consistent with behavioral data showing greater RH facilitation for distant associates when the critical relationship is directly between the context and the target (e.g., Beeman et al., 1994; Chiarello et al., 1990). In combination with the Federmeier and Kutas (1999a) results, the data suggest that both hemispheres can be broadly sensitive to semantic overlap. For the LH, facilitation seems to be driven by how predictable an item's features are in its context, whereas, for the RH, it seems to be the semantic overlap between specific context words and the target that is most important. It is intriguing to speculate that this ability of the RH to appreciate relationships in language in a more “post hoc” fashion may contribute to its role, as seen in patient studies (e.g., Brownell et al., 1983) as well as in VF studies with normal individuals (e.g., Coulson & Williams, 2005), in processing certain types of nonliteral language, such as jokes, which involve the integration of generally less expected, but plausible, words and concepts.

Figure 5.

 Grand average ERPs (N=24 young adults) shown at the middle central electrode site (Cz). Overlapped are responses to words presented to the LVF/RH (solid line) and the RVF/LH (dotted line) when these were high typicality exemplars of the cued category (left), low typicality exemplars of the cued category (middle), or not from the cued category (incongruent, right). Whereas responses to high typicality and incongruent exemplars were similar in the two hemispheres, N400 responses to low typicality exemplars were more facilitated with LVF/RH than with RVF/LH presentation. Unpublished data.

The differing strengths and weaknesses of a top-down, predictive as opposed to a bottom-up, integrative approach to language processing would also seem to have implications for each hemisphere's memory for the words that it is exposed to. In particular, because, with an integrative processing mechanism, initial stages of word processing are less affected by context, the RH may derive a more veridical representation of verbal stimuli that are encountered than would the LH, which immediately uses top-down information to generalize away from the input. Studies of patients with unilateral damage to the anterior temporal lobe (Falk, Cole, & Glosser, 2002; Pillon et al., 1999) and VF studies in brain-intact individuals (Coney & MacDonald, 1988; Hines, Satz, & Clementino, 1973) have found that the LH's general superiority for verbal processing extends to memory. Nevertheless, the RH seems to encode and retain different aspects of verbal stimuli, affording it advantages in some circumstances. For example, Metcalfe, Funnell, and Gazzaniga (1995) studied recognition memory in a split-brain patient and found RH superiority at rejecting “lures” (new items that were semantically similar to studied items), and similar patterns have recently been seen with ERPs in brain-intact individuals (Fabiani, Stadler, & Wessels, 2000). Findings like these have led to the hypothesis that the RH is biased to process and retain veridical, item-specific (holistic), and perhaps form-specific (Marsolek, 1999; Marsolek, Squire, Kosslyn, & Lulenski, 1994) aspects of verbal stimuli, whereas the LH is more likely to extract and remember abstract information, by attending to category-invariant input features (Marsolek, 1995; Metcalfe et al., 1995). In turn, this hypothesis suggests an additional prediction concerning the effects of lag. 
The tendency of the LH to rapidly derive complex, interpreted representations from verbal stimuli suggests that the retention of specific word-level information may decrease more rapidly over a delay for the LH than for the RH.

To examine the tendency of each hemisphere to extract specific information about individual words and retain this information over time, we collected behavioral and ERP measures using a continuous recognition memory paradigm. In the first experiment (Federmeier & Benjamin, 2005), participants studied lateralized words interleaved with centrally presented test items, consisting of new words and old words repeated after lags of 0 (immediate repetition), 1, 2, 4, 6, 9, 19, 29, and 49 intervening words; they pressed a button to indicate as quickly and accurately as possible whether each test word was new or old. Response times for correct responses confirmed our prediction (top half of Figure 6). At short lags, responses were faster to words originally studied in the RVF/LH. However, this advantage decreased with increasing lag, and, at the longer study–test intervals, actually reversed, with faster responses to words that had been studied in the LVF/RH. This pattern is quite striking, given that an RVF advantage is almost always observed for verbal material. These behavioral data thus support the hypothesis that there may be a relative RH advantage for maintaining some aspects of verbal information across longer lags.

Figure 6.

 Bar graphs showing memory for words as reflected in response times (RTs) and ERP old/new effects. The top graph plots the RTs for correct endorsements (hits) of centrally presented words that were originally studied in the RVF/LH (black bars) and LVF/RH (white bars) when these words were repeated at short lags (2 or fewer intervening words), medium lags (4–9 intervening words), or long lags (19–49 intervening words). Whereas at short and medium lags RTs were faster for words originally studied in the RVF/LH, at long lags RTs were faster for words originally studied in the LVF/RH. The bottom graph plots the corresponding ERP old/new effects (mean amplitude response to hits minus response to correct rejections, measured at nine central-posterior electrode sites between 250 and 800 ms) for centrally presented words originally studied in the RVF/LH (black bars) and LVF/RH (white bars) at the same three lags. Old/new effects did not differ by study VF for words tested at short and medium lags, but were larger (reflecting more memory signal) for LVF/RH-studied words when these were tested at long lags, analogous to the RT pattern. Behavioral data are from Federmeier and Benjamin (2005) and ERP data are from Evans and Federmeier (2007).

In a follow-up experiment, ERPs were recorded in the same paradigm (Evans & Federmeier, 2007). ERP studies of recognition memory have established that old items, relative to new ones, elicit increased posterior positivity from about 250 to 700 ms. This positivity seems to be composed of at least two functionally dissociable effects: repetition decreases the amplitude of the N400 and increases the amplitude of a late positive component (LPC) linked to explicit recollection (e.g., Paller, Kutas, & McIsaac, 1995; Rugg & Doyle, 1994). Both subcomponents are modulated (albeit differently) by the number of items intervening from study to test (Nagy & Rugg, 1989; Van Petten, Kutas, Kluender, Mitchiner, & McIsaac, 1991), with general decreases in this ERP “old/new effect” with increasing lag. In the Evans and Federmeier (2007) study, general lag effects were apparent on the old/new effect for items studied in both the LVF and the RVF and the old/new effect size was similar across VF for items repeated at short (0–2 intervening words) or medium (4–9 intervening words) study–test lags, suggesting that the behavioral advantages seen at these lags were likely due to perceptual and/or attentional factors favoring word apprehension in the RVF rather than to superior memory per se. However, at the longest study–test lags (19–49 intervening words), old/new effects were larger for items originally studied in the LVF/RH, consistent with the response time pattern. The bottom of Figure 6 shows the pattern. Because this difference arises on an ERP response specifically linked to memory, it supports the hypothesis that the verbal representations formed by the RH and LH are retained differently over time. Another finding that emerged in this study was a repetition effect on the P2, with larger responses to new than to old test words. 
Strikingly, this P2 repetition effect was seen for all lags but only for words that had been studied in the LVF/RH (Figure 7); this pattern is consistent with the hypothesis that the RH retains more perceptual information about words that it has encountered. It is possible that such information becomes more important for memory judgments at long lags, when explicit memory signals are weaker.

Figure 7.

 Grand average ERPs (N=24 young adults) shown at a representative frontal electrode site. Responses to (centrally presented) correctly classified old words (hits; solid line) and correctly classified new words (correct rejections; dotted line) are shown for items originally studied in the RVF/LH (at the left) and the LVF/RH (at the right). A P2 repetition effect (larger P2 responses for old than for new words) was observed only with LVF/RH presentation. Data are from Evans and Federmeier (2007).
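The old/new effect quantified in Figure 6 is a simple mean-amplitude difference: the average ERP to hits minus the average ERP to correct rejections, taken over a fixed time window (250–800 ms) and a set of electrode sites. A minimal sketch of that measurement is below; the array shapes and the synthetic usage values are assumptions for illustration, not the study's actual data or analysis pipeline.

```python
import numpy as np

def old_new_effect(hits, correct_rejections, times, window=(0.250, 0.800)):
    """Mean-amplitude old/new effect: average ERP to hits minus average
    ERP to correct rejections, over a time window.
    `hits` and `correct_rejections` have shape (n_sites, n_timepoints);
    `times` gives each timepoint's latency in seconds."""
    mask = (times >= window[0]) & (times <= window[1])
    return hits[:, mask].mean() - correct_rejections[:, mask].mean()

# Synthetic example: 9 central-posterior sites, 1 s epoch at 100 Hz.
times = np.linspace(0.0, 1.0, 101)
hits = np.full((9, 101), 2.0)                 # more positive for old items
correct_rejections = np.full((9, 101), 0.5)
effect = old_new_effect(hits, correct_rejections, times)
```

A larger (more positive) value indicates a stronger memory signal, which is the sense in which the long-lag LVF/RH advantage in Figure 6 reflects better-retained representations.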

The PARLO Framework

A growing body of data thus supports the idea that LH processing is more expectancy driven, involving the preactivation of likely upcoming items (words, pictures) at multiple levels, whereas RH processing seems to integrate new information in a more bottom-up fashion. Expanding upon this, I have developed a theoretical framework that ties together the patterns of asymmetry documented across tasks and language levels. This framework assumes an interactive-style model of language comprehension, characterized by both feed-forward and feedback connections among levels, and further assumes that the representational structure of individual levels and the feed-forward connections between them are essentially similar for the two hemispheres. However, it is proposed that a core difference between the hemispheres lies in the functional efficacy of their top-down connections, with feedback connections playing a much larger role in shaping LH processing.

This functional difference in the connectivity of the hemispheres may arise, at least in part, because language comprehension and production share resources only in the LH. As already discussed, production mechanisms seem to be highly left lateralized, making it plausible that concept-to-output connections are more available, stronger, and/or better-tuned in the LH. (There are also some suggestive links between the N400 component and language production abilities; see Kutas, Hillyard, & Gazzaniga, 1988.) Of course, the relationship between production and comprehension, and, in particular, the extent to which they share representations and processes at different levels of the language system, is a complex and controversial topic. It is worth noting, however, that recent connectionist approaches have incorporated aspects of production (in particular, mechanisms for prediction) into models of comprehension in order to account for certain aspects of word and sentence processing (Chang, 2002; Rohde, 2002) as well as language learning (Elman, 1991; Plaut & Kello, 1999) that had previously proven difficult to simulate. On the proposed framework, hemispheric differences could be simulated in such models through adjustments in the strength, speed, and/or noise levels of the prediction/production components.

A key consequence of this integration between language comprehension and production mechanisms is stronger feedback connectivity in the LH and a concomitant increase in cross-level interactivity. Under this "Production Affects Reception in Left Only" (PARLO) framework, then, LH comprehension will be more driven by top-down, context-based information at each level. The strength of such a system lies in its efficiency and robustness: The availability of top-down information allows the system to rapidly generalize away from the input, and the resultant higher level activity is then fed back down, resulting in changes at lower levels that prepare the system to process likely upcoming stimuli. However, such a system tends to lose stimulus-specific information, as context and expectations shape and override input features. Compensating for this is a RH comprehension system that processes in a more feed-forward manner. Such a system is more tied to the stimulus and will extract and use higher level information less efficiently. Yet the greater specificity with which the input is represented and retained can afford future flexibility, as information can be reinstated and then reanalyzed as needed, potentially allowing for the appreciation of more varied and temporally distant relationships between stimuli.
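The proposed asymmetry can be illustrated with a toy interactive-activation sketch. Everything here is hypothetical: the weights, the gain values, and the two-word "lexicon" are invented for illustration, and the only parameter that differs between the simulated "hemispheres" is the strength of the feedback connections, following the PARLO proposal that top-down connectivity is functionally stronger in the LH.

```python
import numpy as np

# Hypothetical two-level network: 3 feature units feed 2 word units via W.
# Feedback reuses the transposed weights, scaled by a per-"hemisphere" gain.
W = np.array([[1.0, 1.0, 0.0],   # word 0 is built from features 0 and 1
              [0.0, 1.0, 1.0]])  # word 1 is built from features 1 and 2

def feature_state(sensory, expected_word, feedback_gain):
    """Feature-level activation = bottom-up input + top-down feedback
    from the word that context makes likely (clipped to [0, 1])."""
    top_down = feedback_gain * (W.T @ expected_word)
    return np.clip(sensory + top_down, 0.0, 1.0)

context = np.array([1.0, 0.0])      # context strongly predicts word 0
no_input = np.zeros(3)              # the next word has not yet appeared

# Prediction: only the high-feedback ("LH") system pre-activates features.
lh = feature_state(no_input, context, feedback_gain=0.6)   # features 0, 1 active
rh = feature_state(no_input, context, feedback_gain=0.0)   # all zeros

# Veridicality: the input is word 1's distinctive feature, contra the prediction.
unexpected = np.array([0.0, 0.0, 1.0])
lh_in = feature_state(unexpected, context, feedback_gain=0.6)  # expectation blended in
rh_in = feature_state(unexpected, context, feedback_gain=0.0)  # tracks the stimulus
```

With strong feedback, context pre-activates the expected word's features before any input arrives (prediction) and blends expectation into the representation of an unexpected input (loss of veridicality); with the gain at zero, the feature level simply tracks the stimulus.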

This relatively simple, neurobiologically plausible difference has broad consequences for behavior and can account for many of the patterns of hemispheric specialization observed in language, at the word, sentence, and discourse levels, and in verbal memory. For example, this view is supported by behavioral and ERP results showing relatively greater influences of sentence-level than lexical-level variables on word processing in the RVF as compared with the LVF (Chiarello et al., 2001; Coulson et al., 2005; Faust, 1998), ERP evidence for language context effects on perceptual processing only after RVF presentation (Federmeier & Kutas, 2002; Federmeier et al., 2005), and the LH's superiority for the apprehension of words in the face of similar processing of letter strings (Jordan, Patching, & Thomas, 2003). The simultaneous use of constraints from multiple levels will tend to strengthen and speed selection (e.g., reaching a stable state in a network model) in the LH. This may help to explain findings such as the LH's narrower scope of priming (e.g., Atchley, Burgess, & Keeney, 1999; Chiarello et al., 1990; Koivisto, 1997) and its tendency to suppress the subordinate or context-irrelevant meaning of ambiguous words (Burgess & Simpson, 1988; Faust & Chiarello, 1998), as well as observations that RH damage may result in stronger commitments to initial interpretations (e.g., Beeman, 1993; Brownell et al., 1986) and a reduced ability to appreciate multiple meanings of words/phrases (important for some nonliteral language; e.g., Brownell et al., 1983). In the postulated LH network, feedback can act not only to shape bottom-up processing as it is occurring but, in an ongoing stimulus stream, can also create prospective activation of features of likely upcoming stimuli at lower levels; in other words, the LH can use covert generation and selection to predict, as in Federmeier and Kutas (1999a).
On the other hand, this interactivity results in a corresponding loss of veridicality for the LH, as top-down processing distorts the bottom-up signal and makes it more difficult (if not impossible) to recover later. This is consistent with observations in the verbal memory domain that related lures are rejected less effectively in the RVF/LH (Fabiani et al., 2000; Metcalfe et al., 1995) and may contribute to some of the neuropsychological language findings for RH damaged patients, because keeping track of discourse topic information (Gardner et al., 1983), revising inferences (Beeman, 1993; Brownell et al., 1986), and understanding nonliteral language that requires reanalysis of particular language elements (Brownell et al., 1983; Weylman et al., 1989) would seem to necessitate that stimulus-specific information be extracted and maintained over time—something that RH processing mechanisms may be more advantageous for (Evans & Federmeier, 2007; Federmeier & Benjamin, 2005).

Thus, the PARLO framework takes steps toward creating a unified account of the patterns of asymmetry that have been observed across verbal domains, language levels, tasks, and methods, including some that seemed contradictory at first glance. It suggests that the brain may address the differing costs and benefits of more top-down, predictive approaches to language comprehension (mechanisms that are perhaps integrated with language production) and more bottom-up, “interpretive” approaches by implementing both in parallel. In turn, this may allow the brain to dynamically regulate when and how different comprehension strategies are brought to bear as a function of factors such as the demands of the task and the availability of cognitive resources. There may also be shifts in the default pattern with which language-related hemispheric mechanisms and resources are used across the lifespan; this is an area we are beginning to actively investigate.

More generally, independent of any particular framework, these kinds of data emphasize the importance of examining cerebral asymmetries and considering the possibility of multiple mechanisms when trying to understand the nature, time course, and neural bases of language processes. Differences across tasks, individuals, and groups that seem problematic and difficult to explain if one seeks a single mechanism for all language phenomena may be more readily accounted for by adopting a more multifaceted view. In some cases, the patterns seen for everyday, normal (i.e., central) comprehension may largely reflect the processing strategies of a single hemisphere (as in the pattern seen for Federmeier and Kutas, 1999b, which seems to reflect LH-dominated mechanisms; Federmeier & Kutas, 1999a). In other cases, however, phenomena observed with central presentation may reflect joint contributions from both hemispheres or may instead be an emergent property of mechanisms that require interhemispheric cooperation. For example, numerous studies have shown that N400 amplitudes are graded by cloze probability (e.g., Kutas & Hillyard, 1984; Kutas, Lindamood, & Hillyard, 1984)—leading to the reasonable inference that the (single) underlying mechanism is sensitive to context-induced gradations of expectancy for individual words or concepts. However, in a recent VF study (Wlotko & Federmeier, in press), we found that neither individual hemisphere showed the canonical pattern. Instead, endings with low to moderate cloze probability were highly facilitated with RVF/LH presentation (showing N400s of similar amplitude to those for high cloze probability completions) but facilitated very little with LVF/RH presentation (showing N400s of similar amplitude to completions with cloze probability near zero). An average of the responses from the two VFs, however, closely resembled the graded pattern obtained with central presentation of the same stimuli (Federmeier et al., 2007).
These findings thus raise the intriguing possibility that some data patterns may reflect the summation of concurrent activity in more than one distinct processor—for example, in this case, the two cerebral hemispheres—neither of which is individually sensitive to language variables in the manner suggested by the combined response. In other cases, cooperation between multiple systems may afford processing strategies that neither individual processor is capable of supporting alone. In the same set of studies, for example, a late frontal positivity seen with central presentation of the stimuli and linked to suppression/revision processes (Federmeier et al., 2007) was absent when the same targets were lateralized to either VF (Wlotko & Federmeier, in press), suggesting that concurrent contributions from both hemispheres may be needed. Thus, a full understanding of the cognitive and neural bases of normal comprehension will require elucidation of both the individual and the interactive contributions of mechanisms distributed across the two cerebral hemispheres.
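The summation logic can be made concrete with a small numerical sketch. The facilitation values below are invented for illustration (they are not data from Wlotko & Federmeier or Federmeier et al., 2007); the point is only that averaging a step-like "LH" profile with a high-threshold "RH" profile yields a smoothly graded combined profile, even though neither contributing profile is itself graded.

```python
# Hypothetical N400 facilitation (0 = none, 1 = maximal) at four cloze levels.
cloze    = [0.05, 0.30, 0.60, 0.90]
lh_facil = [0.10, 0.80, 0.90, 0.95]  # step-like: moderate cloze already near ceiling
rh_facil = [0.05, 0.05, 0.10, 0.90]  # step-like: facilitation only at high cloze
combined = [(l + r) / 2 for l, r in zip(lh_facil, rh_facil)]
# The average rises monotonically with cloze, resembling a single graded mechanism.
```

Neither hypothetical hemisphere tracks cloze probability continuously, yet their sum does, mirroring the inference trap the text describes.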


In sum, the brain seems to deal with the speed and complexity of language processing by “thinking ahead”—that is, generating information about likely upcoming stimuli and preparing ahead of time, at multiple levels, to process them. However, such predictive processing entails costs, as well as benefits, and changes in the availability of cognitive and neural resources with normal aging seem to make prediction more difficult or less likely for many individuals. To address the trade-offs entailed by more top-down, predictive language processing strategies as compared with more bottom-up, integrative ones, the brain may actually implement both in parallel, in a dynamic fashion. In particular, prediction seems to be associated with left hemisphere processing mechanisms, possibly because language comprehension is integrated with language production in the left hemisphere. The right hemisphere, instead, seems to use a more post hoc, interpretive approach and to retain more veridical information about words, affording flexibility and the possibility for recovery when prediction results in errors. By tracking language comprehension through the use of real-time, direct brain measures such as event-related potentials, we are thus beginning to build a picture of when, where, and how the brain comes to build meaning from words.