The neural basis of visual attention


  • James W. Bisley

    1. Department of Neurobiology and Jules Stein Eye Institute, David Geffen School of Medicine at UCLA, and Department of Psychology and the Brain Research Institute, UCLA, Los Angeles, CA 90095, USA

Corresponding author J. Bisley: Department of Neurobiology, PO Box 951763, Los Angeles, CA 90095-1763, USA.


Visual attention is the mechanism the nervous system uses to highlight specific locations, objects or features within the visual field. This can be accomplished by making an eye movement to bring the object onto the fovea (overt attention) or by increased processing of visual information in neurons representing more peripheral regions of the visual field (covert attention). This review will examine two aspects of visual attention: the changes in neural responses within visual cortices due to the allocation of covert attention; and the neural activity in higher cortical areas involved in guiding the allocation of attention. The first section will highlight processes that occur during visual spatial attention and feature-based attention in cortical visual areas and several related models that have recently been proposed to explain this activity. The second section will focus on the parietofrontal network thought to be involved in targeting eye movements and allocating covert attention. It will describe evidence that the lateral intraparietal area, frontal eye field and superior colliculus are involved in the guidance of visual attention, and describe the priority map model, which is thought to operate in at least several of these areas.

James Bisley is an Assistant Professor of Neurobiology at UCLA. In 1998 he received his PhD from the University of Melbourne, Australia. His research uses physiological, psychophysical and computational techniques to study the cognitive processing of visual information, such as attention, short-term memory and spatial processing. His work on the neural mechanisms underlying the allocation of visuospatial attention has been recognized by awards from the McKnight Foundation, the Sloan Foundation and the Klingenstein Foundation.

Visual attention

Although we are not always aware of it, visual attention is incredibly important for visual perception and for all the uses we put perception to, such as learning, memory and our interactions with the visual world. This is highlighted by behavioural protocols such as change blindness tasks, which require subjects to identify changes in a scene but remove the usual cues to those changes (such as movement or sudden onset) by having the entire display flash off and then on again at the time of the change (Rensink, 2000). Subjects are remarkably poor at detecting these changes when broadly viewing the entire scene, but improve dramatically when their attention is allocated to the location of the change. In addition to these types of protocols, the importance of attention is highlighted in patients with parietal lesions, who often show attentional deficits as striking as the inability to notice anything in a particular region of space (Husain & Nachev, 2007; Adair & Barrett, 2008).

Generally we think of focusing our attention onto objects or features in terms of making a rapid eye movement (a saccade) to bring the object of our attention to the centre of gaze. However, our visual system can also process information from selected peripheral regions of the retina. When this is done consciously, it is often described as ‘looking out of the corner of your eye’. As the eyes do not move, this is referred to as covert attention (Posner, 1980). Both covert and overt attention can be allocated voluntarily (commonly referred to as top-down) or involuntarily (bottom-up). Classic examples of bottom-up attractors of attention are flashing or moving objects, such as the lights on top of emergency vehicles. In contrast, top-down attention can refer to any voluntary attention or attention that is driven by factors other than external stimuli.

The effects of attention on neurons in visual cortical areas

In the lab, covert attention provides a number of behavioural benefits. Subjects attending a particular location have perceptual benefits when examining a stimulus at that location, such as improved contrast sensitivity (Cameron et al. 2002) or spatial resolution (Yeshurun & Carrasco, 1999). Furthermore, reaction times are improved, such that subjects respond more rapidly to an event at an attended location than at another location (Posner, 1980). In this section, we briefly describe the effects of attention on neuronal responses in a number of visual areas (light grey regions, Fig. 1) that may underlie these behavioural benefits.

Figure 1.

Illustration of the primate brain
The locations of visual areas containing neurons whose responses are modulated by visual attention are shown in light grey. Cortical areas involved in the allocation of attention and the guidance of eye movements are shown in dark grey. The superior colliculus, also involved in this process, is not visible. MT, the middle temporal area; LIP, the lateral intraparietal area; FEF, the frontal eye field.

Spatial attention Just as eye movements are aimed at particular regions of space, covert attention can also be focused on discrete locations in the visual scene. To examine this, experiments have been performed in which a single stimulus was placed in the receptive field of the neuron being studied and one or more stimuli were placed outside of the receptive field (Fig. 2A, left panel). Responses were then recorded while the animal focused its attention on the stimulus within the receptive field or on a stimulus outside of it. Generally, neurons respond significantly more to the stimulus in the receptive field when it is the locus of the animal’s attention than when it is ignored. These results have been found in a host of visual areas from the lateral geniculate nucleus through to the middle temporal area (MT) and V4 (Motter, 1993; Vidyasagar, 1998; McAdams & Maunsell, 1999b; Treue & Maunsell, 1999; Reynolds et al. 2000; Casagrande et al. 2005; McAdams & Reid, 2005; McAlonan et al. 2008). However, the magnitude of this effect is not particularly strong, and in some studies the effect is almost absent in certain cortical areas (Luck et al. 1997; Seidemann & Newsome, 1999).

Figure 2.

Studying the effects of attention on MT neurons
A, the two main ways that the effects of attention have been studied in visual cortices. Usually at least 1 preferred stimulus (a direction of motion or oriented bar) and 1 non-preferred stimulus (the opposite direction of motion or a bar rotated 90 deg from the preferred orientation) are presented on the screen. Either a single stimulus is placed in the receptive field (left panel), with the other stimulus in an opposite location, or both stimuli are placed in the receptive field (right panel). B, the response of a population of 70 MT neurons under 5 attentional conditions (illustrated on right). The dashed grey oval represents the receptive field (RF), the cross represents the fixation point (FP), and the black cone illustrates the focus of attention. Adapted from Lee & Maunsell (2010) with permission from the Society for Neuroscience.

Striking effects of attention are seen when two stimuli are placed within the neuron's receptive field (Fig. 2A, right panel). The logic underlying these experiments is that as receptive fields get bigger in progressively higher visual areas, mechanisms must still be available to focus attention to a discrete region of space. Thus, these experiments generally place both a preferred and non-preferred stimulus within the receptive field and have the animals attend one of the two stimuli. In addition, extra stimuli are often placed outside of the receptive field, so that the responses to the stimuli within the receptive field can be measured under conditions in which attention is not focused on either. The result of such an experiment is illustrated in Fig. 2B, which shows the mean responses from 70 MT neurons with a single or two stimuli within the receptive field (Lee & Maunsell, 2010). The effects of attention under these conditions are clear; the response of the population is biased towards its normal response to the attended stimulus when it is presented alone. Thus, if the attended stimulus is the non-preferred stimulus (continuous grey trace, Fig. 2B), then the activity is suppressed compared to the non-attended response to both stimuli (dashed black trace, Fig. 2B), and if the attended stimulus is the preferred stimulus, then the response is enhanced (dotted black trace, Fig. 2B). Similar effects have been seen in V2, V4 and in the temporal lobe (Moran & Desimone, 1985; Chelazzi et al. 1993, 2001; Luck et al. 1997; Reynolds et al. 1999; Treue & Maunsell, 1999; Martinez-Trujillo & Treue, 2002).

Feature-based attention When searching for a particular item, we are able to effectively ignore objects that do not share any features with our target. This ability to ignore completely irrelevant distractors while focusing attention on categories of objects based on their composition is termed feature-based attention. Studies examining this form of attention have shown that neurons that preferentially respond to a feature have elevated responses to that feature when it is part of an object within the neuron's receptive field and when that feature is being attended – even if the specific object in the receptive field is not the focus of attention (Motter, 1994; Treue & Martinez Trujillo, 1999; McAdams & Maunsell, 2000; Bichot et al. 2005; Buracas & Albright, 2009). For example, if a subject is looking for a red ball and a red star is in the receptive field of a neuron that has a preference for red objects, then the response will be enhanced compared to if the subject is looking for a blue ball. In this way, the visual system is highlighting objects that are more likely to be the goal of the search and, thus, increasing the efficiency of search.

The response modulations under feature-based attention and spatial attention have been examined together in a number of studies (McAdams & Maunsell, 2000; Busse et al. 2006; Katzner et al. 2009). These studies have found that single neurons in areas MT and V4 exhibit similar modulations due to both feature-based attention and spatial attention, suggesting that the mechanism used to produce the modulation for the two processes may be the same (Katzner et al. 2009).

Activity modulation It is intuitive to imagine that attention effectively shrinks the receptive field to encompass the attended stimulus when multiple objects are in the receptive field (continuous grey or dotted black traces, Fig. 2B). However, understanding what is occurring when only a single stimulus is within the RF is less obvious, because the attentional modulation varies with both the neuronal tuning and stimulus contrast (Martinez-Trujillo & Treue, 2002; Reynolds & Desimone, 2003). Two theories have dominated explanations of the results found when a single stimulus is presented in the receptive field. The first suggests that attention shifts the stimulus–response curve (Fig. 3A; Reynolds et al. 2000). The second suggests that attention modulates the gain of the neural response above baseline (Fig. 3B; McAdams & Maunsell, 1999a,b; Treue & Martinez Trujillo, 1999). One study set out to differentiate between these two possibilities, but found that the data were very well fitted by both models; what variance there was in the neural data did not allow the responses to fit unambiguously into one category or the other (Williford & Maunsell, 2006). However, recent modelling work has suggested that the differences in the changes to the stimulus–response curves may depend on the size of the stimulus relative to the area over which the animal is paying attention (Reynolds & Heeger, 2009). This model predicts a response shift when the stimulus is smaller than the attended area and a response gain when the stimulus is larger than the attended area.

Figure 3.

Theoretical effects of attention on the contrast response function
A, the response shift model predicts that the response to an attended stimulus of a given contrast is the same as the response to an unattended stimulus of higher contrast. Thus, attention is similar to turning up the brightness of the stimulus. B, the response gain model predicts that the response to an attended stimulus of a given contrast is a multiplicative increase in the response to that contrast. Thus, attention is similar to turning up the gain of the response. Note that the response gain model (B) suggests that the peak response to a bright stimulus can be increased, whereas the response shift model (A) does not. Continuous lines, unattended contrast response functions; dashed lines, attended contrast response functions.
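The distinction between the two models can be sketched with a toy Naka-Rushton contrast response function. Every parameter below (maximum rate, semi-saturation contrast, exponent and the two attentional factors) is an illustrative assumption, not a value fitted to data:

```python
import numpy as np

def naka_rushton(c, r_max=100.0, c50=0.3, n=2.0):
    """Toy Naka-Rushton contrast response function (spikes/s)."""
    return r_max * c**n / (c**n + c50**n)

contrasts = np.linspace(0.01, 1.0, 100)
unattended = naka_rushton(contrasts)

# Response shift model: attention acts like an increase in effective
# contrast, modelled here by lowering the semi-saturation constant.
shift = naka_rushton(contrasts, c50=0.3 / 1.5)

# Response gain model: attention multiplicatively scales the output.
gain = 1.3 * naka_rushton(contrasts)

# Only the gain model substantially raises the response to a near-saturating
# stimulus; the shift model converges back towards the unattended curve at
# high contrast, while producing its largest relative effect at low contrast.
assert gain[-1] > shift[-1] > unattended[-1]
assert shift[10] / unattended[10] > gain[10] / unattended[10]
```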

A number of recently published models attempt to explain all attentional modulation using a response normalization mechanism. These models explain the response modulations seen with both one and two stimuli within neurons’ receptive fields by implementing either an input gain (Ghose, 2009) or a normalization of the output signal (Lee & Maunsell, 2009; Reynolds & Heeger, 2009). Although differing in subtle ways, these models show that a unifying mechanism is likely used to modulate activity under most, if not all, attentional conditions (Lee & Maunsell, 2010). These models are a welcome consensus in the field, as they meld two sets of data that many had treated as separate problems to be solved.
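The normalization models differ in detail, but their shared logic can be illustrated with a minimal divisive-normalization sketch. The drives, attentional gain and semi-saturation constant below are invented for illustration; this is not the specific implementation of any of the cited models:

```python
import numpy as np

def normalized_response(drives, attn_gain, sigma=0.1):
    """Toy divisive-normalization response to stimuli in one receptive field.

    drives: excitatory drive of each stimulus (preferred ~1, non-preferred ~0.2).
    attn_gain: multiplicative attention weight per stimulus (1 = unattended).
    """
    drives = np.asarray(drives, dtype=float)
    attn_gain = np.asarray(attn_gain, dtype=float)
    # Attended stimuli contribute more to both the excitatory drive (numerator)
    # and the suppressive pool (denominator).
    return np.sum(attn_gain * drives) / (sigma + np.sum(attn_gain))

PREF, NONPREF = 1.0, 0.2   # hypothetical drives for the two stimuli
BETA = 4.0                 # hypothetical attentional gain

pref_alone = normalized_response([PREF], [1.0])
pair_unattended = normalized_response([PREF, NONPREF], [1.0, 1.0])
pair_attend_pref = normalized_response([PREF, NONPREF], [BETA, 1.0])
pair_attend_nonpref = normalized_response([PREF, NONPREF], [1.0, BETA])

# Attention biases the paired response towards the response that the attended
# stimulus would evoke on its own (cf. Fig. 2B).
assert pair_attend_pref > pair_unattended > pair_attend_nonpref
```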

Other effects of attention In addition to modulating firing rate, attention has been found to influence neural activity in other ways. Focusing attention increases the synchronization of local field potentials (LFPs; thought to represent synaptic input) in V4 (Fries et al. 2001, 2008) and increases the synchronization between the LFP and the spiking output (Bichot et al. 2005; Fries et al. 2008). Attention also produces more reliable responses in putative interneurons in V4, as indicated by a reduction in the variance (Mitchell et al. 2007). There is also recent evidence from V4 that attention reduces interneuronal correlations (Cohen & Maunsell, 2009), which would improve the population encoding of information by reducing correlated noise that would otherwise remain after pooling. Finally, when attention is allocated to a stimulus close to, but not within, the receptive field, then the centre of the receptive field appears to shift towards the attended location (Connor et al. 1996, 1997; Womelsdorf et al. 2006). It is thought that this may be related to the changes in centre–surround interactions seen due to attention (Sundberg et al. 2009) and, consistent with the normalization models, may be due to multiplicative attentional modulation at earlier stages within the visual system (Womelsdorf et al. 2008).
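The benefit of reducing interneuronal noise correlations follows from the standard expression for the variance of a pooled average; the population sizes and correlation values used below are illustrative only:

```python
import numpy as np

def pooled_variance(n_neurons, var, rho):
    """Variance of the mean of n neurons with equal single-neuron variance
    `var` and a uniform pairwise noise correlation `rho`."""
    return var * (1.0 + (n_neurons - 1) * rho) / n_neurons

# With independent noise (rho = 0) the pooled variance falls as 1/n, but any
# shared noise leaves a floor near var * rho that averaging cannot remove.
assert pooled_variance(10_000, 1.0, 0.2) > 0.19
assert pooled_variance(10_000, 1.0, 0.05) < pooled_variance(10_000, 1.0, 0.2)

# Quick Monte Carlo check of the formula for a small population.
rng = np.random.default_rng(0)
n, rho = 50, 0.2
cov = (1 - rho) * np.eye(n) + rho * np.ones((n, n))
samples = rng.multivariate_normal(np.zeros(n), cov, size=20_000)
empirical = samples.mean(axis=1).var()
assert abs(empirical - pooled_variance(n, 1.0, rho)) < 0.02
```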

Mechanisms underlying the allocation of attention

Thus far, we have talked about the ways in which attention modulates the activity of neurons in visual cortices. Now we will describe the mechanism thought to guide the allocation of attention. Although we will focus on results from monkeys, there is a growing literature in humans based on lesion, TMS and functional imaging studies (Corbetta et al. 2008), which shows that the animal model is providing a good foundation for our understanding of these processes in the human.

The parietofrontal network In both humans and monkeys, the guidance of attention appears to be mediated by a network involving parietal and frontal areas, as well as the superior colliculus (SC). In the monkey, it is thought that the lateral intraparietal area (LIP) and the frontal eye field (FEF) are the specific regions within the parietal and frontal cortices, respectively, that are involved in this process (Fig. 1). Anatomically, these areas are all interconnected with each other and with visual cortical areas (Stanton et al. 1988, 1995; Andersen et al. 1990; Blatt et al. 1990; Tanaka et al. 1990), so they are ideally suited for collecting visual information and feeding back to visual areas in order to guide attention. It is notable that most of these areas are also involved in the processing of eye movements, which are really just overt shifts in attention. Thus, it is likely that the same network guides both overt and covert attention (Bisley & Goldberg, 2010).

Neural evidence of attentional allocation by the parietofrontal network As it is difficult to know exactly where an animal's focus of covert attention is, there are only a limited number of studies that have explicitly related the activity of neurons to a qualitatively defined locus of attention. This has been done explicitly in LIP (Bisley & Goldberg, 2003; Herrington & Assad, 2010) and SC (Ignashchenkova et al. 2004). However, the use of visual search tasks, in which subjects must covertly shift their attention around an array of objects to find a target without moving their eyes, gives an indirect measure of attention that can be used to identify attention-related activity. Such studies have found covert attention-related activity in LIP (Thomas & Pare, 2007; Ipata et al. 2009), SC (McPeek & Keller, 2002b) and FEF (Juan et al. 2004; Thompson et al. 2005; Buschman & Miller, 2007). Indeed, one recent study has found evidence showing what appears to be a neural correlate of shifting covert attention in the activity of FEF neurons during such search (Buschman & Miller, 2009). In this study, the monkeys appeared to have a tendency to covertly scan the array in a clockwise direction (Fig. 4A). Figure 4B shows the responses of FEF neurons sorted based on where their receptive fields were relative to the target location. When the target was in the neuron's preferred location, the response was elevated just prior to the animal's signal that he had found the target (indicated by a saccade). When the target was clockwise to the neuron's preferred location, the response was elevated earlier – as if the animal checked the stimulus in the neuron's preferred location and then moved on to the next stimulus where it found the target. A similar, but more exaggerated effect was seen when the target was opposite the RF – as if the animal checked the stimulus in the neuron's preferred location and then checked the next one before finding the target at the third location. 
This result suggests that the activity in FEF and, by extension, the parietofrontal network, guides covert attention on a moment-by-moment basis.

Figure 4.

Evidence of shifting covert attention in frontal eye field (FEF)
A, in this experiment, monkeys had to fixate on a small point (the fixation point, FP) and covertly find a target amongst 3 distractors. Based on the behavioural data, the authors found that the monkeys were serially attending the stimuli in a clockwise direction (illustrated by the dashed line). B, the responses (illustrated as z-scores by the colour coding) of FEF neurons are aligned on the onset of the choice saccade and are sorted based on where the target was relative to the preferred location (RF). Asterisks show significance with Bonferroni correction; dots show uncorrected significance. Adapted from Buschman & Miller (2009) with permission from Elsevier.

To test whether this is the case, several studies have recorded the activity in the parietofrontal network and in a mid-level visual area. They have found that attention leads to enhanced synchrony in responses across the areas (Saalmann et al. 2007; Gregoriou et al. 2009). These data suggest that neurons in the parietofrontal network can modulate the activity in visual areas and are consistent with the hypothesis that these areas guide the allocation of covert attention.

Visual search paradigms can also be used to relate neural activity to saccade generation and, thus, overt attention. The principle of these studies is that if the neural activity is used to guide eye movements, then once a target has been selected by these neurons, a relatively fixed time should elapse before an eye movement is triggered to the location represented by the neurons. We can think of this in the following way: once the neuron selects the target, it tells the oculomotor system what to aim for, and the remaining time is the fixed delay the oculomotor system needs to generate a saccade. Such a relationship between saccade goal selection and saccadic onset has been seen in LIP (Ipata et al. 2006a; Thomas & Pare, 2007), FEF (Bichot et al. 2001; Sato et al. 2001) and SC (McPeek & Keller, 2002a; Shen & Pare, 2007). Thus, the same areas that appear to be involved in guiding covert attention are also involved in providing targeting data to the oculomotor system for overt attention.

Causal evidence of attentional allocation by the parietofrontal network In addition to correlations of neural activity with measures of behaviour, a growing number of studies have found that inactivation or microstimulation of single areas within the parietofrontal network biases the allocation of attention. At their coarsest, these methods ask whether artificially increasing or decreasing the activity within an area has an effect on the allocation of attention. But they also allow the testing of specific hypotheses: one can make a prediction about what should happen to the allocation of attention after stimulation or inactivation and see whether the effects match the prediction.

Microstimulation involves injecting low current levels into clusters of physiologically defined neurons. Microstimulation of LIP has been shown to bias orienting to a flashed stimulus (Cutrell & Marrocco, 2002), while stimulation of the SC has biased performance in several attention-demanding tasks (Cavanaugh & Wurtz, 2004; Muller et al. 2005). The most impressive data from microstimulation come from a series of studies in which Moore and colleagues have stimulated FEF, using currents that are too low to induce eye movements. These studies have shown that microstimulation of FEF improves stimulus detection behaviourally (Moore & Fallah, 2001) and that microstimulation induces response changes in V4 neurons reminiscent of the modulations seen when attention is allocated to those neurons (Moore & Armstrong, 2003; Armstrong et al. 2006). One of the downsides of microstimulation experiments is that it is possible that the behavioural or neural effects are primarily due to remote stimulation of other more relevant areas via antidromic or orthodromic stimulation of neurons connecting to the stimulated cluster, or even via stimulation of axons that pass close to the stimulated area, but are not connected to it. Given the robust changes elicited by Moore and colleagues and the fact that the regions within the parietofrontal network appear to work together, it is generally accepted that their data are very strong evidence for this network playing a role in the allocation of attention.

The role of an area can also be tested by artificially reducing the activity of neurons within that area. In this field, this is most commonly done with muscimol, which inhibits neurons with GABAA receptors for hours at a time. This removes the main concern of more classical lesion studies, which is that any roles the area may play could be masked by retraining or by redundant circuits taking over during the recovery phase. Performance in visual search requiring shifts in covert attention has been affected by inactivation of LIP (Wardak et al. 2004; Balan & Gottlieb, 2009; Liu et al. 2010), FEF (Wardak et al. 2006; Monosov & Thompson, 2009) and SC (McPeek, 2008). In addition, Lovejoy & Krauzlis (2010) have shown that cued covert attention is disrupted by inactivation of the SC (Fig. 5). In this study, monkeys were cued by a red ring to attend a particular location in space and, when a pulse of motion occurred in that location, they had to indicate the direction of motion. Further, the monkeys had to ignore a distractor pulse of motion that was given in the opposite location. When the cue was in the intact quadrant (right panel), the animals’ performance was good and consistent both before and during inactivation. On the opposite side, performance was good before inactivation (left panel), but when the cue was in the inactivated quadrant during inactivation, the animal often indicated the direction of the motion pulse in the distractor location (yellow points) rather than the cued location (red points). This means that during inactivation the animals often attended the incorrect location. As the monkeys had to indicate the direction of motion by a saccade or by a button press (in different experiments), the authors concluded that the attentional deficit was not related to a deficit in eye movement generation.
Thus, these data strongly suggest that the SC is involved in allocating or maintaining guided covert attention and, together with the neural correlations and microstimulation studies described above, imply that the parietofrontal network, in which we have included the SC, guides visual attention.

Figure 5.

Inactivation of the superior colliculus (SC) creates a deficit in the allocation of attention
In this experiment, monkeys were cued by a red circle to pay attention to a stimulus in a region of visual space represented by inactivated SC neurons (left panel) or in the opposite location (right panel). The animals had to indicate the direction of a pulse of motion in the cued location (red) and ignore a pulse of motion in the opposite location (yellow). These graphs show the proportion of choices made when the monkey correctly indicated the direction of the cued motion pulse (red points), indicated the direction of the motion pulse in the distractor location (yellow points) and did not indicate either of these directions (grey) before and after an injection of muscimol. Adapted from Lovejoy & Krauzlis (2010) with permission from Macmillan Publishers Ltd: Nature Neuroscience, ©2010.

The priority map hypothesis Many results from recording, inactivation and microstimulation studies are consistent with the idea that the parietofrontal network may operate as a priority map. A priority map is a representation of the visual world in which items, objects or locations are represented by activity that is proportional to their attentional priority. The hypothesis is based on the saliency map model (Itti & Koch, 2001); however, we use the term priority to remove any implication that the responses are due to bottom-up inputs alone (Serences & Yantis, 2006). The attentional priority is some combination of bottom-up input and top-down influences, which include task goals, evaluation of importance and personal biases (Fig. 6). The hypothesis is that eye movements are made to the peak of the map at the time they are programmed and that, on a moment-to-moment basis, covert attention is also allocated to the peak of the map. This latter statement, supported by some neurophysiological data (Bisley & Goldberg, 2003, 2006), is somewhat contentious because subjects can generally split their attention over long periods of time. However, it is not clear whether this splitting is due to a single spotlight shuttling between locations or due to an actual splitting of attentional resources.

Figure 6.

Theoretical priority map response during visual search
A, stimulus arrangement and eye movements (white dashed lines) in a hypothetical visual search task. B, theoretical responses on a priority map to the search performed in A. Red stimuli are represented by low activity, blue stimuli are represented by higher activity. Bars oriented the same way as the target are represented by greater activity than bars that are not. The bright yellow pop-out stimulus is represented by elevated activity due to its inherent salience. The middle blue bar that had just been fixated is represented by reduced activity.

It is likely that a priority map operates in FEF and SC, or within the parietofrontal network as a whole; however, the majority of research studying this hypothesis has been performed on LIP. These studies have shown that LIP activity has a bottom-up component (Roitman & Shadlen, 2002; Balan & Gottlieb, 2006) and top-down influences. The top-down inputs can enhance activity due to task demands (Gottlieb et al. 1998; Ipata et al. 2009), reward expectation (Platt & Glimcher, 1999; Dorris & Glimcher, 2004; Sugrue et al. 2004) and motor plans (Gnadt & Andersen, 1988), and can suppress responses to salient, but behaviourally distracting, stimuli (Ipata et al. 2006b) and to stimuli that have been judged as unimportant after being foveally examined (Mirpour et al. 2009). All these inputs and influences appear to be summed almost linearly (Ipata et al. 2009) and the output predicts saccadic latency (Ipata et al. 2006a; Thomas & Pare, 2007), the shifting time of covert attention (Bisley & Goldberg, 2003, 2006; Herrington & Assad, 2010) and monkeys’ decisions on two-alternative forced choice tasks (Roitman & Shadlen, 2002). Thus, the concept of a priority map for visual attention is particularly appealing because it explains the varied roles that had been assigned to LIP (Bisley & Goldberg, 2010) while incorporating a strong and plausible model of attentional allocation (Itti & Koch, 2001). Indeed, the model is part of the foundation for an exciting series of experiments which appear to have successfully aided patients with spatial impairments due to parietal lesions (Bays et al. 2010).
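The near-linear summation of inputs onto such a map can be sketched as follows. The items, weights and component values are hypothetical, chosen only to mirror the kinds of influences listed above:

```python
import numpy as np

# Hypothetical priority map over 4 items in a search array; all numbers are
# invented for illustration.
bottom_up = np.array([0.2, 0.9, 0.2, 0.2])        # item 1 is a salient pop-out
task_relevance = np.array([0.6, 0.0, 0.6, 0.6])   # items sharing target features
reward_expectation = np.array([0.1, 0.1, 0.1, 0.5])
inhibition = np.array([0.0, 0.0, -0.8, 0.0])      # item 2: examined and rejected

# The inputs appear to be summed almost linearly (Ipata et al. 2009).
priority = bottom_up + task_relevance + reward_expectation + inhibition

# The peak of the map predicts the saccade goal and the locus of covert attention.
locus = int(np.argmax(priority))
assert locus == 3  # the task-relevant, high-reward item wins
```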



J.W.B. is supported by a Klingenstein Fellowship Award in the Neurosciences, an Alfred P. Sloan Foundation Research Fellowship, a McKnight Scholar Award, and the National Eye Institute (R01 EY019273).