Cognitive Neuroscience: The Troubled Marriage of Cognitive Science and Neuroscience


Correspondence should be sent to Dr. Richard P. Cooper, Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK. E-mail:


We discuss the development of cognitive neuroscience in terms of the tension between the greater sophistication in cognitive concepts and methods of the cognitive sciences and the increasing power of more standard biological approaches to understanding brain structure and function. There have been major technological developments in brain imaging and advances in simulation, but there have also been shifts in emphasis, with topics such as thinking, consciousness, and social cognition becoming fashionable within the brain sciences. The discipline has great promise in terms of applications to mental health and education, provided it does not abandon the cognitive perspective and succumb to reductionism.

1. The emergence of cognitive neuroscience

Thirty years ago cognitive neuroscience was beginning to emerge as a new field of research as the cognitive revolution began to interact with what were becoming the neurosciences. At the time, behavioral neuroscience was well established as physiological psychology and systems neuroscience was forming out of the interactions between physiology, anatomy, and psychology. The former focused on brain–behavior relations, whereas the latter was concerned with the neural circuits involved in specific structures (such as the hippocampus). Both subdisciplines would come to play key roles in cognitive neuroscience, yet their primary sources of evidence came from animal studies, and although there was substantial interest in learning, cognitive concepts were peripheral. A major transmission route for such concepts into neuroscience was, however, provided by cognitive neuropsychology. The deficits of neuropsychological patients had begun to be related to emerging cognitive theories of normal function during the mid-1960s (Marshall & Newcombe, 1966), and by the time cognitive neuroscience emerged in the late 1970s, the information-processing conceptual framework was widely applied within cognitive neuropsychology.

Embryonic cognitive neuroscience was, then, empirically the product of three approaches—physiological psychology, systems neuroscience, and cognitive neuropsychology. Each discipline brought to the emerging field an established, and expanding, knowledge base. Within systems neuroscience, significant progress had been made in the 1960s and early 1970s in understanding early sensory processing, largely due to Hubel and Wiesel’s pioneering investigations of the receptive fields of cells in the primary visual cortex of cats and macaque monkeys (see Hubel & Wiesel, 1977). However, as this work primarily involved anaesthetized animals, it was not informed by cognitive concepts. A critical development was the application of single-cell recording to behaving animals in research that spanned the behavioral and systems approaches, some of which was directly influenced by contemporary cognitive psychology. One key early strand was the work of Fuster and Alexander (1971), who related the firing of neurons in monkey prefrontal cortex during a delayed response task to the cognitive concept of short-term memory. At the same time, O’Keefe and Dostrovsky (1971) discovered place cells in the hippocampus, and this led O’Keefe, together with Nadel (1978), to propose that the hippocampus carries a cognitive map of the environment.

Behavioral neuroscience brought with it an increasing knowledge of the functional roles of various subcortical structures and neurotransmitters. Wise and colleagues, for example, had proposed a role for dopamine in the reward system (Wise, Spindler, De Wit, & Gerberg, 1978; Yokel & Wise, 1975), whereas more explicitly cognitive concepts had been invoked by Mason and Lin (1980), who had suggested that noradrenaline played a role in ignoring irrelevant stimuli and hence in selective attention. In general, three methodological approaches were standard within the field: chemical studies, lesion studies, and the recording of electrical potentials at the scalp. Of these methods, only the last (in the form of electroencephalography—EEG—and event-related potentials—ERP) was suitable for general use with human subjects, but at that time it was not widely used to study cognitive processes (though see Hillyard, Hink, Schwent, & Picton, 1973, for an early use of ERP in a study of selective attention with humans). These methods began to have a much greater influence in the 1980s and 1990s, when they also began to be applied to higher level cognitive topics, as in the discovery of the semantic anomaly effect in the N400 waveform, a negativity peaking around 400 ms after stimulus onset (Kutas & Hillyard, 1984).

Much of the cognitive strand of neuroscience-related work at the time was in neuropsychology, in a revolution stimulated by the return of the single case study (e.g., Scoville & Milner, 1957), scathingly dismissed as unscientific in the early 20th century (e.g., Head, 1926), but now applied using more rigorous empirical methodology. By the 1970s, it was being applied within the theoretical framework of the information-processing conception of mental processes (e.g., Shallice & Warrington, 1970). By the end of that decade, considerable progress had been made on a variety of fronts. The reading system had been fractionated into a set of component processes (see, e.g., Coltheart, Patterson, & Marshall, 1980). Much was known about the different levels of process involved in object recognition and how they might break down (Warrington & Taylor, 1978). Similarly, several systems involved in memory processes, relating to episodic memory, semantic memory, auditory-verbal short-term memory, visual short-term memory, and priming, had been isolated (see, e.g., Kinsbourne & Wood, 1975; Warrington, 1975). Much of this work was influenced by the conceptual frameworks being developed by Marr within cognitive science and Tulving within cognitive psychology, but the transfer of ideas was bidirectional, with cognitive neuropsychological studies also leading to major theoretical developments within more mainstream cognitive psychology (e.g., Baddeley & Hitch, 1974).

It is clear then that by the 1980s cognitive concepts had taken hold within cognitive neuropsychology, and that the transfer of those concepts into neuroscience had begun. The process was not just one of osmosis, however. The neurosciences needed cognitive concepts if they were to address what some neuroscientists saw as the most interesting questions. As Gazzaniga (2000, p. 13) puts it, “neuroscience needed cognitive science … because … the molecular approach [i.e., the approach of the biologist wishing to understand the behavior of inanimate matter in living systems] in the absence of the cognitive context limited the fashionable neuroscientist to pursuing answers to biologic questions in a manner not unlike that of the kidney physiologist. … Such approaches … make it impossible for the neuroscientist to attack the central integrative questions of mind-brain research.”

2. The development of cognitive neuroscience

Not all neuroscientists shared Gazzaniga’s interest in “the central integrative questions of mind-brain research.” Very rapid advances have continued to be made at the cellular and systems levels. Within cognitive neuropsychology progress was rather slower. However, the single-case study approach, combined with developing theory within cognitive psychology, has yielded a more detailed understanding of the functional architecture underlying a range of cognitive processes (see Rapp, 2001; Caramazza & Coltheart, 2006; but also Patterson & Plaut, 2009). The group study approach has also seen a resurgence (see e.g., Stuss & Alexander, 2007) and new structural imaging methodologies have brought the brain back into neuropsychology (Rorden, Karnath, & Bonilha, 2007). In behavioral neuroscience, too, there has been much progress, for example, in understanding the role and function of subcortical structures (e.g., relating the amygdala to anxiety and the processing of emotional events: Phillips & LeDoux, 1992; Phelps & LeDoux, 2005), the mechanisms that support neural plasticity and their consequences (Kolb, Gibb, & Robinson, 2003), and the effects of specific genes on brain function (Holmes, 2001), to list just three broad areas.

Within cognitive neuroscience proper, however, only a few would dispute that the single most influential advance, in terms of its potential for understanding cognition, has been the advent and development of brain-imaging techniques and methodologies. These techniques were borrowed from medicine, where they were being developed to image a variety of internal organs. Initially, they were able to image only brain structure, but in the 1980s Roland and colleagues (e.g., Roland, Meyer, Shibasaki, Yamamoto, & Thompson, 1982) and Fox and Raichle (1984) used positron emission tomography (PET) to measure differences in regional cerebral blood flow (rCBF) during cognitive processing. Critically, it was argued that rCBF changes in brain regions mirrored the functioning of those regions during cognitive tasks. PET was dependent on the use of radioactive tracers. A second imaging technique, functional magnetic resonance imaging (fMRI), developed in the early 1990s, overcame some of the problems arising from this dependence. Most critically, fMRI detects differences in blood oxygenation, a variable that relates in a complex and as yet not well understood way to neural activity (Goense & Logothetis, 2008), without the need for a radioactive tracer. Many refinements in fMRI analysis and experimental design—the latter often influenced by experimental psychology—have, over the past 20 years, greatly extended the utility of brain imaging. Critical were the adoption of statistically standardized methods of analysis and reporting, in particular statistical parametric mapping (Friston et al., 1995), the adoption of standardized brain templates beginning with the Talairach atlas (Talairach & Tournoux, 1988), and the use of event-related designs that have made it possible to track neural activity over similar trials within a block.
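The core logic of the statistical parametric mapping approach can be sketched in a few lines: fit a general linear model to each voxel’s time series and compute a t-statistic for a contrast of interest. The following is a toy illustration only, with entirely synthetic data (the block design, effect size, and noise level are invented for the example), not a description of any published pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy block design: five cycles of 10 "rest" scans followed by 10 "task" scans.
n_scans = 100
task = np.tile(np.concatenate([np.zeros(10), np.ones(10)]), 5)

# Design matrix: the task regressor plus a constant (baseline) column.
X = np.column_stack([task, np.ones(n_scans)])

# Two synthetic voxel time series: one responds to the task, one does not.
active = 2.0 * task + rng.normal(0.0, 1.0, n_scans)
inactive = rng.normal(0.0, 1.0, n_scans)

def glm_t(y, X, contrast):
    """Least-squares fit of y on X; t-statistic for a contrast of the betas."""
    beta, residual_ss, _, _ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    sigma2 = residual_ss[0] / dof  # residual variance estimate
    var_con = sigma2 * contrast @ np.linalg.inv(X.T @ X) @ contrast
    return (contrast @ beta) / np.sqrt(var_con)

contrast = np.array([1.0, 0.0])  # test the task effect against baseline
print("active voxel t =", glm_t(active, X, contrast))
print("inactive voxel t =", glm_t(inactive, X, contrast))
```

Applied voxel by voxel across the brain (with corrections for multiple comparisons and for the hemodynamic response that this sketch omits), such t-maps are the "statistical parametric maps" of the approach.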

There have been many other significant technological advances: in techniques that allow recordings to be made from many neurons simultaneously, in the use of magnetoencephalography (the magnetic equivalent of EEG) and high-density ERP, in the development of transcranial magnetic stimulation (TMS, which allows safe, temporary “virtual lesions” to be created in normal subjects through the use of a strong, focused magnetic field), and in the development of a range of further imaging techniques (e.g., near infrared spectroscopy—NIRS). The existence of this range of techniques is valuable because each has different properties with respect to the degree of temporal and spatial localization of neural activity. At the same time, the range and costs of the techniques now available have greatly affected the sociology of cognitive neuroscience—a point we return to below.

A further development that has had a large impact on part of the field is the rise of computational modeling. Connectionism found a natural home in cognitive neuropsychology, where early successes included the modeling of reading deficits typical of some acquired dyslexias (Plaut & Shallice, 1993; Plaut, McClelland, Seidenberg, & Patterson, 1996; though see Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001). Modeling has also played an important role in both systems neuroscience and behavioral neuroscience, and modelers with a concern for neurobiological data have developed increasingly complex computational accounts of subcortical structures (e.g., Gurney, Prescott, & Redgrave, 2001; Rolls & Kesner, 2006) and circuits within the prefrontal cortex (e.g., Braver, Barch, & Cohen, 1999; O’Reilly & Frank, 2006). So far, though, connectionism has made far less contact with fMRI than with neuropsychology, as connectionist models generally do not map naturally onto anatomical localization.
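The lesioning logic behind such modeling can be illustrated with a deliberately minimal sketch (not any of the cited models): a one-layer pattern associator is trained with the delta rule and then “lesioned” by silencing a random subset of its connections, degrading performance gracefully rather than abolishing it. All patterns and parameters here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pattern-association task: 20 random binary input patterns mapped to
# 20 random binary output patterns (a drastic simplification of, e.g.,
# mapping orthography to phonology).
n_patterns, n_in, n_out = 20, 30, 30
inputs = rng.integers(0, 2, (n_patterns, n_in)).astype(float)
targets = rng.integers(0, 2, (n_patterns, n_out)).astype(float)

def forward(x, W):
    return 1.0 / (1.0 + np.exp(-(x @ W)))  # logistic output units

def mean_sq_error(W):
    return float(np.mean((targets - forward(inputs, W)) ** 2))

# Train with the delta rule (batch weight updates).
W = np.zeros((n_in, n_out))
for _ in range(500):
    W += 0.05 * inputs.T @ (targets - forward(inputs, W))

intact_error = mean_sq_error(W)

# "Lesion" the network by silencing a random 40% of its connections.
lesion_mask = (rng.random(W.shape) > 0.4).astype(float)
lesioned_error = mean_sq_error(W * lesion_mask)

print("intact error:", intact_error)
print("lesioned error:", lesioned_error)
```

In neuropsychological simulations, the interest lies in the *pattern* of errors such damage produces, and in whether it matches the error pattern of a given patient group.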

How have these methodological developments affected the field? In some ways, progress appears to be very much faster. The range of techniques available within fMRI continues to expand year on year. More critically, in areas such as “thinking,” which were on the fringes of interest as far as brain processes were concerned, cognitive neuroscience methods have become standard. At the same time, new areas such as executive functions/cognitive control, social cognition, and neuroeconomics have proliferated. And in many respects, this represents genuine progress, not just fashion. Consider consciousness, which has now become almost a hackneyed topic. Neuropsychological studies have revealed numerous dissociations between stimulus-dependent behavior and awareness of a stimulus, beginning with blindsight (Pöppel, Held, & Frost, 1973; Weiskrantz, Warrington, Sanders, & Marshall, 1974), but extending through various forms of agnosia and alexia (see Farah, 2001, for a review). Results from these studies have been combined with computational insights (concerning so-called blackboard architectures for sharing information between subsystems) and perspectives from systems neuroscience to produce empirically testable computational models of consciousness (e.g., Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006), a possibility that was hardly conceivable 10 years ago.

3. The future of cognitive neuroscience

Cognitive neuroscience has made great progress over the last 30 years, and the rate of progress is not obviously slowing. Technically, it is now possible to combine fMRI, for instance, with other methods, including TMS (Sack et al., 2007), EEG (Ritter & Villringer, 2006), and NIRS (Strangman, Culver, Thompson, & Boas, 2002). There is also increasing use of techniques such as diffusion weighted imaging (e.g., Rushworth, Behrens, & Johansen-Berg, 2006) and dynamic causal modeling (Friston, Harrison, & Penny, 2003) to establish effective connectivity within the brain, and the application of priming (Kouider, Dehaene, Jobert, & Le Bihan, 2007) and multivariate pattern recognition (Haxby et al., 2001) is beginning to enable us to probe the inner workings of modules. These techniques are essential if we are to go beyond the localization of specific putative functions and understand interactions within and between brain systems. Further progress is likely to involve the integration of developments in additional fields, as cellular and genetic advances become linked to our understanding of brain and cognitive processes.
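The idea behind multivariate pattern recognition can be sketched briefly: rather than asking whether a region’s mean activity differs between conditions, one asks whether the distributed pattern of activity across voxels discriminates them. The toy example below uses synthetic data and a simple correlation-based classifier, in the spirit of (but far simpler than) the split-half correlation analysis of Haxby et al.; all numbers are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_voxels = 50
# Two condition "templates" of voxel activity; trials are noisy copies.
templates = rng.normal(0.0, 1.0, (2, n_voxels))

def make_trials(cond, n):
    return templates[cond] + rng.normal(0.0, 1.0, (n, n_voxels))

# Training data: the mean voxel pattern for each condition.
train = [make_trials(c, 20).mean(axis=0) for c in (0, 1)]
# Independent test trials, each labeled with its true condition.
test_trials = [(c, t) for c in (0, 1) for t in make_trials(c, 20)]

def classify(trial):
    # Assign the trial to whichever condition's mean pattern it
    # correlates with most strongly.
    r = [np.corrcoef(trial, train[c])[0, 1] for c in (0, 1)]
    return int(np.argmax(r))

accuracy = float(np.mean([classify(t) == c for c, t in test_trials]))
print("classification accuracy:", accuracy)
```

Above-chance classification of independent trials indicates that the distributed pattern carries condition information, even when the region’s mean response does not differ between conditions.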

There are dangers, however, which threaten to derail progress. Indeed, a group of cognitive neuropsychologists have argued that functional imaging, for all its technical sophistication, has failed to lead to any increased understanding at the cognitive level of analysis (Coltheart, 2006; Harley, 2004). Thus, Coltheart has posed a challenge to cognitive neuroscientists: to provide examples where definitive answers to open theoretical questions have been given by functional imaging evidence. So far, there has been no conclusive response. We believe that intellectually this perspective on functional imaging is too limited. However, we believe that there are also grave sociological dangers, possibly exemplified by the division between the Cognitive Science and Cognitive Neuroscience Societies. The complexity of the mind, the brain, and the relation between the two means that approaching any research question from a single perspective limits the inferences that may be made. More generally, the relatively weak power of behavioral data means that any development based on a single result or method is likely to be open to multiple interpretations. It is this weak power of behavioral evidence that gives Coltheart’s arguments their apparent force, as functional imaging relies on appropriate behavioral tasks, just as do other cognitive investigations. Thus, functional imaging is likely to provide only one of a set of converging lines of evidence necessary to resolve open theoretical issues.

The biological roots of neuroscience mean that it tends not to view the mind from the perspective of Marr’s (1982) most critical level—how it functions as a generative information-processing and knowledge-producing machine. The field risks being driven too much by the technically sweet possibilities that arise from new and improved technologies, rather than by scientific questions aimed at teasing apart different cognitive theories or extending our cognitive understanding of specific processes. This threat is compounded by a reductionist risk, namely that the concepts and theoretical progress of the cognitive revolution are forgotten as teams with different types of empirical expertise attempt to reverse engineer the brain through increasingly sophisticated techniques. In our view, if cognitive neuroscience focuses too much on the “neuroscience” at the expense of the “cognitive” then not only will its contribution to cognitive science be marginalized, but also it will be unable to make sense of the increasing masses of brain-based data now being generated. The use of well-designed cognitive tasks based on cognitive theory will be critical (see Bechtel, 2008).

There are also sociological, ethical, and legal issues that need to be confronted. On the sociological front, the rise of combined methods (simultaneous fMRI and EEG, etc.) continues the push toward “big science.” As research becomes more resource intensive, it becomes more centralized and, as a consequence, probably less cognitive. In physics, it has been argued that the centralization of one dominant view has led to an environment in which novel ideas are discouraged, resulting in the first generation for 200 years in which there has been no major breakthrough (Smolin, 2006). On the ethical and legal side, questions are raised by the increasing possibility of using neuroimaging techniques to detect otherwise private mental phenomena (including mental health conditions, prejudice, and deception) for business, forensic, or political purposes (see, e.g., Tovino, 2007).

Despite these concerns, there remains a host of scientific questions that a genuinely cognitive neuroscience is well placed to answer over the coming decades. There is still no single area of cognition for which we have a standardly accepted theoretical account, even 60 years after the beginning of the component disciplines of cognitive science. Take as an example an apparently simple domain such as working memory, which has a variety of different theoretical conceptions (Miyake & Shah, 1999). If applied in a nonreductionist style, cognitive neuroscience, despite the dangers sketched above, offers the potential to enable cognitive science to achieve the status of a Kuhnian normal science by establishing a generally agreed theoretical framework. From this would flow a variety of theoretical possibilities and socially important applications.


We are grateful to Max Coltheart, Lawrence Barsalou, and one anonymous reviewer for constructive comments on an earlier version of this article.