Cognitive Control: Easy to Identify But Hard to Define

Correspondence should be sent to J. Bruce Morton, Department of Psychology, Graduate Program in Neuroscience, University of Western Ontario, Westminster Hall, 3rd Floor, London, ON N6A 3K7, Canada. E-mail: bmorton3@uwo.ca

Abstract

Cognitive control is easy to identify in its effects, but difficult to grasp conceptually. This creates something of a puzzle: Is cognitive control a bona fide process or an epiphenomenon that merely exists in the mind of the observer? The topiCS special edition on cognitive control presents a broad set of perspectives on this issue and helps to clarify central conceptual and empirical challenges confronting the field. Our commentary provides a summary of, and critical response to, each of the papers.

The impact of cognitive control is relatively easy to identify. It is uncontroversial, for example, that human behavior is not obligatory but can be flexibly adapted given subtle changes in context, and that this ability is critical for psychological health, learning, and everyday psychological functioning. The nature of cognitive control, however, is much more difficult to pin down. As Cooper asks so provocatively: Is cognitive control reducible to a set of dedicated mechanisms/processes, or is it an emergent product of things more basic, such as memory and learning? And if cognitive control is irreducible, what are the basic inputs, transformations, and outputs of the system? At a time when the science of the mind seeks to integrate genetic, neurological, psychological, and evolutionary levels of analysis, clarity on these issues is critically important. The special edition on cognitive control presents a fascinating cross-section of opinions that offer new perspectives on these perennially vexing questions.

Cragg and Nation’s paper examines the relationship between cognitive control and language through a review of developmental evidence. The close association of language and self-regulation figures prominently in many psychological theories, old and new, and raises the possibility that at least some of what we refer to as cognitive control is an emergent product of the human language faculty. Cragg and Nation address this issue by examining the association between self-directed, or inner, speech and the development of mental shifting. Inner speech is not essential for shifting, they claim, but is clearly facilitative—it aids in the formation, retrieval, and maintenance of task rules and reduces costs associated with shifting. Given that human infants and nonhuman primates are capable of mental shifts in the absence of language, and that the capacity for shifting improves in tandem with the development of inner speech, the authors appear to be on relatively safe ground in proposing a facilitative rather than necessary role for language in cognitive control. The story, however, may not be so straightforward. Nonhuman primates (macaques), for example, are not simply capable of shifting; they actually show little, if any, switch cost in standard color-shape shifting tasks (Stoet & Snyder, 2003). Moreover, human adults show no switch costs when tasks are administered without verbal instructions, but do show switch costs when the same tasks are presented with explicit verbal instructions (Dreisbach, Goschke, & Haider, 2007). In what sense, then, is language facilitative of executive function? The attenuation of switch costs is certainly one possibility. However, an alternative possibility is that language lends immediacy and sociality to the process of rule representation by facilitating the rapid communication and internalization of novel arbitrary actions. Curiously, this kind of flexibility may contribute to rather than ameliorate switch costs.

Stout’s paper explores the possibility of functional plurality in the prefrontal cortex (PFC) from an evolutionary perspective. The notion of functional fractionation within PFC is not new in psychology, but Stout’s approach is novel in attempting to provide evolutionary grounding for the argument. Drawing on evidence from primatology, paleoneurology, and archeology, Stout argues that ventromedial and lateral PFC support dissociable social and motoric functions, respectively. Stout points specifically to the social structure and tool use of nonhuman primates as proof that these skills—and their associated cortical regions—evolved independently. This argument seems compelling on the surface; however, in stating that “it is a truism that the structure of the modern brain is a product of its evolutionary history,” Stout makes a number of controversial assumptions. One is that genetic influences on brain organization are fixed at conception and invulnerable to environmental variability. Such a view underestimates the influence of developmental processes on mature brain structure and function. A second assumption is that the brain is composed of parts that are fundamentally discriminable in function and independently variable. While this may be true, there is compelling evidence that the single best predictor of phylogenetic increases in the size of any particular brain region is the size of the rest of the brain (Finlay, Darlington, & Nicastro, 2001). In short, brain structures appear to evolve in tandem, not independently. The critical implication is that function may not precede structure in brain evolution, but that the suite of cognitive functions that define humanness (tool-making, sociality, language, etc.) may be fortuitous by-products of general increases in encephalization (Finlay et al., 2001).

Cognitive control is typically considered a voluntary process, which implies conscious accessibility. As Mandik points out, however, there are many examples of cognitive control phenomena that are not fully accessible to consciousness (Libet, 1999; Ruge & Braver, 2008). This raises thorny issues for the scientific study of control. First, how might we account for differences in the degree of conscious accessibility of various control processes? And second, if voluntary implies conscious accessibility, should a first-person account of conscious experience be integrated into the study of cognitive control? Mandik’s paper provides important guidance on these challenging issues. Mandik’s Allocentric-Egocentric Interface (AEI) theory provides a framework for addressing the first question. According to AEI, consciousness, including consciousness of cognitive control, does not reside at either extreme of a signal hierarchy, but rather at the interface of allocentric and egocentric representations. On this account, operationalizations such as the readiness potential (Libet, 1999) inhabit one extreme of the hierarchy and are thus not consciously accessible. With regard to the second question, Mandik argues that it is important to combine first- and third-person perspectives in the study of control consciousness. One obvious question, however, concerns feasibility. Introspective assessment would likely be challenging—at what point in a Wisconsin Card Sorting Task, for example, should a participant be expected to be conscious that he or she is in fact engaging in cognitive control? Such techniques would likely be feasible only with healthy human adults. A more fundamental question concerns the necessity of the first-person perspective for predictive models of cognitive control. Consciousness, in Mandik’s account, seems to be epiphenomenal. If it is, and therefore exerts no causal influence over behavior, what gain in predictive efficacy is achieved by integrating the first-person perspective?

Lenartowicz et al. propose the use of cognitive ontologies as a means of systematically organizing scientific knowledge of “cognitive control.” The multidisciplinary field of cognitive neuroscience is beginning to shed light on various genetic, neural, and cognitive correlates of “cognitive control.” However, the rapid growth of the cognitive control literature and the proliferating use of different terms create an urgent need to refine our concepts and catalog available knowledge more systematically. Ontologies—or “specifications of conceptualizations” (Gruber, 1993) that include definitions of key concepts as well as relations between these concepts—are widely used in other fields of bioscience as a means of systematically organizing knowledge and facilitating links between different levels of analysis, and they could be a terrific asset to the cognitive neuroscience of control. Toward this end, the authors begin by asking how well four commonly used, behaviorally defined components of cognitive control—working memory (WM), response inhibition (RI), response selection (RS), and task-set switching (TS)—might serve as core constructs in such an ontology. Reasoning that if these concepts refer to distinct processes, then each should be associated with a unique pattern of neural activity, the authors test whether patterns of brain activity associated with WM, RI, RS, and TS can be distinguished by means of classifier analysis. The results—that only response selection yields a unique pattern of neural activation—suggest that response selection is a good candidate concept, but that the remaining three require further refinement. The most important contribution of this paper is in showing the value and potential viability of a cognitive ontology for refining the scientific conceptualization of cognitive control. The authors also nicely anticipate potential pitfalls associated with their particular approach, such as weak correspondence between constructs and their operationalizations. One concern is whether distinct cognitive control processes will, or even should, show a “good degree of selectivity” in their respective neural representations (see, e.g., Duncan & Owen, 2000), as do, for example, face processing, magnitude representation, and language. It is possible, of course, that we simply need more refined concepts and tasks. However, it is also conceivable that different control processes are not localizable to distinct brain regions, but instead differ in terms of temporal (see, e.g., Buschman & Miller, 2007) and/or spatial distributions (see, e.g., Duncan, 2010) of activity within a distributed set of “multiple demand” brain regions. Such differences may have real implications for function, but they would likely not yield unique maps using standard (or even advanced) fMRI methods.
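To make the logic of that classifier analysis concrete, the sketch below runs a cross-validated, multi-class pattern classifier over simulated activation maps. Everything in it is illustrative rather than drawn from the paper: the map counts, voxel counts, noise levels, and the assumption that only RS carries a clearly distinct spatial signature are stand-ins of our own, and scikit-learn's logistic regression stands in for whatever classifier the authors actually used.

```python
# Toy sketch of a multi-class "pattern classifier" analysis on simulated
# activation maps; it illustrates the logic, not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
constructs = ["WM", "RI", "RS", "TS"]
n_maps, n_voxels = 40, 500           # maps per construct; voxels per map (made up)

# Simulate a shared "multiple demand" signature plus noise; only RS is given a
# clearly distinct spatial pattern (purely an illustrative assumption).
shared = rng.normal(size=n_voxels)
signature = {c: shared + rng.normal(scale=0.1, size=n_voxels) for c in constructs}
signature["RS"] = shared + rng.normal(scale=1.0, size=n_voxels)

X = np.vstack([signature[c] + rng.normal(scale=1.0, size=(n_maps, n_voxels))
               for c in constructs])
y = np.repeat(constructs, n_maps)

# Cross-validated decoding: a construct counts as having a "unique pattern of
# neural activity" to the extent its maps can be classified above chance (25%).
clf = LogisticRegression(max_iter=2000)
pred = cross_val_predict(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(confusion_matrix(y, pred, labels=constructs))
for c in constructs:
    print(c, "accuracy:", round(float(np.mean(pred[y == c] == c)), 2))
```

In this toy setup, above-chance, construct-specific decoding is the operational criterion for a unique neural representation; heavy confusion among WM, RI, and TS is the kind of result that would motivate refining those constructs.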

Alexander and Brown’s paper outlines a new model of performance monitoring they call the prediction of response-outcome model, or PRO. Based on prediction-error models of reinforcement learning, the model proposes that performance monitoring involves the on-line prediction of favorable and unfavorable outcomes of a particular action, as well as a comparison of the actual versus intended consequences of that action. The model accommodates extensive behavioral and neurophysiological evidence concerning performance adjustments that follow unexpected outcomes (e.g., errors) and their associated patterns of activity in medial prefrontal cortex (mPFC), and it is currently being formally implemented in the form of neural network models. The value of this sort of theoretical enterprise cannot, in our opinion, be overstated. Computational models of cognitive control move the definition of underlying processes away from the meaningless circularity of operational definitions (e.g., cognitive control is what happens in the Stroop task) and the vagueness of language-based concepts, and into the precise domain of mathematics. As such, computational models lend clarity to notions of mechanism and precision to empirical predictions, and, unlike many theories, they are falsifiable. Perhaps more important, however, mathematically instantiated concepts of cognitive control offer a precise means of bridging molecular, neurophysiological, and psychological levels of analysis (e.g., see Behrens et al., 2007; Brown & Braver, 2005)—a fundamental goal of modern cognitive science. That said, there are larger questions regarding performance monitoring that models such as PRO shy away from. Smart people, it is often said, learn from their mistakes, but very smart people learn from the mistakes of others. The point is that, unlike in the PRO model, our expectations about the world are also informed by the consequences that follow from the actions of others, and we can learn about these consequences immediately (i.e., without having to make the same mistakes ourselves) either through direct observation or through language. Accounting for these capacities is a challenge not just for the PRO model, but for many models of cognitive control (for discussion, see O’Reilly & Frank, 2006).
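For readers less familiar with prediction-error models, the sketch below shows the bare delta-rule logic such accounts build on: learn a prediction of each possible outcome of an action, compare it with what actually happens, and emit a "surprise" signal when the two diverge. It is a toy illustration under our own simplifying assumptions (two outcome categories, a fixed learning rate, surprise taken as the summed positive prediction error) and is not Alexander and Brown's PRO implementation, which is considerably richer.

```python
# Toy delta-rule sketch of outcome prediction and surprise, in the spirit of
# prediction-error accounts of performance monitoring; NOT the PRO model itself.
import numpy as np

ALPHA = 0.1                              # learning rate (illustrative value)

def update(V, outcome):
    """One trial: compare actual with predicted outcomes, update predictions."""
    observed = np.zeros_like(V)
    observed[outcome] = 1.0
    delta = observed - V                 # signed prediction error for each outcome
    surprise = delta.clip(min=0).sum()   # unexpected occurrences drive "surprise"
    return V + ALPHA * delta, surprise

rng = np.random.default_rng(1)
V = np.zeros(2)                          # predictions for outcomes 0 = correct, 1 = error
for t in range(50):
    outcome = 1 if rng.random() < 0.1 else 0   # errors are rare
    V, surprise = update(V, outcome)
    if outcome == 1:                     # rare errors yield large surprise signals
        print(f"trial {t:2d}: error, surprise = {surprise:.2f}, V = {V.round(2)}")
```

In such schemes, the surprise signal on unexpected outcomes is the quantity typically linked to mPFC activity and to subsequent performance adjustments; the learning here is driven entirely by the agent's own trial-and-error experience, which is precisely the limitation noted above.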

In sum, Cooper and his distinguished group of contributing authors should be applauded for a provocative set of papers. The science of cognitive control finds itself at a historical crossroads, as researchers work to reconcile traditional conceptualizations of cognitive control with new insights concerning the molecular, neurophysiological, and cognitive organization of these abilities. These papers help to clarify the issues that lie ahead and offer exciting new means of meeting these challenges.
