The way of our errors: Theme and variations

Authors

Robert F. Simons, University of Delaware

  • This article is based on the presidential address to the Society for Psychophysiological Research, Savannah, Georgia, October 20, 2007.
    The work described here was begun in 2000 and does not, therefore, include contributions from many valued colleagues and graduate students who came before. I acknowledge them here nonetheless. For psychophysiological espaliering, friendship, and support, I am grateful to a number of SPR's past-presidents, including Peter Lang, David and Frances Graham, Arne Öhman, Niels Birbaumer, Mike Coles, Dick Jennings, Mike Dawson, Bob Levenson, Manny Donchin, Greg Miller, Bruce Cuthbert, and Margaret Bradley. I am grateful to previous graduate students Sandy Rose, Barbara Giardina, Mike Zelson, Mark Miles, Bill Perlstein, Evelyn Fiorito, Lee Fitzgibbons, Arti Nigam, and Tom Roedema, but special thanks go to the students who have been the backbone of the line of research presented here from its beginning, Greg Hajcak and Jason Moser, and to Jason Krompinger, Damion Grasso, and Emily Stanley, the students who will now move it forward. I would also like to thank our colleagues Edna Foa, Marty Franklin, and Jonathan Huppert at the University of Pennsylvania for providing us space, training, patients, and a great collaboration. Finally, I would like to thank my friend and University of Delaware colleague Carroll Izard for his career-long support and encouragement and for his many tips and suggestions that have found their way into our research and eventually into this article.
    This research was supported in part by predoctoral fellowships from the National Institute of Mental Health to Greg Hajcak (MH069047) and Jason Moser (MH077388) and NIMH grants to the Center for the Treatment and Study of Anxiety (CTSA; R01MH055126 to M.E. Franklin and K23MH064491 to J. Huppert) at the University of Pennsylvania.

Address reprint requests to: Robert F. Simons, Department of Psychology, University of Delaware, Newark, DE 19716. E-mail: rsimons@psych.udel.edu

Abstract

Negative feedback, either internal or external, is a fundamental guide to human learning and performance. The neural system that underlies the monitoring of performance and the adjustment of behavior has been subject to multiple neuroimaging investigations that uniformly implicate the anterior cingulate cortex and other prefrontal structures as crucial to these executive functions. The present article describes a series of experiments that employed event-related potentials to study a variety of processes associated with internal or external feedback. Three medial-frontal negativities (error-related negativity, correct-response negativity, feedback-related negativity) are highlighted, each of which plays an important role in the monitoring and dynamic adjustment of behavior. Extensions of basic research on these ERPs to questions relevant to clinical science are also provided.

Response monitoring (i.e., detecting conflict and errors) is an executive function within a system that guides behavior and allows for corrections or adjustments when our actual responses do not match the responses we intend or when our outcome predictions fail. This article describes a program of research that we have been pursuing in our laboratory at the University of Delaware, focused on a small family of scalp-recorded event-related brain potential (ERP) components, each involved in one way or another in these response-monitoring processes.

This research emphasizes three medial frontal negativities, the error-related negativity (ERN; Gehring, Coles, Meyer, & Donchin, 1990; Hohnsbein, Falkenstein, & Hoormann, 1989), the correct-response negativity (CRN; Ford, 1999; Vidal, Hasbroucq, Grapperon, & Bonnet, 2000), and the feedback-related negativity (FRN; Miltner, Braun, & Coles, 1997). Because we are clinical psychophysiologists, we intend our basic research to be translational in nature and our work with each of these components to serve as bridgework spanning cognitive and clinical science. Thus, basic research on each of these “negativities” leads to applications that are relevant to symptoms of obsessive-compulsive or other anxiety disorders.

Theme 1: Error-Related Negativity

Background

The ERN reflects the activity of a neural system that is involved in monitoring actions and detecting errors. A typical ERN is depicted in Figure 1. As the figure illustrates, the ERN is a sharp, surface-negative ERP that peaks roughly 50 ms after an erroneous response has been executed. The ERN has a fronto-central maximum and may be a signal of conflict among competing responses (Yeung, Botvinick, & Cohen, 2004), a signal of an incorrect motor response (Bernstein, Scheffers, & Coles, 1995; Falkenstein, Hohnsbein, Hoormann, & Blanke, 1991), or a signal of a reward-prediction error more generally (Holroyd & Coles, 2002). The signal then leads to remedial or compensatory action such as error correction (Gehring, Goss, Coles, & Meyer, 1993; Rodríguez-Fornells, Kurtzbuch, & Münte, 2002) or a slowdown in performance on the subsequent trial (Falkenstein, Hoormann, Christ, & Hohnsbein, 2000; Rabbitt, 1966). ERNs have been associated with incorrect responses of the hand, finger, and foot (Falkenstein et al., 2000; Holroyd, Dien, & Coles, 1998) and even the eyes (Nieuwenhuis, Ridderinkhof, Blom, Band, & Kok, 2001); the ERN is therefore thought to be part of a response-monitoring system that is generic and modality nonspecific. The ERN is associated with action errors, that is, the commission of an incorrect choice response or the commission of a response that should have been withheld. It has been shown that the size of the ERN reflects the magnitude of the error (Bernstein et al., 1995; Falkenstein et al., 2000), but awareness of the error is not necessary for ERN production (Nieuwenhuis et al., 2001). It is well known that the anterior cingulate cortex (ACC) is intimately involved in response monitoring, and there is strong evidence from electroencephalography (e.g., van Veen & Carter, 2002), magnetoencephalography (Miltner et al., 2003), and functional magnetic resonance imaging (fMRI) studies (e.g., Nieuwenhuis, Schweizer, Mars, Botvinick, & Hajcak, 2007) that the ACC is the neurophysiological source of the ERN.
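For readers who want a concrete sense of how such a component is derived, the sketch below illustrates one conventional approach: epoch the continuous EEG around each response, baseline-correct, average separately for error and correct trials, and score the ERN as the mean amplitude in a short post-response window. The sampling rate, window boundaries, and baseline convention are illustrative assumptions, not the specific parameters used in the studies reviewed here.

```python
import numpy as np

def response_locked_erp(eeg, response_samples, fs=500, pre=0.2, post=0.6):
    """Average single-channel EEG epochs time-locked to response onset.

    eeg: 1-D array of continuous EEG (microvolts)
    response_samples: sample indices at which responses occurred
    Returns (times in ms relative to the response, average waveform).
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for s in response_samples:
        if s - n_pre < 0 or s + n_post > len(eeg):
            continue                                 # skip epochs that run off the record
        epoch = eeg[s - n_pre:s + n_post]
        epoch = epoch - epoch[:n_pre].mean()         # pre-response baseline correction
        epochs.append(epoch)
    times = (np.arange(-n_pre, n_post) / fs) * 1000.0
    return times, np.mean(epochs, axis=0)

def mean_amplitude(times, erp, t0=0.0, t1=100.0):
    """Mean amplitude in a post-response window (ms) chosen to capture the ERN peak near 50 ms."""
    mask = (times >= t0) & (times <= t1)
    return erp[mask].mean()

# Hypothetical usage: separate averages for error and correct responses, then the
# error-minus-correct difference serves as a simple ERN estimate.
# times, err_erp = response_locked_erp(eeg, error_samples)
# _, cor_erp = response_locked_erp(eeg, correct_samples)
# ern = mean_amplitude(times, err_erp) - mean_amplitude(times, cor_erp)
```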

Figure 1.

 Averaged response-locked ERP waveforms for both error and correct trials during a choice reaction-time (Stroop) task.

In the laboratory, the action errors that give rise to the ERN tend to be performance “slips” during fairly simple choice reaction-time procedures such as a manual Stroop (1935), Simon (1969), or flanker (Eriksen & Eriksen, 1974) task. These tasks are similar in that conflict can be introduced on a trial-to-trial basis—by printing color words in ink of an incongruent color, by confounding color and position cues, or by surrounding a center target character with competing flanker characters. This incongruity increases mistakes, or slips, in part because the irrelevant cues (e.g., flankers) activate a response that now competes with the correct response for access to the motor system. Thus, successful performance requires attentional control—both to overcome the conflict on incongruent trials and to adjust behavior following an error. This control is thought to emanate from the dorsolateral prefrontal cortex (DLPFC; Carter & van Veen, 2007) with regulation from the ACC. That is, because it detects the conflict or detects the error as part of a response monitoring process, the ACC signals the PFC that an increase in attention or cognitive control is required (Kerns et al., 2004; Ridderinkhof, Ullsperger, Crone, & Nieuwenhuis, 2004).
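As a rough illustration of the logic of these tasks, the following sketch generates arrowhead flanker trials in which the correct response is the direction of the center arrow and the flankers either agree or conflict with it. The trial count and the proportion of incongruent trials are arbitrary assumptions rather than the parameters of any study described here.

```python
import random

# Hypothetical arrowhead flanker stimuli: the correct response is the direction of the
# center arrow; on incongruent trials the flankers point the other way and prime the
# competing response.
STIMULI = {
    ("left", "congruent"): "<<<<<",
    ("left", "incongruent"): ">><>>",
    ("right", "congruent"): ">>>>>",
    ("right", "incongruent"): "<<><<",
}

def make_trials(n_trials=480, p_incongruent=0.5, seed=0):
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        target = rng.choice(["left", "right"])
        congruency = "incongruent" if rng.random() < p_incongruent else "congruent"
        trials.append({"stimulus": STIMULI[(target, congruency)],
                       "correct_response": target,
                       "congruency": congruency})
    return trials

def is_correct(trial, response):
    # An error here is a "slip": the flanker-primed response wins the race.
    return response == trial["correct_response"]
```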

Figure 1 illustrates two other ERP components observed in the response-monitoring context. Immediately following the ERN on error trials is a broad positive component with a centro-parietal maximum. This “error-positivity” (Pe), generally thought to reflect awareness of the error, was originally described by Falkenstein, Hohnsbein, Hoormann, and Blanke (1991) as an amalgam of two overlapping positive components, one closely associated with and perhaps part of the ERN, and the second a P300 associated with orienting (Overbeek, Nieuwenhuis, & Ridderinkhof, 2005). This composite structure has been substantiated recently by Donchin and his colleagues (Arbel & Donchin, 2009) via spatial-temporal principal components analysis (PCA). Finally, Figure 1 also shows a small negative deflection associated with the execution of correct responses. This CRN will be described in more detail below but is presumed to originate in the ACC and seems to reflect the trial-by-trial adjustments of attention or cognitive control that are associated with task performance (Bartholow et al., 2005).

Profile of an Error

Whereas the ERN and the subsequent Pe are striking electrocortical phenomena associated with performance errors, the physiological signature of an error is grandiloquent, sweeping through the autonomic nervous system as well. This is best illustrated by an experiment in which we recorded ERPs, heart rate, and skin conductance from a group of 22 subjects performing a Simon task (Hajcak, McDonald, & Simons, 2003b). In this task, participants were presented with one of three different arrows in the center of the screen. The arrows were either red or green and could point to the left, right, or top of the screen. The task was to press the left or right “ctrl” key with the left or right hand based on the color of the arrow, not its orientation. The data are presented in Figure 2. Typical ERPs are depicted on the left and consist of the ERN and Pe on error trials and a small CRN on correct trials. More to the point, we see significant error-related activity in both the heart rate (HR; bottom right) and skin conductance (SC; upper right) responses. This is a prototypical orienting pattern. Given that both HR and SC are to some extent regulated by the ACC (Critchley, 2005), it is not surprising that they too would be reflective of error-monitoring activity. Errors also prompted significant slowdown on the next trial (490 vs. 441 ms).1

Figure 2.

 Averaged response-locked ERP, heart-rate and skin-conductance waveforms for both error and correct trials during a long intertrial interval choice reaction-time (Simon) task (Hajcak, McDonald, & Simons, 2003b).

To flesh out the profile of the error trial in more detail, we also examined the relationships among the error-related measures. Consistent with an orienting/awareness interpretation, there was a significant relationship between the skin conductance response (SCR) and Pe (r=.55). In turn, both the SCR and Pe predicted post-error slowdown (r=.57 and .48, respectively). There was no relationship between the ERN and any of these measures. Heart-rate deceleration was more closely related to the ERN than it was to the other measures, but not significantly so (r=.38). These data demonstrate the extent to which a simple error manifests in both central and peripheral response systems. Moreover, the between-subject correlations among the measures are consistent with a modular model of the error-processing system: an initial conflict- or error-detection module whose activity is indexed by the ERN and heart-rate deceleration, followed by an independent orienting and behavioral-adjustment module indexed by the trio of Pe, SCR, and reaction time (RT) slowdown.

Error-Antecedent Trials

Just as a single-trial ERN occurs in the midst of other error-related psychophysiology, error trials themselves occur in the midst of other trials. Trials that either precede or follow the error can also provide insights into response-monitoring processes.

Recall that errors in the type of task that characterizes these experiments are conceived as “slips”; they indicate an executive failure to monitor. In other words, errors like these occur when the mind begins to wander. But what is a wandering mind, and is it approachable with psychophysiological methods? In a recent paper, Mason et al. (2007) referred to a wandering mind as one that has disengaged from the task at hand and has returned to a natural psychological baseline characterized by stimulus-independent thought. Such a transition in state involves the recruitment of the ACC and other prefrontal structures away from the task-relevant neural network and into what Mason et al. refer to as the “default” network (see Eichele et al., 2008, for a more detailed treatment of the default network and the ERN).

From an ERP perspective, we know that in these simple tasks the ERN is a real-time indicator that an error has occurred. Could ERPs also indicate when an error is about to occur—or when a subject is switching from the task-relevant neural network to the “default” network? Ridderinkhof, Nieuwenhuis, and Bashore (2003; see also Allain, Carbonnell, Falkenstein, Burle, & Vidal, 2004) had reported that trials immediately preceding errors were characterized by a positivity associated with or just following the CRN. They interpreted this error-preceding positivity (EPP) as a disengagement of the ACC from the monitoring process. As a follow-up to their study, we undertook a reanalysis of data obtained in our laboratory from a standard arrowhead version of the flanker task (<<<<<, <<><<, >>>>>, >><>>) in which the subject had to respond to the direction of the center arrowhead (Hajcak, Nieuwenhuis, Ridderinkhof, & Simons, 2005). The ERPs are presented in Figure 3 for error-preceding trials and for all other correct trials. As is evident in the figure, a relative positivity accompanied an error-preceding trial exactly as Ridderinkhof et al. (2003) had described. It is also important to note that reaction times on these error-preceding trials were much faster than they were on correct trials that did not precede errors (419 vs. 460 ms).
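The trial-sequence bookkeeping behind such an analysis is simple and is sketched below: given a vector of accuracy codes, find the correct trials that immediately precede an error (or precede it at a given lag) and compare their reaction times with those of the remaining correct trials. The coding conventions and function names are assumptions made for illustration.

```python
import numpy as np

def error_preceding_indices(accuracy, lag=1):
    """Indices of correct trials that occur `lag` trials before an error.

    accuracy: 1-D array of 1 (correct) / 0 (error), in trial order.
    A candidate is excluded if any intervening trial is itself an error.
    """
    acc = np.asarray(accuracy)
    idx = []
    for i in range(len(acc) - lag):
        if acc[i + lag] == 0 and np.all(acc[i:i + lag] == 1):
            idx.append(i)
    return np.array(idx, dtype=int)

def compare_rts(rt, accuracy):
    """Mean RT on error-preceding correct trials vs. all other correct trials."""
    rt, acc = np.asarray(rt, float), np.asarray(accuracy)
    pre_err = error_preceding_indices(acc, lag=1)
    other_correct = np.setdiff1d(np.where(acc == 1)[0], pre_err)
    return rt[pre_err].mean(), rt[other_correct].mean()
```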

Figure 3.

 Averaged response-locked ERP waveforms on trials that precede errors and on trials that precede correct responses (Hajcak, Nieuwenhuis, et al., 2005).

It is well known that in the type of task described here, RT is generally much faster on error trials than on correct trials. In our flanker task, these values were 371 and 460 ms for error and correct trials, respectively. Because RT was faster than average not only on the error trial but also on the error-preceding trial, we inferred that disengagement from the task may have begun even earlier. With that in mind, we examined the correct trial that was two removed from the error. Although the effects were not as reliable as they were for the trial immediately preceding the error, the twice-removed trial was still characterized by both an error-preceding positivity and a faster than average reaction time (439 vs. 460 ms). These data suggest that, even though errors of this type are simple slips, they do not “come out of the blue”; they result from a gradual process that, in this particular case, began at least two trials before the error, starting with a small EPP and an RT slightly faster than normal. This process continued into the next trial with a larger EPP and an even faster RT and then concluded on the third trial with failure—an especially fast RT and the incorrect motor response. Given that we have observed the foreshadowing of errors as early as 5 s before an error (Hajcak, Nieuwenhuis, et al., 2005, Experiment 3), we believe that the EPP is a harbinger of an upcoming error; it represents a lapse in response monitoring, a recruitment of the ACC away from the task-relevant network, and likely a transient return to stimulus-independent thought as described by Mason et al. (2007).

Error-Subsequent Trial

Just as the error-preceding trials provide a nice demonstration of how errors can occur when response monitoring begins to wane, a good deal about response monitoring can also be learned by examining trials that occur just after a mistake has been made. We thought that this might be particularly true when the trial following an error was itself another error. Recall that our working model had the ACC monitoring for conflict or errors. To achieve the necessary adjustment in cognitive control, the ACC modulates subsequent PFC activity, and the PFC responds, in turn, with post-error slowing of response time on the next trial (Kerns et al., 2004). Assuming that such slowing is compensatory (Rabbitt, 1966), performance on trials that follow errors should be correct. And for the most part, it is. But occasionally, one error does follow on the heels of another. We wondered, given all of this ACC/PFC communication and compensation, how a second error could occur.

The model actually suggests two possibilities. Either the signal from the ACC to the PFC is deficient, or the PFC fails to respond with a compensatory adjustment. To test this, we (Hajcak & Simons, 2008) went back to a number of our previous data sets and combed them for mistakes that followed mistakes. From 40 control subjects who had participated in a manual Stroop task with 1,152 trials, we retained for analysis 20 subjects who had at least five double errors. As an index of the ACC signal, we measured the ERN, and as an index of the PFC response, we measured the slowdown in RT.
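The sketch below illustrates the kind of trial-sequence classification such an analysis requires: error trials preceded by a correct response are split according to whether the following trial is another error (cEe) or a correct response (cEc), the RT on the following trial is extracted, and subjects are retained only if they contribute enough double errors. The minimum of five double errors comes from the text; the coding scheme and function names are illustrative assumptions.

```python
import numpy as np

def classify_error_trials(accuracy):
    """Split error trials (preceded by a correct trial) by the accuracy of the next trial:
    cEe if the next trial is also an error, cEc if it is correct.
    accuracy: 1-D array of 1 (correct) / 0 (error), in trial order."""
    acc = np.asarray(accuracy)
    errors = np.where(acc == 0)[0]
    errors = errors[(errors > 0) & (errors < len(acc) - 1)]
    errors = errors[acc[errors - 1] == 1]        # preceding trial must be correct
    cEe = errors[acc[errors + 1] == 0]
    cEc = errors[acc[errors + 1] == 1]
    return cEe, cEc

def post_error_rt(rt, error_idx):
    """Mean RT on the trials immediately following the given error trials."""
    return np.asarray(rt, float)[np.asarray(error_idx) + 1].mean()

def enough_double_errors(accuracy, minimum=5):
    """Subject-selection rule analogous to the one described in the text."""
    cEe, _ = classify_error_trials(accuracy)
    return len(cEe) >= minimum
```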

To test the possibility that a deficient signal was sent from the ACC, we compared the ERN on error trials that were followed by another error with the ERN on error trials that were followed by correct responses. The results are presented in the left-hand panel of Figure 4. Because the ERNs are equivalent on these two different error trials, we conclude that the signal from the ACC to the PFC was intact. We next focused on the compensatory adjustment to reaction time that characterizes trials following errors. These data are presented in the right-hand panel of Figure 4. All error trials are associated with very fast RTs, but correct trials following errors show genuine slowdown (i.e., RTs that exceed the correct-trial average) whereas error trials following other error trials do not. We conclude, therefore, that the second of two consecutive errors results from a failure of the PFC to implement control mandated by a valid signal from the ACC.

Figure 4.

 Averaged response-locked ERP waveforms for error trials that preceded an error (cEe) or error trials that preceded a correct response (cEc). On the right are reaction times for both types of error trials and their succeeding trials. For reference, the average RT for all correct trials is also presented (Hajcak & Simons, 2008).

In sum, examining both error-preceding and error-following trials with both behavioral and physiological measures reveals two kinds of errors. There are ACC errors—monitoring errors that result from a deficient call for cognitive control—and PFC errors—implementation errors that result from a failure of the PFC to respond to the ACC's call for additional resources.

The Significance of Errors

Having examined both the within- and between-trial contexts in which errors occur, I conclude this section by returning to the ERN itself. The ERN, of course, is not a simple switch or binary event. Rather, it is an event whose magnitude can be influenced by variables associated with the nature of the error itself (e.g., its magnitude or “detectability”). Both Bernstein et al. (1995) and Falkenstein et al. (2000) reported larger ERNs when the mismatch between the required and executed response is large (e.g., responding with an incorrect hand prompts a larger ERN than responding with an incorrect finger). In a similar vein, Gehring et al. (1993) delivered a flanker task to subjects under three different instruction sets emphasizing speed, accuracy, or neither speed nor accuracy. The size of the ERN increased systematically from the speed to the accuracy conditions. One could infer that more detectable errors are more significant and that errors when accuracy is emphasized are more significant than errors when speed is emphasized.

We were interested in the possibility that significance or error “value” could modulate ERN amplitude and tested this in two experiments. In the first, error value was manipulated directly by assigning points to each trial (Hajcak, Moser, Yeung, & Simons, 2005). Using the arrowhead version of the flanker task, each trial was preceded by a cue informing the subject that the upcoming trial would be worth either 5 or 100 points and that points earned would be converted to a cash payment at the end of the session. As predicted, mistakes on the high-value trials were accompanied by larger ERNs. The data are illustrated in the left-hand panel of Figure 5. In the second study, we employed a social motivation. Subjects performed the flanker task under two conditions in counterbalanced order. In one condition, the subjects performed the task alone. In the second condition, the experimenter attached his laptop to the subject's computer and told the subject that his or her performance would be monitored on a trial-by-trial basis. The ERN data are presented in the right-hand panel of Figure 5. As illustrated, larger ERNs were elicited during the monitoring condition, and again these data suggest that high-value errors are associated with larger ERNs than low-value errors.

Figure 5.

 Average response-locked ERP waveforms for error trials with high and low point values (left) and for error trials when subjects performed while being evaluated or performed on their own (right; Hajcak, Moser, et al., 2005).

Variation: Anxiety and the ERN

As both Dawson (1990) and Cuthbert (2004) noted in their respective presidential addresses, there are exciting back-and-forth possibilities between basic research in cognitive neuroscience, such as that just described, and its application to problems in clinical science. We had pondered the possible relationship between response monitoring and psychopathology but were energized by Gehring's (Gehring, Himle, & Nisenson, 2000) report that adult patients with obsessive-compulsive disorder (OCD) had larger ERNs than control subjects. This made perfect sense given the primacy in OCD symptomatology of worries over “slipping up” and making mistakes.

With similar results reported from other laboratories (see Olvet & Hajcak, 2008, for a review), we decided to look at this OCD/ERN relationship in pediatric patients. In collaboration with the Center for the Treatment and Study of Anxiety at the University of Pennsylvania (Hajcak, Franklin, Foa, & Simons, 2008), we measured the ERN before and after the delivery of a standard course of treatment with exposure and response prevention—an empirically supported cognitive behavior therapy. Subjects were 24 patients and 20 nonanxious controls ranging in age from 9 to 17. ERNs were obtained during a modified Simon task of 576 trials. The pretreatment data are presented in the left-hand panel of Figure 6, and the posttreatment data are presented on the right. As the figure illustrates, at the beginning of treatment, consistent with the adult literature, OCD patients had significantly larger ERNs than controls. The results were even more striking after treatment. Although OCD symptoms dropped from moderate/severe (CY-BOCS=24.1) to below clinical cutoff (CY-BOCS=11.5), there was no change in patient ERN. Despite these obvious electrocortical differences between the two groups, the patients and control subjects did not differ in performance accuracy, reaction time, or post-error RT slowdown. This is important because it rules out the possibility that differences in the ERN are simply byproducts of group differences in performance (Yeung, 2004).

Figure 6.

 Average response-locked ERP waveforms on error trials for pediatric OCD patients and controls at pretest (left) and at the completion of cognitive-behavior therapy (right). Twenty-four patients and 20 controls were tested at Time 1 and 10 patients and 13 controls were tested at Time 2. There were no Time 1 differences between patients who completed both assessments and those who did not (Hajcak et al., 2008).

An exaggerated ERN, absent performance differences and independent of clinical status, is more suggestive of a vulnerability marker than a clinical symptom of OCD. If so, then one would expect to see the same marker in subjects with a disposition to OCD but who have not yet developed the disorder. That is exactly what we find in college students with high OC traits (see Figure 7). When students with high and low scores on the Obsessive Compulsive Inventory (OCI; Foa, Kozak, Salkovskis, Coles, & Amir, 1998) are selected, larger ERNs are again associated with the high-OC subjects, and again this ERN difference occurs in the face of indistinguishable performance between the two groups (Hajcak & Simons, 2002).

Figure 7.

 Averaged response-locked ERP waveforms for high and low trait OCD college students performing a choice reaction-time (Stroop) task (Hajcak & Simons, 2002).

Although these between-group differences make good theoretical sense, we nonetheless wondered whether they were specific to OCD or whether they characterized the anxiety disorders more generally. Toward this end, we next investigated college students with high scores on the Penn State Worry Questionnaire (Meyer, Miller, Metzger, & Borkovec, 1990) and another group with high scores on a combined snake (SNAQ) and spider (SPQ) questionnaire (Klorman, Hastings, Weerts, Melamed, & Lang, 1974). High worriers display generalized anxiety, and high SNAQ/SPQ subjects show evidence of specific phobia. When ERPs were assessed in the context of a flanker task, the chronic worriers, like the high-OC subjects, produced larger ERNs than nonanxious controls, but the specific phobics did not (Hajcak, McDonald, & Simons, 2003a).

It was intriguing to us that some anxiety disorders were characterized by enhanced response monitoring, at least as indexed by the ERN, and some were not. Two plausible hypotheses for these differences came quickly to mind. First, unlike OCD or generalized anxiety, specific phobias are not performance-based; that is, slips or mistakes are not integral to the topography of the disorder. Second, OCD and generalized anxiety disorder (GAD) subjects are trait-anxious: Their obsessions and worries are always switched on. Specific phobias, on the other hand, are situational: They are not always “on.” During the flanker task, these snake and spider phobias were dormant; the subjects were, in effect, tested in a nonanxious state.

We focused on this second hypothesis in our next study. We wanted to learn what would happen to the ERN if we induced our spider phobics to change from a nonanxious to an anxious state. That is, would moving to an anxious state prompt the same ERN enhancement that we see in our trait anxious subjects? This hypothesis was motivated by hemodynamic neuroimaging studies showing increased ACC activity during anxiety induction (Servan-Schreiber, Perlstein, Cohen, & Mintun, 1998) and during symptom provocation in patients with OCD (Rauch et al., 1994) and simple phobias (Rauch et al., 1995). We reasoned that “kindling” of the ACC with spider exposure would result in heightened response monitoring along with the exaggerated ERN.

Eighteen spider phobics from the upper 10% of the Spider Phobia Questionnaire distribution (n=1,973) were recruited to participate in a flanker task delivered in two blocks of 576 trials. In both blocks, subjects performed the arrowhead task seated in front of the computer with the experimenter seated to their immediate left. In the spider exposure condition, the experimenter removed a Chilean rose-haired tarantula from a cage and encouraged it to walk from hand to hand in plain view and close proximity to the subject. In the control condition, the experimenter passed a small Koosh ball from hand to hand to simulate the same movement. Each block of trials lasted approximately 20 min with fear ratings obtained three times during each block.

Presentation of the spider caused an immediate increase in fear report that did not diminish until the trial block was over. These self-reports were accompanied by other signs of acute fear such as trembling and tearing in some cases. There was also evidence that the heightened state of anxiety interfered with attentional allocation and perhaps reduced error awareness, as indexed by a smaller Pe (see Moser, Hajcak, & Simons, 2005). There were no differences between the fear and no-fear conditions in the ERN, however. Thus, despite a highly successful fear induction, it does not appear that simply increasing state anxiety is sufficient to alter early conflict or error-detection processes.

The failure to enhance the ERN in spider phobics who were clearly provoked by exposure sets this group apart from the OCD and GAD subjects and is consistent with recent distinctions in the literature such as that between anxious apprehension and anxious arousal (Engels et al., 2007; Heller, Nitschke, Etienne, & Miller, 1997) or between distress and fear (Watson, 2005). Because anxious apprehension and distress are characterized by worry and rumination, and because worry and rumination are not unique to the anxiety disorders, we next studied subjects with high negative affect (NA) more generally. Subjects were chosen based on their scores on the Positive and Negative Affect Schedule (PANAS; Watson & Clark, 1991) and were asked to perform a manual Stroop task. As was the case with the OCD and Worry subjects, those with high NA scores had larger ERNs than subjects with low NA scores. More recently, Holmes and Pizzagalli (2008) reported larger ERNs in subjects with depression, and we have obtained similar results with depressed subjects in our most recent study (Moser, 2009). It seems clear, then, that hyperactive response monitoring, as indexed by the ERN, is not specific to the anxiety disorders and may not be a direct reflection of any current diagnostic category. As Olvet and Hajcak (2008) point out, the increased sensitivity to errors in depression and anxiety most likely reflects an underlying characteristic common to these internalizing disorders. Identifying the relevant characteristic will be a focus of our future research.

Theme 2: Correct-Response Negativity

The CRN is a medial frontal negativity, topographically and morphologically similar to the ERN, but usually smaller (see Figure 1). Like the ERN, it occurs shortly after response execution and presumably has a source in the ACC. Although a few early studies of the ERN did not note the presence of this small negativity, evidence of the CRN's existence is now widely appreciated (cf. Vidal et al., 2000). That an ERN-like response could be observed in the absence of an error or in the absence of any obvious response conflict (Vidal, Burle, Bonnet, Grapperon, & Hasbroucq, 2003; Vidal et al., 2000) has challenged error-detection, reinforcement-learning and response-conflict models of the ERN.

Recall, from our examination of correct trials that preceded error trials, that error-preceding trials are characterized by a relative positivity just after response execution. Allain et al. (2004) have argued that with Laplacian transformation, this “positivity” effect can be explained in large part by a decrease in the CRN. Thus, larger CRNs appear to function as error prophylactics. Trial-by-trial variation in the CRN may reflect trial-by-trial modulation of cognitive control. In one of the first experiments to specifically target the CRN, Bartholow et al. (2005) studied the relationship between the CRN amplitude and “strategic” cognitive control processes by manipulating both flanker compatibility and the expectation of receiving an incompatible trial. Consistent with fMRI studies of ACC activation (Botvinick, Nystrom, Fissell, Carter, & Cohen, 1999; Carter et al., 1998), the CRN was larger when target and flankers were incompatible. More importantly, however, the CRN in both compatible and incompatible conditions varied with expectation; that is, regardless of stimulus configuration, the CRN was largest when the delivered trial did not match the expected trial type. Thus, trial-by-trial adjustments of cognitive control reflected not only response conflict (i.e., incongruence) but strategic conflict as well. In keeping with the control models of Kerns et al. (2004) and Ridderinkhof et al. (2004), either type of conflict is sufficient to elicit a signal from the ACC to the PFC that an increase in attention or cognitive control is required.

Data such as these have led some investigators to conclude that the ERN and CRN are not, in fact, distinct components but manifestations of this same cognitive control process. Burle, Roger, Allain, Vidal, and Hasbroucq (2008) see the ERN/CRN as an “alarm signal”—a call for cognitive control that is initiated with response onset and continues until remediation takes place. In the case of correct responses, “remediation” occurs quickly; the alarm is terminated by the response, and the resulting CRN is typically small. Error trials, because there is no remediation, give rise to a full-blown ERN. The CRN on trials where there is a partial error reaches an intermediate amplitude, because the remediation process (i.e., error correction) takes time and allows the CRN to increase for the duration. Other data consistent with the notion that the ERN and CRN are reflections of the same cognitive control process have been provided by Suchan, Jokisch, Skotara, and Daum (2007) and more recently by Meckler et al. (2009). In each of these experiments, conditions (e.g., target expectancies) were developed in which the CRN following correct responses was not a small negativity but was actually a larger negativity than the ERN that followed errors. In short, the traditional distinction between the ERN and CRN based on trial type and amplitude no longer seems to hold. Additional studies like those of Bartholow et al. (2005) and from Vidal and his colleagues that focus as much on correct trials as on errors should go far to inform this interesting issue.

Variation: Anxiety and the CRN

Both anxiety and depression are frequently associated with biased information processing (cf. Mathews & MacLeod, 2005). These biases may be evident in a number of cognitive processes and may be associated with either positive or negative stimuli. For example, attentional mechanisms may be biased toward detecting threat or biased toward the interpretation of ambiguous stimuli as threatening. Or, negative information may be more easily recalled than neutral or positive information. Such negative cognitive biases have been associated with both depression and anxiety. On the other hand, “normal” (i.e., nondepressed and nonanxious) individuals often evince positive cognitive biases (cf. depressive realism; Alloy & Abramson, 1979), and these positive biases too can be associated with a variety of cognitive processes. Because an attentional bias toward one stimulus class interferes with the detection of others, we thought that we might be able to modify the traditional flanker task in a way that could identify biased processing in subjects with and without social anxiety (Moser, Huppert, Duval, & Simons, 2008).

We chose to study social anxiety as an anxiety exemplar because it has shown attentional biases in previous research, especially to threatening stimuli including human faces (e.g., Mogg & Bradley, 2002). To capture this bias, we modified the flanker task (cf. Munro et al., 2007) such that stimuli consisted of sets of flanker and target faces selected from a larger set supplied by Perez-Lopez and Woody (2001). Faces were categorized as either threatening (negative) or reassuring (positive). In this way, stimuli could be congruent threatening, congruent reassuring, or incongruent, with threatening faces flanking a reassuring target face or reassuring faces flanking a threatening target face. Examples of the two incongruent stimulus sets are presented at the top of Figure 8. Because incongruent flankers interfere with target identification in general, an attentional bias toward positive or negative faces should accentuate the normal effect of incongruity. We reasoned that a bias toward negative faces would be seen as a weak incongruence when the target is negative and the distracters are positive, but strong incongruence when the target is positive and the distracters are negative. A positive bias would be just the opposite—strong interference when targets are negative but flankers are positive and weak interference when the positive face is the target and the distracters are negative. Based on the Bartholow et al. (2005) demonstration of CRN sensitivity to flanker incongruity, it seemed that the CRN would be one likely ERP component that could reflect this cognitive bias.

Figure 8.

 Incongruent flanker stimuli with negative and positive center targets (upper panels) and averaged response-locked ERP waveforms to correct-response negative (lower left) and positive (lower right) target trials. The CRN is the small negative deflection occurring near the 50-ms mark, larger to incongruent than congruent stimuli in both cases (Moser et al., 2008).

High and low socially anxious undergraduates were chosen as participants in this experiment based on their scores on the Social Phobia Inventory (Connor et al., 2000). As in all our flanker tasks, subjects were instructed to respond quickly and accurately to the target picture with one finger if the face was reassuring and with the other if the face was threatening. The two lower panels of Figure 8 depict the response-locked ERPs on correct trials elicited by the negative and positive target faces when the stimulus sets were either congruent or incongruent. As expected, the CRN was significantly larger when the target and flankers were incongruent, and, as the figure indicates, this was true regardless of target valence. More importantly, the CRN did indeed reveal the presence of an attentional bias relevant to social anxiety. As is evident in Figure 9, however, this particular bias characterizes the low-anxious subjects, not the high-anxious subjects. Among the high-anxious subjects, positive and negative flankers were equally distracting, as evidenced by the equivalent CRNs. Low-anxious subjects, however, were much more influenced by the positive flankers, and there was no evidence in this group that negative or “threatening” faces caused any interference at all. In short, these data suggest that low-anxious subjects are “biased” and treat positive faces preferentially and that this positive attentional bias is absent in social phobia.2 The positive bias in nonanxious subjects reflected in the CRN is consistent with a number of reports that normal individuals are characterized by a fairly general positive cognitive bias and that this bias plays a role in maintaining life satisfaction (Cummins & Nistico, 2002; Diener & Diener, 1996). The sensitivity of the CRN to positive faces suggests that the ACC may even play some role in a homeostatic system involved in the maintenance of well-being (e.g., Hölzel et al., 2007).

Figure 9.

 CRN amplitude in response to incongruent flanker stimuli as a function of anxiety group and target type. An attentional bias toward positive faces is evident in the low-anxious subject group (Moser et al., 2008).

Theme 3: Feedback-Related Negativity

Whether a signal of response conflict, a signal that an actual response did not match the representation of the intended response, a signal of a reward-prediction error, or an evaluation of the emotional significance of an outcome (Luu, Tucker, Derryberry, Reed, & Poulsen, 2003), the ERN observed during choice RT tasks is a consequence of internal feedback. As demonstrated initially by Miltner et al. (1997), a similar electrophysiological signal occurs when there is external feedback that an error has been committed. Miltner et al. (1997) employed a time estimation task that required subjects to respond with a button press at the passing of a 1-s interval. A tracking procedure was used to establish a correct-response window and to ensure that roughly 50% of the estimations were correct. Feedback was delivered 600 ms after the button press and was auditory, visual, or somatosensory as a function of trial block. The ERP in response to the feedback stimulus was characterized by a negativity, maximal at the midline recording sites, that began about 250 ms after feedback delivery. BESA analysis yielded two equivalent dipoles, the first of which was medial frontal, the same for each feedback modality, and the same as that associated with the ERN. Miltner et al. (1997) concluded that this feedback-related negativity (FRN) involves neural processes in the ACC or supplementary motor cortex and that the mechanism of error signaling is the same in time estimation as it is in choice reaction-time tasks.
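A tracking procedure of this kind can be approximated with a simple one-up/one-down staircase in which the window around the 1-s target narrows after a correct estimate and widens after an incorrect one, holding accuracy near 50%. The starting window, step size, and floor in the sketch below are assumptions for illustration, not the values used by Miltner et al. (1997).

```python
def run_time_estimation(trial_estimates, target_ms=1000.0,
                        start_window_ms=100.0, step_ms=10.0, floor_ms=20.0):
    """One-up/one-down tracking sketch: the correct-response window around the 1-s
    target shrinks after a correct estimate and grows after an incorrect one, which
    drives accuracy toward roughly 50%.

    trial_estimates: sequence of the subject's interval estimates in ms.
    Returns a list of (estimate, window, correct) tuples.
    """
    window = start_window_ms
    results = []
    for est in trial_estimates:
        correct = abs(est - target_ms) <= window / 2.0
        results.append((est, window, correct))
        window = max(floor_ms, window - step_ms) if correct else window + step_ms
    return results

# Hypothetical usage with simulated estimates near 1 s:
# import random
# estimates = [random.gauss(1000, 80) for _ in range(100)]
# outcomes = run_time_estimation(estimates)
```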

In a different context, Gehring and Willoughby (2002) reported that a medial-frontal negativity is also associated with monetary losses in a mock-gambling task. In their study, subjects could win or lose either 5¢ or 25¢ on each trial by choosing one of two squares that appeared on a computer monitor. Visual feedback of the trial outcome elicited an FRN approximately 250 ms after the feedback. The FRN, again source localized to the ACC, was greater after loss than after gain trials, but the FRN was insensitive to the magnitude of the gain or loss. Because of the short time between feedback and FRN and the irrelevance of magnitude to FRN amplitude, Gehring and Willoughby suggest that the FRN reflects the motivational impact of the outcome event and is part of a continuous process of evaluating events along a good–bad dimension.

In their reinforcement-learning theory of the ERN, Holroyd and Coles (2002) proposed that the negativity associated with both errors and negative feedback is a dopaminergic signal from the basal ganglia to the ACC elicited “when the consequences of a response are worse than expected” (p. 699). This proposition was of great interest to our laboratory, not only because it identified the FRN with the ERN but because it suggested avenues of investigation that were likely relevant to our interest in individual differences, such as how outcomes are coded (i.e., what things are good and what things are bad), whether expectancies really matter, and how expectancies and outcomes might interact. To investigate some of these issues, we developed a “Doors” version of a mock-gambling task. In this task, each trial consisted of the presentation of a number of closed doors on the computer screen. Behind each door was an outcome that was either a monetary gain or loss. The gain or loss could either be large or small. In our first experiment (Hajcak, Moser, Holroyd, & Simons, 2006), we used four doors that concealed wins and losses of either 5¢ or 25¢. Subjects indicated their choice of door with a key press and received the feedback stimulus 500 ms later. The results of this experiment were very similar to those of Gehring and Willoughby (2002; see also Yeung & Sanfey, 2004). Larger FRNs accompanied losses, but the FRN was insensitive to the magnitude of the loss or gain; that is, the FRN reflected only a binary classification of outcomes as either good or bad.

We next turned our attention to intermediate outcomes. Previous work by Holroyd, Larsen, and Cohen (2004) employed an outcome option that was neither a loss nor a gain. This “0” outcome feedback elicited an FRN when other trials were rewarded, but the same “0” outcome did not elicit an FRN when the other trials were losses. In short, the same outcome was classified as either good or bad as a function of context. The goal of our second experiment was to embed the “0” outcome with both losses and gains to determine whether it would prompt an intermediate FRN or whether subjects would continue to construe all events in a binary fashion. Toward this end, we employed a five-doors task, with the doors associated with outcomes of +25¢, +5¢, 0, −5¢, and −25¢. Again, subjects indicated a door choice, and the outcome was presented 500 ms later. The results of this experiment are presented in Figure 10. Note how the “neutral” outcome is indistinguishable from the two negative outcomes. Statistically, there were no FRN differences between the positive outcomes or among the negative and neutral outcomes. Again, the outcomes appear to be coded quickly and coarsely, and it is interesting that the no-reward, or neutral, outcome is coded as negative—at least by our randomly selected undergraduates. Virtually identical results with a neutral outcome were reported independently by Holroyd, Hajcak, and Larsen (2006).

Figure 10.

 Averaged ERP waveforms locked to outcome feedback signaling gains, losses, and nonrewards (Hajcak et al., 2006).

Although these action-outcome data are generally consistent with the Holroyd and Coles (2002) theory in that “better or worse than expected” implies a binary classification of outcomes, the relationship between the FRN and expectancy has been a difficult one to establish. In our first attempt to explore this relationship (Hajcak, Holroyd, Moser, & Simons, 2007), we manipulated expectancy in two different ways. In the first of two experiments, we used our Four-Doors task and manipulated expectancy on a trial-by-trial basis. We did this by presenting, before each trial, a cue that informed the subjects how many of the four doors concealed a reward. The “1,” “2,” or “3” cue, therefore, indicated that the probability of winning a monetary prize was either .25, .50, or .75 on each trial. Thus, a loss on a .75 trial would be a greater expectancy violation than a loss on a .50 trial and, in turn, a loss on a .25 trial. Although P300 tracked the probability manipulation, there was no difference at all in the FRN among the three conditions.

In the second experiment, expectancy also varied from .25 to .75, but it varied across blocks of trials rather than trial by trial. In this case, subjects chose one of four balloons, and the probability of a “win” varied across counterbalanced blocks; without a cue to explicitly inform them about the probabilities, subjects had to discover them for themselves. Again, P300 indicated that subjects could do this: Frequent positive and negative outcomes prompted smaller P300s than neutral or infrequent outcomes. But as in the previous experiment, the FRN was not influenced by the objective probability of a loss.

Given the centrality of expectations to the Holroyd and Coles (2002) theory of the error- and feedback-related negativities and the initial support for the role of expectations coming from trial-and-error learning (Holroyd & Coles, 2002; Nieuwenhuis et al., 2002) and also mock-gambling tasks (Holroyd, Nieuwenhuis, Yeung, & Cohen, 2003), it was surprising that we could detect no effect whatsoever of our two expectancy manipulations. The Holroyd and Coles model posits that feedback negativity reflects a dopaminergic reward-prediction error signal. In our two expectancy experiments, reward “predictions” were inferred from the probability manipulations. In our next two experiments, trial-by-trial predictions were obtained directly from the participants (Hajcak, Moser, Holroyd, & Simons, 2007). Both experiments utilized the Four-Doors task. In Experiment 1, each trial began with the “1,” “2,” or “3” cue indicating how many doors concealed the prize. Shortly after cue offset, subjects were asked whether they thought they would win or lose. Subjects pressed a key indicating their prediction (yes or no), and then the doors were presented. Five hundred milliseconds after their choice of doors, the feedback (“+” or “o”) was delivered. The feedback negativity data were quantified as the difference between nonreward and reward when the outcome was either predicted or unpredicted. The difference waveforms (lose minus win) are presented in the left-hand panel of Figure 11. Again, there is no hint that “prediction” modulated the size of the error signal. This is especially interesting in light of the behavioral data. Subjects actually predicted that they would be rewarded on 15.8%, 73.9%, and 94.7% of the 1-, 2-, and 3-cue trials, respectively. These data suggest not only a positive bias toward reward in general but that subjectively the expectations of reward between the high- and low-probability conditions were even more extreme than those we had created. In short, even when subjects were virtually certain (≈95%) that they would win a prize, the ensuing reward-prediction error did not produce any enhancement of the FRN.
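Computationally, this quantification amounts to forming lose-minus-win (negative-minus-positive) difference waveforms separately for trials on which the outcome matched the prediction and trials on which it did not, as sketched below; the labels and data layout are illustrative assumptions rather than the actual analysis code.

```python
import numpy as np

def frn_difference_waves(epochs, outcomes, predictions):
    """Lose-minus-win difference waveforms, computed separately for predicted
    and unpredicted outcomes.

    epochs: array of shape (n_trials, n_samples), feedback-locked ERPs
    outcomes: array of "win"/"lose" labels, one per trial
    predictions: array of "win"/"lose" labels (the subject's prediction)
    """
    epochs = np.asarray(epochs, float)
    outcomes = np.asarray(outcomes)
    predictions = np.asarray(predictions)
    waves = {}
    for label, mask in (("predicted", outcomes == predictions),
                        ("unpredicted", outcomes != predictions)):
        lose = epochs[mask & (outcomes == "lose")].mean(axis=0)
        win = epochs[mask & (outcomes == "win")].mean(axis=0)
        waves[label] = lose - win                 # negative-going FRN difference
    return waves
```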

Figure 11.

 Negative minus positive (outcome) difference waveforms for expected and unexpected outcomes when predictions were made prior to (left) or after (right) executing the choice of reward location (Hajcak et al., 2007). Averages are locked to feedback delivery.

About to conclude that the expectancy variable in the Holroyd and Coles (2002) theory was not supportable, we were persuaded by a determined reviewer that we were asking the right question but asking it at the wrong time. Our decision to obtain a prediction from subjects prior to choosing a door was based on Rothbart and Snyder's (1970) work on “magical thinking,” which showed that subjects were more confident and wagered more money before rolling dice than after rolling dice but not yet aware of the outcome. That is, under circumstances similar to those in the Doors task, subjects had a magical belief in their ability to predict or control a random outcome. It is possible, however, as our reviewer suggested, that win/lose predictions are not stable for the duration of the trial and that predictions more proximal to the feedback may increase the salience of the relationship between action and outcome and may better reflect the effects of the expectancy variable.

In Experiment 2, then, subjects were required to make their prediction after rather than before choosing a door. With this simple adjustment, we obtained dramatically different results. As the right-hand panel of Figure 11 illustrates, predictions now did matter; the FRN was significantly larger when a loss was unexpected. The FRN changes were not accompanied by any modifications to subjective probabilities: Subjects in this experiment displayed almost exactly the same action-outcome expectancies as they did in Experiment 1 (21.5%, 69.0%, and 94.9% reward predictions on the 1-, 2-, and 3-cue trials). These data suggest that interposing the prediction between action and outcome serves to solidify the action-outcome unit and that a close coupling of action and outcome may be necessary for the effects of expectancy to emerge.

Experiment 2 was predicated on the notion either that the strength of the prediction waned over the course of the trial or that the actual prediction changed from one outcome (win/lose) to the other (lose/win) as the trial unfolded. Our final study in this series (Moser & Simons, 2009) was designed to look for evidence of prediction change and then to compare the FRN from change trials to the FRN from trials where predictions were consistent from beginning to end.

We accomplished this by combining the methods from the two previous experiments. That is, we asked our subjects to make an outcome prediction before choosing a door and then again after choosing a door. In this experiment, only two doors were presented, and subjects were told that on each trial a prize would be behind one of the doors. The two predictions yielded four trial types corresponding to the prediction sequences: (a) win/win, (b) lose/lose, (c) win/lose, and (d) lose/win. Forty undergraduate students participated in this task. From these we selected for analysis 14 subjects who had at least 20 usable trials of each type. In terms of response dispositions, when predictions remained consistent throughout the trial, subjects showed the now familiar positive bias, predicting a win on 39.75% of the trials and a loss on only 24.85%. There was no bias evident on change trials (win/lose=19.96% and lose/win=15.43%).
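The trial classification and subject-selection rule can be expressed compactly, as in the sketch below: each trial is labeled by its pre-/post-choice prediction sequence, and a subject is retained only if all four sequence types reach the 20-trial minimum. The function and variable names are hypothetical.

```python
from collections import Counter

def classify_prediction_trials(pre_predictions, post_predictions):
    """Label each trial by its pre-/post-choice prediction sequence,
    e.g., 'win/lose' for a trial on which the prediction changed after the choice."""
    return [f"{pre}/{post}" for pre, post in zip(pre_predictions, post_predictions)]

def enough_usable_trials(labels, minimum=20):
    """Subject-selection sketch: require at least `minimum` trials of each of the
    four sequence types before the subject's averages are retained."""
    counts = Counter(labels)
    return all(counts.get(t, 0) >= minimum
               for t in ("win/win", "lose/lose", "win/lose", "lose/win"))
```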

The FRNs, again expressed as difference waveforms, are presented in Figure 12. On the left are the trials in which predictions remained constant. The FRN did not vary as a function of prediction. Even though predictions were made before and after choosing a door, the FRN data look identical to those of the previous experiment where the predictions were made only before choosing a door (Figure 11, left-hand panel). This would suggest that on these trials the real attentional investment occurred early, and the decision was not revisited, despite the second query. On the other hand, trials in which subjects changed their prediction were characterized by significant FRN variation. Lose/win trials were indicative of a substantial reward-prediction error (Figure 12, right-hand panel). Happiness ratings were consistent with the FRN data: There were no reliable differences between the win/win and lose/lose trials, but subjects did report being more unhappy with a loss after lose/win than after win/lose trials. Thus, it appears that change trials are marked by an attentional focus on the second prediction and that this focus increases the salience of the outcome and increases the strength of the action-outcome coupling.

Figure 12.

 Negative minus positive (outcome) difference waveforms for trials during which the prediction of outcome remained constant throughout the trial (left) and for trials during which the prediction changed after the response was executed (Moser & Simons, 2009). Averages are locked to feedback delivery.

Variation: Anxiety and the FRN

Although there have been many studies showing that subjects with chronic anxiety evince larger ERNs than control subjects, there has been little research assessing the relationship between anxiety and the FRN. Given the conceptual similarity of the ERN and FRN in terms of signaling reward-prediction errors and the similarity, if not identity, in terms of the source localization of the two components, a straightforward hypothesis would have both the ERN and FRN increasing in high anxiety subjects. To test this prediction, we used the revised version of the OCI (OCI-R; Foa et al., 2002) to construct groups of low- and high-scoring subjects (Stanley & Simons, 2009).

Thirteen high-OC and 24 low-OC undergraduates participated in both a flanker task and a mock-gambling task. The flanker task consisted of 480 trials of confusable letter strings (MMMMM, MMNMM, NNMNN, and NNNNN), with letter combinations that varied from block to block (MN, EF, QO, VU, TI, and PR). Mock-gambling was accomplished with our Five-Doors task with possible outcomes of winning 25¢, winning 5¢, losing 25¢, losing 5¢, or no change (0¢). For the purpose of data analysis, a spatial PCA suggested a region of interest consisting of Fz, FC1, FC2, and Cz for both the ERN and FRN. Waveforms were averaged across these four electrode sites and differences were established for each task. The ERN was quantified as the difference between error and correct trials and the FRN as the difference between lose and win trials. We then conducted a 2 (Group) × 2 (Component) analysis of variance. The data are presented in Figure 13. Rather than the simple Group main effect that we anticipated, we obtained a near perfect interaction. As in our previous studies, high-OC subjects had the larger ERNs. But contrary to our prediction, high-OC subjects had smaller FRNs. If, as Holroyd and Coles (2002) suggested, both negativities reflect reward-prediction errors, then it appears that high-OC subjects expect fewer errors than controls during the flanker task and more than controls during mock gambling. More generally, it may be that when errors are controllable, reward-prediction errors are rare in high-OC subjects, resulting in particularly large error signals, whereas the expectation of success in the more uncontrollable guessing task is lower, and error signals are commensurately smaller. Results from our own FRN and expectancy studies would suggest that during the guessing or mock-gambling tasks, the “commitment” to or encoding of the prediction as part of the action-outcome sequence is lower for the high-OC than it is for the low-OC subjects. This differential processing of internal (ERN) and external (FRN) feedback in high- and low-anxious subjects is another area of research we plan to pursue.
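The scoring and the group comparison can be summarized in a short sketch: amplitudes are averaged over the Fz/FC1/FC2/Cz region of interest, the ERN is scored as the error-minus-correct difference and the FRN as the lose-minus-win difference, and, because both factors have only two levels, the Group × Component interaction can be evaluated as a between-group t test on each subject's ERN-minus-FRN difference. The data structures and function names below are illustrative assumptions rather than the analysis code used in the study.

```python
import numpy as np
from scipy import stats

ROI = ["Fz", "FC1", "FC2", "Cz"]

def roi_mean(amplitudes, roi=ROI):
    """Average amplitude over the region of interest.
    amplitudes: dict mapping channel name -> mean amplitude (one subject, one condition)."""
    return np.mean([amplitudes[ch] for ch in roi])

def component_scores(error_amp, correct_amp, lose_amp, win_amp):
    """Per-subject difference scores: ERN = error - correct, FRN = lose - win."""
    ern = roi_mean(error_amp) - roi_mean(correct_amp)
    frn = roi_mean(lose_amp) - roi_mean(win_amp)
    return ern, frn

def group_by_component_interaction(high_oc, low_oc):
    """With two groups and two components, the Group x Component interaction of a
    mixed-design ANOVA can be evaluated as a between-group t test on the
    within-subject ERN - FRN difference.
    high_oc, low_oc: lists of (ern, frn) tuples, one per subject."""
    diff_high = [ern - frn for ern, frn in high_oc]
    diff_low = [ern - frn for ern, frn in low_oc]
    return stats.ttest_ind(diff_high, diff_low)
```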

Figure 13.

 Interaction of anxiety group and ERN/FRN component amplitude depicting high ERN and low FRN scores in anxious subjects and low ERN but high FRN scores in low anxious controls.

Conclusion

The research program I have described here has now spanned a decade. We have learned much from our own work during that time and have learned even more from the work of others. For cognitive neuroscientists with an interest in executive function, cognitive control, and conflict adaptation, the medial frontal negativities I have described here are rich and compelling. As executive function is of major theoretical relevance to various psychopathologies, these same ERP components are equally promising as tools for translational research in clinical science. Of course, in both the basic and more applied research domains, much work remains to be done. In our laboratory, for example, we treat the ERN, CRN, and FRN as if they were reflections of a single process. Although this is a useful heuristic for us, we know that agreement on this is far from universal and that a variety of studies note distinctions among the three in terms of brain source and the nature of the cognitive process or processes that these negativities reflect (e.g., Müller, Möller, Rodríguez-Fornells, & Münte, 2005; Yordanova, Falkenstein, Hohnsbein, & Kolev, 2004).

With respect to individual differences, we know that a number of subjects with anxiety and depressive symptoms have exaggerated ERNs. That in our pediatric study this difference was stable despite significant reductions in anxiety symptoms raises important questions about the potential utility of the ERN as a vulnerability marker or biomarker for at least a subset of psychopathologies. This should be pursued with follow-up genetic and family studies, and we plan to do so. The unexpected finding that anxious subjects show reduced FRNs while at the same time evincing enhanced ERNs is intriguing. That this has been replicated in other trait-anxious subjects by Gu, Huang, and Luo (2009) and reported also by Foti and Hajcak (2009) with depressed subjects not only speaks to the reliability of the FRN as a correlate of these internalizing individual differences, but in conjunction with the ERN may provide important insights into the role of internal and external feedback during performance monitoring and into how these processes may be biased in a number of clinical disorders (see Olvet & Hajcak, 2008, for a brief discussion of externalizing disorders). Lastly, we are encouraged by the possibility that the CRN too may reflect information processing biases. The Moser et al. (2008) finding stands alone, however, and more work is needed in order to flesh out these results and put them in a context with other processing biases thought to characterize social anxiety. With one decade under our belts, we look forward to the challenges of another.

Footnotes

  1. Error trials prompted a slowdown on subsequent trials when these trials were compared to all correct trials. To rule out simple regression to the mean, error-following trials were also compared to the subset of correct trials that followed correct trials matched to the error trials on RT; the error-following trials remained significantly slower.

  2. This is not, of course, the whole story of biased processing in social anxiety. Biases can occur at different stages in the processing stream, biases may change across stages, and different ERP components may reflect different biases. In fact, the high-anxious subjects in the present experiment showed an enhanced P300 when the target was a negative face but showed no evidence of a negative bias in the CRN, whereas the positive bias in the low-anxious subjects was evident in the CRN but was not reflected in the P300 when target faces were positive.