Research on the psychological and neural foundations of morality has to date used a variety of methods and tasks to address these important questions, which will be reviewed briefly in the next section. Additionally, one important research direction to emerge from this interdisciplinary investigation has focused on the prominent role that emotions may play in how we assess morality.
2.1. Methods and tasks
The methods used in this new field have tracked those employed by cognitive neuroscience more broadly over the past decade. In addition to the standard methods of experimental psychology, where participants’ judgments and reaction times are measured as they are presented with vignettes involving moral issues, the field has also begun to use methodological approaches allowing for inferences about the neural processes underlying moral judgment.
Patients with focal brain lesions have long been studied in experimental psychology to determine whether specific brain regions are involved in particular cognitive processes. Work with these participants has also informed the study of moral judgment, as will be discussed below (Ciaramelli, Muccioli, Ladavas, & di Pellegrino, 2007; Koenigs et al., 2007). More recently, various neuroimaging techniques have also been employed extensively, primarily functional magnetic resonance imaging (fMRI; e.g., Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Schaich-Borg, Hynes, Van Horn, Grafton, & Sinnott-Armstrong, 2006). By measuring the ratio of oxygenated to deoxygenated blood in the brain as the participant ponders a moral scenario, this method takes advantage of the close relationship between this oxygenation ratio and neuronal firing, allowing researchers to infer which regions of the brain are engaged while the participant makes his or her judgment. Although there are still open questions about how accurately these regional blood flow changes track neuronal firing, the method has the major advantage of being noninvasive while offering good spatial and temporal resolution of the judgment process. In addition, other techniques such as transcranial magnetic stimulation (TMS), which allows for the temporary modulation of neural activity in healthy participants, are beginning to show promise for studying these processes.
In terms of how moral judgment is elicited, the vast majority of research on moral cognition has used short vignettes to present a particular situation to the participant, with the scenario usually pitting unclear or opposing moral principles against one another. The participant is then asked either what he or she would have done in that situation, or what he or she believes is morally permissible in the scenario. Probably the most famous set of moral dilemmas are the so-called trolley problems (Thomson, 1985). In the basic version of this scenario, participants are told that they are witness to a trolley careering down a track. In the “switch” version of the trolley problem, the participant is standing next to a railroad switch. If left on its current course, the trolley will hit and kill five people trapped further down the track. If the participant pulls the switch, the trolley will be diverted onto an alternate track where only one individual is trapped. Thus, pulling the switch will save five, but will kill one. Participants are usually asked whether it is morally permissible to pull the switch. In the “footbridge” version of the scenario, the participant is instead standing on a footbridge over the track when he or she notices the runaway trolley. As in the “switch” case, the trolley is headed toward five people trapped on the track. The participant on the footbridge happens to be standing next to a very large man, and is told that pushing this man onto the tracks will stop the trolley, killing him but saving the five others trapped on the track. What is morally permissible here?
A variety of different studies have found that most people say that it is morally permissible to pull the switch, but not permissible to push the man off the footbridge, despite the identical number of potential casualties in both cases (1 life vs. 5 lives) (Bartels, 2008; Greene, Morelli, Lowenberg, Nystrom, & Cohen, 2008; Greene, Nystrom, Engell, Darley, & Cohen, 2004; Greene et al., 2001; Hauser, Cushman, Young, Jin, & Mikhail, 2007; Koenigs et al., 2007; Petrinovich, O’Neill, & Jorgensen, 1993; Schaich-Borg et al., 2006; Valdesolo & DeSteno, 2006). The trolley problems have been highly successful in demonstrating that, even when the outcomes of scenarios are identical, moral judgment is more complicated than a mere actuarial accounting of lives saved and lost. However, as will be discussed later, the use of these rather unrealistic vignettes may compromise the ability of participants to truly place themselves in the situation, leaving open the question of the extent to which theories of moral judgment constructed from research on moral dilemmas will generalize to behavior in the real world.
2.2. Emotion in moral judgment
Perhaps the single most important contribution the last decade of research has made to the field of moral psychology is the notion that emotions play a critical role in moral judgment and decision making. Although by no means a new idea (e.g., see Hume 1739/1978 or Smith 1759/1966, for classical conceptions of the role of sentiments in morality), recent evidence suggesting that emotion is crucial for normal moral judgment directly contradicted the then-dominant Kohlbergian view. Lawrence Kohlberg, building from Piaget’s work on the stages of cognitive development (Piaget, 1932/1965), posited that moral judgments are the result of conscious reasoning via moral rules that are developed across the life span (Kohlberg, 1981). His focus on reasoning is a hallmark of rationalist moral psychology, which dominated the field through the latter part of the 20th century, and this perspective made little room for the role of emotion in moral psychology. However, the last decade has seen multiple studies—across different labs, using varying methodologies and techniques—which provide evidence that emotions are crucially involved in the formation of moral judgments.
Much of this research has utilized modern neuroscientific techniques, particularly fMRI, and studies of patients with brain damage (Ciaramelli et al., 2007; Greene et al., 2001, 2004; Koenigs et al., 2007; Moll, de Oliveira-Souza, Bramati, & Grafman, 2002; Moll, de Oliveira-Souza, Eslinger, Bramati, & Mourão-Miranda, 2002; Schaich-Borg et al., 2006). Other researchers have used subtle mood induction primes to investigate the role of particular emotions, such as disgust (Haidt, 2001; Schnall, Haidt, Clore, & Jordan, 2008; Wheatley & Haidt, 2005) and happiness (Valdesolo & DeSteno, 2006). The picture that arises from this body of research is one in which emotions are critical to conceptualizing and implementing morality.
The initial reevaluation of the importance of emotion to moral psychology developed from research that examined the behavioral deficits of patients who had suffered focal brain damage to the prefrontal cortex, particularly the orbitofrontal cortex and ventromedial prefrontal cortex (VMPFC). These studies demonstrated decision-making and emotional deficits, including diminished empathy and increased reckless and antisocial behavior (Anderson, Bechara, Damasio, Tranel, & Damasio, 1999; Damasio, 1994). These predictable patterns of emotional deficits and increased antisocial behavior led to the hypothesis that, counter to the Kohlbergian view, emotional processing may indeed be involved in moral psychology.
In an early attempt at exploring the brain mechanisms underlying morality, Moll and colleagues examined brain activity while participants read and made silent judgments about sentences that either did or did not contain morally relevant information (De Oliveira-Souza & Moll, 2000; Moll, Eslinger, & de Oliveira-Souza, 2001; Moll, de Oliveira-Souza, Bramati, et al., 2002). In each of these studies, increased activation in the VMPFC and the medial frontal gyrus was found in moral when compared with nonmoral cases. Employing a different approach, Moll, de Oliveira-Souza, Eslinger, et al. (2002) presented participants in an MRI scanner with emotionally charged images that either contained moral content (e.g., physical assaults or images of war) or did not (e.g., body lesions or dangerous animals). They found that, as expected, a complex set of brain regions was activated when participants looked at either moral or nonmoral emotional images. However, activation in the VMPFC and the medial frontal gyrus was selective for the morally charged images. This study was consistent with the early work with brain-damaged patients. Greene et al. (2001, 2004) have also explored emotional factors in moral judgment in a series of well-known functional imaging studies in which participants provide judgments regarding a series of complex moral dilemmas. Greene has distinguished not merely between moral and nonmoral cases but also between different types of moral dilemmas. This program of research makes a distinction between personal moral dilemmas, in which the choice faced is “up close and personal,” and impersonal moral dilemmas, in which the decision maker is more removed from the situation. These two types of moral dilemma are illustrated nicely by the two trolley problems discussed above.
Within Greene’s personal/impersonal distinction, the “switch” case is a prototypical impersonal case, whereas the “footbridge” case is a prototypical personal one. Greene has suggested that personal cases are likely to induce much stronger emotional responses than impersonal cases, arguing that this may explain the differences in responses to the “switch” and “footbridge” cases. As evidence for the role of emotion in “personal” moral dilemmas, Greene et al. (2001) compared neural activity when individuals read and made judgments about both types of dilemma, and found increased activation in the posterior cingulate and angular gyrus for personal compared to impersonal dilemmas. Both of these regions have been associated with emotional processing, bolstering the claim that emotion is involved in personal moral dilemmas.
Taken together, the evidence so far has strongly implicated the VMPFC, an area known to be involved in emotional processing, in moral judgment. Further support for the role of VMPFC in moral judgment comes from a recent study comparing the performance of normal controls to brain-damaged individuals with VMPFC lesions on a moral dilemma task (Koenigs et al., 2007). This study found that, in “high-conflict” personal moral dilemmas, brain-damaged individuals were more likely to make the utilitarian choice than were controls (for instance, they were more likely to say that it is permissible to push the large man off the footbridge). This finding was replicated in a similar study with an independent population of VMPFC patients (Ciaramelli et al., 2007). An additional study by Schaich-Borg et al. (2006) found increased activation in brain regions related to emotion, including the VMPFC, when individuals made judgments about cases that involved causing harm in order to ensure an overall benefit.
Although most of the above studies have treated emotion as a unitary category, several recent studies have begun to look at specific moral emotions such as guilt, indignation, disgust, and compassion. Two recent studies (Moll et al., 2007; Zahn et al., 2009) found that reading short statements designed to elicit prosocial emotions (guilt, compassion) resulted in increased activity in both anterior VMPFC and the superior temporal sulcus (STS), whereas statements designed to elicit other-regarding emotions (disgust, indignation) showed increased amygdala activation. Although research is only beginning to reveal the complicated nature of the neural mechanisms underlying moral emotions, these early studies suggest that moral psychologists should move beyond the simplistic notion of emotion as a unitary concept.
Behavioral studies have also indicated that emotions play an important role in moral judgment. Social Intuitionist theories have proposed that moral judgments are primarily caused by intuitive emotional responses to moral scenarios, and that although conscious reasoning can play a role in judgment formation, it typically serves to generate post-hoc rationalizations for the judgment (Haidt, 2001; Haidt & Bjorklund, 2008). The phenomenon of “moral dumbfounding” has been offered as evidence for this position (Bjorklund, Haidt, & Murphy, 2000). Moral dumbfounding occurs when an individual is highly confident of the rightness or wrongness of an action, but unable to provide a reasonable justification for why that judgment is correct. One well-known example arises when people are asked to make a moral judgment about the rightness or wrongness of consensual incest (Bjorklund et al., 2000). Most people agree that it is wrong for two adult siblings to engage in incest even when birth control is used and no physical or psychological trauma will result from the act. When asked why it is wrong, however, most people struggle to provide a reason, although they still maintain that the action is wrong. Similar instances of moral dumbfounding have been reported by other labs using substantially different stimuli (Cushman, Young, & Hauser, 2006; Hauser et al., 2007).
Several studies have additionally found that emotional manipulations can significantly impact moral judgment. The emotion that has perhaps received the most attention is disgust. Schnall et al. (2008) have found that individuals make harsher moral judgments in a physically dirty room when compared with a clean room, after smelling a disgusting versus a neutral smell, and after watching a disgusting video when compared with a sad or neutral one. Additionally, participants hypnotically primed to experience disgust judged scenarios as more morally wrong than individuals who had not received hypnotic suggestion (Wheatley & Haidt, 2005).
Although this evidence as a whole appears to leave little doubt that emotion is heavily involved in moral judgment, exactly what role it plays remains very much an open question. Moreover, where does the newfound prominence of emotion leave conscious reasoning in our understanding of moral judgment? One prominent explanation is a dual-process model of moral judgment (Greene et al., 2001). According to this view, moral dilemmas provoke responses from two separable, and oftentimes competing, neural processes: one associated with fast, automatic, affect-laden processing, and the other with more conscious, deliberate, and controlled reasoning. When faced with a dilemma such as the footbridge problem discussed earlier, two competing responses are therefore generated: an affectively aversive response to the idea of pushing the man onto the tracks, and a more deliberative recognition that pushing the man would bring about the greatest overall good in terms of lives saved. This theory proposes that in personal dilemmas such as the footbridge, the aversive response to the thought of pushing the person to his doom overwhelms any concerns about maximizing the overall good, thus generating the nonutilitarian response that it is wrong to push the man. In contrast, the typical utilitarian response to impersonal dilemmas such as the “switch” case is explained because the impersonal nature of the scenario evokes less of an emotional response, allowing more deliberative concerns about the overall good to be considered.
Much of the evidence on emotions reviewed above is consistent with this position. As already indicated, Greene and colleagues found that personal moral dilemmas were correlated with increased activity in emotion-related brain regions. Additionally, and crucially for their dual-process model, they also found that when individuals read “impersonal” moral dilemmas there was significantly increased activation in brain regions associated with working memory and reasoning.
In a follow-up study, Greene et al. (2004) found that in particularly difficult personal moral dilemmas (such as whether to smother an infant to save a group of people), activation was seen in the anterior cingulate cortex, a region thought to be active when cognitive and decision conflict is present (Botvinick, Braver, Barch, Carter, & Cohen, 2001; Pochon, Riis, Sanfey, Nystrom, & Cohen, 2008). This is consistent with the idea that in difficult dilemmas there is significant conflict between the immediate aversive response to harming someone and the utilitarian intuition to maximize the overall good. Moreover, they found that individuals who made the difficult utilitarian choice in personal dilemmas showed increased activation in the dorsolateral prefrontal cortex, a region associated with cognitive control and executive function (Miller & Cohen, 2001). Greene and colleagues interpreted this as support for the dual-process model, arguing that making the utilitarian judgment required increased executive control in order to override the competing prepotent emotional response.
The dual-process model can also accommodate and explain the increased utilitarian responses of the VMPFC patients reported above (Greene, 2007; although see Moll and de Oliveira-Souza, 2007b, for an alternative explanation). According to the dual-process view, damage to brain regions associated with emotional processing should lead to increased utilitarian judgments because there will be less competition from the emotional system. Recently, it has also been shown that placing individuals under increased cognitive load while they engage in a moral dilemma task causes them to be slower to make utilitarian, but not nonutilitarian, judgments in personal moral dilemmas (Greene et al., 2008). This result is again consistent with the dual-process model, although it should be noted that the effect was restricted to reaction times, not to differences in the moral judgments themselves. Increased cognitive load should interfere with the cognitive processes thought to elicit utilitarian responses, but not with the emotional processes thought to elicit nonutilitarian responses. Valdesolo and DeSteno (2006) provided additional support for this view. They used humorous video clips to induce a positive emotional state and then had people respond to standard trolley problems. Consistent with the dual-process model, they found that people primed with humor were more likely to make the utilitarian response than those who were not. Taken together, these studies suggest that the dual-process model of moral judgment is at the very least a promising theoretical framework. However, it should be noted that there is some controversy regarding this proposed model, with some researchers pointing out that there is to date relatively limited evidence for the existence of neural systems that correspond to the purported dual processes (Glimcher, Dorris, & Bayer, 2005).
As this brief review has hopefully made clear, emotion does play a crucial role in moral judgment. Exactly what role it plays, though, is still far from clear, and much work is needed to fully explain the processes underlying moral judgment. An additional issue in the field is the nature of the tasks used, namely the types of vignettes presented to participants. As the trolley series of problems nicely demonstrates, these scenarios are often rather fantastical (pushing large men onto trolley tracks, and so on) and appear quite unrepresentative of the kind of moral decision making we face in our everyday lives.
However, researchers studying different types of decisions have made useful progress both in understanding how emotions can impact choices and in developing tasks in which people are placed in real, consequential situations that raise moral issues. The following section will briefly review progress in this field, popularly termed neuroeconomics.