For simplicity, I will follow Greene and others in using ‘utilitarian’ to refer to what are more accurately called ‘utility maximising’ judgments (see Kahane and Shackel, 2010).
Paxton and Greene (2010) also discuss cases where individuals may genuinely apply explicit deontological principles using controlled processing, though on Greene's view such principles are themselves the products of past rationalizations of intuition.
In fact, although Greene and other researchers often present the trolley case as a core part of the traditional case against utilitarianism, this is highly misleading at best. The trolley problem is a problem internal to non-utilitarian ethics, as a look at Foot, 1978 and Thomson, 1985 would confirm.
It's common for researchers to report that some study has shown utilitarian judgments to be this or that, when it at most shows this about utilitarian (viz. utility maximising) judgments in trolley-like dilemmas. This slide from a narrow, modest claim to an unwarranted general claim is unfortunately frequent.
Or more precisely, the factors to which deontological intuitions are sensitive; it's obvious enough that utilitarian judgments are sensitive to consequences.
I am here glossing over various problems in identifying intuition, in the philosophers' sense, with immediate automatic processing.
Schwitzgebel and Cushman, 2012 may appear to provide such evidence, but it seems to me to show only that, unsurprisingly, the intuitions of experts can be subject to morally irrelevant influences. But it's perfectly rational (rather than any kind of ‘rationalization’) for these experts to then endorse principles consistent with these intuitions.
It might be objected that the claim that deontological judgment is based in emotion is surprising and controversial. It would be to some extent surprising and controversial if it turned out that deontological judgments were generally based on emotion, though something like this is already asserted by influential non-utilitarian authors (see e.g. Williams, 1973). In any case, for the reasons given above, not much turns on whether we take DP1 to be also a claim about emotion.
But the anatomical labelling of this activation as located in the DLPFC is controversial (see Moll and de Oliveira-Souza, 2007). Worse, Tassy et al. (2012) found that disruption of the DLPFC using transcranial magnetic stimulation actually led to an increase in utilitarian judgment!
However, the association between working memory capacity and utilitarian judgment held only for cases where the ‘utilitarian’ option involved harming an individual who would be harmed anyway (Moore et al., 2008), a type of choice that even most hard-core deontologists would endorse. The same was true for the inducement of utilitarian judgment using the ‘Cognitive Reflection Test’ (CRT) in Paxton et al., 2012, where rates of utilitarian judgment for the Footbridge case were not affected. (Paxton et al. also report a correlation between individual differences in performance on the CRT and utilitarian judgment, but only in a variant of Footbridge where failure to push the one would lead to thousands of deaths. I discuss cases of such catastrophic harm below.) For all we know at this stage, most current evidence for controlled processing in utilitarian judgment may really be due to this type of dilemma, which is not really at issue between utilitarians and their opponents. In addition, the CRT may introduce a number of confounds, since it involves mathematical problems that may bias subjects towards utilitarian solutions, and since it was originally designed precisely to measure individual differences in endorsement of counterintuitive cost-benefit solutions.
Greene also cites the example of Haidt's work on the role of disgust in generating intuitions against harmless violations, although these would be examples of deontological intuitions only in the loosest sense. But Haidt reports that the judgments of liberals who rejected these intuitions—and were thus making utilitarian judgments in the relevant sense—were not influenced by manipulation of cognitive load (reported in Haidt, 2012).
This picture is essentially endorsed in Cushman, Young and Greene (2010), where they refer to the Principle of Utility as the ‘welfare principle’.
One famous example is William James's (1891) rejection of an entire universe of supremely happy beings if this required even a single child to suffer horrible torture.
This point is made in Kamm, 2010, pp. 339-340. Selim Berker drew my attention to this.
This point is actually supported by Greene's own work: Shenhav and Greene (2010) report that so-called ‘utilitarian’ reasoning has the same neural correlates as non-moral reasoning.
When Greene (2008) claims that utilitarian judgment must be based in controlled processing because of its aggregative character, he is overlooking this. Notice further that this point isn't already acknowledged by Greene et al. (2004) when they describe utilitarian judgments as generated by ‘domain general’ cognitive processes. My point above isn't about the kind of subpersonal mechanisms that underlie utilitarian judgments; for all we know, distinctly moral forms of reasoning might be subserved by domain general processes. In any case, if the controlled processing really involves no more than the application of some general principle (of whatever content), then DP2 would be either false or highly misleading.
I wrote that deontologists aren't numerically challenged, but in fact virtually all subjects make a mixture of ‘utilitarian’ and deontological judgments. Are we to assume that they count the numbers only when they make utilitarian judgments? There is the further worry that judging that 5 is greater than 1 hardly requires any kind of cognitive effort.
In addition, an unpublished study by Shenhav and Greene suggests that the VMPFC is involved, not in generating affective responses driving deontological judgment, but in integrating such affective responses (correlated with amygdala activity) with calculations of utility (Greene, personal communication). This integration clearly reflects the kind of weighing of duties I've described. Such integration would make no sense if subjects saw the contribution of the deontological-affective component as entirely spurious: on that picture, our moral judgments should be completely insulated from the spurious emotional response, whose strength should make no difference at all.
I follow W. D. Ross in speaking of ‘prima facie duties’, but the relevant reasons are not literally ‘prima facie’ (merely appearing to be genuine reasons), and are best described as ‘pro tanto’ (genuine reasons that can be outweighed in a given context).
Nichols and Mallon (2005) show that non-philosophers can easily distinguish between whether an act violated some rule (weak impermissibility) and whether that act was wrong, all things considered.
It might be objected that subjects can't be engaged in such deliberation because, although something like the principle of utility is an explicit principle that people can apply, the resistance to, e.g., pushing the stranger in Footbridge is an intuition that most people are unable to articulate as an explicit principle (Cushman et al., 2006). But this is irrelevant. We can be conscious of, and torn between, opposing moral considerations even if we can't articulate them as fully explicit principles.
Notice that I'm not claiming that no one treats the intuition not to push as spurious, only that most people don't. There certainly are clear cases where people reject their intuitions/affective reactions as simply morally spurious; one example is the way liberals view their disgust reactions to harmless violations (Haidt, 2001).
Stanovich (2009) argues that talk about controlled processes fails to distinguish between algorithmic processes (such as explicit calculation or inference) and what he calls reflective processes, which relate to control states that regulate behaviour at a higher level. The kind of deliberation described above would fall on the reflective side of this division.
These findings suggest that many subjects not only accept both of the competing moral considerations in trolley dilemmas as valid, as I argued earlier, but might even hold (implicitly if not explicitly) that these dilemmas are what philosophers call ‘true dilemmas’: that both options are wrong. This fits both the point mentioned earlier, that people tend to see the utilitarian choice as permissible rather than required, and the recent study reporting that subjects rate both options as somewhat wrong when allowed to do so (see Kurzban et al., 2012; subjects did rate pushing in Footbridge as more wrong).
This is why it's misguided to suggest, as Greene does, that we take ‘utilitarian’ to refer to whatever processes turn out to underlie judgments with ‘characteristically utilitarian content’ (Greene, 2008).
Utilitarian judgments in personal dilemmas in healthy subjects were associated with longer RTs compared to deontological ones, but this difference was absent in VMPFC patients—again showing that the increase in response time reflected not ‘utilitarian reasoning’ but the conflict between its conclusion and a contrary deontological pull.
In addition, Greene et al. (2008) report that healthy subjects with a strong tendency to make ‘utilitarian’ judgments in Footbridge (and other high-conflict dilemmas) make these judgments faster than it takes most subjects to make deontological judgments. However, they also report that cognitive load increased RTs for utilitarian judgment even in these subjects.
Besides VMPFC patients, the other group that seems to approach moral dilemmas in a genuinely utilitarian fashion is individuals who score high in psychopathy (Bartels and Pizarro, 2011).
On this operationalization, ‘intuitiveness’ is a population-relative notion, so judging that we are forbidden to push in Footbridge counts as intuitive in this sense even though some individuals (such as patients with VMPFC damage) may lack this intuition. Moreover, recording the unreflective judgments of individuals is only an indirect measure of automaticity, so the classification used left it open what processes underlie judgments classified as ‘intuitive’. More importantly, it left it entirely open what processes underlie judgments classified as ‘counterintuitive’.
Moreover, the study did not find common correlates for so-called ‘utilitarian’ judgments across different domains. This further supports the hypothesis that the processes previously ascribed to utilitarian judgment are in fact generic deliberative processes. Judgments we classified as intuitive utilitarian judgments typically involved concern for others' wellbeing that did not require counting numbers. But as noted earlier, the correlates of such calculations are not of great interest for moral psychology. We did find that deontological judgments of different kinds were commonly associated with activation in the TPJ and PCC. These are not quite the areas Greene has previously associated with deontological judgment. The TPJ is commonly associated with theory of mind, and implicated in the ascription of intention in moral cognition, so this activation might just reflect the role of intention in deontological judgment.
Although absence of evidence isn't necessarily evidence of absence, I should note several independent worries about Greene's earlier result. First, Greene et al. (2004) relied on the rather peculiar subject-by-subject construct of ‘difficult’ dilemmas, which, they report, typically excluded from the analysis responses to Footbridge, the very dilemma their study was supposed to target! Second, so far as I know, Greene's DLPFC finding hasn't yet been replicated. Indeed, Shenhav and Greene (2010), which used a better-controlled set of dilemmas, also failed to find an association between utilitarian judgments and DLPFC activation.
Interestingly, patients with damage to the VMPFC also typically suffer damage in this area.
This is also supported by the finding that cognitive load affects the RTs of utilitarian judgments but not their frequency (Greene et al., 2008).
Schaich Borg et al. (2006) already reported (unsurprising) neural differences between responses to moral dilemmas involving different kinds of deontological considerations. My alternative model is also compatible with some kinds of intuitive judgments being more ‘cognitive’ than others.
Needless to say, if utilitarianism is understood only as a criterion of rightness (and not also as a decision procedure), then it is logically compatible with any form of deliberation—including applying the Categorical Imperative or Rational Egoism. But Greene's claims about the origins of utilitarian thinking require treating it, at least in this context, as referring to a distinctive decision procedure.
Of course, even if all experimental subjects who engaged in effortful deliberation about Footbridge ended up endorsing the utilitarian conclusion, this still wouldn't show that this conclusion is what controlled processing favours. Subjects in experiments have seconds, at most minutes, to reflect on the dilemmas they are given, and the initial outcome of reflection is often overturned by further deliberation.