A second example is the property transmission heuristic, item no. 7 in the catalog of singular clues. Under conditions of uncertainty, we seek causes that satisfy the property transmission clue, or we infer property transmission when judging effects, and this leads to predictable biases in causal judgment. White (2009b) described several lines of research that support this claim, including a series of studies on magical contagion (Nemeroff, 1995; Nemeroff & Rozin, 1989, 1994; Rozin, Millman, & Nemeroff, 1986; Rozin, Nemeroff, Wane, & Sherrod, 1989). As an example, Nemeroff and Rozin (1989) found that judged personality characteristics of fictitious groups of people tended to resemble distinguishing characteristics of animals they ate, as if the characteristics of the animals were transmitted to the people who ate them. Under conditions of uncertainty, judgments about personality characteristics were guided by the idea of property transmission, in this case transmission of characteristics from the animals in the people's diet.
It may be objected that the singular clues are not valid guides to causal identification because they are not exclusively associated with actual causal relations. Taking the contact clue as an example, one thing may come into contact with another but that does not guarantee that the former is the cause of something that happens to the latter. There are two replies to this. One is that the singular clues are used as guides to causal identification because they are empirically associated with something that is a valid guide to causal identification, namely the actual experience of a causal relation in acting on an object. The other is that they are not meant to be used in isolation. Things are more likely to be identified as causes as they possess more of the clues; a thing possessing just one or two clues is not likely to be identified as a cause. In fact, the clues provide a solution to a traditional problem of causal identification, the selection problem (Hesslow, 1988; Wolff & Song, 2003; Wolff et al., 2010). If something happens, there are, a priori, indefinitely many possible causes of it, and the problem for any plausible mechanism of causal induction is that it cannot process information about all those possible causes. Various ways of limiting the field of causal candidates have been proposed (Cheng & Novick, 1992; Griffiths & Tenenbaum, 2009; Hart & Honoré, 1985; Hume, 1978; Wolff et al., 2010), but the singular clues solve the problem automatically without the need to define limitations to the field. Any causal candidate can be rapidly assessed for the number of singular clues it possesses, and in fact the clues guide the search for causal candidates by focusing attention on certain areas at the expense of others. The contact and temporal immediacy clues, for example, serve to eliminate possible causes that are distant in space or time from the outcome of interest. 
Thus, the solution to the causal selection problem is that only candidates with the greatest number of singular clues are considered as possible causes. These can, of course, be subject to further testing, and other candidates may be sought if none proves satisfactory.
6.1. Implications for the use of contingency and conditional probability information in causal judgment
As was mentioned early in the article, there have been many proposals in psychology about how causal knowledge, valid or not, may be acquired from observation of regularities in events. They are rooted in a philosophical position that causality cannot be directly observed, and that multiple instances are required for any kind of belief about causality to emerge (Hume, 1978; Sosa & Tooley, 1993). This approach has dominated the study of causal judgment, to the extent that some prominent reviews of the causal judgment literature do not even mention other possibilities (Allan, 1993; De Houwer & Beckers, 2002; Holyoak & Cheng, 2011; Perales & Shanks, 2007).
In this section, I argue that singular clues cast this approach in a different light, one in which contingency is a derived clue to causality, not the source of causal knowledge. Specifically, I argue that the use of contingency and other empirical cues depends on the presence of singular clues to causality; it is redundant where there are adequate singular clues for causality to be identified in single instances; there is evidence that the use of contingency information in causal judgment is an acquired skill and not innate; there is evidence that contingency information is actually used for hypothesis testing, in other words in a context of acquired beliefs and clues to causality; and there is evidence that identical contingencies are treated differently in terms of causal inference. I shall discuss each of these in turn.
The most serious problem for testing any model or hypothesis in which contingency information (or any form of empirical regularity) is fundamental to human causal judgment is that of separating contingency from singular clues to causality. All situations in which the use of empirical information for causal judgment has been studied are confounded by the presence of numerous singular clues. I shall first illustrate this with an analysis of the puppet study by Gopnik et al. (2004).
In the puppet study, children aged 4 years were shown a stage with two puppets on it, and they saw the puppets move together and stop moving at the same time. The experimenter told them that one of the puppets was special and always made the other one move. The children had to guess which one was the special puppet. After a short training session, they were exposed to a series of trials in which the puppets moved together and stopped together. The experimenter then intervened, moving one of the puppets (Y); the other puppet (X) did not move. After that, both puppets moved together again. The children tended to say that X was the special puppet, which shows an inference being made from the experimenter's intervention. On trials where the experimenter does not appear to intervene, both puppets move; this is ambiguous, because either one could be making the other one move. If Y were the special puppet then, when the experimenter intervened on Y, X should have moved as well. X did not move. The experimenter's intervention therefore ruled out the hypothesis that Y was the special puppet, leaving only the hypothesis that X was special. This was a key experiment in an argument made by Gopnik et al. (2004) that children detected causal structure in a manner conforming to normative causal Bayes nets analyses.
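The elimination logic attributed to the children can be made concrete. The following is a minimal sketch, not anything from the original study: the hypothesis labels and the encoding of trials are illustrative. Each hypothesis about which puppet is special predicts what the other puppet should do when the experimenter intervenes on one of them, and any hypothesis whose prediction is contradicted by the observation is eliminated.

```python
# Illustrative sketch of intervention-based hypothesis elimination in the
# puppet study. Hypothesis "X" means puppet X is the special one (its motion
# makes the other puppet move); hypothesis "Y" means puppet Y is special.
# The encoding of trials is hypothetical, for illustration only.

def predict_other_moves(hypothesis, intervened_on):
    """What the hypothesis predicts the OTHER puppet does when the
    experimenter moves one puppet directly: if the intervened-on puppet
    is the special one, the other should move; otherwise it stays still."""
    return hypothesis == intervened_on

def eliminate(hypotheses, intervened_on, other_moved):
    """Keep only hypotheses consistent with the observed intervention trial."""
    return [h for h in hypotheses
            if predict_other_moves(h, intervened_on) == other_moved]

# The experimenter moves Y directly; X does not move.
remaining = eliminate(["X", "Y"], intervened_on="Y", other_moved=False)
print(remaining)  # ['X'] -- only "X is special" survives
```

On the observed trial, the hypothesis that Y is special predicts that X should have moved; it did not, so only the hypothesis that X is special remains, matching the children's responses.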
The puppet study does not in fact support that claim, because the scenario in the puppet study provides several singular clues that could have guided children's judgments: no. 1, human action (not only intervention by the experimenter but also the puppets were introduced as actors); no. 3, no prior activity of entity in which the outcome occurred; no. 4, contact between cause and outcome object, though in this case contact was mediated by the physical connection between the puppets; no. 5, monodirectionality, which is explicit in the instructions; no. 6, change in the object acted on at contact, because there is no temporal interval between the motions of the two puppets; no. 7, property transmission, in the form of resemblance between the motions of the two puppets; no. 8, brief duration of interaction. The intervention of the experimenter on one of the puppets possesses all those clues and more: no. 2, prior activity of causal object, because the experimenter moves to the puppet in acting on it, and no. 4, contact between experimenter and puppet is direct. Note that the children would not be in any doubt that the experimenter moved the puppet and the puppet did not move the experimenter. This is so obvious that it has never been pointed out, but it is causal knowledge that is a prerequisite for ascertaining the causal structure of the puppet situation. Clearly, a great many singular clues to causality are present in that experimental situation. Similar analyses can be provided of any study of causal structure elucidation. The claim that participants are identifying causal structures just from conditional probabilities is untenable. A proper test of that claim would require a situation in which there were no singular clues.
Studies of causal judgment from contingency information also typically provide several of the singular clues. Usually, a scenario is set up with specific content, and judgments are made about contingency information within the context of the scenario. For example, in Lober and Shanks (2000) and Collins and Shanks (2002), a research scenario was used in which the possible cause was irradiation and the outcome was mutation in the DNA of the irradiated organism. This scenario satisfies several of the clues: no. 1, human action (the radiation was activated by a researcher), no. 2, prior activity of possible cause (irradiation); no. 3, the organism in which the outcome occurred was not active prior to the application of the radiation; no. 4, contact between radiation and the organism; no. 5, monodirectionality, because it is unlikely that participants would have believed that DNA can affect radiation; no. 6, change in the organism at contact, because no delay between irradiation and occurrence of mutation was mentioned; and no. 8, brief duration of interaction. In addition, the higher order clue of mechanism was satisfied, because the scenario was designed to evoke pre-existing beliefs about causal mechanisms connecting radiation and mutation. Scrutiny of any scenario used in causal judgment research will reveal a similar list of singular clues to causality. Use of causal language in instructions and materials is also routine.
If contingency is the fundamental means by which causal learning occurs, then it should be possible to demonstrate that it occurs under conditions where the singular clues are absent. Temporal contiguity can be excepted from this, because it is arguable that a fairly brief window of time is required for the detection of contingency (Shanks & Dickinson, 1987). No published experiment has met those conditions. It can therefore be argued that the use of contingency information in causal judgment is dependent on the presence of at least some of the singular clues, not to mention the causal language used by the experimenter.
It could be argued that singular clues and contingency information are just used for different purposes: Singular clues are used for problems of causal attribution, where the issue is to identify the cause of one particular outcome, and contingency information is used to establish general causal beliefs. However, as I showed earlier, infants and young children are inclined to generalize causally relevant properties revealed by brief interactions (e.g., Aguiar & Baillargeon, 1998), and older children tend to induce novel causal hypotheses from properties of single instances (e.g., Schauble, 1990). Singular clues derived from experience of single instances therefore form a source of hypotheses about causal generalities that may or may not be subject to further testing. In that way, they support inference of general causal beliefs as well as addressing causal attribution problems.
In the present account, causal learning begins with actions on objects and gradually progresses to a point where reasonably sophisticated use can be made of contingency information (I say “reasonably” because of the requirement for other singular clues to causality to be satisfied). If that is the case, then one would expect both developmental and individual differences in the use of contingency information in causal judgment. There is evidence for both.
Shaklee and Elek (1988) compared American junior high school students and college students (no age data were published) and found superior performance by normative standards in the latter, in terms of proportions of participants who identified covariates as causes. Shaklee and Goldston (1989) sampled participants aged 8, 12, and 21 years, and found improvement with age in identifying covariates as causes. Individual differences have been found among adults (Anderson & Sheu, 1995; White, 2000). Anderson and Sheu (1995) and White (2000) both found that some participants used only cause-present information for causal judgment and ignored cause-absent information in the stimulus materials. That finding, which disconfirms the predictions of any model that postulates normative competence at causal judgment, has never been satisfactorily explained by advocates of contingency-based models (Cheng, 1997; Cheng & Novick, 2005; White, 2005). In addition, White (2000) found self-reported individual differences in the use of cause-absent information. Tendencies in causal judgments matched the reports made by the participants, indicating a degree of insight into individual use of information. Some participants reported what White (2000) called the “closer to zero rule”: If the previous judgment had been below zero (on a scale from −100, preventing, to +100, causing), then an instance of cause-absent information led them to raise their judgment, and if the previous judgment had been above zero, then the same cause-absent information led them to lower their judgment. A minority of participants reported using that rule and their judgments were consistent with the use of it. These tendencies can be interpreted as representing attempts by participants to deal with information that is not in a form that they find natural for causal judgment. It is clearly not the case that everyone makes judgments in the same way, and neglect and idiosyncratic use of cause-absent information are common.
Most models of causal judgment from contingency information have treated causal learning as inductive. That is, given a few starting assumptions, causal judgments and beliefs emerge from the accumulation of empirical information without the prior formulation of hypotheses to direct the search for information. An example of this approach is the power PC theory (Cheng, 1997). In that theory, it is assumed that humans are born with the knowledge that there is such a thing as causality in the world, and the inferential mechanism modeled in the theory is also present as a form of innate competence. Causal beliefs are acquired simply from the mechanism operating on input information about contingencies without the need for directed searches for particular kinds of information, or any kind of formulation and testing of causal hypotheses. Contrary to that approach, there is considerable evidence that contingency information is used, not inductively, but for hypothesis testing (Ahn et al., 1995; Luhmann & Ahn, 2011; Schauble, 1990; Zimmerman, 2007). That is, causal judgment begins with the formulation of one or more hypotheses, and information is then sought that might have confirmatory or disconfirmatory value for the hypothesis under test. This is a sophisticated use because the formulation of a hypothesis requires some domain-specific knowledge. It could be argued that hypotheses are generated from accumulated contingency information, but research findings do not strongly favor that possibility. At the very least, it is just one among many ways in which hypotheses could be formulated. Schauble (1990) found that even when children were collecting data that would enable contingencies to be assessed and scrutinized for possible hypotheses, they tended to generate hypotheses on the basis of individual instances, not on patterns detected across a sample of instances.
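For readers unfamiliar with the power PC theory, its core computation can be sketched. For a generative candidate cause, Cheng (1997) defines causal power as the contingency ΔP = P(e|c) − P(e|¬c) divided by (1 − P(e|¬c)). The sketch below applies that formula to hypothetical probability values chosen only for illustration.

```python
def delta_p(e_given_c, e_given_not_c):
    """Contingency: Delta-P = P(e|c) - P(e|not-c)."""
    return e_given_c - e_given_not_c

def generative_power(e_given_c, e_given_not_c):
    """Causal power of a generative candidate cause (Cheng, 1997):
    p = Delta-P / (1 - P(e|not-c)).  Undefined when P(e|not-c) = 1."""
    return delta_p(e_given_c, e_given_not_c) / (1 - e_given_not_c)

# Hypothetical data: the outcome occurs on 75% of cause-present trials
# and on 50% of cause-absent trials.
print(delta_p(0.75, 0.50))           # 0.25
print(generative_power(0.75, 0.50))  # 0.5
```

Note how the same ΔP of 0.25 would yield different causal powers at different base rates; the division by (1 − P(e|¬c)) is the theory's correction for outcomes produced by background causes.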
Moreover, if humans have a hypothesis-testing orientation, they are not restricted to contingency information in testing their hypotheses. Any of the singular clues could be used, as could any of the kinds of information derived from the singular clues. Ahn et al. (1995) argued that mechanism clues are favored. Participants in their experiments sought clues to causal mechanisms and had little interest in contingency information. The materials included event descriptions that were nonsense sentences, and things about which people could not have had preconceived beliefs. In an experiment where people had enough information to conduct a covariation analysis to identify some factor in the stimulus information as the cause, they did not do so; instead, they identified a mechanism not explicit in the description of the event. There is a good reason for this: “the focus of the mechanism analysis would be on discovering the process underlying the relationship between the cause and the effect” (Ahn et al., 1995, p. 304). Their point is that contingency analysis can only identify a cause, or give justification for changing one's estimate of the probability of a given cause, whereas mechanism analysis can answer the “how?” question, explaining how a given effect is generated. That is more informative and more useful. The force of the argument is illustrated with a real-world example in the supplementary materials.
Luhmann and Ahn (2011) have taken this argument further, showing that even the confirmatory and disconfirmatory status of individual instances of contingency information is not fixed. White (2000) had already shown that the status of cause-absent information could vary depending not only on the individual participant but also on the value of the previous causal judgment. Luhmann and Ahn (2011) showed that the status of cause-present information is open to interpretation as well. Objectively, occurrences of an outcome in the presence of a candidate cause are confirmatory for the cause because they tend to increase the contingency, and non-occurrences of the outcome in the presence of the cause are disconfirmatory because they tend to decrease the contingency. Luhmann and Ahn showed, however, that the interpretation of such instances varied depending on the kinds of instances that had been encountered earlier. If previous instances were predominantly confirmatory, for example, then non-occurrences of the outcome in the presence of the cause were most commonly interpreted either as coincidence or as indicating that, for some reason, the cause failed to produce the outcome on that occasion. This is reminiscent of the reasoning about objectively disconfirmatory instances in the study by Schauble (1990): Instead of being treated as disconfirmatory for the hypothesis, they tended to be explained away as exceptions or reinterpreted as not disconfirmatory. Equivalent tendencies were found in the interpretation of objectively confirmatory instances if the previous instances had been predominantly disconfirmatory. Luhmann and Ahn also showed that adding confirmatory instances after a history of largely disconfirmatory instances could lead to lower, not higher, causal judgments if a second candidate cause was present. 
The presence of the second cause provided a means of interpreting the apparently confirmatory instances in a way that maintained the hypothesis about the first cause.
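The objective sense in which instances are confirmatory or disconfirmatory can be checked arithmetically from the standard 2 × 2 contingency table: adding a cause-present, outcome-present instance raises ΔP, while adding a cause-present, outcome-absent instance lowers it. A minimal sketch with hypothetical frequencies:

```python
def delta_p_from_counts(a, b, c, d):
    """Delta-P from the 2x2 contingency table:
    a = cause present, outcome present;  b = cause present, outcome absent;
    c = cause absent, outcome present;   d = cause absent, outcome absent."""
    return a / (a + b) - c / (c + d)

base   = delta_p_from_counts(6, 2, 2, 6)  # 0.5 with hypothetical counts
plus_a = delta_p_from_counts(7, 2, 2, 6)  # one more cell-a instance
plus_b = delta_p_from_counts(6, 3, 2, 6)  # one more cell-b instance
print(plus_a > base)  # True: cell-a instances are objectively confirmatory
print(plus_b < base)  # True: cell-b instances are objectively disconfirmatory
```

Luhmann and Ahn's finding is that this objective status does not fix how people interpret the instance: the same cell-b observation may be explained away as a coincidence or a one-off failure of the cause, depending on the history of earlier instances.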
The final problem to be discussed here concerns the findings of a study by Muentener and Carey (2010). Several studies with infants at a few months of age have presented stimuli in which a moving object contacts a stationary one and the stationary one then moves. The consensus of the findings is that infants have a causal impression, an impression that the moving object makes the stationary object move, that appears to resemble the launching effect that occurs with adult observers (Cohen, Amsel, Redford, & Casasola, 1998; Cohen & Oakes, 1993; Leslie, 1982, 1984; Leslie & Keeble, 1987; Newman, Choi, Wynn, & Scholl, 2008; Oakes, 1994; Oakes & Cohen, 1990; Saxe & Carey, 2006). Muentener and Carey used a variation on this stimulus in which a screen concealed the space in which contact would occur. The target object was partly occluded by the screen; the causal object (a toy train) moved toward the target and disappeared behind the screen, whereupon the target moved off. Infants aged 8 months perceived this as causal: They were surprised if, when the screen was removed, the causal object did not contact the target object, but not surprised if it did. In a variation on this, Muentener and Carey presented changes of state in the target object: The target object either emitted musical sounds or broke into pieces. Infants did not perceive those stimuli as causal; whether the causal object contacted the target or not when the screen was removed made no significant difference to their looking times. In the critical experiment, the toy train was replaced with a human hand. This time, infants apparently perceived both motion and change of state in the target as caused by the hand. Thus, they perceived contact from a human hand as causing a change of state in another object at an age when they did not perceive contact from an inanimate object as causing the same change of state.
Muentener and Carey pointed out that the conditional probabilities in those two events are exactly the same: The outcome always occurs when the moving object contacts the stationary one. If infants were inferring causality from detected conditional probabilities, therefore, they should represent both events as causal relations. The finding that they represent only one of the events as a causal relation therefore counts against the hypothesis that empirical cues are fundamental to causal learning. Instead, the findings support the hypothesis advocated here that human action is fundamental to causal learning.
In summary, contingency information is one among a large number of clues to causality. It cannot be used alone and must be supplemented by singular clues such as temporal contiguity, which itself has to be meaningfully defined. It cannot be the origin of causal understanding partly because its use depends on the presence of singular clues to causality and partly because it cannot be used purely inductively. Instead, it is used for the testing of hypotheses, and its use is significantly biased by the hypothesis under test. The use of contingency information in causal judgment is a sophisticated accomplishment confined to those who have already acquired knowledge about causality from other sources.