Causal Explanation and Fact Mutability in Counterfactual Reasoning


  • This research was supported in part by an AFOSR MURI grant and by AFOSR grant FA9550-09-1-0507. The third author thanks the Japan Society for the Promotion of Science (JSPS, Project ‘Inferential mechanisms and their linguistic manifestations’), the American Council of Learned Societies (ACLS, Project ‘Speaking of time and possibility’), and the Lichtenberg-Kolleg at Georg-August-Universität, Göttingen, for support of the theoretical work.

Morteza Dehghani, Institute for Creative Technologies, University of Southern California, 12015 Waterfront Drive, Playa Vista, CA 90094, USA.


Recent work on the interpretation of counterfactual conditionals has paid much attention to the role of causal independencies. One influential idea from the theory of Causal Bayesian Networks is that counterfactual assumptions are made by intervention on variables, leaving all of their causal non-descendants unaffected. But intervention is not applicable across the board. For instance, backtracking counterfactuals, which involve reasoning from effects to causes, cannot proceed by intervention in the strict sense, for otherwise they would be equivalent to their consequents. We discuss these and similar cases, focusing on two factors that play a role in determining whether, and which, causal parents of the manipulated variable are affected: speakers' need for an explanation of the hypothesized state of affairs, and differences in the ‘resilience’ of beliefs that are independent of degrees of certainty. We describe the relevant theoretical notions in some detail and provide experimental evidence that these factors do indeed affect speakers' interpretation of counterfactuals.
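
To make the contrast between intervention and backtracking concrete, the following is a minimal sketch, not taken from the paper: a toy Boolean causal chain A → B → C in which conditioning on B (a backtracking-style update) revises beliefs about its cause A, while an intervention do(B) cuts the incoming arrow and leaves the causal non-descendant A at its prior. The variable names, probability tables, and function names are illustrative assumptions introduced only for this example.

```python
# Toy Boolean causal chain A -> B -> C.
# The probability numbers below are illustrative assumptions, not from the paper.
P_A = {1: 0.3, 0: 0.7}                                      # prior on the cause A
P_B_given_A = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.2, 0: 0.8}}    # P_B_given_A[a][b]
P_C_given_B = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.1, 0: 0.9}}    # P_C_given_B[b][c]

def joint(a, b, c):
    """Joint probability factorized along the causal chain A -> B -> C."""
    return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

def backtrack_A(b_obs):
    """Backtracking-style update: condition on B = b_obs and let the
    evidence flow back to the cause A (A's distribution is revised)."""
    num = {a: sum(joint(a, b_obs, c) for c in (0, 1)) for a in (0, 1)}
    z = sum(num.values())
    return {a: p / z for a, p in num.items()}

def intervene_A(b_set):
    """Intervention do(B = b_set): the arrow A -> B is cut, so the causal
    non-descendant A keeps its prior distribution, whatever b_set is."""
    return dict(P_A)

print("P(A | B=1)     =", backtrack_A(1))    # A is revised toward its likely cause
print("P(A | do(B=1)) =", intervene_A(1))    # A keeps its prior
```

Run as written, the conditional query shifts P(A=1) from 0.3 to roughly 0.66, while the interventional query leaves it at 0.3, which is the sense in which strict intervention cannot model backtracking reasoning from effects to causes.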