Keywords:

  • Dynamical models;
  • Explanation;
  • HKB model;
  • Mechanistic explanation;
  • Predictivism

Abstract

While agreeing that dynamical models play a major role in cognitive science, we reject Stepp, Chemero, and Turvey's contention that they constitute an alternative to mechanistic explanations. We review several problems dynamical models face as putative explanations when they are not grounded in mechanisms. Further, we argue that the opposition of dynamical models and mechanisms is a false one and that those dynamical models that characterize the operations of mechanisms overcome these problems. By briefly considering examples involving the generation of action potentials and circadian rhythms, we show how decomposing a mechanism and modeling its dynamics are complementary endeavors.


1. Introduction

Stepp, Chemero, and Turvey advocate embracing dynamical explanation as an alternative explanatory framework to mechanistic explanation. While we defend a major role for dynamical models in cognitive science, we reject the claim that they should be construed as alternatives to mechanistic explanations. After presenting two points of clarification in Section 2, we respond to their proposal in three ways. In Section 3, we review several well-known problems with the view of explanation they advance. More important, in Section 4, we demonstrate how dynamical models possessing explanatory force are best understood as instances of bona fide mechanistic explanations. Finally, in Section 5, we briefly elaborate on how decomposing a mechanism and modeling its dynamics are complementary endeavors.

2. Two clarifications

First, Stepp et al. frame their argument as defending the very possibility of explanatory dynamical models against mechanistically minded philosophers who propose that “dynamical explanation is not genuinely explanatory, but merely describes phenomena.” However, we (along with many other sensible advocates of the mechanistic perspective) do not hold that all dynamical models in cognitive science are restricted to providing mere descriptions of phenomena and are thereby explanatorily defective. In fact, the debate between us would be far less interesting if this were the case. Instead, we readily grant that some (even many) dynamical models in cognitive science and neuroscience do provide genuine explanations for the phenomena they cover, and in doing so successfully transcend mere description.1 Below we discuss examples involving dynamical explanations of action potentials and circadian rhythms. The real point of contention is what makes explanatory dynamical models explanatory. Here there is genuine disagreement and room for debate between us, to which we attend shortly.

Second, Stepp et al. falsely assume that mathematical modeling, especially modeling involving differential equations, and mechanistic explanation are in opposition. They state “Dynamical explanations do not propose a causal mechanism that is shown to produce the phenomenon in question. Rather, they show that the change over time in a set of magnitudes in the world can be captured by a set of differential equations.” This betrays a deep, but widespread, confusion that causal-mechanical explanations and dynamical mathematical modeling (especially models taking the form of differential equations) are somehow mutually exclusive. This is simply not true. Under a particular interpretation, the Hodgkin-Huxley model of the action potential is both mathematical-dynamical, comprising a set of coupled differential equations to describe the dynamics of the membrane potential, and a mechanistic explanation describing how the components (ion channels) and the activities involving these channels are organized and orchestrated to generate action potentials. When Hodgkin and Huxley first offered their equations, they were based on data about membrane potentials; details about the channels themselves were unknown. Their model initiated the search for the underlying parts and operations of the mechanism. The subsequent discoveries of the channels they anticipated, coupled with continued refinements in the equations themselves, provide a rich example of how mechanistic research and dynamic modeling have supported each other.
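For concreteness, the core of the model in its standard textbook form (our gloss of the familiar equations, not a quotation from Hodgkin and Huxley or from Stepp et al.) couples an equation for the membrane potential V to kinetic equations for the channel gating variables:

\[
C_m \frac{dV}{dt} = I_{\mathrm{ext}} - \bar{g}_{\mathrm{Na}}\, m^{3} h\,(V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^{4}\,(V - E_{\mathrm{K}}) - \bar{g}_{L}\,(V - E_{L}),
\]
\[
\frac{dx}{dt} = \alpha_{x}(V)\,(1 - x) - \beta_{x}(V)\, x, \qquad x \in \{m, h, n\}.
\]

Read mechanistically, the conductance terms correspond to populations of sodium, potassium, and leak channels, and the gating variables m, h, and n to the voltage-dependent opening and closing of those channels, so the same set of equations serves both as a dynamical model and as a (partial) description of the mechanism.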

Crucially, once the assumption that dynamics and mechanism are mutually exclusive is jettisoned, one cannot simply use an observation about the increasing prevalence of dynamic models in cognitive science to read off the emergence of a brand new paradigm of nonmechanistic explanation, as Stepp et al. do. Indeed, we readily admit that dynamical models involving differential equations are increasingly commonplace in cognitive science and various domains of neuroscience. What must be shown is that dynamical models explain even when they do not describe mechanisms, as Stepp et al. maintain. Justifying this claim requires a much more comprehensive picture than we can give here of what this alternative, nonmechanistic form of explanation looks like and of the rules for assessing such explanations as either good or bad. In the following sections, we focus on this core question in the debate between us: What is required of good dynamical explanations in cognitive science?

3. Shortcomings of predictivist accounts of explanation

Mechanists, as correctly characterized by Stepp et al., endorse the view that models explain in virtue of describing how the organized activities and interactions among components in a system produce the target phenomenon. Putative dynamical explanations are to be evaluated as good or bad according to this same mechanistic guideline; they are not subject to different rules of assessment. Thus, the mechanistic perspective on dynamical models is uniform and remarkably clear: Dynamical explanations do not provide a separate kind of explanation; when they explain phenomena, it is because they describe the dynamic behavior of mechanisms. What, then, do Stepp et al. propose as their alternative, nonmechanistic explanatory framework? And by what standard are their explanations to be evaluated?

The alternative view of explanation Stepp et al. embrace might be termed predictivism. According to predictivism, the explanatory power of dynamical models derives from their descriptive and predictive power. In advocating this prediction-based view of explanation, Stepp et al. follow along well-trodden lines within the philosophy of science tracing back to Hempel’s (1965) covering law account of explanation. The attempt to link the explanatory force of dynamical models to their predictive import in this manner is also not new. van Gelder and Port (1995) stressed that dynamical explanation “yields not only precise descriptions…but also predictions which can be used in evaluating the model” (p. 15). In similar fashion, van Gelder (1998, p. 625) asserts that “[m]any factors are relevant to the goodness of a dynamical explanation, but the account should at least capture succinctly the relations of dependency, and make testable predictions.”

Predictivism confronts many well-known shortcomings (Salmon, 1989). For example, by knowing a law-like regularity, one can predict a storm’s occurrence from falling mercury in the barometer, but the falling mercury does not explain the occurrence of the storm. Rather, a common cause (a drop in atmospheric pressure) explains both the falling barometer value and the impending storm. Along these same lines, a dynamical model of a given cognitive phenomenon might be predictively adequate (i.e., the model predicts the relevant aspects of the phenomenon with the required precision and accuracy), and yet its variables may represent only magnitudes that are mere correlates of a common cause for that phenomenon. Just as we reject the claim that the barometer drop explains the storm, we should also resist the claim that such a dynamical model explains. Explanatory adequacy is thus not (merely) predictive adequacy. Moreover, predictivism lacks the resources to distinguish, among models with the same predictive import, those that are explanatory from those that are not. These considerations, of course, directly bear on Stepp et al.’s claim that the HKB model explains to the extent that it is capable of generating accurate quantitative predictions (e.g., for the observed phase transitions in subject behavior, and possibly also for unobserved effects on the behavioral dynamics induced by future experimental manipulations).

The second major problem with Stepp et al.’s predictivist view is that it lacks the resources to distinguish describing a phenomenon from explaining it. The problem is that the variables posited in dynamical models often represent measurable quantities at roughly the level of the cognitive or behavioral performance itself. Indeed, the favored interpretation of the HKB model (the one Stepp et al. endorse) is that it characterizes the temporal evolution of one purely behavioral dependent variable (relative phase) as a function of another purely behavioral independent variable or control parameter (finger oscillation frequency). As a result, the HKB model offers, in Robert Cummins’s (2000) language, a description of an effect and not an explanation.
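For reference, the relative-phase dynamics at the heart of the model can be written in its standard published form (our restatement of the familiar equation, not a quotation from Stepp et al.):

\[
\dot{\phi} = -a \sin\phi - 2b \sin 2\phi,
\]

where \(\phi\) is the relative phase between the two fingers and the ratio b/a decreases as oscillation frequency increases. Both quantities are defined at the level of the observed behavior itself; linearizing around \(\phi = \pi\) shows that the anti-phase pattern loses stability once b/a falls below 1/4, which is the predicted phase transition.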

4. What explanation requires

At the center of our differences with Stepp et al. is what is required for explanation. This is not just a dispute about the word explanation but about what scientists seek when they try to develop an explanation. One strategy in recent philosophy of science for characterizing what explanations provide is the unification of multiple phenomena under a common generalization (Friedman, 1974; Kitcher, 1989). Stepp et al. embrace this understanding of explanation when they claim that dynamicist models “unify disparate phenomena.” This is achieved when models describe general features of the behavior of systems independently of the material facts about those systems. For example, they point to the fact that the core equation of the HKB model can be used to describe similar coordination patterns implemented across physically disparate systems. We agree that an important role of dynamical modeling is to reveal such widespread patterns. But we deny that the range of application of a given dynamical model (e.g., that it applies to bimanual coordination, social coordination, locomotion, etc.) bears on whether it explains the phenomenon in question. If we want to know why humans exhibit the phenomenon described in the HKB model, it is merely suggestive to note that a similar pattern is observed in a variety of other systems. Given the pattern alone, we have no better idea than before of how humans (or any other system, for that matter) come to behave in compliance with the model. After all, not all systems do; a pair of boulders does not exhibit HKB dynamics. If anything, then, the broad scope of certain dynamical models merely indicates that many other similar phenomena require explanations as well, and perhaps these explanations will be similar. Whether they will in fact be similar is, of course, an open empirical question on which we take no stand here.
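To make this concrete, here is a minimal numerical sketch (our own illustration, with arbitrary parameter values, not code from Stepp et al. or from Kelso and colleagues) of the substrate neutrality at issue: integrating the HKB relative-phase equation reproduces the anti-phase-to-in-phase transition whatever physical system is taken to realize the oscillators, while saying nothing about how any particular system manages to comply with it.

```python
import numpy as np

# HKB relative-phase dynamics: d(phi)/dt = -a*sin(phi) - 2*b*sin(2*phi).
# Anti-phase coordination (phi = pi) is stable only while b/a > 1/4; as
# movement frequency rises, b/a falls and the system switches to in-phase
# coordination (phi = 0). Parameter values below are arbitrary.

def settle(a, b, phi0=np.pi - 0.1, dt=0.01, steps=20000):
    """Euler-integrate the HKB equation from an initial relative phase."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi))
    return phi

print(round(settle(a=1.0, b=1.0), 2))  # b/a = 1.0: stays near pi (anti-phase)
print(round(settle(a=1.0, b=0.1), 2))  # b/a = 0.1: collapses toward 0 (in-phase)
```

The same few lines would "apply" equally to finger movements, pendulums, or coupled lasers; that is precisely why the breadth of application, by itself, leaves open how any of those systems produces the pattern.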

What is required to explain a given phenomenon is to identify the responsible mechanism and the conditions under which it is operating. Although not acknowledged by Stepp et al., many proponents of the dynamical approach, including prominent modelers such as Kelso, appear to have recognized the importance of mechanistic explanation in their own ongoing research programs. Since the HKB model’s original development, Kelso and colleagues have pursued a line of research to understand how the behavioral regularity depicted by the model might result from features of the underlying organization of component neural systems and the dynamics of their interactions (see, e.g., Jantzen, Steinberg, & Kelso, 2009; Jirsa, Fuchs, & Kelso, 1998; Schöner & Kelso, 1988). More specifically, Kelso and colleagues (Jirsa et al., 1998) have proposed a neural field model connecting the observed phase shift described by HKB to the underlying dynamics of neural populations in motor cortex. By mapping connections between the model components and components of neural systems, Kelso et al. have undeniably started to transform their model into a mechanistic one. In other words, they seem to recognize what Stepp et al. do not, namely, the explanatory value of pushing beneath regularities couched at the behavioral level to reveal underlying mechanisms.

It should also be noted that this development raises a final problem for the predictivist view that Stepp et al. endorse. Because predictivism assimilates explanatory power to predictive power, it makes it impossible to improve the quality of an explanation without increasing its predictive reach. Yet Kelso and his collaborators, through their investigations into the mechanistic underpinnings for the HKB phenomenon, seem to be doing exactly that. Suppose the original HKB model successfully identified the significant variables that account for the vast majority of the variance in the phenomenon, all without understanding or specifying the causal structures by which those variables change or by which those variables influence the phenomenon, as Stepp et al. imply. If Kelso and colleagues eventually succeed and learn how the observed phase shift in finger coordination relates to underlying dynamics of neural populations in motor cortex, this might result in refinements to the original model (e.g., via inclusion of additional variables standing for the relevant neural components and their dynamical interactions). Many, ourselves included, would want to argue that the supplemented HKB model provides a deeper, better explanation for the phase shift phenomenon than the original model. Yet the supplemented model might have no greater predictive power in spite of its improvement as an explanation.

A prediction-centric view of explanation seems to miss out on precisely those features that distinguish better from worse explanations, good explanations from bad. The mechanistic perspective, on the other hand, is capable of capturing this explanatory improvement, since mechanists are committed to the idea that predictive and explanatory adequacy can and do vary independently of one another. According to the mechanistic viewpoint, predictive power is crucial but insufficient for explanation. What more is required to transform a mere dynamical model into a genuine dynamical explanation is, as we have already stated, a description of the mechanism.

5. The complementarity of dynamics and mechanisms

It should be clear that our disagreement with Stepp et al. does not concern the importance of dynamics in cognitive science. Even if, as we have contended, some dynamical models do not explain, they may still make important contributions to cognitive science, for example, by revealing the dynamic behavior of cognitive systems. Often they may do much more. As we claimed in the case of the Hodgkin-Huxley model, the equations helped guide the search for the components of the responsible mechanism. In the decades since Hodgkin and Huxley introduced them, there has been a highly productive interaction between the discovery of additional channels for other ions and the incorporation of additional equations characterizing the resulting conductances.

In support of their conception of explanation by dynamical models not tied to mechanisms, Stepp et al. appeal to the application of the strong anticipation model of Voss (2000). The Voss model purportedly explains the synchronization of multiple neuronal oscillators in the mammalian suprachiasmatic nucleus responsible for generating circadian rhythms, in which the neurons not receiving direct entrainment by the animal’s exposure to light are phase-advanced relative to those that are entrained by light.2 We do not deny that dynamical models can illuminate initially puzzling phenomena such as the phase relations between driven and nondriven oscillators. But without specifying the parts and operations in the mammalian circadian system that perform such functions as the coupling between oscillators, this account is empty. The model remains only a how-possibly model, not an explanation of the coupling of oscillators in mammals. Other features of the Voss model deserve brief comment. The model characterizes the oscillatory processes of springs, not neurons. This may not be a problem: It is a standard and useful strategy to employ models that merely save the phenomena generated by an individual mechanism when the focus is on the interactions of that mechanism with others. Researchers, for example, tend not to use Hodgkin-Huxley-style equations when modeling complex circuits. However, if it turns out that the behavior depends upon the particulars of the individual mechanism, it becomes critical to take those features into account and revise the model. Once again, mechanistic decomposition and dynamic modeling are complementary, not opposed (for further discussion, see Bechtel & Abrahamsen, 2010; Kaplan & Craver, in press).
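For readers unfamiliar with the scheme, the generic form of Voss-style anticipating synchronization can be stated compactly (our gloss of the general scheme, not the particular spring equations Stepp et al. discuss):

\[
\dot{x} = f(x), \qquad \dot{y} = f(y) + K\,[\,x(t) - y(t - \tau)\,].
\]

Substituting y(t) = x(t + \tau) makes the coupling term vanish, so the delay-coupled "slave" system has an invariant solution on which it tracks the driver's future state; when that solution is stable, the slave anticipates the driver by \tau. Writing the scheme out this way also makes plain what it leaves unspecified: nothing in the equations says which parts and operations of the mammalian circadian system realize f, the coupling K, or the delayed feedback.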

6. Conclusion

We have raised several objections to Stepp et al.’s proposal that dynamical models eschewing a concern with underlying mechanisms are explanatory. We have also attempted to demonstrate that the opposition between dynamic and mechanistic explanations is a false one. Dynamic accounts are explanatory when they characterize the operations of the underlying mechanism (including how it is related to features of its environment). When they do not, they fail to provide explanations, whatever their other virtues.

Footnotes
  • 1

    It is important to note that no party to the debate should wish to hold that all dynamical models are explanatory. As van Gelder (1998), one prominent advocate of the dynamical approach, has clearly pointed out, good dynamical explanations can be distinguished from bad ones, which include mathematical models involving little more than ad hoc curve fitting.

  • 2

    Part of Stepp et al.’s point in bringing this up is to argue against the need for representations in such accounts. A detailed discussion of representations would take us beyond the focus of this paper (see Bechtel, in press, for more discussion); suffice it to note that advancing a dynamical model does not show that a system is not using representations. One must analyze the mechanism whose behavior is being modeled to determine whether it contains internal processes that carry information about what is external to it. Another of Stepp et al.’s arguments against representations underlying circadian rhythms appeals to the finite persistence of oscillations in free-run conditions. This only counts against a simple characterization of representations as fixed, stable structures always available to perform their role. There is no basis for saddling defenders of representations with such an account.

References

  • Bechtel, W. (in press). Representing time of day in circadian clocks. In A. Newen, A. Bartels, & E.-M. Jung (Eds.), Knowledge and representation. Palo Alto, CA: CSLI Publications.
  • Bechtel, W., & Abrahamsen, A. (2010). Dynamic mechanistic explanation: Computational modeling of circadian rhythms as an exemplar for cognitive science. Studies in History and Philosophy of Science Part A, 41, 321–333.
  • Cummins, R. (2000). “How does it work?” versus “what are the laws?”: Two conceptions of psychological explanation. In F. Keil & R. Wilson (Eds.), Explanation and cognition (pp. 117–144). Cambridge, MA: MIT Press.
  • Friedman, M. (1974). Explanation and scientific understanding. Journal of Philosophy, 71, 5–19.
  • van Gelder, T. (1998). The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences, 21, 615–628.
  • van Gelder, T., & Port, R. (1995). It’s about time: An overview of the dynamical approach to cognition. In R. Port & T. van Gelder (Eds.), It’s about time (pp. 1–43). Cambridge, MA: MIT Press.
  • Hempel, C. G. (1965). Aspects of scientific explanation. In C. G. Hempel (Ed.), Aspects of scientific explanation and other essays in the philosophy of science (pp. 331–496). New York: Macmillan.
  • Jantzen, K. J., Steinberg, F. L., & Kelso, J. A. S. (2009). Coordination dynamics of large-scale neural circuitry underlying rhythmic sensorimotor behavior. Journal of Cognitive Neuroscience, 21, 2420–2433.
  • Jirsa, V. K., Fuchs, A., & Kelso, J. A. S. (1998). Connecting cortical and behavioral dynamics: Bimanual coordination. Neural Computation, 10, 2019–2045.
  • Kaplan, D. M., & Craver, C. (in press). The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philosophy of Science.
  • Kitcher, P. (1989). Explanatory unification and the causal structure of the world. In P. Kitcher & W. C. Salmon (Eds.), Scientific explanation. Minnesota studies in the philosophy of science, Vol. XIII (pp. 410–505). Minneapolis, MN: University of Minnesota Press.
  • Salmon, W. C. (1989). Four decades of scientific explanation. In P. Kitcher & W. C. Salmon (Eds.), Scientific explanation. Minnesota studies in the philosophy of science, Vol. XIII (pp. 3–219). Minneapolis, MN: University of Minnesota Press.
  • Schöner, G., & Kelso, J. A. S. (1988). Dynamic pattern generation in behavioral and neural systems. Science, 239, 1513–1520.
  • Voss, H. U. (2000). Anticipating chaotic synchronization. Physical Review E, 61, 5115.