We think that it is worth taking the strong anticipation model seriously as an explanation of circadian systems. Indeed, nonrepresentational explanation of circadian phenomena is especially attractive when one considers that circadian rhythms are active at the levels of organism, organ, and single cell (in Drosophila, at least, there are clocks in all cells; Sehgal, Price, Man, & Young, 1994). But the sort of nonrepresentational, nonmechanistic explanation of circadian systems that strong anticipation offers has no place in Bechtel’s proposed philosophy of science for cognitive science. We do not wish to go so far as to recommend replacing Bechtel’s philosophy of science with a different one; rather, we offer a philosophy of science for the growing minority in the cognitive sciences whose explanations are dynamical and do not posit mechanisms or representations.
6.1. Defining “cognition”
For some cognitive scientists, that cognition involves transformations of internal representations is simply a matter of definition: they define “cognition,” and the subject matter of cognitive science, as involving representations. For example, in describing what they call the “mark of the cognitive,” Adams and Aizawa (2008) argue that cognitive systems must traffic in representations with nonderived content. Similarly, Rowlands (2009) defines cognition as information processing that produces representations. These understandings of cognition settle by definition what surely must be discovered. Whether all, some, or none of cognition involves representations is an empirical matter, and the empirical facts on this are simply not in. Moreover, like Bechtel’s proposed philosophy of science, such definitions exclude a good deal of actual published research in cognitive science. We therefore propose an understanding of cognition that is inclusive of what is studied by nonrepresentationalist, dynamicist cognitive scientists. We take it that cognition is the ongoing, active maintenance of a robust animal–environment system, achieved by closely coordinated perception and action. Of course, this understanding of the nature of cognition is intended primarily to reflect the phenomena studied by dynamicist cognitive scientists in philosophy, psychology, AI, and artificial life, that is, perception-action. But notice that it also applies to learning, speaking, reasoning, and other traditionally cognitive phenomena.
6.2. Dynamical models are genuinely explanatory
Some cognitive scientists and philosophers of cognitive science maintain that dynamical explanation is not genuinely explanatory but merely describes phenomena. This claim can stem from a strong theoretical commitment to computational explanation (e.g., Adams & Aizawa, 2008) or from a normative commitment to mechanistic philosophy of science (e.g., Craver, 2007). Yet many cognitive scientists are committed to neither computational explanation nor normative mechanistic philosophy of science, and they can embrace dynamical explanations as genuine explanations. There is good reason to take dynamical explanations to be genuine explanations and not mere descriptions. To see why, we must say a bit about how dynamical explanation works in practice.
Dynamical explanations do not propose a causal mechanism that is shown to produce the phenomenon in question. Rather, they show that the change over time in a set of magnitudes in the world can be captured by a set of differential equations, as shown in the case of circadian systems above. These equations are law-like, and in some senses dynamical explanations are similar to covering law explanations (Bechtel, 1998; Chemero, 2009). That is, dynamical explanations show that particular phenomena could have been predicted, given local conditions and some law-like general principles. In the case of circadian systems and the strong anticipation model, we predict the behavior of the slave system using the mathematical model and the observed activity of the master system. Notice too that this explanation is counterfactual supporting: We can use the mathematical model to predict the activity of the slave system under so-far-unobserved activity in the master system. These predictions can be the basis of further experimentation, which allows some dynamical models to act as guides to discovery (Chemero, 2009). Indeed, in the best dynamical explanations, an initial model of some phenomenon is reused in slightly altered form, so that apparently divergent phenomena are brought under a small group of closely related models. Thus, dynamical explanation can provide unification, in the sense discussed by Friedman (1974) and Kitcher (1989). We can see this by looking at research on coordination dynamics.
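The master–slave prediction just described can be illustrated with a toy model of anticipating synchronization (in the spirit of Voss, 2000, on which strong anticipation models build): a slave system that carries a copy of the master's dynamics plus delayed self-feedback comes to track the master's future state, y(t) ≈ x(t + τ). This is a minimal sketch; the harmonic-oscillator master and the parameter values below are illustrative assumptions, not parameters from any circadian model.

```python
import math

omega = 1.0   # master oscillation frequency (illustrative)
k = 1.0       # coupling strength (illustrative)
tau = 0.3     # anticipation interval: the delay in the slave's self-feedback
dt = 0.01
steps = 6000  # 60 time units
delay = int(tau / dt)

def master(t):
    # Master: harmonic oscillator, solved exactly
    return math.cos(omega * t), -math.sin(omega * t)

# Slave state, plus a ring buffer holding its own delayed values y(t - tau)
y1, y2 = 0.0, 0.0
hist = [(0.0, 0.0)] * delay

traj = []
for n in range(steps):
    t = n * dt
    x1, x2 = master(t)
    y1d, y2d = hist[n % delay]  # y(t - tau), read before overwriting
    # Slave: same dynamics as the master + coupling of the CURRENT master
    # state to the slave's own DELAYED state (Euler integration)
    dy1 = omega * y2 + k * (x1 - y1d)
    dy2 = -omega * y1 + k * (x2 - y2d)
    hist[n % delay] = (y1, y2)
    y1 += dt * dy1
    y2 += dt * dy2
    traj.append((t, y1))

# After the transient, the slave should match the master's future state
# (tau ahead) much better than the master's present state.
late = traj[4000:]
err_future = sum(abs(y - math.cos(omega * (t + tau))) for t, y in late) / len(late)
err_now = sum(abs(y - math.cos(omega * t)) for t, y in late) / len(late)
print(err_future, err_now)  # anticipation: err_future is much smaller than err_now
```

The point of the sketch is the one made in the text: given the model and the observed master activity, the slave's behavior is predicted, here even its behavior τ ahead of the master's current state.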
The best-established body of research that employs complex systems in the cognitive and neural sciences is work on coordination dynamics. Its empirical antecedents were the investigations of von Holst (1936/1974). Its theoretical antecedents were arguments by Kugler, Kelso, and Turvey (1980) (and, tangentially, by Gibson, 1979; Bernstein, 1967; and Iberall, 1977) that explanations of coordination be consistent with the strictures of the physical principles that inform the self-organization of biological systems. Its modeling antecedents were the mathematical formalisms of Haken (1977), developed to address the potentially profound analogies among seemingly very different systems studied in the physical, biological, and social sciences (see Frank, 2004). Coordination dynamics’ departure point was bodily rhythms.
Rhythmic limb movements at a common frequency tend to occur in two stable patterns of coordination, inphase and antiphase. With an increase in the common frequency there is a tendency for antiphase of homologous muscles to switch spontaneously to inphase of homologous muscles, but not vice versa. This bistable 1:1 frequency locking of limbs can be characterized by relative phase φ, with the observed interlimb patterns mapped onto point attractors at 0 radians and π radians.
The simplest dynamics satisfying the aforementioned behavior for two limbs or limb segments of the same type (e.g., left and right index fingers) are given by

φ̇ = −dV/dφ, (Eq. 4)

where V is the potential function

V(φ) = −a cos φ − b cos 2φ. (Eq. 5)

It has “valleys” or attractors (at 0 and ±π) and “hilltops” or repellors (at ±π/2 and ±3π/2), with the relative strengths of the attractors governed by the ratio b/a. Given Eq. 5, Eq. 4 becomes (Haken, Kelso, & Bunz, 1985):

φ̇ = −a sin φ − 2b sin 2φ. (Eq. 6)
For reasons that will become apparent below, Eq. 6 is the deterministic and symmetric form that captures the law-like principles of elementary coordination (Kelso, 1995; Park & Turvey, 2008). As such, one expects to see the hand of Eq. 6’s dynamics revealed in each and every manifestation of monofrequency rhythmic behavior—most notably, the feature of reflectional symmetry in Eq. 5 and the distinction between the stability of coordination at (or in the vicinity of) 0 radians and that at (or in the vicinity of) π radians (Park & Turvey, 2008). The identification of the principles of elementary coordination accords with three principal lessons from the study of complexity (Goldenfeld & Kadanoff, 1999).
Lesson 1: Even in simple situations, Nature produces complex structures and even in complex situations Nature obeys simple principles.
Lesson 2: Revealing large-scale structure requires a description that is phenomenological and aggregated and directed specifically at the higher level.
Lesson 3: A modeling strategy that includes very many processes and parameters obscures (qualitative) understanding.
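The elementary coordination dynamics, φ̇ = −a sin φ − 2b sin 2φ, can be seen at work in a minimal numerical sketch. Decreasing the ratio b/a plays the role of increasing movement frequency; linearizing at φ = π shows that antiphase remains an attractor only while b/a > 1/4, so a trajectory started near antiphase settles at π when b/a is large but relaxes to inphase when b/a is small. The parameter values below are illustrative assumptions.

```python
import math

def settle(phi0, a, b, dt=0.01, steps=20000):
    """Integrate phi_dot = -a*sin(phi) - 2*b*sin(2*phi) by simple Euler steps."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * math.sin(phi) - 2 * b * math.sin(2 * phi))
    return phi

a = 1.0
# Start near antiphase (phi = pi - 0.1) in both regimes:
slow = settle(phi0=math.pi - 0.1, a=a, b=1.0)  # b/a = 1.0 > 1/4: antiphase stable
fast = settle(phi0=math.pi - 0.1, a=a, b=0.1)  # b/a = 0.1 < 1/4: switches to inphase
print(round(slow, 2), round(fast, 2))  # prints: 3.14 0.0
```

This reproduces, in miniature, the one-way antiphase-to-inphase switch described above: the same start state ends at π in the slow regime and at 0 in the fast regime, and nothing drives 0 back to π.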
Equation 6 has proven to be more than a compact and convenient way to describe interlimb synchrony. It has generated multiple novel predictions that have been evaluated experimentally (see summaries in Kelso, 1995; Fuchs & Jirsa, 2008). This is especially the case for its stochastic nonlinear Fokker-Planck form (Frank, 2005; Schöner, Haken, & Kelso, 1986) and (potentially) nonsymmetric form. The latter obtains when (a) fluctuations in coordination and (b) differences between the two limbs are incorporated, respectively, by inclusion of a Gaussian white noise ξt of strength Q and an “imperfection” parameter Δω that can assume values other than zero:

φ̇ = Δω − a sin φ − 2b sin 2φ + √Q ξt. (Eq. 7)
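The stochastic, symmetry-broken dynamics φ̇ = Δω − a sin φ − 2b sin 2φ + √Q ξt can likewise be sketched numerically. In this minimal Euler-Maruyama simulation (all parameter values are illustrative assumptions), noise makes the relative phase fluctuate around an attractor, and a nonzero detuning Δω shifts that attractor away from perfect inphase at 0:

```python
import math, random

random.seed(1)  # fixed seed so the run is reproducible

def mean_phase(delta_omega, a=1.0, b=1.0, Q=0.01, dt=0.005, steps=40000, phi0=0.5):
    """Euler-Maruyama integration of Eq. 7; returns the mean phase over the
    second half of the run (after the transient)."""
    phi = phi0
    samples = []
    for n in range(steps):
        drift = delta_omega - a * math.sin(phi) - 2 * b * math.sin(2 * phi)
        phi += dt * drift + math.sqrt(Q * dt) * random.gauss(0.0, 1.0)
        if n >= steps // 2:
            samples.append(phi)
    return sum(samples) / len(samples)

symmetric = mean_phase(delta_omega=0.0)  # mean phase fluctuates around 0
detuned = mean_phase(delta_omega=0.5)    # attractor shifted away from 0
print(symmetric, detuned)
```

With Δω = 0 the mean settles near 0; with Δω = 0.5 it settles near the shifted fixed point (about 0.1 rad for these a and b), which is the qualitative signature of the "imperfection" parameter.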
A brief survey of the contributions of Eq. 7 and its extensions to the cognitive and neural sciences follows: attention (e.g., Amazeen, Amazeen, Treffner, & Turvey, 1997), intention (e.g., Scholz & Kelso, 1990), learning (e.g., Newell et al., 2008), handedness (e.g., Treffner & Turvey, 1995, 1996), polyrhythms (e.g., Sternad, Turvey, & Saltzman, 1999), interpersonal coordination (e.g., Schmidt & Richardson, 2008), cognitive modulation of coordination (Pellecchia, Shockley, & Turvey, 2005), sentence processing (e.g., Olmstead, Viswanathan, Aicher, & Fowler, 2009), speech production (Port, 2003), and brain-body coordination (Kelso et al., 1998).
Neurodynamics, as the name suggests, provides its own set of like examples. Skarda and Freeman (1987) showed that the background activity of the rabbit olfactory bulb can be modeled as a chaotic dynamical system. Bressler, Coppola, and Nakamura (1993) showed that Eq. 7 predicts the coordinated activity of brain areas during perceptual tasks. Varela, Lachaux, Rodriguez, and Martinerie (2001) suggest that large-scale neural integration, the establishment of transient phase couplings among brain areas (i.e., couplings whose relationship is measured by relative phase φ), forms the substrate for cognition and conscious experience. Collectively, this work established the now-thriving neurodynamics research program (see Cosmelli, Lachaux, & Thompson, 2007, for a review).
The above list crosses anatomical, species, and functional boundaries, spanning multiple disciplines. Dynamical cognitive scientists have brought these disparate phenomena under a single model, in a case of textbook scientific unification.
We take it that the above gives a sense of how dynamical explanation works in the cognitive sciences, and how it explains cognition, appropriately understood. Dynamical cognitive science explains the ongoing, adaptive perception-action of robust animal–environment systems; dynamical systems models provide law-like explanations, support counterfactuals, and allow predictions that can guide future experimental research; and the best dynamical models can be used to unify disparate phenomena, capturing them under a single explanatory scheme.