#### 2.1. Attractors and detuning

Adaptation and re-adaptation data such as those shown in Fig. 2 (right panel) have frequently been analyzed by fitting them to a monotonically decaying or increasing function (see, e.g., Blau et al., 2009; Brookes, Nicolson, & Fawcett, 2007; Fernandez-Ruiz & Diaz, 1999; Martin et al., 1996a, 1996b). In the present article, we evaluate adaptation and re-adaptation from a dynamical systems perspective. Within this perspective, Schöner and Kelso (1988a, 1988b) suggested that learning any arbitrary bimanual rhythmic coordination (e.g., a phase relation of 45°) involves (a) an *intrinsic attractor* (e.g., 0° or in-phase) that is invariant over learning trials, and (b) a *learning attractor* that emerges over learning trials. Paralleling this suggestion, Frank et al. (2009) proposed a model of prism adaptation that involves an intrinsic attractor that is invariant over adaptation trials and an *adaptation attractor* that evolves over adaptation trials (“prism on” condition) and disappears during the re-adaptation trials (“prism off” condition; see left panel of Fig. 2). Their model includes a parameter δ that expresses the aforementioned training-to-test change in the throwing arm’s moment of inertia (and, thereby, its haptic stimulation) and allows the reconstruction of the latent aftereffect.

The use of δ to designate this parameter highlights the parameter’s relation to the notion of *detuning* in the coordination dynamics literature (Kelso, 1995; Park & Turvey, 2008). It does so in the following sense: The test conditions were either (a) “in resonance” with the training conditions, δ = 0 (e.g., Test 1 for Group 1), or (b) “out of resonance” with the training conditions, δ ≠ 0 (e.g., Test 1 for Group 5). In the present article, we investigate the extent to which a fit of Blau et al.’s (2009) data to the model of Frank et al. (2009) provides evidence that the parameter δ is related to the experimentally manipulated difference between the movement conditions of training and test.

#### 2.2. Nonlinear attractor dynamics model of prism adaptation

We consider a stochastic version of the nonlinear attractor dynamics (NAD) model of prism adaptation proposed by Frank et al. (2009). As originally formulated, the NAD model accounts for the stability and flexibility of the adaptation process. Before and after completion of the adaptation process, performance is stable. During the adaptation process, performance is (by definition) flexible.

The original NAD model describes the evolution of the performance error *x* in a time-continuous framework. At any fixed time point, *x* is subject to a so-called potential, or attractor, dynamics. That is, *x* is regarded as a coordinate of a potential energy function and evolves in such a way that its potential energy value decays toward a minimum. This dynamical process constitutes the stability property. The flexibility property, a key feature of Frank et al.’s (2009) NAD model, is that the potential energy function itself evolves as an integral aspect of the adaptation process.

The two features, stability and flexibility, are coupled in the NAD model. To effect this coupling, the total performance error potential is composed of a fixed (time-invariant) intrinsic potential and a potential that emerges during adaptation (the adaptation potential). The latter potential reflects one part of the coupling between flexibility and stability. Another part of the coupling is given by the fact that the performance error acts as a driving force on the evolution of the adaptation potential (see Eq. A1).

The evolution of the adaptation potential can be captured by means of the signed potential amplitude α. The sign of α indicates the direction in which the emerging adaptation potential shifts the performance error and, in doing so, drives it toward zero. The magnitude of α describes the overall scale of the potential. The original NAD model of Frank et al. (2009) includes two coupled evolution equations. One describes the evolution of the performance error (and, as noted above, accounts for performance stability). The other describes the emergence of the adaptation potential (and, as noted above, reflects performance flexibility). The second-order dynamical model can exhibit oscillatory and non-oscillatory solutions. In what follows, we restrict analyses of prism adaptation data to the non-oscillatory part of Frank et al.’s NAD model. In this case, we can simplify using the principle of adiabatic elimination (see Haken, 2004). In doing so, we obtain a first-order dynamical model for the evolution of the signed amplitude α (see Appendix A). In the time-continuous framework, the adaptation dynamics for α reads

- (1)

Equation (1) is a quantitative description of the phenomenon that performance is flexible under adaptation. Here, *t* is time defined on a continuous scale (see Frank et al., 2009); we will consider an event-based, time-discrete scale below. In Eq. (1), the parameter *s* denotes the effective prismatic shift, that is, the effect of shifting the gaze by prism glasses. The strength of the prismatic shift on a behavioral activity may vary across individuals (e.g., Warren & Platt, 1975). Accordingly, *s* denotes the experienced prismatic shift as measured in terms of the motor performance; that is, *s* corresponds to the initial motor performance error. This experienced shift need not correspond exactly to the degree by which the prism glasses shift the target laterally relative to the participant’s sagittal plane.

The parameter *c* ≥ 0 in Eq. (1) is the weight of the intrinsic error correction potential with respect to the emerging adaptation potential. The parameter δ reflects an additive force term of the error dynamics that tends to produce a positive error for δ > 0 and a negative error for δ < 0 (see Eq. A1). To reiterate, it was shown in Frank et al. (2009) that a parameter such as δ, capturing the difference between training (adaptation trials) and test (re-adaptation trials), could be used to explain, at least qualitatively, the emergence of AE2, the latent aftereffect. The parameter κ > 0 characterizes how the strength of visual feedback about the performance error affects the adaptation process, that is, the emergence of the adaptation potential. (An alternative interpretation of the parameter κ is given below.) The rightmost term in Eq. (1) is a fluctuating force composed of a normalized Langevin force Γ(*t*) and an amplitude *Q* > 0.

The stochastic evolution equation, Eq. (1), can be split into two parts. To this end, we consider the function

- (2)

This function captures the deterministic impacts on the evolution of the adaptation amplitude α and is called the drift function. With this drift function, Eq. (1) reads

- (3)

The first-order dynamical model describes a potential dynamics involving the potential function *V*. That is, Eq. (3) can be written as

- (4)

with

*V*(α) = −∫*h*(α)dα, and

- (5)

for *c* > 0, and

- (6)

for *c* = 0. Note that if |α| is large with respect to *c*, then Eq. (5) for the potential *V* reduces to Eq. (6), just as in the case *c* = 0.

Figure 3, with the parameter α expressed in meters, illustrates the classical adaptation (*A*) and re-adaptation (*R*) process as expected for participants in Group 1 (no change in arm moment of inertia, see Fig. 2) and as predicted by the NAD adaptation model. During the adaptation phase, the potential *V*(α), which governs the emergence of the adaptation attractor, looks qualitatively as shown by *A* in Fig. 3A. The minimum is different from zero due to the impact of the prismatic shift. As shown in Fig. 3B (upper panel), the adaptation attractor amplitude increases from zero (open circle) to a stationary finite value (full circle) that corresponds to the potential minimum (minimum of *A* in panel A). During re-adaptation, the potential *V*(α) has the qualitative form of *R* in Fig. 3A and exhibits a minimum at zero. Consequently, the amplitude dynamics converges to zero (transition indicated by the open and full circles in the lower panel of Fig. 3B).

The prism adaptation dynamics predicted by the NAD model for experimental conditions involving a training-to-test change in movement details is more complex and is illustrated in Fig. 4. The potential *V*(α) during the adaptation condition (Training) is illustrated by *A* in Fig. 4A. Under the first re-adaptation condition (Test 1), we are dealing with a potential *V*(α) that looks qualitatively like *R*_{1}. The potential does not exhibit a minimum at zero, although the prismatic shift equals zero. The reason is that *δ* is different from zero and shifts the minimum of *V*(α) away from the origin. Only during the second re-adaptation condition (Test 2), shown by *R*_{2}, does the potential *V*(α) exhibit a minimum at α = 0. The adaptation amplitude accordingly evolves in three steps, as illustrated in Fig. 4B (top panel to bottom panel).

We can now consider the relation between α and the performance error *x*. Due to the aforementioned adiabatic elimination process, the performance error *x* can be computed from α by means of an invertible nonlinear mapping (see Appendix A)

- (7)

such that Eq. (1) can be expressed alternatively in the non-closed form

- (8)

From Eq. (8), it becomes clear that the parameter κ corresponds to one of the two constants that describe the coupling between stability (performance error dynamics) and flexibility (evolution of the adaptation attractor). The other coupling constant is given by the parameter *c* (see Eqs. A1 and A6). Moreover, the fact that Eq. (7) provides a mapping between the adaptation amplitude and the performance error implies that the stability-related subprocesses can change at most at a rate determined by the subprocesses accounting for performance flexibility. In other words, in the modeling effort presented above we focus on those cases in which the time scale characterizing performance flexibility under adaptation is the rate-limiting factor for the overall dynamics, including both stability and flexibility. Mathematically speaking, those cases involve a first-order dynamics and consequently are consistent with experimentally observed performance errors that decay with a non-oscillatory pattern under adaptation.

In order to improve the link between model and experiment, and to estimate the parameters on the basis of experimental data, we consider in what follows a time-discrete, event-based counterpart of the time-continuous dynamical model, Eq. (1). First, we consider a sequence of performed actions *n* = 1, 2, . . ., *N*. We denote by α(*n*) the adaptation amplitude when the *n*th action is performed. Likewise, we denote by *x*(*n*) the error observed in the *n*th performed action (*n*th trial). With these notations in mind, the first-order differential equation, Eq. (1), has as its counterpart a first-order difference equation, which reads

- (9)

In Eq. (9), we replaced the parameter κ by a parameter *k*. Both parameters have the same interpretation but different time units. The fluctuating force in the time-discrete case is described by the random variable ε. At any time step *n*, the variable ε is drawn from a normal distribution with variance equal to *Q* (see Frank, 2005; Risken, 1989). Equation (4) becomes

- (10)

The performance error *x* is related to the amplitude α by the mapping

- (11)
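The structure of this event-based scheme can be sketched in code. The drift function and the values of `s`, `k`, and `q` below are illustrative stand-ins (a simple linear relaxation toward the potential minimum), not the published NAD equations, which are given in Frank et al. (2009) and Appendix A.

```python
import random

def simulate(alpha0, n_steps, k, q, drift, seed=0):
    """Iterate a generic first-order difference equation of the type
    alpha(n+1) = alpha(n) + k * drift(alpha(n)) + eps(n),
    where eps(n) is drawn from a normal distribution with variance q
    (the structure of Eq. (9); the actual NAD drift differs)."""
    rng = random.Random(seed)
    alpha, traj = alpha0, [alpha0]
    for _ in range(n_steps):
        eps = rng.gauss(0.0, q ** 0.5) if q > 0 else 0.0
        alpha = alpha + k * drift(alpha) + eps
        traj.append(alpha)
    return traj

# Hypothetical linear drift: relaxation toward a potential minimum at s.
s = -0.15                                  # illustrative experienced shift (m)
traj = simulate(alpha0=0.0, n_steps=30, k=0.3, q=0.0, drift=lambda a: s - a)
# With q = 0 the iteration is deterministic and alpha converges toward s,
# mimicking the growth of the adaptation amplitude over training trials.
```

The sketch illustrates why the discrete scheme is convenient for fitting: each trial corresponds to one iteration, so a trajectory of *N* trials requires exactly *N* − 1 updates.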

#### 2.3. Model-based data analysis

In the following, we show how we estimated the parameters *δ*, *c*, *k*, and *Q* from the experimental data published in Blau et al. (2009) using Eqs. (9)–(11) (for a comment on the growth curve analysis used in Blau et al. and the model-based analysis used in the current study, see Appendix C). Let *x*^{D}(1), *x*^{D}(2), . . ., *x*^{D}(*N*) denote the experimentally observed performance errors. We first selected a set of parameters *k*, *δ*, and *c*. We then iterated the model, compared the model trajectory {*x*(1), *x*(2), . . ., *x*(*N*)} with the experimental data, and calculated a model-fit-error. Subsequently, we varied the parameters *k*, *δ*, and *c* and computed the mismatch again. In total, we varied the parameters in a parameter space *k* = *k*_{min}, . . ., *k*_{max}, |*δ*| = 0, . . ., *δ*_{max}, *c* = 0, . . ., *c*_{max}. Note that, according to the NAD adaptation model, non-vanishing parameters *δ* have the same sign as the experienced prismatic shift *s*. Since *s* was negative in the study by Blau et al. (2009; see Fig. 1A), we considered negative values for *δ*. We finally selected those parameters that minimize the model-fit-error. In each iteration, we started with *α*(1) = 0 because at the beginning of the prism adaptation process the adaptation potential is not present (see also Frank et al., 2009). Furthermore, we assumed that the experienced prismatic shift *s* corresponds to the first performance error and consequently put *x*(1) = *x*^{D}(1). In sum, we computed the amplitudes α(*n*) and the performance errors *x*(*n*) using Eqs. (9) and (11) for ε = 0 with α(1) = 0 and *x*(1) = *x*^{D}(1). For the 30 time steps of the adaptation phase (see Fig. 2), we used *s* ≠ 0 and *δ* = 0. For the subsequent 15 time steps (first part of the re-adaptation phase, “Test 1” condition), we had *s* = 0 and |*δ*| ≥ 0. That is, we tested various *δ* parameters, including *δ* = 0 as one possible parameter value.
For the remaining 15 time steps (second part of the re-adaptation phase, “Test 2” condition), we had *s* = *δ* = 0. Between the adaptation and re-adaptation phases, we allowed the adaptation amplitude to decay, as discussed in Frank et al. (2009; see also Fernández-Ruiz, Díaz, Aguilar, & Hall-Haro, 2004). The decay of the adaptation amplitude accounts for the fact that the aftereffect is usually smaller in magnitude than the prismatic shift. In order to mimic such a decay, we iterated the amplitude dynamics given by Eq. (9) *m* times, where *m* is a parameter that varied from *m* = 0, . . ., *m*_{max}. We used *k*_{min} = 0.01, *k*_{max} = 1.0, *δ*_{max} = 0.4*s*^{2}, *c*_{max} = 15, and *m*_{max} = 6. Note that in Eq. (7), *δ* occurs in combination with the term α^{2}. The amplitude α in turn converges during the adaptation process to a value that is proportional to the prismatic shift *s* (Frank et al., 2009). Therefore, we scaled the interval for *δ* with the size of the squared prismatic shift. The model-fit-error was defined by

- (12)
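The estimation procedure is, in essence, a grid search over forward-simulated trajectories. The sketch below mirrors the protocol described above (α(1) = 0, 30 training steps with *s* ≠ 0 and *δ* = 0, *m* unrecorded decay steps, 15 Test 1 steps with *s* = 0 and a candidate *δ*, 15 Test 2 steps with *s* = *δ* = 0), but the drift and output mapping are hypothetical stand-ins for Eqs. (9) and (11), the parameter *c* is omitted for brevity, the model-fit-error is assumed to be a sum of squared deviations, and the data are synthetic rather than experimental.

```python
def simulate(k, delta, m, s, n_train=30, n_test=15):
    """Deterministic (eps = 0) forward run over the three phases,
    using a hypothetical update rule in place of Eqs. (9) and (11)."""
    alpha, xs = 0.0, []
    def step(s_now, d_now, record=True):
        nonlocal alpha
        x = s_now - alpha + d_now     # hypothetical output mapping
        if record:
            xs.append(x)
        alpha += k * x                # hypothetical first-order drift
    for _ in range(n_train):
        step(s, 0.0)                  # training: prism on, delta = 0
    for _ in range(m):
        step(0.0, 0.0, record=False)  # between-phase decay of the amplitude
    for _ in range(n_test):
        step(0.0, delta)              # Test 1: prism off, delta active
    for _ in range(n_test):
        step(0.0, 0.0)                # Test 2: prism off, delta = 0
    return xs

def fit(x_data, s, k_grid, delta_grid, m_grid):
    """Grid search minimizing the summed squared deviation between
    model trajectory and data (an assumed form of the model-fit-error)."""
    best = None
    for k in k_grid:
        for d in delta_grid:
            for m in m_grid:
                xs = simulate(k, d, m, s)
                err = sum((a - b) ** 2 for a, b in zip(xs, x_data))
                if best is None or err < best[0]:
                    best = (err, k, d, m)
    return best

# Sanity check: recover known parameters from a synthetic trajectory.
x_syn = simulate(k=0.3, delta=-0.05, m=2, s=-0.15)
err, k_hat, d_hat, m_hat = fit(x_syn, s=-0.15,
                               k_grid=[0.1, 0.3, 0.5],
                               delta_grid=[0.0, -0.05, -0.1],
                               m_grid=[0, 2, 4])
```

Because the run is deterministic, the generating parameters reproduce the synthetic data exactly, so the search recovers them with zero model-fit-error.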

Having obtained the optimal parameters *δ*, *c*, *k*, and *m* that minimize the model-fit-error, we solved Eq. (9)—the time-discrete, event-based form of the NAD prism adaptation model—for those parameters. We then computed the strength *Q* of the fluctuating force. Following the results of Friedrich and Peinke (1997) and Frank, Friedrich, and Beek (2006), *Q* can be computed from Eq. (9) as

- (13)

The parameter *Q* must be estimated from the experimental data *x*^{D}. To this end, we inverted Eq. (11): we computed pairs {α^{D}(*n*), α^{D}(*n* + 1)} from data pairs {*x*^{D}(*n*), *x*^{D}(*n* + 1)} and subsequently substituted the pairs {α^{D}(*n*), α^{D}(*n* + 1)} into Eq. (13). The five model parameters were estimated for each participant from the 30 data points of the training phase and the 30 data points of the Test 1 and Test 2 phases. That is, parameter estimates were calculated on the basis of 60 data points.
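As a rough numerical illustration of this last step, the sketch below estimates *Q* as the mean squared one-step residual of the amplitude series, a Kramers–Moyal-style estimator in the spirit of Friedrich and Peinke (1997). The drift function is again a hypothetical stand-in (not the NAD drift), the amplitude series is synthetic, and the exact form of Eq. (13) may differ in detail.

```python
import random

def estimate_q(alphas, k, drift):
    """Estimate the fluctuation strength Q as the mean squared one-step
    residual alpha(n+1) - alpha(n) - k*drift(alpha(n)); this mirrors the
    structure (not necessarily the exact form) of Eq. (13)."""
    res = [a1 - a0 - k * drift(a0) for a0, a1 in zip(alphas, alphas[1:])]
    return sum(r * r for r in res) / len(res)

# Synthetic amplitude series with known Q (hypothetical linear drift).
rng = random.Random(1)
k, q_true, s = 0.3, 0.01, -0.15
drift = lambda a: s - a
alphas = [0.0]
for _ in range(5000):
    alphas.append(alphas[-1] + k * drift(alphas[-1])
                  + rng.gauss(0.0, q_true ** 0.5))

q_hat = estimate_q(alphas, k, drift)   # should be close to q_true = 0.01
```

With the generating drift known, the residuals reduce to the noise terms themselves, so the estimate converges to the true *Q* as the series length grows; with only 60 experimental data points per participant, the estimate is correspondingly noisier.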