On EDP‐convergence for gradient systems with different time scales

Motivated by the author's work on evolutionary Γ-convergence of slow-fast-reaction systems, we consider a simple evolution equation, an exponential decay that depends very singularly on a small parameter $\varepsilon > 0$: the decay rate is proportional to $1/\varepsilon$,
$$\dot u_\varepsilon(t) = -\tfrac{1}{\varepsilon}\, u_\varepsilon(t). \qquad \text{(EE)}$$
Nevertheless, the evolution equation can be understood as a gradient system, for which a structural notion of convergence, the so-called EDP-convergence, has been established in recent years. We investigate evolutionary Γ-convergence based on the energy-dissipation principle (EDP) for this simple, singularly behaving evolution equation.

Often not only the convergence to a limiting solution is of interest; one also needs a convergence of the whole evolution system that preserves the underlying physical structure. Several questions arise: Is there an underlying structure for (EE), and what does convergence of that structure mean? Moreover, how can the jump in the limit be understood?
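To make the singular behaviour concrete, here is a minimal numerical sketch (the names `u_eps` and `u0` are our own, chosen for illustration): the explicit solution of (EE) is $u_\varepsilon(t) = u(0)\,e^{-t/\varepsilon}$, which tends to zero for every fixed $t > 0$, while the initial value stays fixed, so the limit jumps at $t = 0$.

```python
import math

def u_eps(t: float, eps: float, u0: float = 1.0) -> float:
    """Explicit solution of (EE): exponential decay with rate 1/eps."""
    return u0 * math.exp(-t / eps)

# For fixed t > 0 the values collapse to 0 as eps -> 0,
# while u_eps(0) = u0 remains unchanged for every eps.
for eps in (1.0, 0.1, 0.001):
    print(eps, u_eps(0.0, eps), u_eps(0.5, eps))
```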

Gradient structures and EDP-convergence
Indeed, the simple evolution equation (EE) can be understood as a gradient system consisting of a quadratic energy and a quadratic dissipation (see [2] for a detailed explanation of gradient systems; our notion is based on that work). On the state space $X = \mathbb{R}$ we introduce the energy functional $E_\varepsilon(u) = \frac12 u^2$ and the two dissipation potentials $R_\varepsilon(v) = \frac{\varepsilon}{2} v^2$ and $R^*_\varepsilon(x) = \frac{1}{2\varepsilon} x^2$, which are related via the Legendre transform. Then (EE) is induced by the gradient system $(X, E_\varepsilon, R_\varepsilon)$, i.e.
$$0 = G_\varepsilon \dot u(t) + \mathrm{D}E_\varepsilon(u(t)), \quad \text{equivalently} \quad \varepsilon\,\dot u(t) = -u(t). \qquad \text{(GS)}$$
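For completeness, the Legendre transform relating the two dissipation potentials can be computed directly from the definitions above:

```latex
R^*_\varepsilon(x)
  \;=\; \sup_{v \in \mathbb{R}} \Bigl( x v - \frac{\varepsilon}{2}\, v^2 \Bigr)
  \;=\; \frac{x^2}{2\varepsilon},
\qquad \text{the supremum being attained at } v = x/\varepsilon .
```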
Here $G_\varepsilon v = \varepsilon v$ is a positive and symmetric operator, the so-called Onsager operator, given by $\frac12 \langle G_\varepsilon v, v \rangle = R_\varepsilon(v)$ (in our one-dimensional case it is simply multiplication by $\varepsilon$). The Onsager operator $G_\varepsilon$ is singular, since it converges to zero as $\varepsilon \to 0$. Obviously, on $X = \mathbb{R}$ the dissipation potentials $R_\varepsilon$ and $R^*_\varepsilon$ have the following Γ-limits:
$$R_\varepsilon \xrightarrow{\Gamma} R_0 \equiv 0, \qquad R^*_\varepsilon \xrightarrow{\Gamma} R^*_0 = \chi_{\{0\}}, \quad \text{where } \chi_{\{0\}}(x) = 0 \text{ for } x = 0 \text{ and } \infty \text{ otherwise.}$$
The gradient flow equation can be formulated in various forms. In particular, (GS) is equivalent to the following energy-dissipation balance:
$$E_\varepsilon(u(t)) + D_\varepsilon(u; s, t) = E_\varepsilon(u(s)) \quad \text{for all } 0 \le s \le t \le T. \qquad \text{(EDB)}$$
The energy-dissipation balance compares the energy $E_\varepsilon(u(\cdot))$ at time $t$ with the energy at time $s$; the difference is described by the real-valued dissipation functional $D_\varepsilon(u; s, t)$, the so-called De Giorgi dissipation functional. In fact, the (EDB) for $s = 0$ and $t = T$ already implies that (GS) holds. In our case, the dissipation functional is $\varepsilon$-dependent and given by
$$D_\varepsilon(u; s, t) = \int_s^t \bigl( R_\varepsilon(\dot u(r)) + R^*_\varepsilon(-\mathrm{D}E_\varepsilon(u(r))) \bigr)\,\mathrm{d}r = \int_s^t \Bigl( \frac{\varepsilon}{2}\,\dot u(r)^2 + \frac{1}{2\varepsilon}\,u(r)^2 \Bigr)\,\mathrm{d}r .$$
The energy does not depend on $\varepsilon > 0$, which means that the whole $\varepsilon$-information is captured in the dissipation functional $D_\varepsilon$. The EDB is the starting point for defining limits of gradient systems.
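As a sanity check, the energy-dissipation balance can be verified numerically along the explicit solution $u(r) = u_0\,e^{-r/\varepsilon}$ (a sketch with our own helper names; a trapezoid rule approximates the De Giorgi dissipation integral):

```python
import math

def check_edb(eps: float = 0.1, s: float = 0.0, t: float = 1.0,
              u0: float = 1.0, n: int = 100_000):
    """Return (E(u(t)) + D_eps(u; s, t), E(u(s))) for the exact solution."""
    u = lambda r: u0 * math.exp(-r / eps)           # solution of (EE)
    du = lambda r: -u0 / eps * math.exp(-r / eps)   # its time derivative
    E = lambda x: 0.5 * x * x                       # quadratic energy
    # integrand of the De Giorgi dissipation functional D_eps
    f = lambda r: 0.5 * eps * du(r) ** 2 + 0.5 / eps * u(r) ** 2
    h = (t - s) / n                                 # trapezoid rule
    D = h * (0.5 * f(s) + sum(f(s + k * h) for k in range(1, n)) + 0.5 * f(t))
    return E(u(t)) + D, E(u(s))

lhs, rhs = check_edb()
print(abs(lhs - rhs))  # only numerical quadrature error remains
```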
The general philosophy can be described as follows. Let $u_\varepsilon$ be a solution of a gradient system which converges pointwise for any $t \in [0, T]$ to $u_0$. Assume that for any $t \in [0, T]$ it holds that $\liminf_{\varepsilon \to 0} E_\varepsilon(u_\varepsilon(t)) \ge E_0(u_0(t))$ and $\liminf_{\varepsilon \to 0} D_\varepsilon(u_\varepsilon; 0, T) \ge D_0(u_0; 0, T)$, where $D_0 = \int R_0 + R^*_0 \,\mathrm{d}t$. Then the EDB also holds for $\varepsilon = 0$, provided that the initial values are well-prepared, i.e. $u_\varepsilon(0) \to u_0(0)$ and $E_\varepsilon(u_\varepsilon(0)) \to E_0(u_0(0))$. In particular, $u_0$ is a solution of the gradient system $(X, E_0, R_0)$. This motivates the following definition of evolutionary Γ-convergence based on the energy-dissipation principle (cf. [2, 3]): we say that $(X, E_\varepsilon, R_\varepsilon)$ EDP-converges to $(X, E_0, R_0)$ if $E_\varepsilon$ Γ-converges to $E_0$ and $D_\varepsilon$ Γ-converges to a limit $D_0$ which is again of the form $\int R_0 + R^*_0 \,\mathrm{d}t$. Note that $E_\varepsilon$ is defined on $X$, whereas $D_\varepsilon$ is defined on the space of curves in $X$.
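Spelled out, the two liminf estimates combine with the well-prepared initial data as follows (the reverse inequality, which upgrades this to the limiting EDB with equality, typically follows from a chain-rule argument):

```latex
E_0(u_0(T)) + D_0(u_0; 0, T)
  \;\le\; \liminf_{\varepsilon \to 0}
     \bigl( E_\varepsilon(u_\varepsilon(T)) + D_\varepsilon(u_\varepsilon; 0, T) \bigr)
  \;=\; \lim_{\varepsilon \to 0} E_\varepsilon(u_\varepsilon(0))
  \;=\; E_0(u_0(0)) .
```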
Γ-limit of D_ε
Let us first fix $s, t \in [0, T]$ with $s \le t$. Clearly, $D_\varepsilon$ can be considered as a functional on $L^2([s, t])$, i.e.
$$D_\varepsilon(u; s, t) = \int_s^t \Bigl( \frac{\varepsilon}{2}\,\dot u(r)^2 + \frac{1}{2\varepsilon}\,u(r)^2 \Bigr)\,\mathrm{d}r \quad \text{for } u \in H^1([s, t]), \qquad D_\varepsilon(u; s, t) = \infty \text{ otherwise.}$$
In particular, we conclude for the Γ-limit $D_0$ of $D_\varepsilon$: $D_0(u) < \infty$ implies that $\mathrm{D}E(u) = 0$ and hence $u = 0$ almost everywhere. The crucial observation is the following. The functional $D_\varepsilon$ is of Modica-Mortola type: the penalization term $\frac{1}{2\varepsilon}\int u^2\,\mathrm{d}r$ restricts the limit to a smaller space but nevertheless allows for some jumps. Indeed, compactness in BV can be shown. Let us define $w_\varepsilon := u_\varepsilon^2$. We have already shown that $w_\varepsilon \to 0$ in $L^1([s, t])$. Moreover, we observe that
$$\Bigl| \frac{\mathrm{d}}{\mathrm{d}r} w_\varepsilon \Bigr| = 2\,|u_\varepsilon \dot u_\varepsilon| \le \varepsilon\,\dot u_\varepsilon^2 + \frac{1}{\varepsilon}\,u_\varepsilon^2 .$$
In particular, a uniform bound on $D_\varepsilon(u_\varepsilon; s, t)$ implies that $w_\varepsilon$ and also $\frac{\mathrm{d}}{\mathrm{d}r} w_\varepsilon$ are uniformly bounded in $L^1([s, t])$. Hence, we conclude that there is a function $w_0 \in \mathrm{BV}([s, t])$ such that, along a subsequence, $w_\varepsilon \to w_0$ in $L^1([s, t])$ (we already know that $w_0 = 0$ almost everywhere). Moreover, assuming that the initial value is well-prepared, we conclude by Helly's selection theorem that the convergence is even pointwise, i.e. $w_\varepsilon(r) \to w_0(r)$ for any $r \in [s, t]$. So $w_0$ is a BV function and hence has at most countably many jumps, which are removable discontinuities. Moreover, we observe that the same holds for $u_\varepsilon$: the limit $u_0$ has at most countably many jumps in the form of removable discontinuities and is zero otherwise. Thus, we can write $D_0$ also in a different form. Introducing $J(u)$ as the set of jump times of $u$, we have
$$D_0(u) = \sum_{r \in J(u)} E(u(r)) \quad \text{if } u \in \mathrm{BV}([0, T]) \text{ with } u = 0 \text{ a.e.}, \qquad D_0(u) = \infty \text{ otherwise.}$$
Note that this last formula for $D_0$ is not of $R + R^*$ structure.
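The Modica-Mortola-type estimate above is just the elementary inequality $2|u\dot u| \le \varepsilon \dot u^2 + u^2/\varepsilon$. A quick numerical check on a generic smooth test curve (our illustrative choice $u(t) = \sin t$, deliberately not a solution) confirms the resulting total-variation bound $\int |\frac{\mathrm{d}}{\mathrm{d}r} w| \,\mathrm{d}r \le 2 D_\varepsilon(u)$:

```python
import math

def tv_and_dissipation(eps: float, n: int = 100_000, T: float = 1.0):
    """Total variation of w = u^2 versus 2*D_eps(u; 0, T) for u(t) = sin(t)."""
    h = T / n
    tv = two_d = 0.0
    for k in range(n):
        r = (k + 0.5) * h                       # midpoint rule
        u, du = math.sin(r), math.cos(r)
        tv += abs(2.0 * u * du) * h             # |d/dr w| = 2|u u'|
        two_d += (eps * du ** 2 + u ** 2 / eps) * h
    return tv, two_d

for eps in (1.0, 0.1, 0.01):
    print(eps, *tv_and_dissipation(eps))        # first value never exceeds second
```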

Convergence of solutions and the EDB
So what does the above observation imply for the solutions of the gradient systems? Clearly, everything is independent of the choice of the interval $[s, t] \subset [0, \infty[$. Take any $t > 0$ and fix a final time $T > t > 0$. Then there is a time value $s \in {]0, t[}$ such that $E(u_\varepsilon(s)) \to 0$ as $\varepsilon \to 0$, since $u_0$ vanishes at all but at most countably many times in $[0, T]$. Moreover, the energy is nonnegative, and the EDB implies that the energy is non-increasing, i.e. for $s \le t$ it holds that $E(u_\varepsilon(s)) \ge E(u_\varepsilon(t)) \ge 0$. Hence, $E(u_\varepsilon(t)) \to 0$. Since $t > 0$ is arbitrary, we conclude that the solutions $u_\varepsilon$ converge on $]0, T]$ to the constant zero function $u_0 = 0$. The function $u_0$ again satisfies the EDB on $]0, T]$ and is the continuous representative of a curve $u$ satisfying $D_0(u) < \infty$. The only possible jump time for $u_0$ is $t = 0$. Hence, $D_0(u_0) = E(u_0(0))$, which means that in the limit the solution $u_0$ jumps instantaneously to zero at $t = 0$ and the total energy is dissipated.
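Consistently with $D_0(u_0) = E(u_0(0))$, the total dissipation of the $\varepsilon$-solutions converges to the initial energy: along the exact solution, the EDB gives the closed form $D_\varepsilon(u_\varepsilon; 0, T) = E(u(0)) - E(u(T)) = \frac12 u_0^2 (1 - e^{-2T/\varepsilon})$. A small sketch (the helper name `D_eps` is ours):

```python
import math

def D_eps(eps: float, T: float = 1.0, u0: float = 1.0) -> float:
    # EDB along u(r) = u0*exp(-r/eps): D_eps(u; 0, T) = E(u(0)) - E(u(T))
    return 0.5 * u0 ** 2 * (1.0 - math.exp(-2.0 * T / eps))

for eps in (1.0, 0.1, 0.01, 1e-6):
    print(eps, D_eps(eps))  # approaches E(u(0)) = 0.5 as eps -> 0
```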
In this sense, the evolutionary convergence based on the energy-dissipation principle defined above is consistent with the convergence of solutions. It tells us that jumps necessarily emerge and can nevertheless be modelled within the EDB.
Let us finally mention the following: in the limit, the state space $X$ collapses to a lower-dimensional state space (in our case $\tilde X = \{0\}$). Well-prepared initial data could then also mean that the initial values converge to $\tilde X$.