Stability, complexity and the maximum dissipation conjecture



The formalism of irreversible thermodynamics is extended to include the effect of random perturbations and applied to representative systems giving rise to instabilities and to complex nonlinear behaviours. The extent to which dissipation as measured by the entropy production exhibits variational properties that can be linked to key indicators of the dynamical behaviour is explored with emphasis on the conjecture of the climate system as a system of maximum dissipation. Copyright © 2010 Royal Meteorological Society

1. Introduction

Atmospheric and climate dynamics rank among the most complex phenomena encountered in nature. They involve processes occurring over a wide spectrum of space- and time-scales, from the chemistry of minor constituents in the stratosphere to hurricanes, droughts or the Quaternary glaciations, and give rise to a variety of intricate behaviours in the form of abrupt transitions, wave propagation, weak chaos or fully developed turbulence.

The generally accepted approach to atmospheric and climate dynamics is based on numerical forecasting models, in which all processes deemed to be relevant are included in the model equations. This introduces a large number of variables and heavy parametrizations, often at the expense of a deeper understanding of the principal mechanisms behind the phenomenon of interest. It is therefore tempting to seek the possibility of universal trends in the form of ‘organizing principles’ underlying key aspects of atmospheric and climate dynamics, which could end up being masked in the context of a full-scale analysis.

An elegant and far-reaching expression of universality is the formulation of variational principles. The idea is that, whatever the specifics might be, among all possible paths that may link an initial state to a final one, the path that will actually be followed under the action of the evolution laws of the system of interest extremizes a certain quantity. In classical physics and at the microscopic level of description, in which the system's state variables are the positions and momenta of the constituting particles, a celebrated variational principle of this sort is the principle of least action (Arnold, 1976). The extent to which results of this kind can also be expected at the macroscopic level of description, in which the laws governing atmospheric and climate dynamics are usually formulated, has attracted a great deal of interest over the last several decades. Research in irreversible thermodynamics and nonlinear dynamics has shown that variational properties involving only macroscopic variables may exist under some well-defined (and rather stringent) conditions such as the linear range of irreversible processes (minimum entropy production theorem) or the vicinity of a bifurcation at a simple real eigenvalue (in which case the dynamics derives from a generalized, kinetic potential). In contrast, in the most general case of systems operating far from criticalities and far from the state of thermodynamic equilibrium, there exists no variational principle generating the full form of the evolution equations (Glansdorff and Prigogine, 1971; Nicolis and Nicolis, 2007).

Despite this negative conclusion, a number of approaches based on the existence of an extremum principle have been reported in the atmospheric and climate literature. Most familiar is the one originally proposed by Paltridge (1975, 1981) that global climate is a state of maximum dissipation. The idea has been taken up by several authors, both in the atmospheric and climatic (Ozawa et al., 2003, and references therein) and the general physics (Martyushev and Seleznev, 2006, and references therein) literatures and, eventually, the concept of ‘maximum entropy production principle’ has emerged. As the very term ‘principle’ implies, support of this statement came mainly from circumstantial evidence or from qualitative arguments and there is currently no direct proof available, as a recent attempt to that effect (Dewar, 2005) was subsequently shown to be unfounded (Grinstein and Linsker, 2007). In the present work, the maximum entropy production principle is reassessed. We show that the tendency to maximize dissipation does not reflect a universal trend but is, rather, system specific. Put differently, a general thermodynamic principle underlying the evolution of complex nonlinear systems out of equilibrium and based on the properties of the entropy production—the basic thermodynamic quantity measuring dissipation—is not to be expected, at least as long as a macroscopic description is adopted. Our approach is dynamics-driven, in the sense that we inquire whether the laws governing the evolution of a natural phenomenon may lead to extremal properties reminiscent of maximum dissipation. This is to be contrasted with a number of other approaches (e.g. Dewar, 2005) where extremal principles are postulated at the outset on the basis of information theoretic arguments.

In what follows we focus, successively, on three key points that have been advanced to support the idea of a maximum dissipation principle. In section 2 the hypothesis that a nonlinear system will move, when perturbed, to a ‘dominant’ state in which entropy production is a maximum is addressed, and a number of counterexamples are provided. In section 3 the idea that the deterministic trajectory, which constitutes the most probable state of a system subjected to fluctuations, dissipates more than any of its fluctuating counterparts is considered. An extension of classical irreversible thermodynamics incorporating the effect of fluctuations is outlined. A generalized expression of the associated entropy production is derived, from which it is concluded that fluctuations may, in the mean, enhance or depress the dissipation compared with the dissipation on the most probable state. Section 4 is devoted to a reformulation of the proposal of global climate as a system of maximum dissipation, in the light of the analysis of section 3. Again, no basis in favour of such a property is found. The main conclusions are summarized in section 5.

2. Entropy production as a selector

In this section we address the question of possible connections between dissipation and stability, the latter being the principal mechanism responsible for the eventual selection of the dominant state that will asymptotically be followed by a system. In doing so, we adopt a macroscopic level (deterministic, ‘mean-field’ like) description, in which the effect of microscopic-level processes (fluctuations, etc.) on the evolution of the macroscopic observables is discarded.

Let Xi (i = 1, ···, n) be the state variables of the system. The typical form of their evolution equations is

$$\frac{dX_i}{dt} = F_i(\{X_j\};\, \lambda, \mu, \cdots) \qquad (1)$$

where Fi are as a rule nonlinear functions of the Xs and λ, µ, ··· are control parameters accounting for the interaction between the system and its environment. In a dissipative system, phase space volumes are contracting in the mean, and Eqs. (1) admit long-time solutions {Xjs}, j = 1, ···, n (attractors), whose dimensionality is strictly less than that of the full phase space. If the attractor is the unique stable invariant manifold available, it will eventually be attained from all possible initial conditions except for a subset of measure zero, and this settles automatically the question of selection. In the presence of coexisting attractors, selection will, on the other hand, be determined by the relative importance of their basins of attraction.
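As a toy illustration of basin-determined selection (our own example, not part of the analysis above), consider the one-variable system dx/dt = x − x³, which possesses two stable attractors x = ±1; the initial condition alone, through the basin it belongs to, decides which attractor is reached:

```python
# Toy illustration (not from the paper): dx/dt = x - x**3 has two stable
# attractors x = +1 and x = -1; which one is selected depends only on the
# basin of attraction (the sign of x0), the boundary x0 = 0 being a set of
# measure zero.

def integrate(x0, dt=0.01, steps=2000):
    """Forward-Euler integration of dx/dt = x - x**3."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

for x0 in (-2.0, -0.1, 0.1, 2.0):
    print(x0, "->", round(integrate(x0), 6))
```

Every positive initial condition relaxes to +1 and every negative one to −1, regardless of how much the transient dissipates along the way.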

While stability is a global property associated with the eigenvalues and eigenvectors of the Jacobian matrix of Fi evaluated on the reference attractor, as well as with the structure of the attraction basins (Nicolis, 1995), dissipation depends on the contrary on the individual irreversible processes going on in the system, to which one associates a set of fluxes Jα and their conjugate driving forces Aα. As is well known from irreversible thermodynamics, the form of the fluxes and forces is determined by the entropy balance of the system at hand which, within the limits of validity of a local description (no memory effects, spatial inhomogeneities varying on a macroscopic scale), leads to the following expression of the entropy production (Glansdorff and Prigogine, 1971):

$$P = \sum_\alpha J_\alpha A_\alpha \ge 0 \qquad (2)$$

where the equality sign applies only in the limit of thermodynamic equilibrium and the summation index α runs over the independent irreversible processes present. For each α, both Jα and Aα depend on the state variables Xi. The structure of the resulting function P({Xjs}) on the invariant set is quite different from that of the quantities determining stability. In principle, therefore, there is no reason to expect a close link between dissipation, as measured by the entropy production, and selection, as measured by stability.

An example of the lack of a clear one-to-one relationship between stability and dissipation is provided by the following model of two coupled reactive processes (Schlögl, 1971):

$$\mathrm{A} + 2\mathrm{X} \;\underset{k_2}{\overset{k_1}{\rightleftharpoons}}\; 3\mathrm{X}, \qquad \mathrm{X} \;\underset{k_4}{\overset{k_3}{\rightleftharpoons}}\; \mathrm{B} \qquad (3)$$

where ki (i = 1, ···, 4) are the rate constants, A and B denote respectively the initial reactant and the final product, and X is a reactive intermediate capable of catalyzing its own production. The system is maintained open and out of equilibrium by keeping the concentrations a and b of A and B constant. The fluxes and associated driving forces of the two steps in (3) are

$$J_1 = k_1 a x^2 - k_2 x^3, \qquad A_1 = \ln\frac{k_1 a x^2}{k_2 x^3} \qquad (4a)$$
$$J_2 = k_3 x - k_4 b, \qquad A_2 = \ln\frac{k_3 x}{k_4 b} \qquad (4b)$$

where x is the concentration of X and the Ais are scaled by the product of temperature and the gas constant. The evolution equation for x and the explicit form of entropy production become

$$\frac{dx}{dt} = k_1 a x^2 - k_2 x^3 - k_3 x + k_4 b \qquad (5)$$
$$P = (k_1 a x^2 - k_2 x^3)\ln\frac{k_1 a x^2}{k_2 x^3} + (k_3 x - k_4 b)\ln\frac{k_3 x}{k_4 b} \qquad (6)$$

Notice that (5) can be written in the form

$$\frac{dx}{dt} = -\frac{\partial U}{\partial x}$$

where the kinetic potential U is (up to a constant)

$$U = \frac{k_2 x^4}{4} - \frac{k_1 a x^3}{3} + \frac{k_3 x^2}{2} - k_4 b x \qquad (7)$$

As the right-hand side of (5) is cubic in x with alternating signs, by Descartes' rule the system can admit up to three positive (and thus physically acceptable) steady-state solutions. As is well known, these solutions and the entire dynamics can be fully accounted for by varying two parameters, λ and µ, which we identify by setting k1 = k2 = k4 = 1, a = 3, b = 1 − λ and k3 = 3 − µ. Figure 1(a) depicts the dependence of the steady-state solutions on µ for fixed λ. As can be seen, there is a region µ1 < µ < µ2 for which there are, indeed, three coexisting solutions, those on the upper and lower branches being the stable ones. Furthermore, an increase of µ from values µ < µ1 to values µ > µ2 followed by a decrease between these values gives rise to hysteretic behaviour. As x runs over different values, the kinetic potential (7) goes through two minima corresponding to the stable states x− and x+ separated by a maximum corresponding to the unstable state x0. The relative stability of the steady-state solutions is determined by the depth of these minima with respect to U(x0), say ΔU± = U(x0) − U(x±), the solution associated with the deepest minimum being the dominant one when the system is subjected to weak random perturbations. Figure 1(b) gives the dependence of ΔU± on µ. At the crossover point µ = µ*, x+ and x− are equally dominant. For µ < µ*, state x− is the dominant one, the opposite being true for µ > µ*.
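This steady-state analysis can be sketched numerically (our own illustration; the value λ = 0.8 is an assumption, since the λ used for Figure 1 is not quoted here). Scanning Eq. (5) for positive roots reproduces a bistability window in µ, and the kinetic potential (7) confirms that the intermediate state sits on the barrier top:

```python
# Sketch of the steady-state analysis of model (5), with k1 = k2 = k4 = 1,
# a = 3, b = 1 - lam, k3 = 3 - mu as in the text; lam = 0.8 is our own
# illustrative choice.

K1 = K2 = K4 = 1.0
A = 3.0

def rhs(x, lam, mu):
    # right-hand side of Eq. (5)
    b, k3 = 1.0 - lam, 3.0 - mu
    return K1 * A * x**2 - K2 * x**3 - k3 * x + K4 * b

def steady_states(lam, mu, xmax=4.0, n=4000):
    """Positive roots of rhs(x) = 0, found by sign-change scan + bisection."""
    roots = []
    xs = [xmax * i / n for i in range(1, n + 1)]
    for lo, hi in zip(xs, xs[1:]):
        if rhs(lo, lam, mu) * rhs(hi, lam, mu) < 0:
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if rhs(lo, lam, mu) * rhs(mid, lam, mu) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

def potential(x, lam, mu):
    # kinetic potential U of Eq. (7), up to a constant
    b, k3 = 1.0 - lam, 3.0 - mu
    return K2 * x**4 / 4 - K1 * A * x**3 / 3 + k3 * x**2 / 2 - K4 * b * x

lam = 0.8
for mu in (0.5, 1.5, 2.5):
    print(mu, [round(r, 3) for r in steady_states(lam, mu)])

# in the bistable window the middle root x0 is the barrier top of U
r = steady_states(lam, 1.5)
print([round(potential(x, lam, 1.5), 4) for x in r])
```

For this λ, µ = 1.5 lies inside the bistable window (three roots), while µ = 0.5 and µ = 2.5 lie outside it (one root each).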

Figure 1.

(a) Steady-state solutions of model (5) as parameter µ is varied. Full and dashed lines stand for stable and unstable solutions, respectively. (b) Relative stability of the two coexisting stable states x− and x+ in the parameter range µ1 < µ < µ2, as measured by the potential difference ΔU± between the unstable state x0 and either of the stable states.

We turn now to the properties of the dissipation (6). In Figure 2(a) the values of P are plotted versus µ. We notice that, in the entire interval of variation of µ, the dissipation in the upper state x+ remains larger than its value in the lower state x−, independently of the relative stability of x+ and x− shown in Figure 1(b). Put differently, the state eventually selected in the evolution on the grounds of its stability (which is x− or x+ for µ to the left or right of the crossover point of Figure 1(b), respectively) is not necessarily the state where the system dissipates the most (which is x+ for all values of µ). A different view of dissipation versus stability is provided by Figure 2(b). Here the differences in dissipation ΔP± = |P(x0) − P(x±)| associated with a transition from the unstable state x0 to either of the two stable states x± are plotted against µ. The trend is now similar to Figure 1(b), showing that, starting from x0, the transition to the dominant state will occur in such a way that the dissipation is maximized. Notice, however, that this correlation between stability, selection and dissipation has its limits. In Figure 2(c) a ‘dissipation chart’ in phase space (essentially, P as a function of x) is drawn. As can be seen, starting from a (non-stationary) state x(0) to the right of x+ will entail an evolution towards state x+ in which the system dissipates less than in x(0).
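The comparison of dissipation on the two stable branches can be sketched with Eq. (6); the parameter values λ = 0.8, µ = 1.5 are our own illustrative choice (placing the system in the bistable region), not necessarily those of Figure 2:

```python
# Sketch comparing dissipation on the coexisting branches of model (5) via
# Eq. (6), for the assumed values lam = 0.8, mu = 1.5 (bistable regime).

import math

def entropy_production(x, lam, mu, k1=1.0, k2=1.0, k4=1.0, a=3.0):
    b, k3 = 1.0 - lam, 3.0 - mu
    j1 = k1 * a * x**2 - k2 * x**3
    j2 = k3 * x - k4 * b
    # Eq. (6): P = J1*A1 + J2*A2 with logarithmic affinities
    return j1 * math.log((k1 * a * x**2) / (k2 * x**3)) \
         + j2 * math.log((k3 * x) / (k4 * b))

# steady states of Eq. (5) for lam = 0.8, mu = 1.5, located by bisection
def rhs(x):
    return 3.0 * x**2 - x**3 - 1.5 * x + 0.2

def bisect(lo, hi):
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if rhs(lo) * rhs(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x_minus, x_zero, x_plus = bisect(0.1, 0.3), bisect(0.3, 0.4), bisect(2.0, 3.0)
P_minus = entropy_production(x_minus, 0.8, 1.5)
P_plus = entropy_production(x_plus, 0.8, 1.5)
print(round(P_minus, 3), round(P_plus, 3))  # upper branch dissipates more
```

For these values the upper branch indeed dissipates more than the lower one for the whole bistable window, independently of which state is the dominant one.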

Figure 2.

(a) Entropy production P corresponding to the steady-state solutions of model (5); (b) difference in dissipation associated with a transition from the unstable state x0 to either of the stable states x− and x+; and (c) ‘dissipation chart’ of transient thermodynamic behaviours starting from different initial conditions in phase space.

We close this section with two further examples of the connection (or lack thereof) between selection (stability) and dissipation, pertaining again to dynamical systems descriptive of coupled reactive processes and giving rise to complex behaviours. The advantage of this type of system is that, in each case, the fluxes and forces can be defined unambiguously and the entropy production can be computed straightforwardly in terms of the state variables using (2), in a way analogous to (4) and (6).

  • (i) A system giving rise to sustained oscillations (Lefever et al., 1988):

    $$\mathrm{A} \rightleftharpoons \mathrm{X}, \quad \mathrm{B} + \mathrm{X} \rightleftharpoons \mathrm{Y} + \mathrm{C}, \quad 2\mathrm{X} + \mathrm{Y} \rightleftharpoons 3\mathrm{X}, \quad \mathrm{X} \rightleftharpoons \mathrm{F} \qquad (8)$$
  • (ii) A system giving rise to deterministic chaos (Willamowski and Rössler, 1980):

    $$\mathrm{A}_1 + \mathrm{X} \rightleftharpoons 2\mathrm{X}, \quad \mathrm{X} + \mathrm{Y} \rightleftharpoons 2\mathrm{Y}, \quad \mathrm{A}_5 + \mathrm{Y} \rightleftharpoons \mathrm{A}_2, \quad \mathrm{X} + \mathrm{Z} \rightleftharpoons \mathrm{A}_3, \quad \mathrm{A}_4 + \mathrm{Z} \rightleftharpoons 2\mathrm{Z} \qquad (9)$$

For system (8), keeping a = c = 1, f = 0.5 and increasing b, the unique steady-state solution undergoes a Hopf bifurcation at some value b = bc leading to a unique stable limit cycle. Computing the average entropy production

$$\bar{P} = \frac{1}{T}\int_0^T dt\, P(t)$$

over the period T of the cycle shows that, for all b values larger than the critical one bc, P̄ is larger than the value Ps in the unstable steady state. Selection is thus manifested through a tendency towards higher dissipation. The situation is different for system (9). Keeping a1 = a2 = a3 = a4 = a5 = 1, k1 = 31.2, k3 = 10.8, k4 = 1.02, k5 = 16.5, k−1 = 0.2, k−2 = 0.1, k−3 = 0.12, k−4 = 0.01, k−5 = 0.5 and varying k2, one of the steady-state solutions (the one in which x, y and z are all far from zero) undergoes first a Hopf bifurcation to a limit cycle, which subsequently evolves to a chaotic attractor through a sequence of period-doubling bifurcations. Computation of P̄ shows here the trend opposite to that of system (8): P̄ is less than Ps and, furthermore, its value tends to decrease as one switches from periodic to chaotic behaviour (Figure 3). Once again, there seems to be no universal correlation between selection (stability) and dissipation. Further aspects of the stability versus dissipation issue are discussed in Nicolis and Nicolis (1999) and Nicolis (1999, 2000, 2003).

Figure 3.

Entropy production Ps (dashed line) of the (unstable) steady state of system (9) versus parameter k2. In this range of values, the system undergoes a Hopf bifurcation leading first to a stable limit cycle and subsequently to a chaotic attractor (beyond a critical value of k2) through a sequence of period-doubling bifurcations. P̄ (full line) stands for the average entropy production as obtained numerically over 20 000 time units for different values of k2.

3. Entropy production in the presence of fluctuations

Having seen that, among the a priori possible deterministic phase space paths, the one actually followed by a system does not necessarily lead to a state of maximum dissipation, we address in this section the relation, as far as dissipation is concerned, between the deterministic trajectory and the continuum of random trajectories around it associated with the presence of fluctuations (Van den Broeck et al., 1984; Mou et al., 1986). The latter can be of internal origin, in which case they have to satisfy a number of relations dictated by thermodynamics and statistical mechanics, or be imposed by the external environment and perceived by the system as an ‘external noise’.

At the level of the evolution equations of the state variables, intrinsic fluctuations are reflected by the presence of random contributions fα(t) to each of the thermodynamic fluxes Jα. One is thus led to the expression

$$\frac{dX_i}{dt} = \sum_\alpha \nu_{i\alpha}\,\{J_\alpha(\{X_j\}) + f_\alpha(t)\} \qquad (10a)$$

replacing (1), with

$$F_i = \sum_\alpha \nu_{i\alpha}\, J_\alpha \qquad (10b)$$

where the coefficients νiα determine the contribution of the individual fluxes Jα to the overall rate of change of the {Xi}s. The corresponding expression for the generalized entropy production is

$$P = \sum_\alpha \{J_\alpha + f_\alpha(t)\}\, A_\alpha \qquad (11)$$

Entropy production thus becomes a random variable, whose probability distribution π(P) is induced by those of the {fα}s and {Xj}s, the latter being in turn determined by the statistical properties of the {fα}s through Eqs. (10a, b). In the following it will be assumed that {fα} are weak, uncorrelated Gaussian random noises of zero mean:

$$\langle f_\alpha(t) \rangle = 0, \qquad \langle f_\alpha(t)\, f_\beta(t') \rangle = q^2\, \delta_{\alpha\beta}\, \delta(t - t') \qquad (12)$$

where the brackets <...> denote ensemble averages over the set of stochastic realizations.


Let
$$P_{\mathrm{th}} = \sum_\alpha J_\alpha(\{X_j\})\, A_\alpha(\{X_j\}) \qquad (13)$$

be the thermodynamic entropy production in a fluctuating state {Xj}, and Ps its value in the absence of fluctuations, evaluated on a state {Xjs} (steady state or any other kind of attractor) reached by the system in the long-time limit. The question we raise is whether fluctuations combine in such a way that the deterministic path (which, as is well known, corresponds to the most probable state) is also a state of maximum dissipation. If so, the probability density π(P) would turn out to be tilted towards values smaller than Ps in (13); in other words, the mean entropy production <P> over π(P) would be less than Ps. We shall show presently that this need not be the case: the relation between <P> and Ps is both system-dependent and parameter-dependent.

Below we evaluate <P> to the first non-trivial order, Taylor expanding Jα and Aα around {Xis} and keeping in P terms up to second order in the deviations

$$x_i = X_i - X_{is} \qquad (14)$$

For consistency, these deviations are in turn evaluated in the Gaussian limit, in which Eqs. (10a) are linearized in {xi},

$$\frac{dx_i}{dt} = \sum_j \left(\frac{\partial F_i}{\partial X_j}\right)_s x_j + \sum_\alpha \nu_{i\alpha}\, f_\alpha(t) \qquad (15)$$

Notice that, since P is nonlinear in {Xi}, the Jαs and Aαs are not to be linearized separately with respect to {xi} when expanding the entropy production around the deterministic state {Xis}.

Proceeding along the above lines, one arrives straightforwardly at:

$$\langle P \rangle - P_s = \frac{1}{2}\sum_{ij}\left(\frac{\partial^2 P_{\mathrm{th}}}{\partial X_i\,\partial X_j}\right)_s \langle x_i x_j \rangle + \sum_\alpha \sum_i \left(\frac{\partial A_\alpha}{\partial X_i}\right)_s \langle x_i f_\alpha \rangle \qquad (16)$$

Here the subscript s implies that the derivatives of Pth and Aα are to be evaluated in state {Xis}. The first term in the right-hand side is associated with the excess thermodynamic entropy production due to the fluctuations of the state variables, whereas the second term incorporates the effect of random fluxes in an explicit way, thus constituting a generalization of the formalism of classical thermodynamics. In states characterized by minimum entropy production, as is the case at and near thermodynamic equilibrium, the contribution of the first term is positive (Glansdorff and Prigogine, 1971). If only this term were present, fluctuations would thus tend to increase dissipation over that of the reference most probable state {Xis}, which is in opposition to the maximum dissipation idea. On the other hand, this contribution would subsist in the limit of thermodynamic equilibrium which is, clearly, absurd. The contribution of the second term in (16) is thus essential to restore consistency with thermodynamics. More specifically, using expression (13) for Pth, one may rewrite (16) as

equation image(17)

On the other hand, multiplying both sides of (15) by xi, averaging over all realizations and using the property of ergodicity, one obtains a fluctuation–dissipation type relationship:

$$\langle x_i f_\alpha \rangle = \frac{q^2}{2}\,\nu_{i\alpha} \qquad (18)$$

Furthermore, according to the definition of the thermodynamic forces, the derivatives of Aα with respect to the Xi are bound to be proportional to the coefficients νiα introduced in (10b). From this and (18), it follows that the first term on the right-hand side of (17) vanishes. This is also the case for the second term, as long as the reference state is a steady state, on the grounds that Σα νiα Jαs then vanishes by definition (otherwise an extra time average is needed). We thus arrive at the final expression

$$\langle P \rangle - P_s = \frac{1}{2}\sum_\alpha A_{\alpha s} \sum_{ij}\left(\frac{\partial^2 J_\alpha}{\partial X_i\,\partial X_j}\right)_s \langle x_i x_j \rangle \qquad (19)$$

This result entails that the mean entropy production of fluctuations vanishes if the system operates around equilibrium (where Aαs = 0) or if it is governed by linear evolution laws (for which (∂²Jα/∂Xi∂Xj)s = 0). On the other hand, the sign of this expression is not guaranteed; fluctuations may, in the mean, enhance or depress the dissipation compared with the dissipation on the most probable state. In the next section, an explicit illustration of the role of fluctuations in dissipation is provided by a case-study pertaining to the thermodynamics of global climate.
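The system-dependence of the sign can already be checked on model (5) of section 2. If, as described above, the mean effect of fluctuations weights the curvatures of the fluxes by the forces Aαs, then for this one-variable model (where J2 is linear in x) the sign of <P> − Ps reduces to that of A1s d²J1/dx² at the steady state. The following sketch (our own check, under this reading and with assumed parameter values λ = 0.8, µ = 1.5 in the bistable region) shows that the sign differs on the two stable branches:

```python
# Hedged illustration for the one-variable model (5): with J2 linear in x,
# the sign of <P> - Ps is taken to be that of A1s * d2J1/dx2 at the steady
# state (times <x^2>/2 > 0).  Parameter values are our own assumptions.

import math

K1 = K2 = K4 = 1.0
A_CONC, B_CONC, K3 = 3.0, 0.2, 1.5   # a = 3, b = 1 - lam, k3 = 3 - mu

def rhs(x):
    return K1 * A_CONC * x**2 - K2 * x**3 - K3 * x + K4 * B_CONC

def bisect(lo, hi):
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if rhs(lo) * rhs(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

x_minus, x_plus = bisect(0.1, 0.3), bisect(2.0, 3.0)

def sign_of_excess(xs):
    a1s = math.log(K1 * A_CONC * xs**2 / (K2 * xs**3))  # force of step 1
    j1pp = 2 * K1 * A_CONC - 6 * K2 * xs                # d2J1/dx2
    return a1s * j1pp

print(sign_of_excess(x_minus) > 0, sign_of_excess(x_plus) < 0)
```

On the lower branch fluctuations enhance the mean dissipation, on the upper branch they depress it, in line with the absence of a universal trend.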

4. Global climate as a system of maximum dissipation revisited

The extent to which the Earth's climate system might be viewed as a structure of maximum entropy production has been a subject of discussion for more than three decades (Paltridge, 1975, 1981). Although no direct proof has ever been produced, a consensus has at some point built in favour of such a ‘principle’, based on the interpretation of data pertaining to laboratory experiments in fluid dynamics and to the horizontal temperature structure of the Earth and some other planets of the solar system (Ozawa et al., 2003).

Recently, an attempt at a physical explanation of why the climate system should satisfy a maximum entropy production property has been reported (Paltridge, 2001). The Earth–atmosphere system is described as a two-box model, which has since received considerable attention in the literature, each box being associated with the adjoining equatorial and polar regions. Box 1 has temperature T1, receives an incoming net flux of solar energy F, emits an infrared radiant energy σT1⁴ (where σ is the Stefan–Boltzmann constant) and transfers to box 2 a heat flux J. Box 2 has temperature T2, emits an infrared radiant energy σT2⁴ and receives from box 1 the heat flux J. Normalizing time and/or parameters by the heat capacities, the balance equations describing this system become

$$\frac{dT_1}{dt} = F - \sigma T_1^4 - J, \qquad \frac{dT_2}{dt} = J - \sigma T_2^4 \qquad (20a)$$

In studies reported so far in the literature, the flux J is treated as a fixed, prescribed parameter J̄. This view is unsatisfactory for at least two reasons. First, the flux is inevitably subject to fluctuations δJ(t), associated with, among others, the turbulent nature of the flows responsible for the energy transfer at the boundary of boxes 1 and 2. Second, even in the absence of fluctuations and on purely thermodynamic grounds, J̄ is expected to be a function of the driving force associated with energy transfer,

$$\bar{J} = \bar{J}\!\left(\frac{1}{T_2} - \frac{1}{T_1}\right) = \bar{J}\!\left(\frac{\Delta T}{T_1 T_2}\right) \qquad (20b)$$

where ΔT = T1T2. In short, J in (20a) is to be decomposed as

$$J = \bar{J} + \delta J(t) \qquad (20c)$$

where δJ(t) will from now on be modelled as a Markov random noise source. The state in which the two-box model will be found in the limit of long times will then be determined by solving the nonlinear stochastic differential equations (20a) subject to (20b, c). This is the objective of the present section.

We first compute the steady-state solutions of Eqs. (20a) in the absence of fluctuations (mean-field limit). Upon a second scaling, σ and F are set equal to unity and, treating J̄ as a parameter for the time being, one obtains

$$T_{1s} = (1 - \bar{J})^{1/4}, \qquad T_{2s} = \bar{J}^{1/4} \qquad (21a)$$

and hence

$$\Delta T_s = (1 - \bar{J})^{1/4} - \bar{J}^{1/4} \qquad (21b)$$

In Figure 4 the flux J̄ is plotted against ΔT using (21b). For a given choice of value of J̄, a unique steady state is obtained. The thermodynamic entropy production on this state is likewise uniquely determined and given by (Glansdorff and Prigogine, 1971)

$$P_s = \bar{J}\left(\frac{1}{T_{2s}} - \frac{1}{T_{1s}}\right) \qquad (22)$$

where dissipation associated with radiative transfer has been discarded. Among all pairs (J̄, ΔTs), there exists one maximizing expression (22), obtained by solving the equation dPs/dJ̄ = 0, having first used the expressions of T1s, T2s and ΔTs in terms of J̄. This state of maximum entropy production, Pmax, is indicated on the J̄ versus ΔT curve in Figure 4. If, among the continuum of steady states depicted in the figure, the state Pmax were to be selected, then somehow, starting from a state of lesser dissipation (Ps in Figure 4), the system should gradually drift towards the maximum state. An elaborate argument on how this might happen has been put forward (Paltridge, 2001), based on how the system is expected to respond under the effect of perturbations.
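A minimal numerical sketch of this construction, with the scaling σ = F = 1 used in the text (a grid scan standing in for solving dPs/dJ̄ = 0):

```python
# Sketch of the steady-state entropy production of the two-box model,
# Eqs. (20a)-(22), with sigma = F = 1 as in the text.

def steady_state(jbar):
    t1s = (1.0 - jbar) ** 0.25   # from F - sigma*T1^4 - Jbar = 0
    t2s = jbar ** 0.25           # from Jbar - sigma*T2^4 = 0
    return t1s, t2s

def entropy_production(jbar):
    t1s, t2s = steady_state(jbar)
    return jbar * (1.0 / t2s - 1.0 / t1s)   # Eq. (22)

# locate the maximum Pmax by a simple grid scan over 0 < Jbar < 0.5
grid = [i / 10000 for i in range(1, 5000)]
j_max = max(grid, key=entropy_production)
print(round(j_max, 3), round(entropy_production(j_max), 4))  # Pmax
print(round(entropy_production(0.3), 4))  # a lesser-dissipation state Ps
```

The scan yields a single interior maximum of Ps along the continuum of steady states; any other admissible value of J̄ (such as J̄ = 0.3) dissipates less.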

Figure 4.

Heat flux J̄ versus the temperature difference ΔT between equatorial and polar regions as obtained from the two-box climate model introduced by Paltridge (2001). Pmax is the state of maximum dissipation and Ps is a state of lesser dissipation corresponding here to J̄ = 0.3.

Now as long as a mean-field view is adopted, and in view of the uniqueness of the attracting state for any given J̄, perturbing the value of this parameter would merely give rise to a new steady state (J̄′, ΔTs′). The latter would lie left or right of Ps depending on the sign of the perturbation, and the associated dissipation Ps′ would accordingly be larger or smaller than Ps, with no preference for Pmax whatsoever. We conclude that a proper discussion of the role of perturbations in selecting states according to the amount of dissipation prevailing needs to be carried out in the framework of an extended description in which fluctuations around J̄ are incorporated (Eq. (20c)).

Inasmuch as fluctuations account for the effect of processes occurring on scales that are not resolved at the level of a mean-field description, it appears reasonable to model δJ(t) as a Markovian noise process (Palmer, 2001; Nicolis, 2005). On the other hand, since fluctuations and systematic effects are generated by the very same underlying dynamics, δJ(t) should in principle be explicitly coupled to large-scale processes and thus be also state-dependent. We therefore decompose δJ(t) as

$$\delta J(t) = G(T_1, T_2)\, f(t) \qquad (23)$$

where f(t) is a noise process and G a function of the state variables T1 and T2. In the following we will analyze the response of Eqs. (20) to additive (G = 1) and to multiplicative (G = αΔT) fluctuations, modelling f(t) as a Gaussian white noise. We emphasize that, even in the additive case, the response of a state variable elicited by a noise acting as a source term in the evolution law keeps a non-vanishing correlation with the noise itself (see Eqs. (18) and (27)–(28) below).

Consider first the case of additive fluctuations. In the limit of small variance one may linearize Eqs. (20a) around the most probable state Tis, leading to

$$\frac{d\,\delta T_1}{dt} = -4T_{1s}^3\,\delta T_1 - \delta J(t) \qquad (24a)$$


$$\frac{d\,\delta T_2}{dt} = -4T_{2s}^3\,\delta T_2 + \delta J(t) \qquad (24b)$$

Multiplying both sides of (24a) and (24b) by δT1 and δT2, respectively, averaging and using the property of ergodicity, we obtain

$$\langle \delta T_1\,\delta J \rangle = -4T_{1s}^3\,\langle \delta T_1^2 \rangle, \qquad \langle \delta T_2\,\delta J \rangle = 4T_{2s}^3\,\langle \delta T_2^2 \rangle \qquad (25)$$

where the covariance matrix of δT1, δT2 can be computed from the Fokker–Planck equation associated with (24a),

$$\langle \delta T_1^2 \rangle = \frac{q^2}{8\,T_{1s}^3} \qquad (26a)$$
$$\langle \delta T_2^2 \rangle = \frac{q^2}{8\,T_{2s}^3} \qquad (26b)$$

q² being the variance of δJ. Notice that δT1 and δT2 are, respectively, negatively and positively correlated with a fluctuation of the flux. This entails that the excess temperature difference (δT1 − δT2) is negatively correlated with δJ,

$$\langle (\delta T_1 - \delta T_2)\,\delta J \rangle = -q^2 \qquad (27)$$

Figures 5(a, b) depict the probability distributions of T1, T2 and ΔT obtained by numerically solving the full-scale stochastic differential equations (20a)–(20c). The variances inferred from these probabilities for J̄ = 0.3 and q² = 0.01 are 1.65 × 10−3 and 3.17 × 10−3 respectively, in good agreement with the theoretical estimates of (26).
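As a quick consistency check, the theoretical variances q²/(8Tis³) implied by the linearized analysis (this assumes our reading of (26)) can be compared with the simulation values quoted above:

```python
# Check of the theoretical variances (26a, b) against the values quoted in
# the text for Jbar = 0.3 and q^2 = 0.01 (the latter obtained there by
# simulating the full stochastic equations).

Q2, JBAR = 0.01, 0.3
t1s, t2s = (1.0 - JBAR) ** 0.25, JBAR ** 0.25

var_t1 = Q2 / (8.0 * t1s**3)   # Eq. (26a)
var_t2 = Q2 / (8.0 * t2s**3)   # Eq. (26b)

print(var_t1, var_t2)  # compare with 1.65e-3 and 3.17e-3 from the text
```

Both theoretical values fall within a few per cent of the simulated ones, consistent with the small-variance (Gaussian) limit assumed in the derivation.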

Figure 5.

Probability distributions of (a) T1, T2, and (b) ΔT of model (20a) in the presence of additive Gaussian white noise as obtained numerically after averaging over 10 000 time units. Parameter values are J̄ = 0.3 and q2 = 0.01.

We come next to the properties of the entropy production. The expression extending (22) to account for the fluctuations is (cf. (11))

$$P = (\bar{J} + \delta J)\left(\frac{1}{T_2} - \frac{1}{T_1}\right), \qquad T_i = T_{is} + \delta T_i \qquad (28a)$$


$$\Delta P = P - P_s \qquad (28b)$$

Expanding (28a) in δTi we obtain, to the second order,

$$\Delta P = \left(\frac{1}{T_{2s}} - \frac{1}{T_{1s}}\right)\delta J + (\bar{J} + \delta J)\left(\frac{\delta T_1}{T_{1s}^2} - \frac{\delta T_2}{T_{2s}^2}\right) + \bar{J}\left(\frac{\delta T_2^2}{T_{2s}^3} - \frac{\delta T_1^2}{T_{1s}^3}\right) \qquad (29)$$

Averaging over fluctuations and using the Gaussian approximation for δT1, δT2 along with Eqs. (25)–(26), we obtain

$$\langle \Delta P \rangle = -\frac{q^2}{2}\left(\frac{1}{T_{1s}^2} + \frac{1}{T_{2s}^2}\right) + \frac{\bar{J} q^2}{8}\left(\frac{1}{T_{2s}^6} - \frac{1}{T_{1s}^6}\right) \qquad (30)$$

In Figure 6(a) the probability π(P) of entropy production is shown for given values of the flux J̄ and noise strength q2. The mean excess dissipation <ΔP> as deduced from π(P) for fixed q2 is drawn as a function of J̄ in Figure 6(b) (full line). The behaviour follows, qualitatively, the dependence predicted by the analytically deduced expression (30). In particular, <ΔP> is negative throughout.

Figure 6.

(a) As Figure 5, but for the probability distribution of entropy production. (b) Deviation of mean entropy production from its steady-state value versus the heat flux J̄ in the presence of additive (full line) and multiplicative (dotted line) Gaussian white noise with q2 = 0.001 after averaging over 50 000 time units.

We next present briefly the main results for the case of multiplicative fluctuations proportional to the instantaneous temperature difference ΔT (cf. (23)). The problem is now analytically intractable. The dotted line in Figure 6(b) depicts the numerically obtained < ΔP>. Again, < ΔP> is negative throughout. Its magnitude is less than in the additive noise case, owing to the ΔT dependence which tends to reduce the noise-induced variability.

We may summarize the results of the analysis carried out in this section as follows.

  • (i) Fluctuations, whether additive or multiplicative, tend in the mean to reduce dissipation. In this respect the thermodynamic entropy production, associated with deterministic (mean-field) behaviour, is a maximum with respect to the entropy production associated with fluctuating paths around the deterministic one for given parameter values.
  • (ii) For all the types of fluctuations considered, there is no mechanism allowing the system (keeping parameters such as J̄ fixed) to approach, through successive perturbations, the regime of maximum entropy production Pmax (cf. Figure 4) with an appreciable probability.

In the light of the foregoing, we suggest that, under the effect of fluctuations, the only way to reach with appreciable probability states that cannot arise in a purely deterministic setting is through the mechanism of noise-induced transitions (Horsthemke and Lefever, 1984). There is, however, a priori no guarantee that the states so selected will necessarily be states of maximum dissipation. An example along these lines is given in the Appendix.

5. Conclusions

In this work we explored the thermodynamic properties of systems giving rise to complex nonlinear behaviours with emphasis on entropy production. We investigated possible thermodynamic signatures of transitions between states and of stochastic perturbations around the deterministic path. To achieve this latter goal, a generalization of the formalism of classical irreversible thermodynamics was also developed, in which the effect of fluctuations is accounted for in an explicit manner. Finally, in the light of these analyses, the conjecture of the climate system as a state of maximum entropy production was revisited using the classical two-box model of meridional heat transport proposed by Paltridge. Some interesting trends have been detected in certain cases. Still, as we have seen, there exists no organizing principle operating in nature in the form of an extremal property in the strict sense of the term involving the entropy production: depending on the parameters and on the initial configuration, a given system can see its dissipation increase or decrease in the course of its evolution toward the state that is eventually selected.

Despite this conclusion, we believe that irreversible thermodynamics in some form should provide useful insights in the characterization of weather and climate patterns and even, perhaps, in their prediction. The crux is, in our view, in the systematic exploration of the probabilistic dimension of the weather–climate system as revealed by the incorporation of stochastic perturbations in the traditional deterministic description. As shown recently in a more general context, the uniqueness and stability of the equations governing the evolution of the probability distributions underlying a complex system possessing sufficiently strong ergodic properties combine to yield a variational principle (Nicolis and Nicolis, 2007; Nicolis, 2009). This principle involves, however, a generalized potential function which in the most general case cannot be identified with the entropy production. Stated differently, the quantities one is dealing with in traditional irreversible thermodynamics are not sufficient to fully characterize the complex behaviours of nonlinear systems out of equilibrium, but need to be complemented with information pertaining to the probabilistic structure of the processes. The elaboration of an extended thermodynamic approach for the weather–climate system along these lines would undoubtedly be a promising orientation for future studies.


Appendix. Dissipation in noise-induced transitions

We consider the minimal model of passive transport of a substance between a system and an external reservoir under the effect of multiplicative fluctuations:

$$\frac{dx}{dt} = (x_0 - x) + G(x)\,f(t) \qquad (A.1)$$

Here t is a scaled time variable, x the mass density inside the volume of interest (assumed to be well-mixed), x0 the density in the external reservoir, f(t) a Gaussian white noise process and G(x) an x-dependent amplitude (cf. (23)) taken here to be

$$G(x) = x(1 - x) \qquad (A.2)$$

The Ito stochastic differential equation associated with (A.1)–(A.2) is (Horsthemke and Lefever, 1984)

$$dx_t = (x_0 - x_t)\,dt + q\,x_t(1 - x_t)\,dW_t \qquad (A.3)$$

where Wt is the Wiener process. The Fokker–Planck equation associated with (A.3) can be readily solved in the steady state, yielding the invariant probability (following the Ito interpretation)

$$\pi(x) = \frac{N}{x^2(1-x)^2}\,\exp\left\{-\frac{1}{q^2\,x(1-x)}\right\} \qquad (A.4)$$

where the value x0 = 1/2 was chosen. One can check straightforwardly that, for q2 > 2, π(x) presents a minimum at the ‘deterministic’ state x = x0 = 1/2 and two noise-induced maxima at

$$x_\pm = \frac{1}{2}\left[1 \pm \left(1 - \frac{2}{q^2}\right)^{1/2}\right] \qquad (A.5)$$

as shown in Figure A.1 (dashed line).

Figure A.1.

Entropy production (full line) of model (A.1) and probability distribution of the variable x (dashed line) as obtained analytically with q2 = 3.

The thermodynamic dissipation associated with the model can be evaluated using the fact that the driving force for mass transfer is the difference in chemical potentials. In an ideal system, chemical potential depends, in turn, logarithmically on x. In the notation of section 2 this leads to

$$J = x_0 - x, \qquad A = \ln\frac{x_0}{x}$$


$$P_{\mathrm{th}} = (x_0 - x)\ln\frac{x_0}{x} \qquad (A.6)$$

This function is drawn along with π(x) in Figure A.1 (full line). It takes its minimum value Pth = 0 at the less probable state x = x0; in other words, fluctuations here drive the system to regimes of higher dissipation. Still, there is nothing special about the values taken by Pth at the most probable states x± (A.5) and, in particular, no preference whatsoever for a state of maximum dissipation.
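The behaviour just described can be reproduced with a short sketch (our own check; π is left unnormalized and q² = 3, x0 = 1/2 as in Figure A.1):

```python
# Sketch evaluating the stationary probability (A.4) (unnormalized) and the
# entropy production (A.6) of the appendix model for q^2 = 3, x0 = 1/2.

import math

Q2, X0 = 3.0, 0.5

def pi_unnorm(x):
    u = x * (1.0 - x)
    return math.exp(-1.0 / (Q2 * u)) / u**2      # Eq. (A.4) with N = 1

def p_th(x):
    return (X0 - x) * math.log(X0 / x)           # Eq. (A.6)

x_plus = 0.5 * (1.0 + (1.0 - 2.0 / Q2) ** 0.5)   # Eq. (A.5)
x_minus = 1.0 - x_plus

# the noise-induced maxima are more probable than the deterministic state x0
print(pi_unnorm(x_minus) > pi_unnorm(X0), pi_unnorm(x_plus) > pi_unnorm(X0))
# yet dissipation vanishes at x0 and is positive at both x+-
print(p_th(X0), round(p_th(x_minus), 4), round(p_th(x_plus), 4))
```

The fluctuations thus select states that dissipate more than the deterministic one, but the dissipation values at x± are otherwise unremarkable.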


Acknowledgements

This work is supported, in part, by the Science Policy Office of the Belgian Federal Government under contract MO/34/017 and by the European Space Agency under contract C90238.