While it is a physical fact that the atmosphere is molecules in motion, it is a fact which is absent in an explicit form from formulations and simulations of the atmosphere on all scales, whether these are from the large scales down or, more rarely, from the small scales up. Thereby, all meteorologically necessary molecular knowledge is assumed to be implicitly embodied in the gas constant, with local thermodynamic equilibrium being a universally applied approximation up to the stratopause at 50 km. An important reason for the absence is that both the Navier–Stokes equation and the Boltzmann H-equation, with or without its quantum mechanical developments, are nonlinear, with no useful analytical solutions respectively known for the macroscopic and microscopic cases. Numerical solutions by digital computer are therefore essential, but are handicapped by the immensity of the atmospheric problem in spanning the 15 orders of magnitude in length scale from the molecular mean free path at sea level to a great circle. Top-down solutions using the Navier–Stokes equation together with its thermodynamic and radiative accompaniments are of course commonplace, in the form of the global models used for weather forecasting and climate simulation. To the author's knowledge, there have been no attempts at a bottom-up, molecular dynamics solution for a population of air molecules via the H-equation or elaborations thereof. This review covers investigations in the last two decades which shed some observational light on the problem, with appeals to two areas of theory in an attempt to explain some unexpected correlations. The two theories are of molecular dynamics (Alder and Wainwright, 1970) and of generalized scale invariance (Schertzer and Lovejoy, 1985).
The Alder and Wainwright (1970) result was achieved by allowing a directional molecular flux to impinge upon an equilibrated population of Maxwellian molecules (‘billiard balls’) by direct numerical simulation in a computer. A striking phenomenon was observed: the emergence of ring currents on scales of 10⁻¹² seconds and 10⁻⁸ metres at standard temperature and pressure (STP). These ring currents are what a meteorologist would call vortices; in general they signify the emergence of fluid mechanical behaviour in a randomized molecular population, and in particular they indicate the central importance of vorticity. Potentially, this result has many atmospheric consequences via its embodiment of an overpopulation of high-speed molecules in the speed probability distribution function, relative to a Maxwell–Boltzmann distribution; the ring currents and the overpopulation of fast molecules are mutually sustaining, and prevent relaxation to a thermalized population. These consequences range from the impossibility of true Einstein–Smoluchowski (isotropic) diffusion to the definition of temperature on a molecular basis. Also affected are the spectroscopic line shapes of water vapour, carbon dioxide, ozone, methane and nitrous oxide that largely determine the transfer of terrestrial radiation, particularly in the wings of the stronger lines. Chemical reactions are of course profoundly affected by the velocities of the colliding reactant molecules. Perhaps most importantly, the turbulent structure of the wind field is affected, on all scales from that of a small aerosol particle to at least that of an Earth radius and probably out to a great circle. This scale-invariant, turbulent structure imprints itself on the abundances of chemical species, including absorbers and emitters of radiation, meaning that energy is deposited in the atmosphere on all scales.
That result vitiates the concept of energy deposition on the scale of a sunlit hemisphere, with a conservative cascade downscale to dissipation. It follows from the preceding that vorticity generation is central, on all scales. It represents the emergence of organized flow from small scales, rather than its dissipation from large scales. Because of the Alder–Wainwright ring current mechanism, turbulence must of necessity be viewed in the same framework as vorticity.
The question of variability as a function of scale lies at the heart of the work of Schertzer and Lovejoy (1985), resulting in the formulation of generalized scale invariance, whose predictions are characterized by scaling exponents, H, of 1/3 in the horizontal and 3/5 in the vertical, along with expectations of long-tailed probability distribution functions (PDFs), accompanied by linear log-log plots of variability against scale. Intermittency is important and so are departures from monofractality; both are described by exponents, C1 and α respectively. The atmosphere's turbulent structure is neither two-dimensional (2D) nor 3D, but 23/9-dimensional: 2 + (1/3 ÷ 3/5) = 23/9. The relationship of H to the spectral exponent is β = 2H + 1. The values for H of 1/3 in the horizontal and 3/5 in the vertical arise from dimensional analysis of the horizontal energy flux and the vertical buoyancy variance flux, respectively; see section 2.
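The arithmetic behind these exponents is compact enough to verify directly. The short sketch below (illustrative only; it assumes the β = 2H + 1 relation with the intermittency correction neglected) checks the 23/9 dimension and the implied spectral exponents:

```python
from fractions import Fraction

H_horiz = Fraction(1, 3)  # horizontal Hurst exponent
H_vert = Fraction(3, 5)   # vertical Hurst exponent

# Effective dimension: two horizontal dimensions plus the ratio of exponents
D = 2 + H_horiz / H_vert

# Spectral exponents beta = 2H + 1, neglecting the intermittency correction
beta_horiz = 2 * H_horiz + 1
beta_vert = 2 * H_vert + 1

print(D, beta_horiz, beta_vert)  # 23/9 5/3 11/5
```

The horizontal value 5/3 is the familiar Kolmogorov spectral slope, and the vertical value 11/5 the Bolgiano–Obukhov slope.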
It can be seen therefore that two approaches as different as molecular dynamics and generalized scale invariance predict that isotropic turbulence or diffusion is not to be expected in the atmosphere on any scale. The Alder–Wainwright mechanism generates vorticity at scales near the mean free path, meaning that isotropy in the atmosphere fails even on the smallest ‘dissipative’ scales. This review covers observational approaches to the problem, largely involving in situ airborne and balloon-borne instruments, and it will argue that certain correlations are explicable by appealing to both theories. However, while both theories are held to be relevant and to have physically causal properties, there are yet quantitative questions that remain unanswered.
Two commonly asked questions about the adopted approach are discussed at the end of this review. These questions are ‘Where is the physics in fractal theories?’ and ‘What can modellers do about the molecular scale generation of vorticity?’ A third might be ‘Can the ocean be approached like this?’, to which an immediate answer can be given: while the reaction from a meteorologist might be ‘good idea’, that from a physical chemist is more likely to be ‘good luck’, given the molecular complexities of liquids in general and water in particular—ice floats, for example.
This review is not intended to be comprehensive, but to summarize the author's views and results since first detecting scale invariance in atmospheric data in 1997. The emphasis is on the physical picture and diagrams, rather than on mathematics and equations. There is a large literature on other atmospheric aspects, particularly in the rainfall and hydrological literature stemming from the publication of Schertzer and Lovejoy (1987). Surface temperature (Koscielny-Bunde et al, 1998; Syroka and Toumi, 2001) has been examined, as has ozone column density (Toumi et al, 2001; Varotsos, 2005). The subject of the interaction of radiative transfer with clouds, vital both for climate simulation and remote sounding, has been subject to extensive investigation by multifractal methods; see for example Marshak et al (1997) and Davis et al (1997). Cloud physics has been and continues to be examined in a turbulent context (Bartlett and Jonas, 1972; Jonas, 1996; Falkovich et al, 2002; Falkovich and Pumir, 2007), and very recently by Bodenschatz et al (2010). The importance of molecular-scale processes has been investigated by Ghosh et al (2007), who used a Lennard-Jones intermolecular potential to model the diffusion of water vapour to condensed phase surfaces. Note that all these processes involve either the condensation of molecules or their interaction with radiation in both absorption and emission, and are hence in principle subject to simulation by molecular dynamics. None of the so-called ‘different sorts of physics’ that are sometimes held to determine atmospheric evolution are separable into phenomenologically perceived or scale-separated compartments; physics is physics and in the atmosphere starts with air molecules and photons, continuing on through statistical mechanics, radiative transfer, continuum approximations and thus to fluid mechanics. A book-length account may be found in Tuck (2008).
Kleidon and Lorenz (2005) discuss recent work on non-equilibrium statistical thermodynamics. In what follows we start from the small, fundamental scale of the constituent particles, first developing the ideas of molecular dynamics, section 3, asymmetric probability distribution functions, section 4, and generalized scale invariance, section 5, and then moving to the scaling of atmospheric observations, section 6. Before that, however, it is necessary to review briefly the present state of theories of atmospheric turbulence, a notoriously incomplete subject.
2. Brief review of theories of atmospheric turbulence
Taylor (1935) and Kolmogorov (1941, 1962, 1991) originated statistical theories of turbulence, with isotropy in three dimensions being a natural and effective assumption in laboratory conditions. Application to the atmosphere immediately encountered the obstacle that the ratio of a great circle to the depth of the troposphere was about 4 × 10³, severely constraining the extent of 3D isotropy (Lovejoy, 2009). In response to the concomitant, observed stratification of the atmosphere, Fjørtoft (1953), Kraichnan (1967) and Charney (1971) developed theories of two-dimensional isotropic turbulence, abandoning scale invariance. Quasi-geostrophic turbulence does graft some 3D character on to the basic 2D horizontal framework, but insufficiently to represent true three-dimensional effects, including the ability of true 3D turbulence to destroy 2D isotropy (Schertzer, 2009). Resulting analyses of observations and models have resorted to interpretations in terms of combinations of large-scale isotropy in 2D with smaller-scale isotropy in 3D. Evidence accumulated against such a dichotomy, particularly given observed scale invariance with different characteristics in the horizontal and in the vertical (Lovejoy et al, 2009b and references therein). The alternative, of abandoning isotropy and embracing scale invariance over a large range, involves pronounced anisotropy with vertical structure becoming more and more stratified at progressively larger scales, obeying power-law scaling.
Such approaches have been adopted by VanZandt (1982), Schertzer and Lovejoy (1985, 1987), Fritts et al (1988), Tsuda et al (1989), Gardner et al (1993), Gardner (1994), Hostetler and Gardner (1994), Lazarev et al (1994) and Dewan (1997), although they do not predict identical scaling exponents; see below. We note that Eady (1950) made some incisive remarks about the causative role of turbulence in the atmospheric general circulation that seem to have been largely ignored in the numerical model era.
We now consider some basic relations underlying these approaches. Observations show that the turbulence is scaling, but with different exponents in the horizontal and the vertical. Considering horizontal velocity v with fluctuations Δv over horizontal length interval Δx and vertical interval Δz, we have Δv = ϕ(horiz) Δx^H(horiz) in the horizontal and Δv = ϕ(vert) Δz^H(vert) in the vertical,
where ϕ(horiz) is the turbulent flux factor in the horizontal, ϕ(vert) is that in the vertical and H(horiz) and H(vert) are the associated power-law exponents. See section 4 for the calculation of H. The Kolmogorov law, which is isotropic, is obtained by setting ϕ(horiz) = ϕ(vert) = ε^(1/3) and H(horiz) = H(vert) = 1/3. However, this is at odds with observation, and alternative assumptions must be made that preserve scaling while permitting anisotropy. Generalized scale invariance assumes energy flux ε, units m² s⁻³, is dominant in the horizontal, while buoyancy variance flux η, units m² s⁻⁵, predominates in the vertical. So ϕ(horiz) = ε^(1/3) and ϕ(vert) = η^(1/5),
and by dimensional analysis H(horiz) = 1/3 and H(vert) = 3/5.
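The dimensional analysis can be checked mechanically by book-keeping length and time exponents; a minimal sketch (the helper names are the present author's illustration, not from the cited literature):

```python
from fractions import Fraction as F

# Units represented as (length exponent, time exponent) pairs: m^a s^b
def power(u, p):
    return (u[0] * p, u[1] * p)

def times(u, v):
    return (u[0] + v[0], u[1] + v[1])

eps = (F(2), F(-3))       # energy flux epsilon: m^2 s^-3
eta = (F(2), F(-5))       # buoyancy variance flux eta: m^2 s^-5
length = (F(1), F(0))     # m
velocity = (F(1), F(-1))  # m s^-1

# Horizontal: Delta v = eps^(1/3) * Delta x^(1/3)
horiz = times(power(eps, F(1, 3)), power(length, F(1, 3)))
# Vertical: Delta v = eta^(1/5) * Delta z^(3/5)
vert = times(power(eta, F(1, 5)), power(length, F(3, 5)))

# Both combinations must carry the units of a velocity
assert horiz == velocity and vert == velocity
```

Only the exponent pairs (1/3, 1/3) and (1/5, 3/5) make the right-hand sides dimensionally a velocity, which is how the values H(horiz) = 1/3 and H(vert) = 3/5 arise.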
The debate between proponents of these differing approaches has intensified recently; see Lovejoy et al (2009b), Lindborg et al (2010) and the associated on-line comments in Atmospheric Chemistry and Physics Discussions.
The ratio H(horiz)/H(vert) is thus predicted to be 1 for isotropic 3D turbulence, 0 for isotropic 2D turbulence, 5/9 for generalized scale invariance and 1/3 for linear gravity wave theory, with corresponding fractal dimensions 3, 2, 23/9 and 7/3. The aircraft and dropsonde observations currently available permit discrimination between these possibilities, although data analysis has to be done carefully, given that it is as yet impossible to make observations over adequate scaling ranges that are truly horizontal and vertical, that is to say from platforms unaffected by the turbulent structure of the air motion itself. This subject has evolved appreciably in recent years.
The subject of turbulence historically has had a focus upon decaying turbulence, that is to say a contained fluid subject to a perturbation which ceases instantaneously. This is a difficult topic, see for example Frisch (1995), but one which fortunately is irrelevant in the atmosphere, which has never-ending insolation, ceaseless rotation and which consequently is in a permanent state of disequilibrium. Were this not so, there would be no biosphere (Vaida and Tuck, 2010) and we would not be here discussing turbulence.
Before considering the scale invariance of macroscopic observations, we consider the potential implications of results at the scale of the atmosphere's constituent particles, air molecules, as simulated numerically by populations of Maxwellian hard-sphere molecules in computers.
3. Molecular dynamics
The simulation of molecular behaviour by numerical process on electronic computers is a vast field, ranging from solid-state physics to molecular biology. While populations of order 10¹¹ simple molecules can be simulated at present, the field originated half a century ago with gases being represented by a few hundred hard spheres. These early numerical ‘experiments’ did provide results underpinning the basic ideas of Maxwell and Boltzmann, and eventually simulated the emergence of hydrodynamic behaviour (Alder and Wainwright, 1970). The emergence of ‘ring currents’, vortices in fluid mechanical parlance, was a nonlinear phenomenon occurring when an anisotropy, a directional flux of molecules, was incident on a randomized, i.e. thermal, molecular population. The resultant overpopulation of high-speed molecules, relative to a Maxwell–Boltzmann distribution, was interactively sustained by the ring current, once established. The mechanism was that the higher-speed molecules piled up higher number density ahead of themselves, leaving lower number density behind; the randomly moving molecules of near-average speed tend to equalize this difference, causing the organized ring current circulation. It is important to note that this mechanism, while maintaining a molecular population near average that allows an operational definition of temperature, prevents the attainment of a completely dissipated state: vorticity will persist on the scale of the mean free path, and local thermodynamic equilibrium in the statistical mechanical sense cannot obtain. The atmosphere of course has omnipresent anisotropies in the form of gravity, planetary rotation, the solar beam and the surface topography. Hard spheres cannot of course simulate such properties of real gases as viscosity, which arise from intermolecular forces of repulsion and attraction that are of quantum mechanical origin.
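The elementary event in such hard-sphere simulations is the elastic binary collision, which exchanges only the velocity components along the line of centres. The minimal numpy sketch below (not the Alder–Wainwright code, simply the equal-mass collision rule) shows that momentum and kinetic energy are conserved exactly, so all structure that emerges, such as ring currents, must come from the geometry of many such collisions rather than from any dissipation in the rule itself:

```python
import numpy as np

def collide(v1, v2, n):
    """Elastic collision of two equal-mass hard spheres.

    n is the unit vector along the line of centres at contact; only the
    velocity components along n are exchanged, the tangential components
    are untouched.
    """
    dv = np.dot(v1 - v2, n)
    return v1 - dv * n, v2 + dv * n

rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=3), rng.normal(size=3)   # pre-collision velocities
n = rng.normal(size=3)
n /= np.linalg.norm(n)                            # random line of centres
w1, w2 = collide(v1, v2, n)

# Momentum and kinetic energy are conserved exactly by the collision rule
assert np.allclose(v1 + v2, w1 + w2)
assert np.isclose(np.dot(v1, v1) + np.dot(v2, v2),
                  np.dot(w1, w1) + np.dot(w2, w2))
```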
Despite this, considerable progress has been made, for example in the onset of Rayleigh–Taylor instability at the interface between a denser, initially stratified fluid overlying a similar lighter one (Kadau et al, 2004, 2008). The molecular dynamics of ozone photodissociation was appealed to in attempting an explanation for the correlation between the observed rate of ozone photodissociation and the observed intermittency of temperature in the Arctic lower stratosphere (Baloïtcha and Balint-Kurti, 2005; Tuck et al, 2005). The result justifies an attempt at numerical simulation of a population of air molecules in the lower stratosphere, and indeed elsewhere, were the techniques of theoretical chemistry to be deployed in such a manner. If the translational velocities of all air molecules are indeed non-Maxwellian in their distribution, as implied by the above results, then the consequences are far-reaching, affecting radiative transfer and atmospheric chemistry at the fundamental, microscopic level. Spectral lines have wings because the transitions between molecular quantum levels that produce the multitude of lines in the spectra of water vapour and carbon dioxide are perturbed by collisions, most effectively by the fastest-moving molecules. So an overpopulation of fast molecules relative to a Voigt line shape will shift intensity away from the line centres out to the wings. This is where there is a bigger lever on greenhouse forcing; overall, there will be more molecules in a total air column moving faster, so the infrared energy trapping effect may be more effective than is currently calculated. We shall see that skewed PDFs, characterized by long tails relative to a Gaussian or Maxwellian, are found in the observed macroscopic variables, for example temperature, winds, humidity, ozone and in both passive scalars (tracers) and reactive chemicals.
The long (sometimes ‘fat’ or ‘heavy’) tails are a sign of positive feedbacks at work, producing long-range correlations and power-law behaviour. Can molecular dynamics simulation of a population of air molecules reproduce such macroscopic scale invariance? A positive answer would open a potential path to parametrization in large-scale models from the bottom up, rather than from the top down. Kadau et al (2008) showed that atomistic and Navier–Stokes continuum simulations of Rayleigh–Taylor instability yield the same scaling behaviour. That is an important step forward.
4. Long-tailed PDFs
Observed variables in the atmosphere are not normally or log-normally distributed; see Figure 1 for an example. Indeed, over two decades of experience with the research-quality data from aircraft and dropsondes used here show that a Gaussian PDF is a sign that instrumental noise is dominating atmospheric variability in the data, at least in the observations that we consider here, see for example Tuck et al (2003b) and Richard et al (2006). Meteorological assimilation by operational weather forecasting models also shows such long-tailed PDFs, for example tropopause temperatures between 30°N and 30°S during northern winter, shown in Figure 2. Sparling (2000) has shown non-Gaussian distributions prevailing in stratospheric satellite data. Non-Gaussian PDFs are also evident in the vertical from global positioning system (GPS) dropsonde data; see Hovde et al (2010). As was stated earlier, the physics of the air is that of molecules and photons, and the evidence is that the same processes are at work on all scales, driven by the vorticity production inherent in anisotropy arising from the boundary conditions, which include statistical multifractality at the surface, in topography, pack ice and sea surface.
Gaussian PDFs could be obtained in the atmosphere if a measured variable with adequate signal-to-noise was predominantly the product of uncorrelated processes, but there is little evidence of such data from the platforms employed in this work; the airborne microwave emission observations of the vertical profiles of temperature and pressure are a possible exception (Gary, 2006), albeit with a very limited range of scales. However, in contrast to the implications of the preceding paragraph, there have been filtered data analyses of helicopter-borne probes that have produced near-Gaussian PDFs in the boundary layer, for example Muschinski and Wode (1998) and Siebert et al (2010). It will be important in pursuing the implications of such interesting results to be clear about the effects of the airflow around the platform on the instruments, and about the assumptions, explicit and implicit, in the data analysis. Typical cautions might involve the assumption of isotropic molecular diffusion in treating dissipation, and the appearance of ‘pre-whitening’ routines in spectral analysis software; both would tend to preordain isotropy and Gaussian PDFs. There is an important and general distinction to be drawn between analysing unfiltered data that include fluctuations on all scales, and results obtained by using high-pass, or indeed low-pass, filters.
Asymmetric PDFs are a central feature of generalized scale invariance (Schertzer and Lovejoy, 1985, 1987, 1991; Lovejoy and Schertzer, 2007). They arise as a consequence of multiplicatively interactive random processes and are characterized by Lévy stable distributions, whose mathematical properties are summarized, for example, in Samorodnitsky and Taqqu (1994). Generalized scale invariance uses three exponents to describe the distributions of atmospheric variables, which are defined below. These are H, the conservation or Hurst exponent; C1, the intermittency; and α, the Lévy exponent. A Gaussian is characterized by H = 0.5, C1 = 0 and α = 2. We shall see that atmospheric data typically have average exponents H = 0.56, C1 = 0.05 and α = 1.6.
The calculation procedure for H, C1 and α is as follows. The quantity H1 is the scaling exponent calculated from an aircraft or dropsonde time series f(t) by application of the first-order structure function. The qth-order structure function of f(t) is defined by Sq(r; f) = ⟨|f(t + r) − f(t)|^q⟩,
where the lag r is real and positive, the angle brackets denote an average over t and ensemble averaging over f.
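As a concrete illustration of this estimator, the sketch below (numpy-based, with a synthetic Brownian signal standing in for an aircraft time series; the function name is the present author's) computes the first-order structure function and recovers the known exponent H = 0.5:

```python
import numpy as np

def structure_function(f, lags, q):
    """q-th order structure function S_q(r; f) = <|f(t + r) - f(t)|^q>."""
    return np.array([np.mean(np.abs(f[r:] - f[:-r]) ** q) for r in lags])

rng = np.random.default_rng(1)
f = np.cumsum(rng.normal(size=200_000))  # Brownian motion: H = 0.5 exactly
lags = np.unique(np.logspace(0, 3, 20).astype(int))

# zeta(q) is the slope of log S_q versus log r; H is estimated as zeta(1)
zeta1 = np.polyfit(np.log(lags), np.log(structure_function(f, lags, 1.0)), 1)[0]
print(round(zeta1, 2))  # close to 0.5 for Brownian motion
```

For real data the same fit is made, and the linearity of the log-log plot over the available lags is itself the test of scaling.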
If a plot of log Sq(r;f) versus log(r) is linear (the 95% confidence interval of the fit to obtain the slope ζ(q) is generally less than 10%), then ζ(q) is a scaling exponent for f(t). In general we find ζ(q) = qH − K(q),
where K(q) characterizes the intermittency. If K(q) is linear, of the form K(q) = C1(q − 1), where C1 is the codimension of the mean and characterizes the intermittency intensity, then the intermittency is monofractal; generally K(q) will be nonlinear, hence multifractal.
We then define Hq = ζ(q)/q.
If Hq is constant as q changes, and if K(q) = 0, then Hq = H. If Hq is not constant with q, then f(t) is intermittent and the scaling exponent is H = Hq + K(q)/q
(which can in principle diverge as q → 0 when for example the intermittency is monofractal or more generally when α ≤ 1).
Because K(1) = 0, the quantity H1 = H is a good estimate of the scaling exponent of the fluctuations in both intermittent and non-intermittent cases. The exponent C1 measures the intermittency of the signal and takes on values from zero to unity. Values near zero characterize a signal with low intermittency, for example a Brownian motion, and values near unity characterize a signal which is highly intermittent, for example a Dirac δ-function. Values in the range 0.02 to 0.10 seem to characterize atmospheric quantities, and although they are small, they are significant. Considering the signal f(t) to have been observed at discrete time intervals t = 1, 2, 3, …, tmax, define the normalized flux as the absolute difference |f(t + 1) − f(t)| divided by its mean, coarse-grained by averaging over windows of length r.
For our signals, it is found that the qth-order moment of the flux, coarse-grained to scale r, has a power-law dependence on r. An unweighted linear least squares fit of the logarithm of this moment versus log r provides a slope −K(q). A plot of K(q) versus q shows a convex function with K(0) = K(1) = 0. The exponent C1 is defined as K′(1), evaluated here numerically from the slope defined by the points (0.9, K(0.9)) and (1.1, K(1.1)). The uncertainty estimate in C1 is obtained by taking the square root of the sum of the squares of the 95% confidence intervals returned by the unweighted linear least squares fits corresponding to q = 0.9 and q = 1.1. Further discussion of these procedures can be found in Schertzer and Lovejoy (1991), Tuck et al (2004) and Tuck (2008).
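The trace-moment procedure can be illustrated on a synthetic multifractal with a known answer. The sketch below (illustrative; the lognormal cascade and its parameters are the present author's choice, for which C1 = s²/(2 ln 2) analytically) recovers C1 from the fitted slopes K(q) at q = 0.9 and 1.1:

```python
import numpy as np

rng = np.random.default_rng(2)

def lognormal_cascade(levels, s2, n_real):
    """Discrete multiplicative cascade with unit-mean lognormal weights.

    Theoretical moment-scaling exponent: K(q) = s2*(q**2 - q)/(2*ln 2),
    so C1 = K'(1) = s2/(2*ln 2).
    """
    n = 2 ** levels
    phi = np.ones((n_real, n))
    for lev in range(levels):
        w = rng.lognormal(mean=-s2 / 2, sigma=np.sqrt(s2),
                          size=(n_real, 2 ** (lev + 1)))
        phi *= np.repeat(w, n // 2 ** (lev + 1), axis=1)
    return phi

def K_hat(phi, q):
    """Slope of log<phi_r^q> versus log(lambda), with lambda = N/r."""
    n = phi.shape[1]
    lams, moments = [], []
    for k in range(11):
        r = 2 ** k
        coarse = phi.reshape(phi.shape[0], n // r, r).mean(axis=2)
        lams.append(n / r)
        moments.append(np.mean(coarse ** q))
    return np.polyfit(np.log(lams), np.log(moments), 1)[0]

s2 = 0.2 * np.log(2)          # chosen so that C1 = s2/(2 ln 2) = 0.1
phi = lognormal_cascade(14, s2, 20)

# K'(1) estimated, as in the text, from the points at q = 0.9 and q = 1.1
C1_hat = (K_hat(phi, 1.1) - K_hat(phi, 0.9)) / 0.2
print(round(C1_hat, 2))       # near the theoretical C1 = 0.1
```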
The Lévy index α has the range 0 ≤ α ≤ 2 and characterizes the generator of the intermittency, which is the logarithm of the turbulent flux. The corresponding fluctuations will be roughly log-Lévy except for the extreme probability tail, which will generally be of power-law form with exponent qD. Unlike Lévy variables, whose probability tail has an exponent qD = α < 2, for the fluctuations there is no restriction on the value of qD (note that the Gaussian is the exceptional α = 2 case with exponential fall-off). Schertzer and Lovejoy (1991) discuss the five main cases for α; here we note that the variables we have measured appear to have 1 < α < 2 (Tuck et al, 2003c, 2004). Our experience indicates that a large quantity of high-quality data is necessary for an accurate computation of α. We use the double trace moment technique to compute α. Define the η-powered flux, obtained by raising the flux to the power η at the finest scale and renormalizing; its qth-order moment, coarse-grained to scale r, has a power-law dependence on r with slope −K(q, η), where theoretically K(q, η) = η^α K(q),
and η is allowed to range from −1.0 to +1.0 in steps of 0.1. For q = 1.5 an unweighted fit of the logarithm of this moment versus log r is made, with the slope being K(q, η) and the standard deviation being σ(q, η). A plot of log K(q, η) versus log η yields for our data a collinear region having a positive slope. A weighted linear least squares fit to this region, with weights K(q, η)ln10/σ(q, η), has slope α, with the uncertainty represented by the 95% confidence interval returned by the weighted fit.
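A sketch of the double trace moment on the same kind of synthetic lognormal cascade (which has α = 2 by construction; the q = 1.5 choice follows the text, while the positive-η values, cascade parameters and an unweighted final fit are the present author's simplifications):

```python
import numpy as np

rng = np.random.default_rng(3)

# Unit-mean lognormal cascade: a multifractal with Levy index alpha = 2
levels, n_real = 14, 20
s2 = 0.2 * np.log(2)
n = 2 ** levels
phi = np.ones((n_real, n))
for lev in range(levels):
    w = rng.lognormal(-s2 / 2, np.sqrt(s2), size=(n_real, 2 ** (lev + 1)))
    phi *= np.repeat(w, n // 2 ** (lev + 1), axis=1)

def K_hat(field, q):
    """Slope of log<field_r^q> versus log(lambda), lambda = N/r."""
    lams, moments = [], []
    for k in range(11):
        r = 2 ** k
        coarse = field.reshape(field.shape[0], n // r, r).mean(axis=2)
        lams.append(n / r)
        moments.append(np.mean(coarse ** q))
    return np.polyfit(np.log(lams), np.log(moments), 1)[0]

# Double trace moment: raise the flux to eta at the finest scale,
# renormalize, estimate K(q, eta); theory gives K(q, eta) = eta^alpha K(q)
q, etas = 1.5, np.array([0.6, 0.8, 1.0, 1.2, 1.4])
Ks = []
for eta in etas:
    phi_eta = phi ** eta
    phi_eta /= phi_eta.mean()
    Ks.append(K_hat(phi_eta, q))

alpha_hat = np.polyfit(np.log(etas), np.log(Ks), 1)[0]
print(round(alpha_hat, 1))   # near 2 for a lognormal cascade
```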
The value of H for an atmospheric chemical constituent is less than 0.56 in the presence of a sink and greater than it in the presence of a source, a so far unexplained result (Tuck, 2008). Note that sinks could be viewed as dissipative behaviour, while sources could be viewed as producing organization, indicating the possibility of a statistical mechanical interpretation in terms of the maximization of entropy production. One of the implications of 1 < α< 2 is that while the mean converges, the variance does not, a result that might be expected to have implications for the analysis of atmospheric time series, particularly as regards the assignation of uncertainty limits.
5. Generalized scale invariance
As formulated for atmospheric observations, generalized scale invariance relies on the fact that when scale-invariant dynamical processes interact, they do so according to power-law distributions and that when there are enough of them over a finite range of scales, their behaviour is stable. The singularities and fluctuations generated do not depend on the details of the dynamics, so rather than requiring an infinite number of descriptive variables, three are sufficient (Lovejoy and Schertzer, 2007): the scaling exponents H, C1 and α. Formulations of the mathematics and applications may be found originally in Schertzer and Lovejoy (1985, 1987, 1991) and recently in Tuck et al (2004), Lovejoy and Schertzer (2007), Lovejoy et al (2007, 2008) and Tuck (2008).
The physical essence of the approach is the recognition that the horizontal scaling and the vertical scaling are different. In the vertical, the dominant process is gravitational influence on buoyancy fluxes, while in the horizontal, planetary rotational influence on energy fluxes holds sway. The anisotropy reflects atmospheric stratification that is pronounced at larger scales and which becomes progressively weaker at smaller scales.
Recent results have shown that isotropic turbulence is absent on all scales throughout the depth of the troposphere (Lovejoy et al, 2007), that stable layers are not monolithic, in that they exhibit a ‘Russian doll’ structure of nested unstable and stable layers on progressively smaller scales (Lovejoy et al, 2008), and that the intermittency of temperature is correlated with ozone photodissociation rate (Tuck et al, 2005; Tuck, 2008). Correlations also exist, in both the horizontal and the vertical, between the scaling of the horizontal wind and traditional meteorological measures of jet-stream strength.
A connection between the statistical physics of molecules and generalized scale invariance may underlie the formal similarity between thermodynamic inverse temperature and the basic variable used to describe the macroscopic scaling; it extends to the partition function and to the Gibbs free energy (Schertzer and Lovejoy, 1991; Tuck, 2008). This unexplored question may be a point of departure for further research, with scale invariance being a necessary but insufficient condition for maximization of entropy production; sufficiency could come from the minimization of the scaling quantity K(q)/q, the analogue to the Gibbs free energy. The notion that scale invariance could be a macroscopic expression of a microscopic need to maximize dissipation and hence entropy production, i.e. that scale selectivity would represent an improbable emergence of organized motion, has a beguiling attraction in a statistical mechanical context. Paltridge (2001) pointed out that minimization of entropy exchange was equivalent to the maximization of entropy production, and that both were a consequence of the maximization of dissipation. Dewar (2003, 2005) has produced a formal exposition that seems very relevant. Non-equilibrium statistical thermodynamics has been discussed in Kleidon and Lorenz (2005); the chapters by Dewar, Paltridge and Pauluis are of particular interest in the present context. Finally, the Alder–Wainwright mechanism precludes isotropy on scales down to the mean free path, and also provides both an overpopulation of high-speed molecules, relative to a Maxwell–Boltzmann distribution, and a means to convert solar energy into vorticity, accompanied by the maintenance of an operational temperature by rapid exchanges of translational energy among molecules near the mean of the PDF.
6. The scaling of observations
We deal here with the scaling of observations from two sources: ‘horizontally’ from flight legs of research aircraft and ‘vertically’ from GPS dropsondes, with some data from ‘vertical’ ascents and descents of the aircraft. These sonde data, although far from perfect, are characterized by adequate frequency (2 Hz), scaling range (about 1600 equal time intervals), signal-to-noise ratio and continuity, of a quality necessary for the analyses by generalized scale invariance that are discussed. The aircraft data are at 1 Hz for intervals ranging from 1800 to 35 000 seconds, at true air speeds of 200 to 230 m s⁻¹. Atmospheric structure prevents both aircraft and sondes from achieving true horizontal and vertical motion (Lovejoy et al, 2004, 2009a, 2009b), since the platforms move with respect to the air. The debate continues over this subject (Lindborg et al, 2010; Lovejoy, 2009; Schertzer, 2009).
There has been substantial progress in analysing the higher-quality data now available, reflected in recent publications (Tuck and Hovde, 1999; Tuck et al, 1999; Lovejoy et al, 2004; Tuck et al, 2004, 2005; Lovejoy et al, 2007, 2008; Tuck, 2008; Lovejoy et al, 2009a, 2009b; Hovde et al, 2010).
The vertical scaling has proven to be a simpler phenomenon to extract from the observations, enabled by GPS dropsondes (Aberson and Franklin, 1999; Hock and Franklin, 1999), and it has had an important impact on the longer-running and more convoluted history of obtaining the horizontal scaling from research aircraft observations, particularly in the case of the horizontal wind speed.
Analysis of dropsonde campaigns above the Northern Hemisphere western Pacific Ocean in the boreal winters of 2004, 2005 and 2006 has shown that isotropic turbulence does not occur throughout the depth of the troposphere (Lovejoy et al, 2007; Hovde et al, 2010); see Figure 3. Simultaneous releases of pairs of dropsondes in 2004 further revealed that what appeared to be stable layers at coarse spatial resolution, of order 10² metres, in fact had embedded unstable layers, which in turn had embedded stable layers and so on in a ‘Russian doll’ fractal structure that was evident down to the smallest resolved scale, about 5 m. An example is shown in Figure 4. The result was valid for three separate stability criteria: dry adiabatic, moist adiabatic and dynamic (Richardson number), albeit with different scaling exponents (Lovejoy et al, 2008). These stability criteria are defined in terms of the Brunt–Väisälä frequency N, where N² = g ∂(lnθ)/∂z, g is the acceleration due to gravity, θ is potential temperature and z is altitude; N and θ are used for dry air, NE and θE for moist air. If N² or NE² < 0, a parcel of air is held to be unstable with respect to ascent in a static environment. Where there is vertical shear of the horizontal wind, s = ∂v/∂z, the Richardson number is Ri = N²/s², and the necessary but insufficient criterion for instability is Ri < 1/4.
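These criteria translate directly into code. A small numpy sketch (the profile values are invented for illustration, not dropsonde data) flags unstable layers from a potential-temperature sounding:

```python
import numpy as np

g = 9.81  # acceleration due to gravity, m s^-2

def brunt_vaisala_sq(theta, z):
    """N^2 = g * d(ln theta)/dz, evaluated on layer midpoints."""
    return g * np.diff(np.log(theta)) / np.diff(z)

def unstable_layers(theta, z, v=None):
    """Flag statically unstable layers (N^2 < 0); if a wind profile v is
    given, also flag dynamically unstable layers (Ri < 1/4)."""
    n2 = brunt_vaisala_sq(theta, z)
    static = n2 < 0
    if v is None:
        return static
    s = np.diff(v) / np.diff(z)   # vertical shear of the horizontal wind
    with np.errstate(divide="ignore"):
        ri = n2 / s ** 2
    return static | (ri < 0.25)

# Toy profile: a shallow unstable layer embedded in a stable column
z = np.array([0.0, 100.0, 200.0, 300.0, 400.0])        # m
theta = np.array([300.0, 301.0, 300.5, 302.0, 303.0])  # K; dip => instability
print(unstable_layers(theta, z))   # [False  True False False]
```

Applied recursively at finer and finer vertical resolution, such a flagging of layers is in essence how the nested ‘Russian doll’ structure is diagnosed from dropsonde profiles.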
Recent analysis showed that the data dropouts in the dropsonde observations are also multifractal, presumably because the telemetry between the sonde transmitting antenna and the receiver on the National Oceanic and Atmospheric Administration (NOAA) Gulfstream 4 aircraft is affected by the turbulent structure of the atmosphere, via the orientation of the sonde (Lovejoy et al, 2009a). The corrections thus necessitated did not invalidate the earlier conclusions about the scaling of the fluctuations, although cascade calculations for horizontal wind were impacted. Here we consider only fluctuations.
The vertical scalings of wind, temperature and humidity have been examined separately, and show differing behaviours, as illustrated for a single descent in Figure 5.
The exponent H is almost but not quite unity for temperature, whereas it has values of about 0.75 for horizontal wind speed and relative humidity. If spectral analysis is used rather than the first-order structure function, values close to unity are avoided by virtue of the allowed range for H expanding from 0 < H < 1 to 0 < H < 2, and values of 1.25, 0.75 and 0.75 are returned respectively for temperature, horizontal wind speed and humidity. Figure 3 shows a summary for 315 dropsondes between 21°N and 60°N, 128°W and 172°W in January to March 2006. It is clear that temperature scales differently; this is a reflection of the importance of gravity, and of the small but highly significant deviations from hydrostatic balance that are embodied in buoyancy accelerations (Lovejoy et al, 2009b; Hovde et al, 2010). Note that all the variables have fluctuations that obey scaling, but that the presence of jet streams between 10 and 12 km results in higher values for wind speed in the upper troposphere, as shown in Figure 3 (Lovejoy et al, 2007, 2009a; Tuck, 2008). These vertical scaling exponents are all greater than 2/3; such values preclude the occurrence of isotropic turbulence, or ‘true diffusion’, on any scale from 5 m upwards, because after the complicating effects of vertical motion have been removed from nominally horizontal aircraft flight legs, the true horizontal scaling of the horizontal wind is expressed by H = 1/3 (Lovejoy et al, 2004, 2009b). We note that the PDFs in the marine boundary layer from 885 dropsondes spread over three winters, 2004–2006, above the Northern Hemisphere eastern Pacific were all highly non-Gaussian, even in the bottom 158 m alone (Hovde et al, 2010). While the lowest, boundary layer, levels showed scaling close to the Bolgiano–Obukhov 3/5 exponent, there was no sign of isotropy or of any other scales arising from proximity to the surface.
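The first-order structure function estimate of H referred to above can be sketched as follows. This is an illustrative estimator (the names are ours, not from the cited analyses), checked here against ordinary Brownian motion, for which H = 1/2:

```python
import numpy as np

def structure_function_H(x, lags):
    """Estimate H from the first-order structure function
    S1(r) = <|x(t + r) - x(t)|>, which scales as r^H for a scaling series;
    H is the slope of log S1 against log r."""
    s1 = np.array([np.abs(x[lag:] - x[:-lag]).mean() for lag in lags])
    slope, _intercept = np.polyfit(np.log(lags), np.log(s1), 1)
    return slope

# Sanity check on a random walk (Brownian motion, H = 1/2)
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(2 ** 17))
lags = 2 ** np.arange(1, 10)
H = structure_function_H(walk, lags)  # should be close to 0.5
```

As noted in the text, this first-order estimator is confined to 0 < H < 1, which is why the near-unity vertical temperature exponent is better resolved spectrally, where 0 < H < 2 is accessible.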
This is not to say that such scales could not arise close to the continental surface, about which we can say nothing. Further, aircraft ascents and descents show scaling closely akin to that of the dropsonde data and very distinct from that of the horizontal observations from the same aircraft, indicating that the sonde's displacement from the vertical by the horizontal wind does not substantively affect the results. The analyses of temperature and humidity from ‘horizontal’ aircraft legs are simpler, and generally close to the 0.56 (5/9) predicted by generalized scale invariance (Tuck, 2008). However, that is not the end of the story for the aircraft data, in that what appear to be significant correlations emerge when the observations are examined on an individual flight-leg basis rather than as a grand average.
We will first summarize the ‘horizontal’ observations of wind speed, temperature and humidity for the NOAA Gulfstream 4 aircraft for the Winter Storms 2004 mission. The three composite variograms for all 16 ‘horizontal’ flight legs are shown in Figure 6; composite variograms are constructed by analysing all points together to obtain a single slope, rather than by averaging the separate individual slopes. The H exponents for horizontal wind speed and temperature are close to the 0.56 predicted by generalized scale invariance, while the humidity has H = 0.45, consistent with the presence of a sink for total water in the upper troposphere, in the form of gravitational settling of ice crystals (Kelly et al, 1991; Yang and Pierrehumbert, 1994; Galewsky et al, 2005; Richard et al, 2006; Schneider et al, 2006; Tuck, 2008; Hovde et al, 2010). The effect of the G4 autopilot and the existence of long-range correlations in the wind field have been considered by Lovejoy et al (2009b). A much larger dataset exists for the nominally horizontal flight legs of the National Aeronautics and Space Administration (NASA) ER-2, which were largely acquired during the polar ozone missions to both Antarctica and the Arctic between 1987 and 2000.
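The distinction between a composite variogram and an average of per-leg slopes can be made concrete. A sketch, assuming each flight leg is a 1-D array of a measured variable at uniform along-track spacing (names illustrative):

```python
import numpy as np

def composite_H(legs, lags):
    """Composite variogram: at each lag, pool the absolute first-order
    fluctuations from all flight legs before averaging, then fit one
    log-log slope; this differs from fitting each leg separately and
    averaging the individual slopes."""
    s1 = [np.concatenate([np.abs(x[lag:] - x[:-lag]) for x in legs]).mean()
          for lag in lags]
    slope, _intercept = np.polyfit(np.log(lags), np.log(s1), 1)
    return slope

# Sanity check: three Brownian 'legs' should return H near 1/2
rng = np.random.default_rng(2)
legs = [np.cumsum(rng.standard_normal(2 ** 15)) for _ in range(3)]
H = composite_H(legs, 2 ** np.arange(1, 9))
```

Pooling the points weights every fluctuation equally across legs, so a single slope characterizes the whole dataset rather than a mean of leg-by-leg fits.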
There are over 140 flight segments from the ER-2 polar ozone missions that meet the quality and length criteria for scaling analysis, resulting in millions of samples at 1 Hz taken with an average true air speed of 200 m s⁻¹; all were of 1800 s or longer duration, taken under cruise climb with autopilot control maintaining a constant Mach number. The detailed analysis of such flight legs involves a number of factors that have been explored by Lovejoy et al (2004, 2009a), Tuck et al (2003b) and Tuck (2008). The values of H for temperature and wind speed are respectively 0.53 ± 0.01 and 0.51 ± 0.01, lending support to the theoretical value from generalized scale invariance of 5/9 (0.56), given that the aeroplane motion is influenced in both the horizontal (Hh = 1/3) and the vertical (Hv = 3/5) by the interaction of the autopilot, the aeroplane inertia and the turbulent structure of the wind and temperature, for which there is insufficient information for a complete a priori solution as a problem in Newtonian physics. We note that the large variability at the small-scale end of the log-log variograms in Figure 7 may be attributed to inherently greater variability within the larger number of small samples; at the large scales, the larger variability is attributable to the small number of large samples. Whether one takes a fit to the entire data range, or selects the middle ranges where the variation is less, the resulting values of H are very similar (Tuck et al, 2003b; Lovejoy et al, 2004). We may visualize the fluctuations increasing in amplitude as the spatial scale decreases, down to a sample volume so small that it would either contain a single molecule or it would not.
The ER-2 also carried a suite of chemical instruments on the polar ozone missions; see for example Tuck et al (1997) and Newman et al (2002). While the ozone instrument produced data of sufficient quality for generalized scale invariance analysis on all missions, a subset of the other instruments did so only on the later missions (Tuck et al, 2002). An important point may be made by comparing the scaling exponents H for ozone, a chemically active species, and nitrous oxide, a known passive scalar (tracer) in the flight domain of the lower stratosphere (Loewenstein et al, 1989; Proffitt et al, 1989, 2003). Inspection of Figure 8 reveals that the passive scalar has H = 0.56 ± 0.01, in excellent agreement with generalized scale invariance theory. Ozone, on the other hand, has H = 0.49 ± 0.01, a value that, as we shall see in a later section, appears to be an empirical indication of the presence of a sink process. Similar results apply to total water, where sinks are known from ER-2, DC-8 and WB57F observations to exist via gravitational settling of ice crystals (Kelly et al, 1991; Richard et al, 2006). There is also large-scale evidence from meteorological analyses and modelling (Couhert et al, 2010).
We now illustrate the behaviour of the observations and the H scaling for individual flight legs. Figure 9(a), (b) and (c) show the traces of wind speed, temperature and the shear vectors for the longest flight leg available for the aircraft data: 7100 km on 19890220 (20 February 1989), or just over one Earth radius. The instrumental noise contribution is trivial; the fluctuations are atmospheric. The corresponding scaling for wind speed is shown in Figure 10(a), along with that for another individual flight leg in Figure 10(b), which was taken on 19941005, when one of the strongest jet streams was encountered. There is a substantial difference between the H exponents for the 19890220 flight, which was along the axis of the Arctic lower stratospheric polar-night jet stream (a headwind), and the 19941005 flight, which was across the Antarctic lower stratospheric polar-night jet stream. The point here is that the mean of all flights shown in Figure 7 includes correlations with jet-stream direction and strength (Tuck et al, 2004; Tuck, 2008), to which we will return in the next section. The presence of long-range correlations along a flight track between horizontal and vertical meteorological variables seen at constant pressure for the Gulfstream 4 (Lovejoy et al, 2009b) may also exist for the ER-2, but has not been examined because the latter flies at constant Mach number, meaning that the atmospheric temperature structure enters the autopilot's algorithm.
The H exponent is more robust to data dropouts than is either the intermittency C1 or the Lévy exponent α. The rather more stringent selection criteria needed to evaluate C1 and α result in a smaller subset of missions and flight legs that can be analysed. A priori evaluations of all three exponents have been made from the aircraft data in Tuck et al (2002, 2004), and from the formulae of Schertzer and Lovejoy (1991) and of Lovejoy and Schertzer (2007) for the dropsondes described in Lovejoy et al (2007). The mean values for horizontal wind speed, temperature and relative humidity are H = 0.55, C1 = 0.05 and α = 1.6. For ozone, the C1 and α are variable, depending upon whether the molecule is in an active photochemical regime or, more rarely in the lower stratosphere, whether it is acting as a passive scalar. The implication of the mean values for all the variables analysed is that the atmosphere has ubiquitous, ever-present turbulent structure describable as arising from a non-Gaussian, random, Lévy-stable process, associated with power-law behaviour, long-tailed PDFs and long-range correlations caused by positive feedbacks. The means converge, but the variances do not.
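The statement that the means converge while the variances do not is a defining property of Lévy-stable statistics with α < 2. A minimal numerical illustration, using the standard Chambers–Mallows–Stuck sampler for a symmetric α-stable law (this is a generic statistical demonstration, not a simulation of any atmospheric quantity):

```python
import numpy as np

def symmetric_levy_stable(alpha, n, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates
    (valid for alpha != 1)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, n)
    w = rng.exponential(1.0, n)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(1)
x = symmetric_levy_stable(1.6, 200_000, rng)  # alpha = 1.6, as observed

# Finite mean (alpha > 1): running sample means settle near zero ...
running_means = [x[:n].mean() for n in (1_000, 10_000, 200_000)]
# ... but the variance is infinite: sample variances are dominated by the
# long-tailed extremes and do not converge as the sample grows.
running_vars = [x[:n].var() for n in (1_000, 10_000, 200_000)]
```

The heavy power-law tails that defeat the variance estimate are the same feature that produces the long-tailed PDFs and long-range correlations described in the text.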
7. Correlations in the scaling exponents
There are some significant correlations in the scaling exponents that emerge when the grand averages are examined on a flight-by-flight basis. There are four cases that show such correlations: (1) that of the H for horizontal wind speed and of temperature with measures of jet-stream strength, taken during ‘horizontal’ ER-2 flight legs; (2) that of the H for vertical scaling of the horizontal wind with measures of jet-stream strength taken during GPS dropsonde descents; (3) the correlation of the intermittency C1 of temperature with the observed ozone photodissociation rate during ‘horizontal’ ER-2 flight legs in the Arctic; and (4) correlations between the exponents H and α for ozone, as chemistry within the lower stratospheric polar vortex evolved during late winter and spring.
The correlations of H for horizontal wind and of H for temperature with measures of the horizontal wind shear and the horizontal (meridional) temperature gradient respectively are shown in Figure 11(a) and (b). The range is a factor of two in the exponent and exceeds what can be accounted for by corrections arising from the complexities of the effects of the turbulent atmospheric structure upon the motion of the aircraft (Tuck et al, 2003b; Lovejoy et al, 2004, 2009b). In any case, no such complexities are expected or evident in the case of the correlation with the temperature gradient, Figure 11(b).
The correlation of H for horizontal wind speed with measures of jet-stream strength is also evident in the vertical scaling of the same variable; see Figure 12. Note, however, that the corresponding correlation with the meridional temperature gradient seen in the horizontal is not present, as might be expected from the unique vertical scaling of temperature discussed earlier.
The correlations between the exponent characterizing the generalized scale invariance of the horizontal wind speed and traditional measures of jet-stream strength suggest that mechanisms exist and are operative for the transfer of kinetic energy from the smallest scales, at which it is input from solar photons and expressed as ‘ring currents’, to the largest scales, which are shaped by the planetary rotation and boundary conditions.
Perhaps the most unexpected result was the discovery of a correlation between the observed intermittency C1 of temperature along a flight leg and the averaged value of the observed rate of ozone photodissociation for that flight leg (Tuck et al, 2005; Tuck, 2008). The data were obtained during the two most recent polar ozone ER-2 missions, POLARIS during the Arctic summer of 1997 and SOLVE during the Arctic late winter and early spring of 2000 (Newman et al, 1999, 2002). The value of J[O3], the ozone photodissociation rate, varies over a range of a factor of four against a range of a factor of three in the intermittency of temperature, C1(T); see Figure 13. It signals a correlation between the rate of production of translationally hot (fast-moving) O and O2 photofragments and the sparseness (‘spikiness’, colloquially) of the macroscopic temperature. The implications for the physical view of, and the fundamental definition of, atmospheric temperature will be discussed later, in section 8. The correlation extends very nearly to zero for the photodissociation rate, a result only obtainable at latitudes and times when the aircraft could fly at the same speed as the surface was rotating below it. That, and the need to average the temperature over a flight leg, prevented ‘ideal’ experiments from being done. The cost was of order $10⁵ to $10⁶ per flight, and such flights are unlikely to be repeated any time soon. The summer data cover a period of 136 days, and so do not deal with a single air mass. Nevertheless, the instruments were calibrated before and after each flight, and checked by in-flight manoeuvres, and the reproducibility of the scaling exponents over many missions between 90°N and 72°S in several longitude sectors supports the correlation as real. Better observations are obviously desirable.
Finally, correlations emerged between the exponents H and α for ozone, in both the Antarctic vortex of 1987 and the Arctic vortex of 2000. The intermittency of ozone was correlated with neither of the other exponents, in either vortex. The correlation between H and α for ozone was more developed in the Antarctic, as might be expected from the fact that ozone destruction can proceed to completion there, a state not yet observed in the Arctic (Solomon, 1999; Richard et al, 2001). Figures 14 and 15 display the scaling of the observations in the three planes: (C1, H1), (α, H1) and (α, C1). Interpretation of these scalings will be discussed in section 9, but it should be noted here that there were no concomitant evolutions of the scaling exponents for wind speed and temperature. It is therefore likely to be an expression of chemistry.
8. Some considerations for numerical modelling
There was one flight of the WB57F for which a comparison was made with a local area forecast model, the MM5, at the highest spatial resolution then available, initialized with assimilated global data from the European Centre for Medium-Range Weather Forecasts (ECMWF) model (Tuck, 2008). The flight, on a great circle track at about 18 km altitude, was from (30°N, 95°W) to (46°N, 108°W), and encountered severe turbulence above the Rocky Mountains in Wyoming, on 19980411. A comparison of the H scaling exponents for wind speed and temperature measured from the aircraft with those from the model reveals that the exponents along the flight track were considerably different from those of the observations; see Figure 16. It is encouraging that the model data did exhibit scaling, albeit over a more limited spatial range than the observations. The comparison suggests a new way of testing models against observations, in a way that incorporates the energy and mass transfers on all scales, which in this case seem to be inadequate quantitatively. Stolle et al (2009) have recently performed extensive studies of the horizontal scaling in ECMWF reanalyses.
Because of the correlation of the H exponents with jet-stream strength, and the correlation of temperature intermittency with ozone photodissociation rate, questions arise about the representation of temperature and wind in numerical models. As used meteorologically, all molecular information in models is contained in the gas constant, R, with the implicit assumptions of local thermodynamic equilibrium, Maxwellian molecular speed distributions and the condition that fluid velocities in the Navier–Stokes equation are much less than the average or most probable molecular speeds (which are closely related to the speed of sound). The first two of these assumptions are questionable, based on the results in section 6. The third may be questioned on observational grounds: observed wind speeds in the subtropical jet stream are as high as 130 m s⁻¹, or one third of the average speed of air molecules at 200 K. The problem is even more severe in the upper stratosphere, as may be seen from Figure 17, which shows the analysed wind field at 2 hPa in austral midwinter. The maximum wind speeds exceed 180 m s⁻¹, well over half the speed of sound. Indeed, given this and the high heating rates there from ozone absorption of solar photons and its link to temperature intermittency, perhaps the whole basic approach needs re-examination in the middle and upper stratosphere. The systematic cold bias in global models (IPCC, 2001) may be related to these issues.
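The quoted ratios can be checked directly from kinetic theory; a sketch using approximate constants for dry air (the constants and function names are ours):

```python
import math

R = 8.314        # J mol^-1 K^-1, universal gas constant
M_AIR = 0.02896  # kg mol^-1, mean molar mass of dry air
GAMMA = 1.4      # ratio of specific heats for (diatomic-dominated) air

def mean_molecular_speed(T):
    """Mean speed of a Maxwell-Boltzmann population: sqrt(8RT / (pi M))."""
    return math.sqrt(8.0 * R * T / (math.pi * M_AIR))

def speed_of_sound(T):
    """Ideal-gas sound speed: sqrt(gamma R T / M)."""
    return math.sqrt(GAMMA * R * T / M_AIR)

# At 200 K the mean molecular speed is about 380 m/s and the sound speed
# about 280 m/s, so a 130 m/s jet is roughly one third of the mean
# molecular speed, and a 180 m/s wind is well over half the speed of sound.
ratio_jet = 130.0 / mean_molecular_speed(200.0)
ratio_sound = 180.0 / speed_of_sound(200.0)
```

The arithmetic confirms that the condition of fluid velocity being much less than the characteristic molecular speeds is marginal in strong jet streams.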
We note that atmospheric models usually make mean-field approximations at several points, and often assume Gaussian statistics, particularly for error variances. Neither of these assumptions is well supported by observations that, as here, yield long-tailed PDFs. Indeed, our values of the Lévy exponent, α, imply that variances do not converge.
We may also revisit the debate about whether the stratospheric winter polar vortex was a ‘containment vessel’ (Juckes and McIntyre, 1987; Schoeberl et al, 1992) or a ‘flow reactor’ (Proffitt et al, 1989a, 1989b; Tuck, 1989; Tuck et al, 1992; Waugh et al, 1997). The fact that the ER-2 flights across and along the lower stratospheric polar-night jet stream show scale invariance indicates that the so-called ‘barrier’ deduced from contour maps of potential vorticity is in fact more like a sieve, being porous on all scales. This is illustrated in Figures 18 and 19, where original 1987 potential vorticity (PV) maps at T63 resolution from the UK Meteorological Office are compared with the T799 equivalents from ECMWF for the same 19870922 date available in 2007. The fine structure in the latter maps is more compatible with the scale-invariant structure observed from the ER-2 in Figures 20 and 21. Similar considerations apply to the contour-based analyses in the Arctic (Tuck et al, 1992, 2004); PV contours create a false impression of impermeability, denied by the observed shear vectors and the generalized scale invariance of wind speed. Maximization of entropy production via dissipation is the root cause, expressed as turbulent vorticity generation from molecular scales up, and it cannot be escaped on any scale.
There have been attempts to examine the scaling of model fields, both on a case-study basis as described above and more generally. Lovejoy et al (2009c) and Stolle et al (2009) concluded that weather could be modelled as a cascade process, pointing out that whereas horizontal scaling could be investigated for models with regularly spaced grids, the general absence of such regularity with altitude in atmospheric models used for assimilation and forecasting hampered analysis of their vertical scaling. Here, ‘cascade’ is used in the sense that the same processes are at work scale by scale across the whole scaling range.
9. Discussion and summary
The atmosphere is far from equilibrium, as evidenced by its motion and by the presence of solar photons; see Vaida (2009) for a review of the photochemical consequences of the latter. Prodded by the correlation of the observed intermittency of temperature with the ozone photodissociation rate, we appeal to molecular dynamics calculations showing that fluid flow, characterized by an overpopulation of high-speed molecules relative to a Maxwell–Boltzmann probability distribution, emerges from a randomized population of Maxwellian molecules subject to anisotropy; this result has some important consequences. With no isotropic diffusion on any scale, the concept of local thermodynamic equilibrium is brought into question. Accompanying this conclusion are fundamental questions concerning atmospheric radiative transfer, particularly in the wings of individual rovibrational lines of water vapour and carbon dioxide, where the most violent collisions involving the highest-speed molecules are effective. A chemical consequence is to raise questions about the applicability of the law of mass action in calculating the rate of chemical reactions in atmospheric models. Concrete examples of this exist in both the stratosphere (Tuck et al, 2003a) and the troposphere (Parrish et al, 2007). The transport of water vapour molecules to the surfaces of liquid droplets and ice crystals will also need re-examination. There is an empirical indication, from the H exponent for the abundance of a chemical species, of whether that species has a sink, is a passive tracer or has a source: the respective cases are H less than, equal to or greater than the passive scalar value of 0.56, a result without explanation (Tuck, 2008). This behaviour may be linked to the maximization of entropy production.
One would expect maximization of dissipation, and hence of entropy production, by chemical reactions that tend to produce thermochemically stable products (for example, O2, N2, HNO3, CO2, H2O, HCl), while reactive species produced by low-entropy solar photons, such as O3 and ClO, will behave as sources so long as there is sufficient photochemical activity for their production rates to exceed their loss rates.
The maximization of entropy production, evidenced by scale invariance, implies irreversibility. Such irreversibility will have implications for climate change and for geo-engineering projects: the more rapidly such projects are performed, the further from reversibility the associated processes will be, with a consequently large entropy budget. For example, the entropy cost of removing greenhouse gases from the atmosphere after they have become part of the air will be very large. Minimization of Gibbs free energy implies an enthalpy change approaching the product of temperature and entropy change, and so a high input energy cost.
The molecular definition of temperature is bound up with the existence of Maxwell–Boltzmann speed distributions and isotropic diffusion. Their non-existence in air raises the question of how large such effects could be in the context of atmospheric temperature. The heating rates associated with the correlation between temperature intermittency and ozone photodissociation rate were tenths of a kelvin per hour, at 65°N latitude, near the time of equinox. Considering this, and the thermal inertia of thermometers, effects of order 10¹ K can be ruled out, but effects of order 10⁻¹ to 10⁰ K cannot. Such effects will be altitude dependent; we have seen that serious problems with modelled temperature throughout the atmosphere have been diagnosed (IPCC, 2001), but especially in the middle and upper stratosphere. Thus although the existence of calibrated thermometric records of atmospheric temperature shows clearly that observed temperatures have increased over the last century, the interpretation of the meaning of air temperature may involve deeper considerations. For example, can we be certain that the rotational energy levels of molecular nitrogen and molecular oxygen have Boltzmann populations?
The generation of vorticity by the Alder–Wainwright mechanism has the potential to provide the upscale energy transfer process by which jet streams are maintained, given the high fractions of the average molecular speed reached by the westerly winds in upper tropospheric and stratospheric polar-night jets. This is a subject of current interest (Baldwin et al, 2007).
Two-dimensional turbulence is associated with a conservative enstrophy cascade; we note that it cannot occur, because the scale invariance of the fluctuations in the number densities of absorbers and emitters of radiation, the source of the atmosphere's energy, means that energy is deposited variably in the air on all scales, eliminating the possibility of conservative energy cascades downscale from the largest. Since the Alder–Wainwright mechanism ensures that the radiative energy is converted to vorticity, half the square of which is the enstrophy, there can be no conservative enstrophy cascade. Note that the word ‘cascade’ may have meanings other than that of a conservative flux either downscale or upscale; it can mean simply that the same set of processes, including diabatic ones, are at work scale by scale over a given scaling range.
As regards the quality of observations, while these have clearly improved greatly in recent decades, there is still great scope for enhancement. The airborne data could be enhanced by long-endurance drone aircraft, see for example MacDonald (2005), particularly if the autopilot responses are recorded along with the aircraft motion, allowing solution of the problem by Newtonian calculation. As regards the acquisition of truly vertical profiles, there is no current technology available. One might hope for active laser sounding to be useful, and perhaps even speculate about active, profiling versions of Richardson's 1920s experiment, in which ball bearings were fired vertically upwards and their horizontal displacement on landing was taken as a vertical integral of the wind. Whatever the coordinate, the effects of the platform on the observations will remain a problem.
Finally, we attempt to answer some questions, posed at the end of section 1, that have frequently occurred during the last decade. The first is ‘What can modellers do about vorticity generation at molecular scales?’ I suggest collaboration with theoretical chemists and statistical physicists to perform molecular dynamics calculations of sample air parcels, now possible for populations of order 10¹¹ molecules. Penland and Ewald (2008) is an example, although not explicitly molecularly based. If scale invariance can be simulated, there will be a way forward for parametrization by upscaling, guided by the scaling of observations reported above. The second question, ‘Where is the physics in fractal formulations?’, can be answered by considering a view from non-equilibrium statistical mechanics. The observed scale invariance can be explained on the basis of the need to maximize entropy production; scale selectivity would correspond to a failure to maximize dissipation, and hence is not observed. Irreversibility is implied; any chance of reversibility is disappearing into space at the speed of light, as the flux of infrared photons embodying the entropy maximization in the atmosphere, which permits organized atmospheric flow, propagates outward over the whole 4π solid angle, a high-entropy state compared to the relatively low-entropy beam of incoming solar radiation. The organization observed in atmospheric flow is a remanent consequence arising from the anisotropies in the disposition of the air on the planet: gravity, rotation, the solar beam and the scale-invariant planetary surface, both marine and continental, acting on a molecular population that strives ceaselessly for randomization. Jet-stream maintenance is a prime candidate for this interpretation. Another possible example is the problem posed by Hoskins (1991), asking why, following cyclogenesis, the isentropes had a tighter gradient than they had had prior to the development.
If the traditional view of dissipation had obtained, the isentropes should have become more uniform (slacker gradients) rather than more organized (tighter gradients). In the view expressed here, the interpretation would be that organization has emerged as an expression of anisotropy, after dissipation and hence entropy production had been maximized (Paltridge, 2001; Dewar, 2003, 2005). The organization expresses the boundary conditions. On this view, turbulence is an emergent property, acting via vorticity generation (‘ring currents’) in molecular populations in anisotropic environments, linking microscopic and macroscopic scales. Statistical multifractality is a result of these physical processes at work interactively.
The many people whose efforts underpinned the observations have been acknowledged elsewhere (Tuck, 2008). However, the contributions of Susan Hovde, Shaun Lovejoy and Daniel Schertzer constitute a sine qua non. The PV maps in Figure 19 were supplied by Adrian Simmons. Two anonymous reviews resulted in significant clarifications to the paper.