Beyond deadlock



[1] Today's atmospheric global circulation models can represent the effects of clouds through “conventional” parameterizations on coarse grids, through the use of global high-resolution grids, or through the use of embedded cloud-resolving models as superparameterizations in a lower resolution global model. Recent work on conventional parameterizations has been aimed at improving the representation of entrainment, including nondeterministic effects, and achieving resolution independence. Global high-resolution grids have been very useful for studying the interaction of clouds with the global circulation out to time scales of about one simulated year; longer simulations are not yet feasible. Superparameterizations have already been used in simulations longer than a century and have succeeded in simulating the Madden-Julian Oscillation, the diurnal cycle of precipitation, and other phenomena that have presented challenges for conventionally parameterized models.

1 Introduction

[2] Ever since their origins in the 1960s, large-scale atmospheric models, including global circulation models (GCMs), have used parameterizations to represent the wide range of cloud processes that occur on scales close to or smaller than the horizontal grid spacing. Because of the many processes and wide range of scales involved, cloud parameterization has proven to be a long-term challenge. Randall et al. [2003a] wrote that “The cloud parameterization problem is ‘deadlocked,’ in the sense that our rate of progress is unacceptably slow.” That was 10 years ago. Has the situation improved? This paper offers a perspective on progress, over the past decade, in the representation of cloud processes in GCMs.

2 Conventional Parameterizations

[3] There are now three complementary approaches to representing cloud processes in global atmospheric models. The most thoroughly established of these is conventional parameterization, which is applicable to GCMs with grid spacings on the order of 50 km or larger. GCMs that use conventional parameterizations can be called conventional GCMs. Conventional parameterizations were first developed in the 1960s [Smagorinsky, 1960; Manabe et al., 1965; Kuo, 1965; Arakawa, 1969] and are intended to represent the collective effects of a system of convective and stratiform clouds that coexist in a large grid column. To maximize simplicity and computational speed, conventional parameterizations are typically based in part on highly idealized cloud models, such as entraining plume cumulus clouds [e.g., Arakawa and Schubert, 1974] and well-mixed stratocumulus layers capped by discontinuous inversions [e.g., Lilly, 1968]. The review by Arakawa [2004] summarizes work on conventional cumulus parameterizations up to about 10 years ago. Here we briefly discuss two current issues: entrainment and resolution dependence.

[4] Over the last 10 years, attempts have been made to improve the representation of entrainment into cumulus updrafts. High-resolution, large-domain simulations of deep convection show that undilute “hot towers” are extremely rare [Kuang and Bretherton, 2006; Khairoutdinov and Randall, 2006; Romps and Kuang, 2010; Khairoutdinov et al., 2010; Fierro et al., 2012]. In addition, there is evidence [e.g., Tokioka et al., 1988; Derbyshire et al., 2004; Bechtold et al., 2008; Chikira and Sugiyama, 2010; Hannah and Maloney, 2011; Crueger et al., 2013] that stronger parameterized entrainment can lead to better simulations of the Madden-Julian Oscillation (MJO) because it increases the sensitivity of the convection to midtropospheric water vapor. Further discussion is given in section 5. The parameterization of entrainment is a challenging problem, especially with highly idealized cloud models, because entrainment is a turbulent process that involves scales smaller than a cloud [e.g., Randall and Huffman, 1982] and because the rate of entrainment is very difficult to observe or even to diagnose from high-resolution simulations [e.g., Romps, 2010].

[5] The statistical properties of a turbulent, chaotic cloud system are not fully deterministic [e.g., Hohenegger et al., 2006]. Recent work on “stochastic” parameterizations of deep cumulus convection is aimed at including such nondeterministic effects [e.g., Buizza, 1997; Lin and Neelin, 2003; Shutts and Palmer, 2007; Plant and Craig, 2008; Peters et al., 2013]. The results are intriguing, but further research is needed to establish to what extent stochastic parameterizations are needed for successful simulations of the global circulation of the atmosphere and climate. The answer will depend on the resolution of the model.

[6] Some of today's global weather prediction models use horizontal grid spacings on the order of 20 km, but grid spacings closer to 100 km are still the norm in global climate models. Some current research is aimed at developing a cumulus parameterization that is resolution independent, i.e., that can be used without change in models with a wide range of grid spacings. Arakawa and Wu [2013] take the view that a resolution-independent parameterization should generate statistics with coarse resolution and individual clouds with sufficiently fine resolution. Guided by analysis of simulations with a high-resolution numerical model, they studied the dependence of upward convective energy fluxes on the areal fraction of a grid cell occupied by convective updrafts, hereafter called the updraft fraction. With grid cells 100 km wide, the updraft fraction is guaranteed to be small, and in fact its smallness is an important simplifying assumption that is used in virtually all existing cumulus parameterizations. This is problematic for high-resolution models because the updraft fraction can be of order 1 with grid columns 10 km wide or smaller. Arakawa and Wu proposed a simple way to determine the updraft fraction (their equation (14)). The input needed is available from a conventional cumulus parameterization; in fact, their “Unified Parameterization” can be implemented by making modest changes to a conventional parameterization. Work is under way to test the Unified Parameterization in a GCM.
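The resolution dependence that Arakawa and Wu address can be illustrated with the standard top-hat (two-value) decomposition of a grid cell into updraft and environment. The sketch below is not their equation (14); it is only the textbook identity their argument rests on, with invented numerical values: the SGS eddy flux carries a factor of σ(1 − σ), so assuming σ ≪ 1 is harmless at 100 km grid spacing but fails as the updraft fills a fine grid cell.

```python
# Top-hat decomposition of one grid cell: an updraft occupying area
# fraction sigma with values (w_u, h_u), and an environment with
# values (w_e, h_e). All numbers below are illustrative only.

def sgs_flux(sigma, w_u, w_e, h_u, h_e):
    """Grid-cell mean SGS eddy flux w'h' for a two-value (top-hat) field.
    Algebraically this equals sigma*(1-sigma)*(w_u - w_e)*(h_u - h_e)."""
    w_bar = sigma * w_u + (1.0 - sigma) * w_e   # grid-cell mean w
    h_bar = sigma * h_u + (1.0 - sigma) * h_e   # grid-cell mean h
    # Area-weighted average of the products of departures from the means:
    return (sigma * (w_u - w_bar) * (h_u - h_bar)
            + (1.0 - sigma) * (w_e - w_bar) * (h_e - h_bar))

# The sigma*(1-sigma) factor makes the SGS flux vanish as sigma -> 1,
# i.e., as the grid becomes fine enough for the updraft to fill the cell.
for sigma in (0.01, 0.5, 0.99):
    print(sigma, sgs_flux(sigma, w_u=5.0, w_e=-0.05, h_u=340e3, h_e=335e3))
```

A conventional parameterization effectively keeps only the σ → 0 limit of this expression; a resolution-independent scheme must retain the full σ(1 − σ) behavior.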

[7] It has been suggested [e.g., Bogenschutz et al., 2013] that a second approach to resolution-independent parameterizations can be based on the class of methods called “higher-order closure” (HOC), which was introduced to the atmospheric science community through the work of Donaldson [1973] and Mellor and Yamada [1974]. HOC uses the equations that govern selected “moments” of the predicted variables. The first moments are simply the grid-cell-averaged values of the primary variables, which might include the liquid water potential temperature, θl, total water mixing ratio, qtot, and the three velocity components u, v, and w. These first moments are directly predicted by the host model. Second moments include subgrid-scale (SGS) variances and fluxes, e.g., the grid-cell averages of θl′² and w′θl′. Here a prime denotes a departure from a grid-cell average. The third moments include SGS fluxes of second moments, such as the grid-cell average of w′θl′².
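The moment hierarchy is, at bottom, just averaging of departures from a grid-cell mean. A minimal sketch, with synthetic fine-scale samples standing in for the SGS field inside one grid cell (all values and the correlation between w and θl are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096                                  # fine-scale samples in one grid cell
w = rng.normal(0.0, 1.0, n)               # vertical velocity [m/s]
theta_l = 300.0 + 0.5 * w + rng.normal(0.0, 0.2, n)  # correlated with w

# First moments: the grid-cell averages (directly predicted by the host model).
w_bar, th_bar = w.mean(), theta_l.mean()

# Primes: departures from the grid-cell average.
wp, thp = w - w_bar, theta_l - th_bar

# Second moments: SGS variance and SGS vertical flux.
var_th = np.mean(thp**2)                  # variance of theta_l
flux_wth = np.mean(wp * thp)              # SGS vertical flux of theta_l

# Third moment: SGS vertical flux of a second moment.
flux_w_thvar = np.mean(wp * thp**2)
print(flux_wth)
```

A HOC scheme predicts (or diagnoses) these averaged quantities directly, without ever carrying the fine-scale samples; that is why the same equations apply whether the "grid cell" is 100 km or 100 m wide.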

[8] The averages used to define the moments can be interpreted as grid-cell means. The grid cells can be large, as in a conventional GCM, or small, as in a large-eddy simulation. To this extent, the equations of HOC are resolution independent. Larson et al. [2012] and Bogenschutz and Krueger [2013] showed that the use of HOC can make the results of a cloud-resolving model (CRM) less sensitive to resolution.

[9] As their name suggests, HOC models make use of closure assumptions. Broadly speaking, four kinds of closures are needed:

  1. [10] Closures for the effects of higher moments that are not predicted, e.g., as mentioned above, the fourth moments in a third-order closure model.

  2. [11] Closures for moments involving the pressure, which occur in the equations for moments that involve velocity components.

  3. [12] Closures for dissipation rates, which are especially important in the equations governing variances.

  4. [13] Closures to determine SGS phase changes [e.g., Sommeria and Deardorff, 1977; Mellor, 1977] and other microphysical processes [e.g., Larson et al., 2005], as well as radiative heating and cooling.
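Two of the closure types above can be caricatured in a few lines. The sketch below is not taken from any published HOC scheme; the downgradient form, constants, and length scale are assumptions chosen only to show what "closing" an unpredicted term means in practice.

```python
import numpy as np

def downgradient_third_moment(thp2, dz, K=10.0):
    """Closure type 1 (unpredicted higher moment): model the third moment
    w'th'^2 as downgradient diffusion of the second moment th'^2.
    K is a hypothetical eddy diffusivity [m^2/s]."""
    d_thp2_dz = np.gradient(thp2, dz)     # vertical gradient of the variance
    return -K * d_thp2_dz

def dissipation_rate(tke, length_scale=100.0, c_eps=0.7):
    """Closure type 3 (dissipation): Kolmogorov-style scaling
    eps ~ c_eps * e^(3/2) / l, with an assumed turbulence length scale."""
    return c_eps * tke**1.5 / length_scale

# A column where the theta_l variance decays with height:
z = np.linspace(0.0, 1000.0, 11)
thp2 = 0.5 * np.exp(-z / 300.0)
flux3 = downgradient_third_moment(thp2, dz=100.0)
print(flux3[0], dissipation_rate(0.5))
```

The point of the sketch is the structure, not the constants: each unclosed term is replaced by an algebraic function of the moments the model does carry, and the quality of those assumed functions is what limits the scheme.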

[14] Although HOC has most often been used to parameterize turbulence, it can also be used diagnostically to analyze a convective system. For example, Khairoutdinov and Randall [2002] used horizontal domain averages to diagnose the vertical structures of the terms of selected HOC equations in an analysis of a high-resolution numerical simulation of deep convection.

[15] The earliest test of a HOC-based parameterization in a GCM was reported by Miyakoda and Sirutis [1977], who used it to parameterize turbulence in a conventional GCM. A more modern version of HOC, which makes use of assumed joint probability density functions for temperature, specific humidity, and vertical velocity [Randall et al., 1992; Lappen and Randall, 2001], has recently been tested by Bogenschutz et al. [2012] and Bogenschutz and Krueger [2013]. In these studies, HOC was used to represent only the effects of turbulence and small cumulus clouds. Further discussion is given in section 4.

[16] “Eddies” can be used as a generic term for the departures from the grid-cell means. A generic term is needed because of the very wide range of SGS processes, including turbulence, deep and shallow cumulus convection, gravity waves, and often all of those in combination. It is possible to imagine a future model in which HOC is used to represent cumuli of all sizes, turbulence, and gravity wave momentum transport in a resolution-independent framework. Reality intrudes, however. It is very unlikely that today's closures, which have been designed for simulations of boundary layer turbulence, will also work well for deep convection or momentum transport by gravity waves. For example, dissipation in the boundary layer (for which a closure is needed) occurs mainly in the surface layer and near the inversion, whereas dissipation in deep convection occurs mainly inside cumulus towers, and dissipation is practically nonexistent in (nonbreaking) gravity waves.

3 Global Cloud-Resolving Models

[17] A second approach to representing cloud processes in global atmospheric models makes use of globally uniform horizontal grids fine enough to resolve (or at least “permit”) large convective clouds and mesoscale convective structures. Such models must abandon the quasi-static approximation that has been used in all lower resolution GCMs. The first such global cloud-resolving model (GCRM) was the Nonhydrostatic Icosahedral Atmospheric Model (NICAM), which was developed for use with the Earth Simulator [e.g., Tomita and Satoh, 2004; Tomita et al., 2005; Miura et al., 2007] and has been run with a horizontal grid spacing of 3.5 km.

[18] Like coarser-resolution GCMs, GCRMs must parameterize the effects of microphysics, radiation, and turbulence, including small clouds. One of the important strengths of GCRMs is that they can explicitly simulate the spatially detailed input that is needed by such parameterizations. In contrast, conventional GCMs use parameterizations to generate the input to other parameterizations. For example, the radiation and microphysics parameterizations use input that is based in part on parameterizations of fractional cloudiness. As a second example, a microphysics parameterization needs information about the cloud-scale vertical velocity, which is sometimes diagnosed using an entraining plume model [e.g., Del Genio et al., 2007; Park and Bretherton, 2009; Chikira and Sugiyama, 2010; Donner et al., 2011].

[19] Particular topics that have been explored using NICAM include the MJO [e.g., Miura et al., 2009], tropical cyclones [e.g., Emanuel et al., 2010], cloud and precipitation climatology [e.g., Satoh et al., 2008; Inoue et al., 2010], including the diurnal cycle of precipitation [Sato et al., 2007], and cloud feedbacks on climate change [Satoh et al., 2012]. NICAM is now being tested in coupled simulations; preliminary results with a slab ocean were reported by Nasuno [2013].

[20] Recent development of NICAM has included a more complete parameterization of microphysics [Tomita, 2008] and an enhanced turbulence parameterization based on HOC [Noda et al., 2009].

[21] Although GCRMs are computationally expensive, they are now being used in simulations of up to about a simulated year in length—long enough to study many interesting aspects of the global atmospheric circulation. Within 10 years or so, continuing increases in computer power may make it possible to use GCRMs in simulations of climate change. Alternatively, increasing computer power can be used to further refine a GCRM's grid spacing in shorter simulations. For example, there are plans to use NICAM in global simulations with ~400 m horizontal grid spacing.

4 Superparameterizations

[22] Superparameterization [Randall et al., 2003a, 2003b] is an approach to global atmospheric modeling that lies midway between conventional parameterizations and GCRMs. A superparameterization is based on a simplified CRM with a two-dimensional domain and periodic lateral boundary conditions. A “copy” of the CRM is embedded in each grid column of a conventional GCM. The combination of a GCM with a superparameterization can be called a Multiscale Modeling Framework (MMF).

[23] In a superparameterization, the horizontal grid spacing of the embedded CRM is typically 4 km, similar to that of a GCRM. The CRM is therefore not truly cloud resolving, but experience shows that such a model can crudely but explicitly simulate the formation of large convective clouds, including some aspects of their mesoscale organization [Pritchard et al., 2011]. Parameterizations of microphysics, turbulence, and radiation operate on the CRM's grid and take the place of the GCM's parameterizations of the same processes. In effect, the CRM with its parameterizations replaces all of the parameterizations of a conventional GCM. Although the two-dimensionality and periodic lateral boundary conditions of the CRM are unrealistic simplifying assumptions, the CRM does solve the equation of motion and thus ties back to the basic physics more closely than a toy cumulus cloud model such as an entraining plume. The turbulence parameterization of the CRM produces entrainment into the simulated updrafts and downdrafts. The intensity of the parameterized turbulence is influenced by horizontal and vertical shears, as well as static instability, on the grid scale of the CRM. In addition, the CRM's parameterized turbulent flux of moisture is sensitive to spatial variations of moisture on the CRM's relatively fine grid; in contrast, conventional parameterizations typically assume that the environment between the convective updrafts is horizontally uniform.
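The forcing/feedback coupling between the GCM and its embedded CRMs can be caricatured in a few lines. The sketch below is a toy stand-in (the array shapes, the relaxation "convection" operator, and all constants are invented; a real MMF advances full CRM dynamics), showing only the data flow: each GCM column forces its CRM with the large-scale tendency, and the CRM returns its domain-mean thermodynamic tendency as feedback.

```python
import numpy as np

NLEV, NCRM = 5, 8          # GCM levels; CRM columns per GCM grid column

def crm_step(crm_state, large_scale_forcing, dt):
    """Advance one embedded CRM (levels x CRM columns), forced by the GCM's
    large-scale tendency; return the new state and its domain-mean feedback."""
    before = crm_state.mean(axis=1)
    # Apply the GCM's large-scale forcing uniformly across the CRM domain:
    crm_state = crm_state + dt * large_scale_forcing[:, None]
    # Stand-in for explicitly simulated convection: mix the CRM columns.
    crm_state += 0.1 * (crm_state.mean(axis=1, keepdims=True) - crm_state)
    # Feedback to the GCM: the CRM-domain-mean tendency over the step.
    feedback = (crm_state.mean(axis=1) - before) / dt
    return crm_state, feedback

# One GCM column: a temperature-like variable, initially horizontally uniform.
crm = 300.0 + np.zeros((NLEV, NCRM))
forcing = np.linspace(-1e-5, 1e-5, NLEV)    # large-scale tendency [K/s]
crm, fb = crm_step(crm, forcing, dt=1200.0)
print(fb)
```

Note that only a thermodynamic feedback is returned; as discussed below, the 2-D superparameterizations withhold momentum feedback because the embedded CRM cannot simulate convective momentum transport realistically.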

[24] Because the CRM is two-dimensional, it cannot realistically simulate the vertical transport of horizontal momentum by convection or gravity waves. For this reason, momentum feedback from the CRM to the GCM is not allowed. Only temperature and moisture feedbacks are included. Khairoutdinov et al. [2005] experimented with momentum feedback using a three-dimensional superparameterization with a small domain, and similar experiments are currently being performed by Pritchard and Bretherton [2013].

[25] The embedded CRM is a nonlinear dynamical system that is sensitively dependent on initial conditions. In this sense, the convective activity simulated by the CRM is nondeterministic, so that a superparameterization qualifies as a stochastic parameterization.

[26] Superparameterization was pioneered by Grabowski and Smolarkiewicz [1999] and Grabowski [2001] and subsequently implemented in a version of the Community Atmosphere Model (CAM) by Khairoutdinov and Randall [2001]. The modified CAM is called the SP-CAM. Tao et al. [2009] created a second MMF by starting from the Goddard GCM, including a superparameterization based on a simplified version of the Goddard Cumulus Ensemble Model. More MMFs are currently under development.

[27] Because the SP-CAM does much more arithmetic per simulated day than the CAM, it consumes much more CPU time, although still much less than a GCRM. Fortunately, the increase (relative to the CAM) in wall-clock time per simulated year is much less than the increase in CPU time. The reason is that unlike the CAM, the SP-CAM is almost embarrassingly parallel. This high degree of parallelism is due to the fact that the embedded CRMs, which do a very large percentage of the SP-CAM's computational work, do not directly communicate with each other. The highly parallel nature of the SP-CAM allows it to use many more cores than the CAM.

[28] The SP-CAM has been used to study a wide range of climate phenomena. Particular topics include the MJO [Benedict and Randall, 2011; Thayer-Calder and Randall, 2009; Kim et al., 2009; Arnold et al., 2013], the Asian and African monsoons [DeMott et al., 2011, 2013; McCrary, 2012], the diurnal cycle of precipitation [Pritchard and Somerville, 2009a, 2009b; Pritchard et al., 2011], the intensity of precipitation [DeMott et al., 2007], and cloud feedbacks on climate change [Wyant et al., 2009; Blossey et al., 2009]. In a major development, Stan et al. [2010] coupled the SP-CAM to an ocean model and found that the simulated atmospheric circulation became significantly more realistic. This coupled model was used in several of the studies cited above. Recently, Stan and Xu (Climate simulations and projections with the super-parameterized CCSM4, submitted to Journal of Advances in Modeling Earth Systems, 2013) have performed climate change simulations using an updated version of the coupled model.

[29] HOC has been tested as an improved parameterization of turbulence in the SP-CAM's embedded CRM [Cheng and Xu, 2011, 2013; Xu and Cheng, 2013a, 2013b], with very promising results. HOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This is accomplished in part by making use of the assumed joint probability density functions mentioned earlier. With judicious attention to coupling the parameterizations, the subgrid information obtained through HOC can be used by improved parameterizations of microphysics [e.g., Morrison and Grabowski, 2007, 2008] and radiation [e.g., Pincus and Stevens, 2009].

[30] A significantly enhanced version of the SP-CAM has recently been developed at the Pacific Northwest National Laboratory. Wang et al. [2011a] replaced the microphysics parameterization of the original SP-CAM with a new parameterization that couples the numbers of droplets and crystals in clouds to a simulated aerosol population. The aerosol particles are allowed to respond to cloud updrafts, chemical processing in droplets, and removal from the atmosphere by precipitation [Wang et al., 2011b; Wang et al., 2012; Wang et al., 2013].

[31] A more sweeping reformulation of the superparameterization concept has reached a late stage of development. Jung and Arakawa [2010] have developed a Quasi-Three-Dimensional Multiscale Modeling Framework or Q3-D MMF. The superparameterization of the Q3-D MMF includes the dynamical effects of three-dimensionality in a simplified way and eliminates the periodic boundary conditions. The Q3-D MMF still uses much less computer time than a GCRM. Jung and Arakawa tested the Q3-D idea in a limited-area model. A global version is now under development.

5 Progress Toward Understanding and Simulating the MJO

[32] The preceding sections of this paper deal with parameterizations, per se. We now consider how the parameterizations influence simulations of a particularly important and problematic weather system: the MJO. For the past decade or more, the problem of cloud parameterization for GCMs has been closely linked to the problem of understanding and simulating the MJO. The reason is that many conventional GCMs fail to produce satisfactory simulations of the MJO, although the situation is gradually improving [e.g., Hung et al., 2013]. These difficulties are often attributed to deficiencies of the cumulus parameterizations. In this section, we try to identify the missing ingredient that prevents many conventional GCMs from simulating the MJO. Zhang [2005] provides a comprehensive review of other MJO-related topics.

[33] Although the MJO is one of the most important modes of tropical variability, it was not discovered until the early 1970s [Madden and Julian, 1971, 1972]. It can be defined as a broad region of humid air and vigorous precipitation that maintains itself as it drifts slowly eastward across the tropical Indian and Western Pacific Oceans. As would be expected from the theoretical work of Matsuno [1966] and Gill [1980], the precipitation maximum of the MJO is accompanied by low-level winds that trace out twin cyclones on the west side of the precipitation maximum and a zonally broader patch of easterly winds on the east side. The zonal wind field thus converges at low levels near the precipitation maximum and diverges aloft.

[34] The problem of understanding the MJO can be separated into several linked parts. First of all, the steady motion generated by a moving heat source on an equatorial beta plane is well described by the model of Matsuno [1966] and Gill [1980], hereafter called the “MG model,” and has been studied by Hendon and Salby [1994] and Schubert and Masarik [2006], among others. The relevance of the MG model to the MJO has been recognized for decades [e.g., Chao, 1987]. Given a realistic moving heat source, the dynamical core of any GCM should be able to simulate a wind field similar to that of the observed MJO.
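For orientation, the MG model can be written down in a few lines. In Gill's [1980] nondimensional long-wave form, with a common damping coefficient ε for momentum and heat and a prescribed heating Q, the steady equations are (scaling and sign conventions vary slightly between papers):

```latex
\varepsilon u - \tfrac{1}{2}\, y\, v = -\frac{\partial p}{\partial x}, \qquad
\tfrac{1}{2}\, y\, u = -\frac{\partial p}{\partial y}, \qquad
\varepsilon p + \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = -Q,
```

with the vertical velocity recovered as w = εp + Q. The twin cyclones west of the heating and the zonally broader, Kelvin-like easterly response east of it emerge directly from these equations, consistent with the observed structure described above.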

[35] On the other hand, the failure of Matsuno's [1966] theory of equatorial waves to produce free modes resembling the MJO, despite the theory's success in predicting the other observed equatorial waves (with the possible exception of easterly waves) [Kiladis et al., 2009], implies that the MJO depends fundamentally on processes that were not included in Matsuno's model. It is now believed that moist processes are essential to the MJO [e.g., Raymond, 2001; Grabowski and Moncrieff, 2004; Bony and Emanuel, 2005], and the term “moisture mode,” which was coined by Fuchs and Raymond [2007], is now widely used to describe the MJO [Sugiyama, 2009]. The dry MG model cannot describe moisture modes.

[36] Although variations of longwave radiative heating and surface evaporation may be important for the MJO [e.g., Raymond, 2001; Bony and Emanuel, 2005; Andersen and Kuang, 2012; Arnold et al., 2013], GCMs should be able to simulate them, at least qualitatively, so they are probably not the missing ingredients that prevent many conventional GCMs from simulating the MJO.

[37] Moisture advection is a key process in the MJO [Maloney, 2009; Maloney et al., 2010; Andersen and Kuang, 2012; Pritchard and Bretherton, submitted manuscript, 2013]. Both meridional and vertical advection favor strong drying to the west of the humid, strongly precipitating core of the MJO; under such conditions, positive water vapor anomalies must move eastward or be destroyed. Given the meridional and vertical gradients of water vapor in the “basic state,” it appears that the wind field predicted by the MG model (or any GCM) can account for the advective drying. In modeling terms, the drying is due to resolved-scale advection, rather than parameterized processes. The precipitation rate is observed to decrease as advection dries the air on the west side of the MJO. Perhaps surprisingly, not all models simulate this [Holloway et al., 2013]. In an analysis of numerical experiments with an aquaplanet version of the SP-CAM, Andersen [2012] found that the eastward drift of the MJO speeds up if the subtropical reservoir of dry air is brought closer to the equator.
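The kinematics of the argument above can be illustrated with a one-line cartoon: if the moisture tendency dries the air west of a humid anomaly and moistens it to the east, the anomaly necessarily drifts eastward. The sketch below is purely schematic (the anomaly shape and the tendency amplitude are invented, and the asymmetric source is a stand-in for the advective tendencies discussed in the text).

```python
import numpy as np

nx = 128
x = np.linspace(0.0, 2 * np.pi, nx, endpoint=False)
q = np.exp(np.cos(x - np.pi))            # humid anomaly centered at x = pi

# Asymmetric tendency: moistening east of the maximum, drying west of it,
# taken here to be proportional to -dq/dx (a drying-west / moistening-east
# pattern, not a simulation of the real advective budget).
source = -np.gradient(q, x[1] - x[0])

x_max_before = x[np.argmax(q)]
q = q + 0.3 * source                      # one schematic "time step"
x_max_after = x[np.argmax(q)]
print(x_max_before, x_max_after)
```

After the step, the humidity maximum sits east of where it started, which is the essence of why drying on the west flank and moistening on the east flank select eastward propagation.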

[38] The moistening of the air on the east side of the MJO is necessary for its persistence and eastward progression. Zonal advection of moisture plays an important role in this moistening [Sobel and Maloney, 2012, 2013]. The observed specific humidity fluctuations are strongest near the 700 hPa level, however [Sherwood, 1999; Holloway and Neelin, 2009]. Since the ultimate source of moisture is the ocean, the MJO needs a process that lifts the water vapor from the surface and moistens the 700 hPa level. The upward transport of moisture on the east side of the MJO is due to processes that are parameterized in conventional GCMs. Some conventional GCMs that fail to simulate the MJO are unable to moisten the air in regions of strong precipitation [Thayer-Calder and Randall, 2009; Kim et al., 2009; Landu and Maloney, 2011; Mapes and Bacmeister, 2012; Hung et al., 2013; Kim et al., Process-Oriented MJO Simulation Diagnostic: Moisture Sensitivity of Simulated Convection, submitted to Journal of Climate, 2013].

[39] The MJO has been simulated by both NICAM [e.g., Miura et al., 2007, 2009] and the SP-CAM [Benedict and Randall, 2009, 2011]. As discussed by Thayer-Calder and Randall [2009], an essential factor in the SP-CAM's successful simulation of the MJO is its ability to produce realistically deep layers of high relative humidity in regions of strong precipitation. This is consistent with the idea that entrainment-induced sensitivity of deep convection to midtropospheric humidity is needed for a realistic MJO simulation. The other side of the coin, however, is that such sensitivity is important only to the extent that the midtropospheric water vapor actually varies in a realistic way. Chikira [2013] presents an insightful analysis of the processes that lead to midtropospheric humidity changes in tropical convective systems.

6 Outlook

[40] GCRMs and superparameterized GCMs are simultaneously global models and process models (Figure 1). They explicitly simulate much of the input that is needed for parameterizations of microphysics, turbulence, and radiation. This makes it possible to use what we actually know about these important but necessarily parameterized processes. Future work with GCRMs and MMFs will accelerate our efforts to understand the roles of microphysical and turbulent processes in the global circulation of the atmosphere.

Figure 1.

In this Venn diagram, the circle on the left represents process models, including both large-eddy simulation models (LES models) and CRMs. The circle on the right represents global atmospheric models. Until recently, these two classes of models did not overlap. Today, as shown in the figure, there is some intersection in the form of GCRMs and MMFs.


[41] This work has been supported by the National Science Foundation Science and Technology Center for Multi-Scale Modeling of Atmospheric Processes (CMMAP), managed by Colorado State University under cooperative agreement ATM-0425247. Akio Arakawa, Eric Maloney, Steven Krueger, William Collins, Leo Donner, Wojciech Grabowski, Mike Pritchard, Masaki Satoh, and Bjorn Stevens made valuable comments on preliminary drafts of the manuscript. Kerry Emanuel provided a very useful signed review.

[42] The Editor thanks an anonymous reviewer and Kerry Emanuel for their assistance in evaluating this paper.