A fast, flexible, approximate technique for computing radiative transfer in inhomogeneous cloud fields

Authors


Abstract

[1] Radiative transfer schemes in large-scale models tightly couple assumptions about cloud structure to methods for solving the radiative transfer equation, which makes these schemes inflexible, difficult to extend, and potentially susceptible to biases. A new technique, based on simultaneously sampling cloud state and spectral interval, provides radiative fluxes that are guaranteed to be unbiased with respect to the benchmark Independent Column Approximation and works equally well no matter how cloud structure is specified. Fluxes computed in this way are subject to random, uncorrelated errors that depend on the distribution of cloud optical properties. Seasonal forecasts, however, are not sensitive to this noise, making the method useful in weather and climate prediction models.

1. Introduction: Radiative Transfer in Large-Scale Models

[2] Global weather forecast and climate models predict the evolution of the atmosphere by computing changes in the energy, momentum, and mass budgets at many levels in each of many columns around the globe. One term in the energy budget is the local heating or cooling due to transfers of radiation, which is derived from the radiative fluxes averaged across each model grid cell. Computing these fluxes in a large-scale model (LSM) is in principle a two-part process. The LSM must first determine the state of the atmosphere within each grid cell, including the horizontal and vertical distributions of clouds, aerosols, and optically active gases, then give this description to a radiative transfer solver, which computes fluxes at each level.

[3] The description of clouds in current LSMs is quite simple: Most predict the proportion of each grid cell filled with cloud (the “cloud fraction”) and the mean in-cloud condensate concentration, then prescribe vertical structure using simple, fixed rules known as overlap assumptions. This leads to a relatively small number of possible cloud configurations within each column. In nature, though, domains the size of LSM grid cells often contain clouds with substantial horizontal variability [e.g., Barker et al., 1999; Pincus et al., 1999; Rossow et al., 2002] and complicated vertical structure [e.g., Hogan and Illingworth, 2000; Mace and Benson-Troth, 2002]. Unresolved subgrid-scale variability impacts radiative fluxes as well as microphysical process rates [Pincus and Klein, 2000], so cloud schemes are now emerging that address this structure either parametrically [Cusack et al., 1999; Tompkins, 2002] or explicitly [Grabowski and Smolarkiewicz, 1999; Khairoutdinov and Randall, 2001].

[4] Domain-average fluxes in variable clouds can be determined quite accurately using the plane-parallel independent column approximation (ICA) by averaging the flux computed for each class of cloud in turn [Cahalan et al., 1994; Barker et al., 1999]. Unfortunately, the ICA is far too computationally expensive when the number of cloud states is even moderately large. Radiative transfer is time consuming because fluxes and heating rates are broadband quantities that must be integrated over many spectral intervals: A heating rate profile in a single column is, in fact, the result of many narrowband calculations.

[5] The impracticality of the ICA has inspired a variety of computational shortcuts. Simple representations of overlapping homogeneous clouds, for example, can be treated by weighting clear- and cloudy-sky fluxes [Morcrette and Fouquart, 1986]. A variety of methods exist to compute domain-averaged radiative fluxes for internally variable clouds; all invoke restrictive assumptions about the nature of the variability and link layers in the vertical with further ad hoc assumptions [e.g., Stephens, 1988; Oreopoulos and Barker, 1999; Cairns et al., 2000].

[6] What existing radiative transfer schemes have in common is an intimate coupling between assumptions about cloud structure and methods for computing radiative transfer. This is an unnatural marriage, since cloud structure and radiative transfer are conceptually distinct, and leads to a variety of problems. It is difficult to ensure consistency, for example; radiative fluxes computed using different implementations of the same overlap assumptions may differ substantially from each other and from benchmark calculations [Barker et al., 2003]. More importantly, weaving assumptions about cloud structure into the fabric of radiative transfer solvers makes these codes hard to extend or generalize. Large-scale models that provide estimates of subgrid-scale variability will require accurate, flexible radiative transfer solvers. It seems very unlikely that small modifications to existing, highly particular treatments of clouds and radiation will be up to the task.

[7] This paper describes a computationally efficient technique for computing domain-averaged broadband radiative fluxes in vertically and horizontally variable cloud fields of arbitrary complexity. The method makes random, uncorrelated errors in estimates of radiative quantities, but the expectation value of these estimates is completely unbiased with respect to the ICA. In the sections that follow we describe the method, quantify the noise it produces, and demonstrate that random errors of this magnitude do not affect forecasts made by a large-scale model. Finally, we describe a variety of model formulations in which the technique may be useful.

2. Monte Carlo Integration of the Independent Column Approximation

2.1. Conceptual Background

[8] Imagine a domain R several tens or hundreds of kilometers in extent within which the three-dimensional distribution of cloud optical properties is known exactly. The true domain-averaged broadband flux 〈F〉 at some level is an integral over wavelength λ and horizontal position

\[
\langle F \rangle = \frac{1}{A_R} \int_R \int_\lambda S(\lambda)\, F_{3D}(x, y, \lambda)\, d\lambda\, dx\, dy \qquad (1)
\]

where A_R is the area of the domain R, the weighting S(λ) in each spectral interval dλ depends on the incoming spectral flux, and F3D denotes fluxes computed accounting for three-dimensional radiative transfer. For large-scale calculations net horizontal transfers of radiation can be neglected, and 〈F〉 can be approximated with the ICA as

\[
\langle F \rangle \approx \frac{1}{A_R} \int_R \int_\lambda S(\lambda)\, F_{1D}(x, y, \lambda)\, d\lambda\, dx\, dy \qquad (2)
\]

where F1D indicates fluxes computed using one-dimensional radiative transfer theory.

[9] Radiative fluxes are typically much more uniform in clear skies than in clouds, so we partition the domain into clear and cloudy portions and perform a single calculation for the clear sky. Furthermore, because each x, y location is treated independently, we may write equation (2) as an integral over the distribution p(s) of possible states s of the cloudy atmosphere:

\[
\langle F \rangle \approx (1 - A_c) \int_\lambda S(\lambda)\, F_{\mathrm{clr}}(\lambda)\, d\lambda \;+\; A_c \int_\lambda \int_s S(\lambda)\, p(s)\, F_{1D}(s, \lambda)\, ds\, d\lambda \qquad (3)
\]

where Ac represents the vertically projected cloud fraction and Fclr denotes the flux computed for the cloud-free portion of the domain. Finally, the spectral integration in equation (3) is approximated by discrete sums over K spectral intervals with potentially unequal weights w:

\[
\langle F \rangle \approx (1 - A_c) \sum_{k=1}^{K} w_k\, F_{\mathrm{clr}}(k) \;+\; A_c \sum_{k=1}^{K} w_k \sum_{s} p(s)\, F_{1D}(s, k) \qquad (4)
\]

Equation (4) is general and applies to any method of solving the radiative transfer equation (e.g., two-stream solutions or adding-doubling). The spectral intervals may be thought of as bands [e.g., Slingo, 1989] or as the quasi-monochromatic intervals in a k distribution. It is the nested sum in the last term in equation (4) that makes the ICA impractical in large-scale models. Values of K in typical k distribution schemes may be of order 50–100 [Fu and Liou, 1994], so even 10 possible cloud states lead to an impractical 500–1000 multilayer radiative transfer calculations per LSM column.
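
To make the cost of equation (4) concrete, the following minimal sketch (in Python, with a hypothetical one-dimensional solver solve_1d(state, k) standing in for whatever radiative transfer code a model actually uses) spells out the nested sum; it is illustrative only, not a prescription for any particular scheme.

    def ica_broadband_flux(solve_1d, states, p_s, w, clear_state, cloud_frac):
        """Full ICA broadband flux following equation (4).

        solve_1d(state, k): hypothetical 1-D solver returning the flux for one
        cloud state and one spectral interval k.
        states, p_s: the possible cloud states s and their probabilities p(s).
        w: the K spectral weights; cloud_frac: vertically projected cloud fraction Ac.
        """
        K = len(w)
        clear = (1.0 - cloud_frac) * sum(w[k] * solve_1d(clear_state, k) for k in range(K))
        # The nested loop over cloud states and spectral intervals is the expensive
        # part: S x K solver calls per column (e.g., 10 states x 50-100 k terms).
        cloudy = cloud_frac * sum(w[k] * p_s[s] * solve_1d(states[s], k)
                                  for k in range(K) for s in range(len(states)))
        return clear + cloudy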

[10] In large-scale models, equation (4) applies to individual grid cells, and the average is over all states s implied by the model variables in each column. If the model predicts cloud fraction and mean in-cloud condensate in each column, for example, these cloud states and their associated probabilities p(s) can be determined by enumerating all the possibilities implied by the model variables and additional overlap assumptions [e.g., Collins, 2001]. We describe other ways in which s and p(s) might be chosen in section 6 and seek here a computationally efficient way to solve or approximate equation (4) given an arbitrarily complicated set of cloud states.

2.2. Heart of the Method

[11] The full ICA calculation of cloudy-sky flux 〈Fcld〉 is a two-dimensional integral, with wavelength varying in one dimension and cloud state in the other. Rather than computing the contribution of every cloud state to every wavelength interval, we approximate 〈Fcld〉 by choosing a cloud state at random for each spectral interval:

\[
\langle F_{\mathrm{cld}} \rangle \approx \sum_{k=1}^{K} w_k\, F_{1D}(s_{\mathrm{rnd},k}, k) \qquad (5)
\]

so that the flux in spectral interval k is computed for a cloud state s_rnd,k chosen at random with probability p(s) from the distribution of possible states within the LSM column. Equation (5) is, in effect, a Monte Carlo integration of the ICA, so we refer to it as the McICA.

[12] If only a single cloud state exists within a column, the McICA and ICA are equivalent. Direct application of equation (5) requires K cloudy-sky calculations per domain, so it is no more expensive than a broadband calculation for a single column. This procedure will, of course, introduce sampling error into each estimate of 〈Fcld〉, but that error is guaranteed to be random and, in the limit of many calculations, can be shown to have zero mean bias. The questions, then, are how big these errors are likely to be and whether random noise of this magnitude will impact weather or climate forecasts.
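
A minimal sketch of equation (5), using the same hypothetical solve_1d interface as above, shows how little changes relative to the full ICA: the inner sum over cloud states is replaced by a single random draw per spectral interval.

    import numpy as np

    def mcica_cloudy_flux(solve_1d, states, p_s, w, rng=None):
        """McICA estimate of the cloudy-sky flux, following equation (5)."""
        rng = rng or np.random.default_rng()
        K = len(w)
        # One randomly chosen cloud state per spectral interval, drawn with
        # probability p(s) from the distribution of states in the column.
        chosen = rng.choice(len(states), size=K, p=np.asarray(p_s, dtype=float))
        # K solver calls in total: the cost of a broadband calculation for a
        # single column, however many cloud states exist.
        return sum(w[k] * solve_1d(states[chosen[k]], k) for k in range(K))

The clear-sky term of equation (4) is unchanged, so the total cost per column remains one clear-sky and one cloudy-sky broadband calculation.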

3. How Much Error Is Introduced by Sampling the ICA?

[13] If the three-dimensional distribution of cloud properties in a domain (i.e., an LSM grid column) is known perfectly, accurate domain-averaged fluxes can be computed with the ICA. To assess the amount of noise introduced by the McICA (as opposed to any errors introduced in the specification of cloud structure), we compare broadband fluxes and heating rates as computed by the ICA and McICA in domains in which cloud structure is known explicitly. We seek an upper bound on the amount of noise introduced by McICA, so we choose domains containing very complicated clouds.

[14] We make our tests on 120 cloud fields generated at 5-min intervals by a two-dimensional cloud-resolving model simulation of a tropical squall line [Fu et al., 1995]. The simulation has a horizontal grid spacing of 1 km, a domain 512 km across, and 35 layers on a stretched grid, and over its course produces a wide variety of ice and liquid clouds. We compute radiative transfer using a two-stream solver, obtaining cloud and gas optical properties in each of 31 spectral intervals in the shortwave and 46 in the longwave (J. Li, personal communication, 2002). The solar zenith angle is fixed at 45°, and 100 McICA estimates are calculated for the first 256 km of each domain (a size intermediate between the grid sizes of current climate and numerical weather prediction models, and one to which our results are not particularly sensitive). This yields a distribution of McICA errors for each scene, as well as a distribution of errors for the collection of scenes taken as a whole, which together allow us to estimate the likely magnitude of the error in a single application of the McICA relative to the benchmark ICA calculation.
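
The comparison can be sketched as follows (Python; ica_flux and mcica_flux are hypothetical helpers in the spirit of the sketches in section 2, each returning, say, the net surface flux for a scene).

    import numpy as np

    def mcica_error_statistics(scenes, ica_flux, mcica_flux, n_estimates=100):
        """Scene-by-scene and pooled standard deviations of McICA - ICA
        differences: one benchmark and many independent McICA estimates
        per scene (hypothetical helper names)."""
        per_scene_std, all_errors = [], []
        for scene in scenes:
            benchmark = ica_flux(scene)
            errors = np.array([mcica_flux(scene) - benchmark
                               for _ in range(n_estimates)])
            per_scene_std.append(errors.std())
            all_errors.append(errors)
        return np.array(per_scene_std), np.concatenate(all_errors).std()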

[15] The clouds in this simulation are vertically extensive and subject to large amounts of shear, so more than 80% of the scenes exhibit Ac > 85%, with most having substantial variability in optical thickness. Scene-by-scene standard deviations of McICA errors in net surface flux (Figure 1, top panel) are distributed about 105 W m−2, which is roughly the same as the standard deviation of all the errors taken as a single population (solid line). This relatively large error, about 10% of the incident solar flux, is caused by sampling errors in both absorption and transmission in the atmosphere. Within the atmosphere itself, McICA errors are smaller. The bottom panel shows the joint probability distribution of the standard deviation of errors in layer-by-layer heating rates and the cloud fraction within each layer, along with the standard deviation of errors computed from all layers in each cloud fraction interval. Errors in fluxes computed at one layer affect the flux incident on other layers, so some cloud-free layers show variability in fluxes computed with the McICA. Errors in heating rate generally increase with cloud fraction, as might be expected from equation (4). In this data set 90% of the layers are less than 40% cloudy, so we cannot determine how the error behaves at larger values of cloud fraction.

Figure 1.

Errors introduced by the Monte Carlo integration of the independent column approximation (McICA) in calculations of (top) surface fluxes and (bottom) heating rates. For each of 120 cloud fields over the life of a tropical squall line we compare a single benchmark ICA calculation to 100 separate estimates made using an implementation of the McICA. The standard deviation of these differences lets us estimate the error that might be expected in a single application of the McICA. At the surface this error is roughly 105 W m−2 or 10% of the solar radiation incident at the top of the atmosphere, though errors for individual scenes may be larger or smaller by about 40 W m−2. The typical error in layer-by-layer heating rates increases with cloud fraction, as the joint probability distribution indicates. (Note that contour levels are logarithmic.) Solid lines in both panels show the standard deviation of errors for all scenes lumped together.

4. Does Random Noise in Radiative Forcing Affect Forecast Skill?

[16] The McICA introduces substantial random noise into individual flux and heating rate calculations, and if this noise degrades the model's forecasts, the McICA is untenable. However, several factors suggest that large-scale models can absorb noise of this kind without ill effects. Experience with ensemble prediction systems shows that the temporal and spatial autocorrelation of random perturbations strongly influences any forecast changes [Buizza et al., 1999], and the noise introduced by McICA relative to the benchmark calculation is completely uncorrelated between calculations (more exactly, it is correlated perfectly up to the temporal and spatial resolution of the radiative transfer calculations and is uncorrelated at longer and larger scales). Furthermore, radiative heating is usually a small term in an LSM's atmospheric energy budget, so short-term forecasts are not particularly sensitive to radiation calculations, and the mean error decreases with repeated application. This implies that noise in radiative fluxes may not affect short-term forecasts, while accuracy in fluxes computed over the long term should ensure accurate climatic predictions.

[17] We test the feasibility of using the McICA operationally by introducing uncorrelated random noise into the heating rates and surface fluxes in a prediction model. We then compare the resulting forecasts with those made using the standard radiation scheme and with forecasts in which the radiative transfer calculation is perturbed in a small but systematic manner. We build three ensembles of seasonal forecasts at a resolution of TL95 L60 using cycle 25R1 of the European Centre for Medium-Range Weather Forecasts (ECMWF) prediction model. Each ensemble contains 30 members initialized from the analysis on each day of April 2001. The control ensemble uses the standard model configuration. In a second ensemble we systematically increase particle size in the radiation scheme by a small amount (1 μm for liquid clouds and 10 μm for ice clouds). In the final ensemble we introduce a proxy for the noise introduced by McICA. We randomly perturb the heating rates in each layer at hourly intervals. The magnitude of the perturbation is drawn from a Gaussian distribution with standard deviation σQnet suggested by the calculations in section 3:

display math

where Qnet, QLW, and QSW are the net, longwave, and shortwave heating rates, respectively, in the layer, and ac is the layer cloud fraction. The perturbation is constant with height below the lowest cloudy layer, and the surface flux is perturbed from its control value by the same proportion as the heating rate in these layers. We then compare the last 3 months of the forecasts made with systematic and random perturbations to those made using the control configuration.
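
The perturbation procedure might be sketched as below (Python). The per-layer standard deviation from equation (6) is taken as an input rather than reproduced, and the treatment of the sub-cloud layers, holding the lowest in-cloud perturbation fixed and scaling the surface flux by the same relative amount, is our reading of the description above rather than a definitive implementation.

    import numpy as np

    def perturb_heating_and_surface_flux(q_net, sigma_q, cloud_frac, surface_flux, rng=None):
        """Proxy for McICA noise: Gaussian heating-rate perturbations per layer.

        q_net: control net heating rates (index 0 = model top, last = lowest layer).
        sigma_q: per-layer standard deviations from equation (6), supplied by the caller.
        cloud_frac: per-layer cloud fractions; surface_flux: control net surface flux.
        """
        rng = rng or np.random.default_rng()
        q_net = np.asarray(q_net, dtype=float)
        dq = rng.normal(0.0, np.asarray(sigma_q, dtype=float))  # one draw per layer
        cloudy = np.nonzero(np.asarray(cloud_frac) > 0.0)[0]
        if cloudy.size:
            lowest = cloudy[-1]
            # Constant perturbation below the lowest cloudy layer.
            dq[lowest + 1:] = dq[lowest]
            if q_net[lowest] != 0.0:
                # Perturb the surface flux by the same proportion as the heating
                # rate in the sub-cloud layers (an assumed interpretation).
                surface_flux = surface_flux * (1.0 + dq[lowest] / q_net[lowest])
        return q_net + dq, surface_flux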

[18] Seasonal forecasts made with random noise do not differ by a statistically significant amount from the control forecasts, indicating that the model is not sensitive to uncorrelated noise in radiative fluxes at the levels described by equation (6). Small systematic perturbations, however, have an unambiguous impact. We average forecasts of surface and top-of-atmosphere fluxes, as well as meteorological quantities such as surface pressure and 500 hPa heights, over the last 3 months of the simulation of each ensemble. Differences between the control and randomly perturbed ensembles are small and distributed randomly in space, while differences between the control and systematically perturbed runs are substantially larger in magnitude and show greater spatial coherence. Student's t tests (Table 1) confirm that small systematic changes in the radiation scheme cause measurable changes in model forecasts, but that the noise introduced by McICA is unlikely to reduce forecast skill.

Table 1. Percentage of Area Where the Student's t Test Indicates Differences Between Control and Perturbed Ensembles of Simulations (3-Month Averages) That Are Significant at the 97.5% Confidence Level(a)

Quantity                                 Random Perturbations   Systematic Perturbations
Top-of-atmosphere net longwave flux      3.66                   28.78
Top-of-atmosphere net shortwave flux     2.80                   54.99
Surface net longwave flux                3.03                   13.91
Surface net shortwave flux               2.82                   45.61
Surface pressure                         1.64                   17.77
500 hPa geopotential                     2.49                   18.67

(a) Random samples drawn from the same population are expected to be statistically different at the 97.5% level over 2.5% of their area when ensembles are large. Model forecasts are sensitive to even small systematic changes in radiative calculations but are unaffected by random, uncorrelated noise as would be introduced by McICA.

5. How Might the McICA Be Applied in Practice?

[19] Equation (5) can be used directly in current radiation schemes. In our experience this is easily accomplished by removing one loop from the radiative transfer routines and supplying these routines with many columns instead of one. Large-scale models could make this change today to ensure consistent application of arbitrary overlap assumptions.

[20] However, a straightforward implementation of equation (5) has several drawbacks. If the number of bands is small, the random errors in each column calculation may be quite large. Furthermore, randomly chosen individual columns do not contribute equally to the final flux calculation, which might slow convergence of the McICA. Rather than performing the spectral integration as a weighted sum over fixed intervals, it may therefore be preferable to compute the spectral integral as an unweighted sum of quasi-monochromatic calculations whose wavelengths are chosen randomly (but, again, with the correct probability). This represents a completely Monte Carlo approach to both spectral and cloud state integration, as sketched below.
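
A sketch of this variant (same hypothetical solve_1d interface as in section 2): if spectral intervals are drawn with probability proportional to their weights, an unweighted average of the quasi-monochromatic samples estimates the weighted sum.

    import numpy as np

    def mcica_fully_random(solve_1d, states, p_s, w, n_samples, rng=None):
        """Sample both the spectral interval and the cloud state at random and
        average the quasi-monochromatic results without weights."""
        rng = rng or np.random.default_rng()
        w = np.asarray(w, dtype=float)
        # Intervals drawn with probability w_k / sum(w): each sample contributes
        # equally, and the mean times sum(w) estimates the weighted sum.
        ks = rng.choice(len(w), size=n_samples, p=w / w.sum())
        ss = rng.choice(len(states), size=n_samples, p=np.asarray(p_s, dtype=float))
        return w.sum() * np.mean([solve_1d(states[s], k) for s, k in zip(ss, ks)])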

[21] Random error in a given McICA calculation depends on the number and variety of cloud states in the domain. One benefit of Monte Carlo integration is that uncertainty estimates for each quantity (in this case, surface flux or the heating rate at each level) can be computed along the way. The McICA may be implemented by dividing an initial set of M quasi-monochromatic calculations into N batches. The uncertainty in the McICA estimate can then be computed as σ/√N, where σ is the standard deviation of the N batch estimates. If this uncertainty is too large, additional calculations can be made “on the fly” until the error is sufficiently small.
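
The batching strategy might look like the sketch below, where sample_flux is a hypothetical callable returning one quasi-monochromatic McICA sample (a single randomly chosen spectral interval and cloud state).

    import numpy as np

    def mcica_until_converged(sample_flux, m_initial, n_batches, tolerance, max_samples, rng=None):
        """Add batches of samples until the standard error of the batch means,
        sigma / sqrt(N), falls below the requested tolerance."""
        rng = rng or np.random.default_rng()
        batch_size = max(1, m_initial // n_batches)
        batches = [np.mean([sample_flux(rng) for _ in range(batch_size)])
                   for _ in range(n_batches)]
        total = n_batches * batch_size
        while np.std(batches) / np.sqrt(len(batches)) > tolerance and total < max_samples:
            batches.append(np.mean([sample_flux(rng) for _ in range(batch_size)]))
            total += batch_size
        return np.mean(batches), np.std(batches) / np.sqrt(len(batches))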

[22] The McICA can also be used to sample temporal or spatial variability currently resolved by a large-scale model but neglected in radiative calculations. The current version of the ECMWF forecast model, for example, computes cloud properties every 15 min but radiative heating and cooling rates only every 3 hours (after the first 12 hours [Morcrette, 2000]). An alternative would be to compute radiative fluxes each time cloud properties are updated (12 times as often as presently) using a randomly chosen one-twelfth of the k values and to apply heating rates computed as a running average over the last 3 hours. Similarly, models that compute radiation at a coarser spatial resolution than cloud properties might use the McICA to sample the variability between grid cells.
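
One way this temporal sampling might look in code (Python; solve_heating is a hypothetical routine that returns a heating-rate profile computed from a subset of k values, scaled to represent the full spectrum):

    import collections
    import numpy as np

    def radiation_step(solve_heating, all_k, history, rng=None, fraction=12):
        """One 15-min radiation call: a random 1/12 of the k values, then a
        running average of the heating rates over the stored history."""
        rng = rng or np.random.default_rng()
        n = max(1, len(all_k) // fraction)
        k_subset = rng.choice(np.asarray(all_k), size=n, replace=False)
        history.append(solve_heating(k_subset))
        return np.mean(list(history), axis=0)

    # Keep the last 12 fifteen-minute profiles, i.e., a 3-hour window.
    history = collections.deque(maxlen=12)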

6. Implications

[23] The McICA completely decouples the processes of determining cloud structure within a domain from the calculation of radiative transfer. This has two advantages: The radiation code can become both simpler and more flexible, while assumptions about cloud structure can be applied uniformly to flux and heating rate calculations. Methods for generating cloud structure may therefore be arbitrarily complicated, and can be used consistently in all calculations.

[24] We expect that subgrid-scale structure will be most simply realized in large-scale models by introducing an interface that provides randomly chosen populations of profiles from each large-scale model column. This interface would encapsulate any necessary assumptions, such as those regarding vertical correlations. Models that predict cloud fraction and a single value of cloud condensate can explicitly calculate every possible configuration and its probability. Statistical cloud schemes [e.g., Tompkins, 2002] could generate columns consistent with the probability distribution of condensate predicted within each model layer, with overlap relationships incorporated by making the probability of condensate values for one layer conditional on other layers. If the structure comes from a fine-scale model embedded within the LSM, the interface would return columns chosen at random from the cloudy portion of the domain. We believe this is vastly preferable to the current state of affairs, in which assumptions about cloud structure (e.g., the horizontal and vertical distribution of extinction) are inextricable from radiative transfer solvers.
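
One possible shape for such an interface is sketched below with hypothetical names: the radiation code asks only for sampled profiles and never sees the structural assumptions behind them. The embedded-model case is shown; a fraction-plus-overlap or statistical-scheme generator would implement the same sample method.

    import abc

    class SubcolumnGenerator(abc.ABC):
        """Encapsulates all assumptions about subgrid-scale cloud structure."""

        @abc.abstractmethod
        def sample(self, grid_column, n_samples, rng):
            """Return n_samples cloud-property profiles drawn, with the correct
            probabilities, from the states implied by grid_column."""

    class EmbeddedModelGenerator(SubcolumnGenerator):
        """Structure supplied by a fine-scale model embedded in the LSM column:
        sampling simply picks cloudy columns of the embedded domain at random
        (embedded_columns and is_cloudy are assumed attribute names)."""

        def sample(self, grid_column, n_samples, rng):
            cloudy = [c for c in grid_column.embedded_columns if c.is_cloudy]
            return [cloudy[i] for i in rng.integers(len(cloudy), size=n_samples)]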

[25] Until recently, large-scale models have contained simple representations of cloud structure, and radiative transfer schemes have focused on computing accurate fluxes in a small number of well-defined cloud states. However, as models introduce more complicated cloud distributions, this exactness will become unaffordable and, in all likelihood, unattainable. The choice seems to be between precise treatments for demonstrably wrong clouds that converge instantly to the wrong answer and imprecise radiative transfer for more correct clouds producing instantaneously noisy computations that converge over time to the correct answer. Because radiation affects the atmosphere and ocean so slowly, we suggest that it is better to solve the right problem approximately than the wrong problem exactly.

Acknowledgments

[26] We are grateful for support from the U.S. Department of Energy under grants DE-FG03-01ER63124 and DE-FG02-90ER61071 as part of the Atmospheric Radiation Measurement Program and the Modelling of Clouds and Climate Proposal funded through the Canadian Foundation for Climate and Atmospheric Sciences, the Meteorological Service of Canada, and the Natural Sciences and Engineering Research Council. We appreciate provocative conversations with K. Franklin Evans.
