The importance of cloud to the distribution of radiative heating rates has long been recognised (Liou, 1986), as has the importance of representing subgrid-scale cloud inhomogeneity (Cahalan et al., 1994). However, the radiative transfer schemes used in many general circulation models (GCMs) ignore subgrid-scale cloud-water-content variability and assume that clouds are vertically overlapped according to the maximum-random approximation (Geleyn and Hollingsworth, 1979). Barker et al. (1999) showed that these assumptions lead to biases in the calculated fluxes and heating rates, both individually and in combination.
The Monte Carlo Independent Column Approximation (McICA), described by Pincus et al. (2003), is a method for representing cloud inhomogeneity in radiative transfer schemes. It approximates the accurate but costly full Independent Column Approximation (ICA) calculation (Barker et al., 1999) at considerably less computational expense. For the integral over wavelength, instead of calculating the monochromatic flux for every subcolumn at each quadrature point, the monochromatic flux is calculated for only one or more randomly chosen subcolumns. For further details, see section 2.
McICA has two major advantages compared with the alternative methods for representing cloud inhomogeneity in GCMs, such as the use of a scaling factor (Cahalan et al., 1994) or ‘Tripleclouds’ (Shonk and Hogan, 2008). Firstly, it is unbiased with respect to the full ICA calculation. Secondly, and perhaps more importantly, it removes the cloud-structure representation from the radiative transfer code and thus allows for a more flexible cloud representation. On the other hand, it introduces conditional random errors, which depend on the choice of subcolumns mapped to each quadrature point. The amount and effect of this noise have been the subject of several articles. However, such articles have generally tended to focus on its impacts in climate models.
Pincus et al. (2003) performed a number of tests on cloud fields from a cloud-resolving model (CRM). They calculated the standard deviation of McICA errors for a short-wave (SW) surface flux of 105 W m−2 (approximately 10% of the incident top of atmosphere (TOA) radiation). The effect of this noise on a seasonal forecast model was estimated by randomly perturbing radiative fluxes and heating rates. They found no statistically significant differences from their control.
Raisanen et al. (2005) and Raisanen et al. (2007) investigated the effect of McICA noise on the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM) and European Centre Hamburg Model 5 (ECHAM5) climate models respectively, using a low-noise version of McICA as the reference. In both models they found that their noisiest implementations of McICA led to a significant reduction in low cloud fractions. They were able to remove this effect by reducing the level of noise.
More recently, Barker et al. (2008) investigated the effect of McICA noise on several global models. Again using a low-noise version of McICA as the reference, they ran 14 day simulations. They found that some of their models responded significantly to the noisiest tests, but no models displayed significant impacts when noise was reduced.
While the effect of McICA noise on climate models has been quite extensively studied, its effect on numerical weather prediction (NWP) models, particularly where the time and spatial scales of interest are smaller, is not so well documented. McICA has been tested in the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System (Morcrette et al., 2008) and, as in the climate simulations, the related noise was not found to be detrimental to results. However, the radiation scheme employed at ECMWF has many more quadrature points than most other forecast models and as a result the magnitude of McICA noise in the ECMWF model is significantly smaller.
As McICA noise is generally thought to be of little consequence, only a single article has been published regarding methods for reducing noise. Raisanen and Barker (2004) suggested two methods for minimizing McICA noise. Combining these methods, they found that they could reduce the standard deviation of McICA noise by approximately a factor of three, while increasing the number of monochromatic calculations required by 50%.
This article investigates the effect of McICA noise on the MetUM. In section 2 we consider the McICA scheme in more detail, discuss its implementation in the MetUM and describe the cloud generator that provides the subgrid cloud profiles required. Section 3 considers the magnitude of the noise associated with McICA, introduces techniques for efficiently minimizing this noise and compares these techniques with a previous method. In section 4 we consider the effect of McICA noise on a MetUM NWP simulation, with regards to 1.5 m temperature in particular. Finally, conclusions are presented in section 5.
2. The McICA method and its implementation
2.1. The McICA method
At its core, the McICA method is an efficient mechanism for approximating the full ICA calculation. As such, we shall first describe the full ICA calculation in detail and then go on to describe how the McICA method relates to it.
The full ICA calculation consists of splitting each GCM column into a number of subcolumns, each of which is either overcast or cloud-free. The distribution of water content within the cloud can be represented by allocating different water-content values to each subcolumn. Assuming the flux in each subcolumn is independent of the flux in the other subcolumns, i.e. using the independent column approximation, the radiative transfer calculation is performed for each subcolumn individually. The fluxes for the entire profile are then determined as the mean of the subcolumn fluxes. Considering a profile divided into N subcolumns and a radiative transfer scheme with K quadrature points to approximate the integral over wavelength, the full ICA flux, F_ICA, is given by

\[
F_{\mathrm{ICA}} = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{K} f_{i,j}, \qquad (1)
\]

where f_{i,j} denotes the flux calculated for the quadrature point j and the subcolumn i. This method is far too computationally expensive for practical use in GCMs, due to the double sum over quadrature points and subcolumns (in Eq. (1), NK monochromatic radiative transfer calculations are required, as opposed to K monochromatic calculations for the corresponding plane-parallel homogeneous calculation). However, it is often applied as a benchmark when evaluating other methods of treating horizontal inhomogeneity.
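As a concrete illustration, the full ICA average over subcolumns and quadrature points can be sketched in a few lines. This is a minimal Python sketch operating on an assumed (N, K) array of precomputed monochromatic fluxes; it is not the Edwards–Slingo implementation.

```python
import numpy as np

def full_ica_flux(f):
    """Full ICA flux: average over the N subcolumns of the flux summed
    over the K quadrature points.  f[i, j] is the monochromatic flux for
    subcolumn i and quadrature point j, so f has shape (N, K)."""
    return f.sum(axis=1).mean()  # (1/N) * sum_i sum_j f[i, j]
```

The expense lies in filling `f`: doing so requires the NK monochromatic calculations referred to in the text.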
In the McICA scheme, each GCM column is again divided into subcolumns. However, rather than calculating fluxes for every combination of quadrature points and subcolumns, the flux for each quadrature point is calculated using one or more randomly chosen subcolumns. Thus the McICA flux, F, is given by

\[
F = \sum_{j=1}^{K} \frac{1}{n(j)} \sum_{i=1}^{n(j)} f_{\mathrm{rand}(i,j),\,j}, \qquad (2)
\]

where n(j) denotes the number of randomly chosen subcolumns for the quadrature point j and f_{rand(i,j),j} denotes the flux calculated for the quadrature point j and a randomly chosen subcolumn, rand(i,j). If the distribution of subcolumns amongst the quadrature points is truly random, then McICA fluxes are unbiased with respect to the full ICA fluxes. However, conditional random errors are introduced. If the subcolumns are sampled without replacement, then in the limit where n(j) = N for all j we recover exactly the full ICA calculation.
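The McICA estimate replaces the inner average over all subcolumns with random sampling. A minimal sketch follows, sampling with replacement for brevity; `n_samples[j]` plays the role of n(j).

```python
import numpy as np

def mcica_flux(f, n_samples, rng):
    """McICA flux: for each quadrature point j, average the flux over
    n_samples[j] randomly chosen subcolumns instead of all N.  Over many
    random draws this is unbiased with respect to the full ICA flux."""
    n_sub, n_quad = f.shape
    total = 0.0
    for j in range(n_quad):
        # one or more random subcolumns for this quadrature point
        picks = rng.integers(0, n_sub, size=n_samples[j])
        total += f[picks, j].mean()
    return total
```

Only sum(n_samples) monochromatic calculations are needed, compared with N×K for the full ICA.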
2.2. Implementation in the Met Office Unified Model
The Edwards–Slingo radiative transfer code (Edwards and Slingo, 1996) employed in the MetUM currently splits both the SW and long-wave (LW) spectra into a number of distinct bands. Within each band, extinction due to cloud condensate is treated as a ‘grey’ process. The correlated k-distribution method (Fu and Liou, 1992) is employed to represent gaseous absorption within the bands. This reduces the number of quadrature points representing the integral over wavelength by grouping wavelengths with similar absorption coefficients into a small number of quadrature points (hereafter referred to as k-terms). Overlap of absorption is accounted for by equivalent extinction (Edwards, 1996), where full calculations are only performed for the k-terms representing the ‘major’ gas in each band. External ‘spectral files’ are used to provide information on the decomposition of the spectrum into bands and k-terms, along with the optical properties of gases, aerosols and cloud condensate.
The homogeneous plane-parallel method currently in operation in Edwards–Slingo allows for fractionally cloudy layers, by splitting each layer into horizontally homogeneous regions (Shonk and Hogan, 2008). Thus, assuming we have two regions in each layer (i.e. cloudy and clear) we must solve for a clear and cloudy flux in each layer in order to obtain total up and downwelling fluxes.
The McICA scheme requires information about subgrid cloud. We use a stochastic cloud generator that implements the approach described by Raisanen et al. (2004). This generates 100 subcolumns, which may or may not contain cloud, once per radiative time step, so the same set of subcolumns can be sampled in both the SW and the LW. Moreover it is called independently of the radiation scheme and hence the cloud subcolumns are available for use in other parametrization schemes, such as the precipitation scheme, if required.
We have extended the generator to include a representation of the exponential-random-overlap approximation suggested by Hogan and Illingworth (2000), in addition to the separate exponential- and random-overlap parametrizations already available. Here contiguous clouds are overlapped according to the exponential parametrization while non-contiguous clouds are randomly overlapped. In addition, we redefined the decorrelation length-scale of Hogan and Illingworth (2000) in terms of pressure, as a pragmatic measure to match more closely the vertical coordinates used within the radiation scheme.
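The overlap behaviour described above can be illustrated with a highly simplified generator sketch. The names and logic here are illustrative only; the MetUM uses the full scheme of Raisanen et al. (2004), which also assigns water-content values to the cloudy subcolumns.

```python
import numpy as np

def generate_subcolumns(cloud_frac, dp, p_decorr, n_sub, rng):
    """Simplified exponential-random overlap subcolumn generation.
    cloud_frac[k] is the cloud fraction of model layer k (top down),
    dp[k] the pressure separation from the layer above and p_decorr the
    decorrelation pressure scale.  A subcolumn is cloudy in layer k where
    its rank variable x exceeds 1 - cloud_frac[k]; the rank is retained
    between contiguous cloudy layers with probability exp(-dp/p_decorr)
    (exponential overlap) and redrawn where the cloud profile is
    non-contiguous (random overlap)."""
    n_lev = len(cloud_frac)
    cloudy = np.zeros((n_sub, n_lev), dtype=bool)
    for i in range(n_sub):
        x = 0.0
        for k in range(n_lev):
            if k > 0 and cloud_frac[k - 1] > 0.0:
                # layer above contains cloud: exponentially correlated rank
                alpha = np.exp(-dp[k] / p_decorr)
                if rng.random() >= alpha:
                    x = rng.random()  # decorrelated draw
            else:
                x = rng.random()      # top layer or non-contiguous: random
            cloudy[i, k] = x > 1.0 - cloud_frac[k]
    return cloudy
```

Calling the generator once per radiative time step, as described above, makes the same set of subcolumns available to the SW and LW calculations and to other parametrization schemes.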
For our implementation of McICA, we allocate a distinct cloud subcolumn to each k-term. Thus cloud optical properties must be calculated once per k-term rather than once per band as in the plane-parallel case. Moreover, each subcolumn is either overcast or clear, so we only need to solve for a single set of fluxes, which is significantly less computationally expensive.
The computational cost of the McICA method depends on the chosen numbers for n in Eq. (2). Clearly, as the values of n increase more monochromatic calculations are required, so the cost of the code increases. Consider a McICA calculation that requires the same number of monochromatic calculations as the plane-parallel calculation (so in Eq. (2) n(j) = 1 for all j). As we have the same number of monochromatic calculations, but only require a homogeneous solver, calculating fluxes once the optical properties have been calculated is computationally cheaper. However, calculating the optical properties is more expensive, as the cloud contributions must be calculated once per k-term rather than once per band. Moreover we must account for the further cost of generating the cloud subcolumns. Thus the total cost of such a McICA calculation is comparable to that of the plane-parallel calculation.
3. Evaluation and optimization of noise
In order to study whether noise has an impact, a mechanism for controlling the level of noise is necessary. Thus in this section we consider methods for optimally reducing the magnitude of McICA noise.
3.1. Methods for reducing noise
Techniques for efficiently reducing McICA noise were first considered by Raisanen and Barker (2004). Two methods were introduced: optimizing the spatial sampling and optimizing the spectral sampling.
The method of optimizing the spatial sampling consists of splitting the flux calculation into clear and cloudy parts and restricting the random sampling of subcolumns to the cloudy part of the calculation. Thus Eq. (2) becomes

\[
F = (1 - C_{\mathrm{tot}}) \sum_{j=1}^{K} f^{\mathrm{clear}}_{j} + C_{\mathrm{tot}} \sum_{j=1}^{K} \frac{1}{n(j)} \sum_{i=1}^{n(j)} f^{\mathrm{cloudy}}_{\mathrm{rand}(i,j),\,j}, \qquad (3)
\]

where C_tot is the total cloud cover in the profile, f_j^clear is the clear-sky flux calculated for the jth k-term and f_{rand(i,j),j}^cloudy is the flux calculated for the jth k-term and a randomly chosen cloudy subcolumn. Clear-sky fluxes are often determined for diagnostic purposes, in which case this optimal spatial sampling is no more expensive than the basic calculation as given by Eq. (2).
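A sketch of the optimally spatially sampled calculation follows, with the clear-sky part computed exactly and random sampling restricted to the cloudy subcolumns. The array shapes and names are assumptions for illustration.

```python
import numpy as np

def mcica_flux_spatial(f_clear, f_cloudy, c_tot, n_samples, rng):
    """Optimal spatial sampling: the clear-sky flux f_clear[j] for each
    k-term is used exactly, weighted by (1 - c_tot); only the cloudy part,
    weighted by the total cloud cover c_tot, is sampled randomly.
    f_cloudy has shape (number of cloudy subcolumns, number of k-terms)."""
    n_cloudy, n_quad = f_cloudy.shape
    flux = (1.0 - c_tot) * f_clear.sum()
    for j in range(n_quad):
        picks = rng.integers(0, n_cloudy, size=n_samples[j])
        flux += c_tot * f_cloudy[picks, j].mean()
    return flux
```

Because the clear part carries no sampling noise, all of the random error is confined to the cloudy fraction of the column.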
The method of optimizing the spectral sampling is based on the fact that the various k-terms in a spectral file respond differently to changes in cloud (e.g. Manners et al., 2009). The idea is to sample more cloudy subcolumns with the k-terms that contribute most to the radiative effects of cloud. Thus in Eq. (3) higher values of n are used for the k-terms that are more responsive to cloud. Hence a method is required to choose how many subcolumns each k-term should sample. Note that, as described in section 2.2, in the MetUM cloud subcolumns are generated independently of the radiative transfer scheme. Thus, we are taking additional samples from an existing pool of subcolumns.
We developed a simple algorithm that can be run once offline to choose how many subcolumns each k-term should sample. The aim of our algorithm is to identify those k-terms that contribute most to the changes in fluxes that occur for a change in cloud water content. Several factors contribute to the magnitude of this change in flux: the weight of the k-term, which represents the proportion of the total flux corresponding to the k-term, the atmospheric clear-sky transmission and the difference in atmospheric transmission for the two different cloud water contents. The importance of the k-term increases as the magnitude of each of these factors increases. As the weight increases, the proportion of the total flux increases. Similarly, as the clear-sky transmission increases there is less clear-sky extinction and a greater proportion of the flux is available to be extinguished by the cloud. A larger difference in cloud transmission corresponds to greater sensitivity to the actual value of the cloud condensate. Thus the following equation gives the relative ‘importance’ ι of each k-term:

\[
\iota(j) = \frac{w(j)\, t_{\mathrm{g}}(j) \left\{ t_{\mathrm{c}}^{\mathrm{thin}}(j) - t_{\mathrm{c}}^{\mathrm{thick}}(j) \right\}}{n(j)}, \qquad (4)
\]

where n(j) is the number of subcolumns sampled by the jth k-term, w(j) is the weight of the k-term given by Eq. (5), t_g(j) is the atmospheric gaseous transmission for that k-term calculated as in Eq. (6), t_c^thin(j) is the total atmospheric transmission for optically thin cloud as given by Eq. (7) and t_c^thick(j) is the total atmospheric transmission for optically thick cloud, calculated as in Eq. (8).
Here

\[
w(j) = w_{\mathrm{b}}(j)\, w_{\mathrm{f}}(j), \qquad (5)
\]

where w_b(j) is the weight of the k-term as a fraction of the band and w_f(j) is the weight of the band as a fraction of the flux, calculated using the solar spectrum in the SW and a Planck function in the LW;

\[
t_{\mathrm{g}}(j) = \exp\{-k_{\mathrm{esft}}(j)\, u_{\mathrm{g}}\}, \qquad (6)
\]

where k_esft(j) is the absorption coefficient of the gas for the k-term and u_g is the integrated column amount of the gas; and

\[
t_{\mathrm{c}}^{\mathrm{thin}}(j) = t_{\mathrm{g}}(j) \exp\{-\epsilon(j)\, u_{1}\}, \qquad (7)
\]

\[
t_{\mathrm{c}}^{\mathrm{thick}}(j) = t_{\mathrm{g}}(j) \exp\{-\epsilon(j)\, u_{2}\}, \qquad (8)
\]

where ϵ(j) is either the total extinction coefficient of the cloudy component or its absorption coefficient, depending on whether the aim is to minimize surface-flux or heating-rate errors, u_1 = 0.002 kg m−2 is the integrated column amount of condensate for an optically thin cloud and u_2 = 0.2 kg m−2 that for an optically thick cloud.
We include three gases in this calculation: water vapour, carbon dioxide and ozone, with integrated column amounts of 25, 5 and 0.008 kg m−2 respectively. Each k-term represents absorption by one of these gases. The particular choices of condensate values above are rather arbitrary, but allow us to estimate the change in transmission due to changes in cloud thickness, as opposed to whether or not cloud is present. This prevents us from wasting samples on k-terms that respond strongly to cloud but quickly become saturated.
Initially, n(j) = 1 for all k-terms. The algorithm then consists of the following steps:

1. calculate the values of ι(j) for all k-terms;
2. allocate an additional sample to the k-term with the largest value of ι;
3. add 1 to the value of n(j) for the k-term that has been allocated an additional sample;
4. if there are further subcolumn samples to allocate, return to step 1.
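The steps above can be sketched as follows, assuming the ‘importance’ takes the multiplicative form described in the text, divided by n(j) so that a k-term's importance falls as it receives samples. The exact functional form is an assumption consistent with the surrounding description, with inputs as in Eqs (5)–(8).

```python
import numpy as np

def allocate_samples(w, t_g, eps, u1, u2, n_extra):
    """Greedily allocate n_extra additional cloudy-subcolumn samples.
    w: k-term weights; t_g: clear-sky gaseous transmissions; eps: cloud
    extinction (or absorption) coefficients; u1, u2: thin/thick condensate
    amounts.  Returns the number of samples n(j) for each k-term."""
    t_thin = t_g * np.exp(-eps * u1)     # total transmission, thin cloud
    t_thick = t_g * np.exp(-eps * u2)    # total transmission, thick cloud
    base = w * t_g * (t_thin - t_thick)  # importance numerator
    n = np.ones_like(w)                  # every k-term starts with one sample
    for _ in range(n_extra):
        j = np.argmax(base / n)          # currently most 'important' k-term
        n[j] += 1                        # allocate it one more sample
    return n.astype(int)
```

The algorithm is run once offline; the resulting n(j) values are then fixed for the model run.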
By assigning each additional sample to the most ‘important’ k-term, rather than the k-term that will have its importance reduced most, we minimize the individual importance of the k-terms rather than the sum of their importances. Minimizing the individual importance results in more samples being assigned to fewer k-terms. Moreover, results showed that this leads to lower heating-rate errors than when the sum is minimized.
As mentioned above, we suggest that ϵ(j) is defined as the total extinction coefficient of the cloudy component if one wishes to minimize flux errors and as the absorption coefficient if one wishes to minimize heating-rate errors. In the SW, both heating rates and surface flux are significant, so we assigned subcolumns to minimize heating rates and fluxes alternately. In the LW, there is far more absorption than scattering, so we found that we obtained the same values of n(j) irrespective of whether we used the total cloud extinction or the absorption.
Raisanen and Barker (2004) use an alternative method to allocate subcolumns to k-terms. This algorithm consists of calculating the global mean cloud radiative effect (CRE) for each k-term, either in terms of surface flux or heating rates. Further subcolumn samples are then allocated one by one, each going to the k-term whose CRE-related error is reduced most by allocating it an additional subcolumn. We compare the two algorithms in the case-study described in the following section.
3.2. Test set-up and results
The magnitude of noise associated with the McICA method was evaluated using the offline version of the Edwards–Slingo code, together with the spectral files used for both the Hadley Centre Global Environmental Model (HadGEM) climate model and the global forecast model. The SW file contains six spectral bands and 20 k-terms for the major gases, while the LW file contains nine spectral bands with a total of 33 k-terms for the major gases.
We shall discuss heating rate and SW surface-flux errors for two different versions of McICA. In the single-sampling version of McICA, optimal spatial sampling is applied but no further additional cloud subcolumns are used. In the optimal version of McICA both optimal spatial and optimal spectral sampling are applied. Here we distribute 16 additional subcolumns in the SW and 12 in the LW using the algorithm outlined above (the effect of changing these numbers is discussed later in the article).
As mentioned in the previous section, the choices of gas and integrated cloud water amounts used in the algorithm were somewhat arbitrary. In the SW, for this particular spectral file, the algorithm is rather insensitive to the particular integrated column amounts of gas used; doubling or halving the amount of any of the gases has no effect. The algorithm is more sensitive to the cloud amounts, which of course are more variable. Nevertheless, modifying either of the cloud amounts by a factor of 10 only changes the distribution of at most three (out of 16) subcolumns. In contrast, in the LW the algorithm is more sensitive to the gas amounts than the cloud amounts, but is relatively insensitive to both gas and cloud amounts and the temperature from which the Planckian is calculated. While the algorithm showed little sensitivity to the particular values used for the temperature, gas and cloud amounts for the particular spectral files tested, it should be noted that this may not be the case for significantly different spectral decompositions.
The test cases were nine 100-column strips taken from various cloud-resolving model (CRM) simulations. Properties of these cases are given in Table I. Each of these strips is considered to be representative of a single GCM grid box, divided into 100 subcolumns. For each case, the full ICA calculation was applied to obtain benchmark fluxes and heating rates. For each of the versions of McICA, for each CRM case, 1000 different McICA calculations were performed, with a different random assignment of subcolumns to k-terms for each calculation. For each of the 1000 McICA calculations, differences relative to the full ICA calculation were determined. The absolute values of these errors were then averaged, resulting in mean absolute heating rate and SW surface-flux errors for each cloud case.
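The noise statistics described above can be sketched as follows, simplified to one randomly chosen subcolumn per k-term and a synthetic flux array; the real tests use the offline Edwards–Slingo code.

```python
import numpy as np

def mean_abs_mcica_error(f, n_draws, rng):
    """Mean absolute McICA error relative to the full ICA benchmark over
    n_draws independent random assignments of subcolumns to k-terms.
    f[i, j] is the flux for subcolumn i and k-term j."""
    n_sub, n_quad = f.shape
    benchmark = f.sum(axis=1).mean()  # full ICA flux
    errs = np.empty(n_draws)
    for d in range(n_draws):
        # one random subcolumn per k-term for this McICA realization
        picks = rng.integers(0, n_sub, size=n_quad)
        errs[d] = abs(f[picks, np.arange(n_quad)].sum() - benchmark)
    return errs.mean()
```

With horizontally homogeneous cloud the sampling error vanishes; the noise grows with the water-content variability across subcolumns.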
Table I. Properties of the CRM cloud fields on which methods for reducing McICA noise are tested. For each case the table lists a description of the CRM simulation, the total cloud fraction and the fractional standard deviation of integrated total water content; the cases include three strips of convection over the west Pacific and three of convection over the Amazon.
In order to put the degree of noise into context, we also used mean water content and cloud-fraction profiles from each of the CRM fields to calculate fluxes using the homogeneous plane-parallel maximum-random-overlap treatment of cloud, which we shall hereafter refer to as PP-MRO. It is important to remember that the McICA values represent the expected magnitude of an unbiased error. On the other hand, the PP-MRO errors we compare them with are biases, due to the combination of two biased assumptions: maximum-random overlap and plane-parallel cloud.
Figure 1 shows the mean value of the SW downwelling surface-flux absolute errors for each of the CRM cloud fields. Note that although the magnitude of the noise varies significantly from case to case, the optimal method is always less noisy than the single-sampling method. Furthermore, the range of values for the optimal McICA experiment is significantly less than that of the PP-MRO experiment. It should be noted that the PP-MRO errors depend on the particular combination of maximum-random and plane-parallel errors. The McICA errors, on the other hand, depend on the cloud water-content fractional standard deviation and the number of cloudy subcolumns available to sample.
Table II shows the mean across all cloud cases for the SW surface-flux errors shown in Figure 1. Also shown are the equivalent SW, LW and net heating-rate errors. These mean absolute heating-rate errors were calculated as follows. For each cloud case, the absolute heating-rate error for each of the rows between the cloud top and the cloud base was diagnosed. These absolute heating-rate errors were then weighted by the normalized mass of the layer. Finally, the means of these weighted absolute heating-rate errors were calculated.
Table II. Mean absolute surface-flux and heating-rate errors for each of the methods of representing clouds: downwelling SW surface flux (W m−2) and SW, LW and net heating rates (K day−1). Rows include the single-sampling and optimal versions of McICA, PP-MRO and the Raisanen and Barker (2004) allocation method.
Note that unlike for the SW surface-flux errors, both SW and LW (and consequently net) mean absolute weighted heating-rate errors are smaller for the PP-MRO experiment than for either of the McICA experiments. This is because the surface fluxes depend on the water-content values in the entire column. Thus, assuming that the distributions of water content in distinct layers are not completely correlated, there is some cancellation of water-content sampling errors within each subcolumn. The heating rates for each layer depend only on the change in flux for that layer and thus in their case no such cancellation of errors occurs.
We repeated the experiment described above using the alternative algorithm suggested by Raisanen and Barker (2004), tuned using the mean cloud radiative effect for each k-term from the CRM cases. The results are included in Table II. As we used the same CRM cases on which we test the algorithm to tune the algorithm, the magnitude of the errors is probably somewhat underestimated. Nevertheless, the results are not significantly different from those obtained using the simpler and quicker ‘importance’ algorithm.
For the initial experiments we have used an arbitrary number of additional subcolumns for optimal sampling (16 SW and 12 LW). Thus we performed further experiments to investigate how the magnitude of the noise depends on the number of additional subcolumns used. These experiments consisted of repeating the tests described above with different numbers of additional subcolumns, assigned using the ‘importance’ algorithm. Once we had derived mean absolute errors for each cloud case, we then averaged these values to get a single value for each number of subcolumns. Figures 2 and 3 show the result of increasing the number of additional subcolumns for heating rates and SW surface fluxes respectively. The magnitude of the noise decreases sharply with the first few additional subcolumns and less sharply as further subcolumns are added. The number of additional subcolumns applied in an operational model will depend on the accuracy and cost constraints of that particular model. Also shown in Figure 3 is the corresponding error calculated when two and three subcolumns are designated to each k-term. This line shows how the error would decrease if the number of subcolumns sampled by each k-term were chosen randomly.
3.3. Combining SW and LW Errors
Generally, individual SW and LW heating rates are significant only in the context of their contribution to the net heating rates. Comparing the net heating rates in Table II, we see that for the McICA experiments the mean absolute weighted net heating-rate error is larger than either the SW or LW values, which is not the case for PP-MRO. This result can be explained as follows. As a general rule, clouds have a warming effect in the SW and a cooling effect in the LW. Thus, in both the full ICA and plane-parallel methods, the net cloud heating is smaller than either the SW or LW components. In particular, this leads to a cancellation of errors in the plane-parallel run. In the McICA experiments described in the previous section, the sampling of subcolumns in the SW is independent of the sampling in the LW. Thus it is perfectly feasible for the dominant k-terms in the SW to be randomly assigned subcolumns that are relatively optically thin, while the dominant k-terms in the LW are assigned relatively optically thick subcolumns (or vice versa). In this case the errors are in the same direction and the net (SW+LW) error is larger than either of the constituent SW and LW errors.
This combination of errors of the same sign increases the magnitude of the mean errors and can lead to very large deviations. Table III shows the percentage of errors exceeding a given threshold, calculated from the cloudy layers in every case. In a GCM simulation, calculating a heating rate with large errors in the same direction over successive time steps is unlikely. However, due to the long time steps generally utilized in radiation schemes, errors persist for long enough to lead to quite a large erroneous temperature change. For example, in the HadGEM climate model, which has radiation time steps of three hours, a radiative heating error of 30 K day−1 would lead to erroneous heating of almost 4 K.
Table III. The percentage of mass-weighted heating-rate errors exceeding the given magnitude, calculated for layers between cloud top and base, in every simulation. Rows include the single-sampling and optimal methods with subcolumn reordering (‘Single + reordering’ and ‘Optimal + reordering’).
In order to reduce the mean net heating-rate errors and the likelihood of very large errors occurring, we introduced an extension to the method of distributing subcolumns to k-terms. For the single-sampling case, we rank the k-terms in order of ‘importance’ separately for the LW and SW. We then assign the same cloud subcolumn to LW and SW k-terms of the same rank. This ensures that the dominant k-term calculations in both the SW and LW are performed for the same cloud. When the list of either SW or LW k-terms is exhausted, some k-terms will remain unmatched, but these are the least important and will have the smallest effect on the total error. For the optimal-sampling case, each k-term appears in the ranking once for each subcolumn sample allocated to it, but otherwise the assignment of subcolumns proceeds as for the single-sampling case.
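The rank-matching step can be sketched as follows. This is a hypothetical helper; the importance values and subcolumn draws are illustrative inputs, and unmatched k-terms simply keep their original draws.

```python
import numpy as np

def match_lw_to_sw(iota_sw, iota_lw, subcols_sw, subcols_lw):
    """Reassign LW subcolumn draws so that SW and LW k-terms of equal
    'importance' rank sample the same cloudy subcolumn."""
    rank_sw = np.argsort(-np.asarray(iota_sw))  # most important first
    rank_lw = np.argsort(-np.asarray(iota_lw))
    matched = np.array(subcols_lw, copy=True)
    for r in range(min(len(rank_sw), len(rank_lw))):
        # LW k-term of rank r samples the SW rank-r k-term's subcolumn
        matched[rank_lw[r]] = subcols_sw[rank_sw[r]]
    return matched
```

Because this only permutes which subcolumns the LW k-terms see, it leaves the individual SW and LW noise unchanged while correlating their errors.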
The rearrangement of the subcolumns has no effect on the magnitude of the noise for the individual SW and LW calculations, but reduces the magnitude of the noise for the net (SW+LW) heating rates for each CRM case, as shown in Figure 4. For comparison with Table II, the mean net absolute heating-rate error across all cases is reduced to 2.8 K day−1 for the single-sampling version of McICA and 1.2 K day−1 for the optimal-sampling version of McICA. These reductions in mean net heating-rate errors are significant. Moreover, the rearrangement comes at no additional computational cost and can be used in conjunction with any method for reducing noise that involves estimating the importance of the k-terms. The rearrangement also reduces the errors in total net surface and TOA fluxes.
Some of the reduction in the total error shown in Figure 4 is simply due to the fact that the same set of subcolumns is used in both the SW and the LW, rather than the actual matching of terms of equal rank. Further experiments were conducted to separate these effects. For the single-sampling experiments, virtually all of the reduction in error was due to the reordering, while for the optimal experiment most of the reduction in error was due to sampling the same set of subcolumns in both regions. This is because the distribution of ‘importance’ amongst the subcolumns is much smoother by design in the optimal-sampling experiments and matching the terms in exact rank order is less important.
4. The effect of McICA noise on an NWP model
In this section we study the effect of McICA noise on a global version of the MetUM. We compare the errors due to noise with those due to the combination of the plane-parallel and maximum-random-overlap assumptions.
4.1. Model set-up
The case-study adopted for test purposes is a five-day simulation, beginning on 16 December 2002 at 0900 UTC, as used by Manners et al. (2009). The model considered was a reduced-resolution configuration of the operational global forecast model. This model has 50 vertical levels of varying resolution and is divided into 96 longitudes and 73 latitudes. The model dynamics use a semi-implicit time-integration scheme with a time step of 15 minutes. The radiation scheme is called every 12 time steps (i.e. every three hours), as for the full operational resolution. The spectral files are the same as those used in the offline calculations described in section 3.2. A fixed distribution of sea-surface temperatures was used.
Although the resolution of the test model is significantly lower than that of operational NWP models, it was necessary to run at this resolution in order to perform the benchmark ICA calculations described below. Increasing resolution will mean that more water-content variance is resolved, and noise will be injected at smaller scales. However, cloud water-content variance remains significant at higher resolutions (Hogan and Illingworth, 2003) and grid box-scale results remain important for forecasting applications even at resolutions of 25 km. Thus we would expect our results to remain applicable at higher resolutions.
Benchmark fluxes and heating rates were derived by fully sampling the generated cloud with every k-term (i.e. using the full ICA). Three different versions of McICA were tested: the noisiest using the single-sampling method, a second using the single-sampling method together with the reordering of subcolumns and the third using the optimal method for sampling subcolumns together with reordering. As in the offline experiments, a PP-MRO simulation was performed that uses the plane-parallel and maximum-random-overlap approximations.
For the ICA and McICA experiments, subgrid-scale cloud profiles were generated using the generator described in section 2. The exponential-random-overlap method was used, with a global cloud decorrelation scale of 100 hPa and a condensate decorrelation scale of 50 hPa. In-cloud water condensate followed a gamma distribution, with a fractional standard deviation of 0.75. The sensitivity of the results to these parameters is discussed below.
Although exponential-random-overlap can be represented without using the McICA scheme, the combination of the exponential-random-overlap and plane-parallel assumptions leads to larger errors than the combination of the maximum-random-overlap and plane-parallel assumptions (Shonk and Hogan, 2010). This is because the maximum-random-overlap and plane-parallel approximations have biases in opposite directions. Thus, when combined, there is some cancellation of errors. For this reason, we compare the exponential-random McICA results with maximum-random plane-parallel results.
The following results mainly concern the model 1.5 m temperature. This near-surface temperature is an important forecast variable. Moreover, we expect it to respond strongly to radiative changes, due to its dependence on surface SW fluxes. Precipitation was also considered, but was found to be insensitive to the radiative changes.
For each experiment a set of instantaneous absolute 1.5 m temperature errors, relative to the full ICA value, was calculated every three hours. The mean of these absolute errors was then calculated and is shown in Figure 5. As the simulation used a fixed sea-surface temperature, we considered land and sea-ice points only.
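The error measure can be stated compactly as follows (a sketch; the restriction to land and sea-ice points is represented here by a boolean mask, which is our own device):

```python
def mean_absolute_error(t_exp, t_benchmark, land_or_seaice):
    """Mean of |T_experiment - T_benchmark| over land and sea-ice points
    only, as plotted (every three hours) in Figure 5."""
    errs = [abs(a - b)
            for a, b, keep in zip(t_exp, t_benchmark, land_or_seaice)
            if keep]
    return sum(errs) / len(errs)
```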
At the start of the forecast, the lowest errors are obtained using the optimal McICA experiment, with slightly larger errors for PP-MRO and larger errors again for the two single-sampling experiments. The relative magnitudes of these errors are similar to the relative magnitudes of the SW surface flux absolute errors shown in Figure 1, which, given the dependence of near-surface temperatures on the SW surface flux, might be expected. This shows that our results for the CRM cases hold globally.
As the forecast progresses, the errors grow due to radiative errors propagating into the rest of the model, which cause the experiments to diverge. While the errors in each of the McICA experiments appear to grow at roughly the same rate, the mean absolute error for the PP-MRO run grows more quickly. Thus after 48 hours the PP-MRO errors are as large as the single-sampling errors and by the fifth day of the experiment the PP-MRO errors are larger than any of the McICA errors. This can be explained by the fact that the PP-MRO errors are biased.
Errors in the single-sampling experiments are consistently lower when subcolumns are reordered, indicating that noise in the net flux values contributes to the surface temperature errors.
Figure 6 shows the distribution of the 1.5 m temperature errors, three hours into the forecast and on the final time step of the forecast, for the optimal McICA and PP-MRO experiments. These errors were calculated by subtracting the temperature values in the benchmark from their equivalent values in the experiments. The vertical scale is logarithmic and, in order to avoid discontinuities, we added 1 to all frequencies. Moreover, although only errors of magnitude up to 4 K are shown, larger errors were obtained; these occur very infrequently.
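The plotted distributions amount to a binned count with 1 added to each frequency before taking logarithms, which avoids log(0) for empty bins. A sketch (the bin width is illustrative, not the one used in Figure 6):

```python
import math

def log_frequencies(errors, bin_width=0.25, limit=4.0):
    """Bin temperature errors in [-limit, limit) and return log10(count + 1)
    per bin, mirroring the logarithmic vertical scale of Figure 6."""
    nbins = int(2 * limit / bin_width)
    counts = [0] * nbins
    for e in errors:
        if -limit <= e < limit:
            counts[int((e + limit) / bin_width)] += 1
    return [math.log10(c + 1) for c in counts]
```

Empty bins then map to zero rather than to minus infinity, so the curves remain continuous.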
As the radiative transfer scheme is called every three hours, the temperature errors at noon on the first day of the forecast (the inner two distributions) can be traced to radiative fluxes and heating rates on the first model time step, where the radiative transfer calculation was performed for identical atmospheres. Subsequent temperature errors are contaminated by radiative feedback in the simulations.
The flux errors introduced by the McICA scheme are unbiased, so there is no bias in the 1.5 m temperature errors, as Figure 6 shows. The mean McICA 1.5 m temperature errors at the start and end of the forecast are −0.007 and −0.008 K respectively. In contrast, the PP-MRO temperature errors are biased; on average the PP-MRO temperatures are too low, corresponding to a tendency to overestimate the SW cloud extinction. This is because the overestimation of cloud forcing due to the plane-parallel approximation is larger on average than the underestimation due to the maximum-random-overlap assumption (Shonk and Hogan, 2010; Barker et al., 1999). As a result, the corresponding mean temperature errors for the PP-MRO experiment are −0.022 and −0.125 K respectively.
The results presented so far depend on the generated cloud and thus in particular on the input estimates of fractional standard deviation and decorrelation scale. We tested the sensitivity of the results to these parameters by repeating the above experiments using a range of parameter values. Sensitivity to fractional standard deviation was examined by conducting experiments with fractional standard deviations of 0.5 and 1.0, while sensitivity to decorrelation scales was studied in further simulations with decorrelation scales of 50 and 200 hPa. Table IV presents the results from these experiments. For conciseness, we have calculated mean values in time.
Table IV. Absolute 1.5 m temperature errors for each of the methods of treating cloud and a selection of decorrelation scales and fractional standard deviations. For conciseness, these means are calculated by averaging across all model points at all times.
From Table IV we can see that the extent of horizontal inhomogeneity and vertical overlap in the generated cloud has little impact on the magnitude of the McICA 1.5 m temperature errors. On the other hand, the PP-MRO errors are sensitive to the choice of decorrelation scales and fractional standard deviation used in the benchmark. If the input parameters are considered individually, the PP-MRO error will tend to zero as the fractional standard deviation tends to zero and the decorrelation scales tend to infinity. However, the pattern is complicated by the fact that the inhomogeneity and overlap errors are in opposite directions. In any case, although the PP-MRO errors depend on the input parameters to the cloud generator, for all choices of parameters the optimal McICA experiment results in a smaller mean absolute 1.5 m temperature error than PP-MRO.
This article has considered the effect of McICA noise on NWP simulations. We used CRM cloud fields and an offline version of the radiative transfer scheme to examine the magnitude of McICA noise and suggested methods for efficiently reducing the magnitude of this noise, including a mechanism for reducing the net error when SW and LW errors are combined. We tested the effect of noise on a low-resolution global NWP simulation, focusing in particular on near-surface temperature. We found that a simple implementation of McICA gives worse forecasts of near-surface temperature than the widely used combination of the plane-parallel and maximum-random-overlap assumptions. However, when noise was reduced using the methods we suggest, the temperature forecasts were an improvement over those from the plane-parallel, maximum-random-overlap simulation.
While we have shown that the McICA scheme can improve forecasts of near-surface temperature in comparison with a full ICA benchmark, it remains to be shown whether it gives an improvement compared with observations. This will depend on the ability of the cloud generator to provide realistic cloud fields, which in turn depends on the input values of decorrelation scale and fractional standard deviation. Thus future work will consider the values of these input parameters, with the aim of refining the generator, thereby improving the model across all time-scales from NWP through monthly and seasonal to climate.
We thank Jason Cole for providing the original code for the cloud generator. Thanks also to Cyril Morcrette and Jonathan Wilkinson for useful comments and suggestions. Finally, we thank both anonymous reviewers for their positive and constructive comments.