Modeled and observed clouds during Surface Heat Budget of the Arctic Ocean (SHEBA)



[1] Observed monthly mean cloud cover from the SHEBA site is found to differ substantially during winter depending on the cloud observing instrument. This makes it difficult for climate modelers to evaluate modeled clouds and improve parameterizations. Many instruments and human observers cannot properly detect the thinnest clouds and count them as clear sky instead, resulting in a cloud cover that is biased low. To study the impact of the difficulties in the detection of thin clouds, we compute cloud cover in our model with a filter that removes the thinnest clouds. Optical thickness is used as a proxy to identify thin clouds as we are mainly interested in the impact of clouds on radiation. With the results from a regional climate model simulation of the Arctic, we can reproduce the large variability in wintertime cloud cover between instruments when assuming different cloud detection thresholds. During winter a large fraction of all clouds are optically thin, which causes the large sensitivity to filtering by optical thickness. During summer, most clouds are far above the optical thickness threshold and filtering has no effect. A fair comparison between observed and modeled cloud cover should account for thin clouds that may be present in models but absent in the observational data set. Difficulties with the proper identification of clouds and clear sky also have an effect on cloud radiative forcing. The derived clear-sky longwave flux at the surface can vary by several W m−2 depending on the lower limit for the optical thickness of clouds. This impacts the “observed” LW cloud radiative forcing and suggests great care is needed when using satellite-derived cloud radiative forcing for model development.

1. Introduction

[2] Clouds are an important regulator for the climate, and it is a fundamental task for every climate model to reproduce a realistic cloud cover. To validate their models, the climate modeling community relies on cloud observations from space or from the surface. It is, however, not trivial to reliably observe clouds, and climate modelers encounter difficulties when deciding what data set or what observation method to choose for the comparison with the model results. Different estimates of cloudiness from surface or satellite instruments reveal discrepancies that are not always easy to understand [e.g., Rossow et al., 1993; Mokhov and Schlesinger, 1994; Schweiger et al., 1999]. Given the uncertainty of observed cloudiness, climate modelers are faced with the problem that even small errors in cloudiness may translate into considerable errors in the energy budget [e.g., Slingo, 1990; Arking, 1991; Rinke et al., 1997]. A better understanding of the differences between modeled and retrieved cloud cover is of great importance to improve the description of clouds in climate models.

[3] The “Global Implications of Arctic Climate Processes and Feedbacks Project” (GLIMPSE) aims at the development of a coupled regional climate model for the Arctic. As the energy budget of the Arctic is largely determined by radiation [Duynkerke and de Roode, 2001] and the main modulator of radiation is cloud [Randall et al., 1998], we need a good representation of clouds in our models. A profound problem with the Arctic is the absence of observation sites; there are no field stations that routinely report clouds for longer periods. Recently, better field observations have become available from various expeditions to the Arctic [e.g., Rohde, 1996] (see also Swedish Polar Research Secretariat, International Arctic Ocean Expedition, 2001). Most notable of these is the SHEBA experiment [Uttal et al., 2002], during which a Canadian Coast Guard vessel was frozen into the pack ice north of Alaska and drifted with the ice for 1 year, collecting information about the state of the ocean, sea ice, and atmosphere. Another source of data is polar-orbiting satellites, whose observations have become available over the last few decades.

[4] The difficulties when comparing cloud cover from different instruments are not new; in particular the differences found between satellite and ground-based retrieval of cloud cover have been investigated thoroughly before. Goodman and Henderson-Sellers [1988] summarize the difficulties with the detection of clouds and point out that there is a tendency to assume that ground-based observations of clouds are “correct” but may differ from the satellite-derived cloud cover because of different viewing geometries. The Arctic presents a number of additional obstacles: Cold snow and sea-ice covered surfaces are difficult to distinguish from clouds for satellite instruments, and the absence of sunlight during several months renders remote sensing methods that rely on visible light impossible. Infrared methods are better suited to separate clouds from sea ice, but thresholds for the detection of clouds need to be set carefully in the Arctic [Lubin and Morrow, 1998; Minnis et al., 2001]. On the other hand, cloud climatologies from surface stations in the Arctic suffer from large uncertainties since it is difficult for human observers to observe clouds in the prevailing harsh conditions at high latitudes. Curry et al. [1996] present an overview of cloud characteristics in the Arctic and find a broad agreement between different surface based cloud climatologies. However, a problem seems to be low-level ice crystal precipitation (“diamond dust”) that is generally not included in cloud reports from human observers but may be detected by instruments. Surface and satellite climatologies of Arctic cloud cover show significant differences [Schweiger and Key, 1992; Schweiger et al., 1999], and there is a tendency for satellite cloud cover to be lower than surface estimates in summer and higher in winter [Curry et al., 1996, and references therein].

[5] The recent SHEBA field campaign provides the opportunity to compare cloud observations from different sources. Several papers have been published that compare the skills of different sensors to retrieve microphysical properties of clouds [e.g., Minnis et al., 2001; Intrieri et al., 2002; Xiong et al., 2002]. For our model validation, though, we would like to have an observed estimate for the area cloud cover. A comparison of microphysical quantities would be more complicated, as the cloud phase or the hydrometeor size are even more uncertain than cloud cover in a climate model. Several cloud data sets are available to the community detailing cloud cover, cloud presence, or cloud fraction, but we do not know which one is most suitable for comparison with our model results. Figure 1 illustrates this difficulty, showing monthly mean cloud cover obtained from various instruments. There are large differences between the different measurements, especially during winter when the human observer at the surface (SYNOP) registers slightly more than 10% cloud cover while the lidar/radar (ETL) combination observes cloud during about 60% of the time. Note that the two methods are not directly comparable; the human observer measures the portion of the sky that is covered by clouds while the lidar/radar instrument tells us how often clouds have been above the instrument. Section 2.2 will further elucidate this difference.

Figure 1.

Observed cloud cover with different instruments: ETL combined lidar/radar, Atmospheric Polar Pathfinder (APP) retrieved with two methods, and SYNOP (human observer). See section 2 for details.

[6] In a recent comparison of TOVS Path-P derived cloud cover with data from the SHEBA site, Schweiger et al. [2002] find a fair correlation between surface and satellite observations for timescales larger than 5 days. However, TOVS cloud fraction is higher in winter than corresponding surface observations but matches lidar observations, supporting the notion that a surface observer may underestimate cloudiness in winter.

[7] From Figure 1 it is obvious that we face difficulties when deciding which data set to choose to judge how well our model simulates clouds in the Arctic. To start with, we will look closer at the various data sets and try to understand the differences between them. We suggest that different instruments have different sensitivities for the detection of clouds, based on a model calculation where cloud data are filtered using optical thickness thresholds. Depending on the threshold for when a cloud is counted as a cloud, we find large differences in the model-simulated cloud cover during wintertime, much in agreement with the differences found between the different instruments in Figure 1. These results indicate that cloud cover from the model cannot straightforwardly be compared against observational data sets. By modifying the model output, that is, by removing some of the thinnest clouds to reproduce the detection limit of a given instrument, we are able to make a fairer comparison between modeled clouds and what a particular observer classifies as a cloud.

[8] Difficulties with the distinction between clear and cloudy sky also have an effect on the radiative fluxes that are reported as clear-sky averages. Separating clear-sky from all-sky radiation is an important but difficult task in assessing cloud radiative forcing. The erroneous classification of thin clouds as clear sky can directly lead to an underestimate of surface cloud radiative forcing by satellite data sets (e.g., ERBE). Tuning of model cloud radiative forcing toward these data may induce unwanted inaccuracies in the model's radiation budget. Here we will try to give an estimate of the sensitivity of the radiative fluxes to the difficulties associated with the separation of clear and cloudy sky columns in a regional climate model.

2. Observed Cloud Cover

2.1. Retrieval Methods

[9] For this study, we rely on publicly available data from the SHEBA campaign. There are two different sets of observations: time series of observations made from the ship (human observer and lidar/radar remote sensing) and time series of satellite observations derived from the high-resolution AVHRR sensor. Figure 1 shows the monthly mean cloud cover for the SHEBA year, which starts in September 1997 and ends 13 months later. Not all instruments cover the entire SHEBA year, though. As already described above, a large spread exists between the different retrieval methods during winter. To understand this spread, the characteristics of each retrieval method are described first.

2.1.1. APP

[10] The AVHRR Polar Pathfinder project (APP) [Fowler et al., 2002] provides twice-daily gridded and calibrated satellite channel data and derived parameters. All data are mapped onto a 5-km equal area grid. Here we use a smaller data set that consists of the APP data extracted along the drift track of the SHEBA station. Cloud cover is computed as the fraction of cloudy pixels in an 11-by-11 pixel array centered on the SHEBA site.

[11] The Cloud and Surface Parameter Retrieval System (CASPR) for Polar AVHRR provides two different retrieval methods for the distinction between cloudy and clear pixels; details of the two methods are described by Key [2002]. The single image method uses the information in a single satellite image and constructs a cloud mask based on typical cloud signatures in multiple spectral channels. The multiday method refines the detection of clouds by examining statistics of day-to-day changes in the spectral information. Both methods have been tuned to Arctic conditions and take into account ice-clouds and extended periods without sunlight.

[12] The two retrieval methods yield different monthly mean cloud cover despite the same AVHRR data at their source (Figure 1). Both methods agree on the shape of the annual cycle with lower cloud cover in winter and higher cloudiness during summer. The multiday method results in higher cloud cover throughout the entire year. The statistical treatment of radiance over several days allows this method to adjust the detection of clouds to changing conditions. The single image method relies on fixed or only seasonally varying thresholds that may be too restrictive. The largest difference is found in December when the multiday method yields more than twice as many clouds as the single image method. During spring and summer, the cloud cover difference is reduced to 10 to 20%.

2.1.2. SYNOP

[13] Traditional 6-hourly surface observations at the SHEBA site are available from the SHEBA Project Office. The regular SYNOP reports include cloud cover in octas, which we have used to compute the monthly mean cloud cover.

[14] The annual cycle is found to be similar to the other data sets, but the decrease of cloud cover during the winter is much more pronounced. It is well known that cloud cover is difficult to observe when it is dark [Hahn et al., 1995], and the observations during winter are associated with large uncertainties. The uncertainty of 5% during winter months explicitly cited in the README document of the SHEBA Project Office data set is undoubtedly too low, not least when comparing SYNOP against satellite observations of cloud cover.

2.1.3. ETL

[15] The NOAA Environmental Technology Laboratory (ETL) has developed a cloud detection system that combines lidar and radar [Intrieri et al., 2002]. The instruments were deployed at the SHEBA ship station and operated during the SHEBA year, although the lidar failed after 8 months. Both instruments measure the return of electromagnetic radiation that has been sent out in a narrow beam. The lidar is more sensitive to water clouds while the radar is more sensitive to ice clouds. In contrast to the other cloud observations, the lidar/radar combination measures the occurrence of clouds in a single spot over the SHEBA station, but it does so every 10 min. Contrary to human observers, the active lidar and radar sensors are unhindered by darkness, and therefore we may expect better sensitivity to thin clouds from these instruments (J. M. Intrieri, personal communication, 2003) compared to satellite sensors or human observers.

2.2. Cloud Area Cover Versus Cloud Frequency

[16] For historical reasons, cloud cover is reported as the fraction of the sky that is covered by clouds. Such a measure for cloudiness is also known as area fraction, and numerical models use it to describe clouds that fill only parts of a model column. Cloud cover according to this definition is observed by human observers (reported in SYNOP) or instruments that take a picture of the sky and estimate the fraction of clouds in the image (e.g., whole-sky imager). Imaging methods suffer from detection problems under difficult conditions, for example in darkness when the contrast between cloudy and clear sky is weak.

[17] Another set of instruments makes measurements in one or a few directions only. Examples of such instruments are lidar or radar that sample the atmosphere in a narrow pencil along one direction, mostly straight up. The response from such an instrument is either “clear” or “cloudy,” but not partial cloudiness. Under the assumption of homogeneous conditions and a constant wind field, the time-mean and space-mean become similar if the average is taken over sufficiently many observations (Taylor hypothesis). Hence we can average the time series of cloud presence over a sufficiently long time period to obtain an estimate for the areal cloud cover (see also discussion by Intrieri et al. [2002]).

[18] The question remains how long an averaging period is sufficient. We can provide an estimate for the averaging interval based on a comparison of two data sets from APP. There are two simultaneous cloud products available from APP: a cloud mask that is 0 or 1 depending on whether the pixel (5 km by 5 km) over the SHEBA ship is cloudy, and the fraction of cloudy pixels in an 11-by-11 pixel (55 km by 55 km) array centered on the SHEBA station. We first divide the full 13 months into short intervals of a few days' length. In each interval we then average the time series of the cloud mask and compare the result against the time average of the cloud fraction. The differences from all intervals are squared, averaged, and the square root taken to obtain an RMS difference that is plotted as a function of the length of the averaging interval (Figure 2). Since there are only two retrievals each day, the averaging is sensitive to the start time of the integration periods; therefore we repeat the computation using 21 different offsets at the beginning of the 13-month period and look at the average of the RMS differences (solid line). The largest difference between discrete single-point measurements of cloud occurrence and the area fraction of cloud cover is found for short averaging intervals. The RMS difference between the two averaging methods levels off after about 15 days, indicating that the cloud occurrence averaging does not exactly reproduce the cloud fraction, but the difference becomes stationary. The second decrease of the RMS difference for averaging intervals longer than 35 days should be disregarded, as the number of averaging intervals within the 13-month period becomes too small (<10) and the statistical uncertainty increases dramatically. On the basis of the results shown in Figure 2, we conclude that averaging cloud occurrence over a month should provide reasonably stable estimates of the cloud cover with an error below 0.1.

Figure 2.

RMS difference of time-averaged cloud occurrence and time-averaged cloud fraction. Computation is based on twice daily retrieved cloud data from APP, with two different retrieval algorithms (single day and multi-day). Each cross or circle represents an RMS difference for a given start-time offset, and the solid line is the average of 21 different start times.

[19] We can use this result as a guideline when computing time averages of the cloud occurrence from the ETL lidar/radar. However, there are a few noteworthy differences between the satellite snapshots and the ETL system. Even in the reduced form that is available to us, the ETL system reports an observation every 10 min, unlike the APP product that is available only twice daily. A single pixel of the satellite image measures 5 km by 5 km, which is much larger than the beamwidth of the lidar or radar. Therefore we cannot expect the two cloud observation systems to be fully compatible, and the minimum averaging time that has been estimated for APP data can only provide a guideline for the averaging of the lidar/radar system.

3. Modeled Cloud Cover

3.1. RCA2

[20] RCA2 is a regional atmospheric climate model that has been developed within the SWECLIM project [Jones et al., 2004a]. For the Arctic Model Intercomparison Project (ARCMIP) [Curry and Lynch, 2002] we have run the model on a small domain (78 × 60 grid points) in the Arctic that encompasses the track of the SHEBA station. The horizontal resolution is 0.42 degrees on a rotated grid, or roughly 50 km. The model is forced with ERA-40 data at the lateral boundaries, and with SSM/I derived ice fraction and temperature at the bottom. The simulation extends over 13 months, starting 1 September 1997 and ending 1 October 1998. The small domain constrains the model sufficiently to allow an accurate simulation of the dynamical evolution of the atmosphere. This allows us to compare grid box values of cloud quantities over the SHEBA site directly with observations; differences can then be attributed to parameterization failures rather than mismatches in time and/or space. Details of the RCA2 setup for ARCMIP and an evaluation of its performance in the Arctic are presented by Jones et al. [2004b].

[21] For the model study of cloud detection thresholds, we extract a 3 × 3 grid point array centered over the SHEBA ship station from the RCA2 results. The model yields cloud fraction and cloud water at each of the 24 levels, which are then used to compute the column cloud cover (cov2d) and the optical thickness (τ) as described in the next subsection. Finally, we also extract the radiation flux at the top of the atmosphere and at the surface.

3.2. Cloud Fraction and Column Cloud Cover

[22] In an atmosphere model with high horizontal resolution, a cloud at any level either fills the grid box entirely, or the grid box is cloud free. Most numerical models (like RCA2) have a rather coarse resolution, and clouds fill a grid box only partially, which is usually expressed as cloudiness or cloud fraction (c). In RCA2, the amount of the grid box that is filled with cloud condensate is expressed as a function of the relative humidity (RH) in the grid box and a parameter (RHthresh) that sets the lower limit for the formation of clouds [Slingo, 1987],

$$c = \left[\max\!\left(0,\ \frac{\mathrm{RH} - \mathrm{RH}_{\mathrm{thresh}}}{1 - \mathrm{RH}_{\mathrm{thresh}}}\right)\right]^{2} \qquad (1)$$

No clouds are permitted in the grid box if RH is below the threshold value. More sophisticated schemes exist that also account for the amount of cloud water when computing the cloud fraction [e.g., Xu and Randall, 1996]. Note that cloud cover is diagnosed, but the amount of cloud condensate is a prognostic variable in the model.
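A minimal sketch of such a relative-humidity based cloud fraction diagnostic follows. The quadratic form shown in (1) and the threshold value of 0.85 are illustrative assumptions in the spirit of Slingo-type schemes, not the exact RCA2 settings.

```python
import numpy as np

def cloud_fraction(rh, rh_thresh=0.85):
    """Diagnostic cloud fraction from relative humidity (0..1).
    Below rh_thresh no cloud forms; at saturation the box is overcast.
    The quadratic form and the 0.85 threshold are illustrative assumptions."""
    x = (np.asarray(rh, dtype=float) - rh_thresh) / (1.0 - rh_thresh)
    return np.clip(x, 0.0, 1.0) ** 2
```

The clipping enforces the two limits stated in the text: no cloud below the threshold and full cover at saturation.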

[23] If a cloud only appears at one level of a model, the column cloud cover or cloud area (cov2d) is identical to the cloud fraction at the cloud level. If several levels are partly filled with clouds, cov2d becomes more complicated as the overlap between clouds at different levels needs to be accounted for. Several overlap assumptions have been suggested; see Barker et al. [2003] for an overview. The RCA2 model employs a type of maximum-random overlap,

$$\mathrm{cov2d} = 1 - (1 - c_{1}) \prod_{i=2}^{n_{\mathrm{lev}}} \frac{1 - \max(c_{i}, c_{i-1})}{1 - c_{i-1}} \qquad (2)$$

where ci is the cloud cover at level i, and the levels are numbered from top (i = 1) to bottom (i = nlev).
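A sketch of the maximum-random overlap computation: adjacent cloudy layers overlap maximally, while layers separated by clear air combine randomly. The function name is illustrative.

```python
def column_cloud_cover(c):
    """Maximum-random overlap: c lists cloud fractions from the top
    level (index 0) down to the bottom level."""
    clear = 1.0 - c[0]
    for i in range(1, len(c)):
        if c[i - 1] >= 1.0:            # an overcast layer covers the column
            return 1.0
        clear *= (1.0 - max(c[i], c[i - 1])) / (1.0 - c[i - 1])
    return 1.0 - clear

# Adjacent layers overlap maximally:
# column_cloud_cover([0.5, 0.5]) -> 0.5
# Layers separated by clear air combine randomly:
# column_cloud_cover([0.5, 0.0, 0.5]) -> 0.75
```

The two example columns illustrate why the overlap assumption matters: the same layer cloud fractions yield a column cover anywhere between 0.5 and 0.75 depending on their vertical arrangement.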

3.3. Optical Thickness

[24] To distinguish between clouds and clear sky, we will use the visible optical thickness (τ). We assume plane-parallel and homogeneous clouds with no sub-grid scale variability except for the cloud cover. In a perfect world, even the thinnest cloud with a very low τ would be considered a cloud, but in reality there are limits to this concept. An example is sub-visible cirrus, which is difficult to detect for human observers but nevertheless has a distinct impact on the radiation balance [Jensen et al., 1999]. We will compute the optical thickness from our model data, and then apply a finite threshold for τ to remove the thin clouds before averaging cloud cover.

[25] The optical thickness of a cloud with geometrical thickness Δz is defined as

$$\tau = \Delta z \int_{0}^{\infty} Q_{e}(r)\, A(r)\, n(r)\, dr \qquad (3)$$

where Qe(r) is the extinction efficiency of a hydrometeor (droplet or ice crystal) of size r, A(r) is its cross section, and n(r)dr is the size distribution of the hydrometeors. Assuming spherical particles and applying the usual definition for the effective radius,

$$r_{e} = \frac{\int_{0}^{\infty} r^{3}\, n(r)\, dr}{\int_{0}^{\infty} r^{2}\, n(r)\, dr} \qquad (4)$$

we obtain

$$\tau = \frac{3\, \bar{Q}_{e}\, \mathrm{LWP}}{4\, \rho_{\mathrm{liq}}\, r_{e}} \qquad (5)$$

where Q̄e is the extinction efficiency averaged over the size spectrum, LWP is the liquid water path, and ρliq is the bulk density of liquid water. For visible radiation, Q̄e ≈ 2 and (5) becomes the familiar

$$\tau = \frac{3\, \mathrm{LWP}}{2\, \rho_{\mathrm{liq}}\, r_{e}} \qquad (6)$$

and similarly for ice clouds if LWP is replaced by the ice water path (IWP) and ρliq by the bulk density of ice (ρice).
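To illustrate the magnitudes involved, relation (6) can be evaluated for typical values; the sample numbers below are illustrative, not SHEBA retrievals.

```python
def tau_liquid(lwp, r_eff, rho_liq=1000.0):
    """Visible optical thickness of a liquid cloud, tau = 3 LWP / (2 rho_liq r_e).
    lwp in kg m-2, r_eff in m, rho_liq in kg m-3."""
    return 3.0 * lwp / (2.0 * rho_liq * r_eff)

# A thin winter cloud:  LWP = 2 g m-2,  r_e = 10 um  ->  tau ~ 0.3
# A thicker stratus:    LWP = 50 g m-2, r_e = 10 um  ->  tau ~ 7.5
```

The first case falls below the detection thresholds discussed later, while the second is comfortably above them, which is the essence of the winter/summer contrast in this paper.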

[26] Further refinement in the computation of τ is achieved by separating the contributions from liquid and ice clouds. The effective radius for liquid clouds in RCA2 depends on the amount of cloud condensate, and the effective radius for ice clouds is a function of temperature [Wyser et al., 1999]. The optical thickness at level i is computed as the sum of the optical thickness for water and ice clouds,

$$\tau_{i} = \frac{3\, \mathrm{LWP}_{i}}{2\, \rho_{\mathrm{liq}}\, r_{e,\mathrm{liq},i}} + \frac{3\, \mathrm{IWP}_{i}}{2\, \rho_{\mathrm{ice}}\, r_{e,\mathrm{ice},i}} \qquad (7)$$

RCA2 distinguishes between water and ice clouds only diagnostically, through a factor fice(Ti) that is a linear function of temperature. With the grid box average cloud condensate mixing ratio qc,i at level i and cloud cover ci, the in-cloud liquid and ice water paths are computed as

$$\mathrm{LWP}_{i} = \left[1 - f_{\mathrm{ice}}(T_{i})\right] \frac{q_{c,i}\, \Delta p_{i}}{g\, c_{i}} \qquad (8)$$
$$\mathrm{IWP}_{i} = f_{\mathrm{ice}}(T_{i})\, \frac{q_{c,i}\, \Delta p_{i}}{g\, c_{i}} \qquad (9)$$

with Δpi the pressure difference across layer i and g the acceleration due to gravity.

[27] Adding τi in the vertical is more complicated as we have to consider partial cloud overlap. Here we suggest to “smear out” the optical thickness at any level to fill the same area. For consistency, we take the cloud area in the column to be cov2d that has been previously computed from ci under a maximum-random overlap assumption. The in-cloud optical thickness for the whole column then becomes

$$\tau = \frac{1}{\mathrm{cov2d}} \sum_{i=1}^{n_{\mathrm{lev}}} c_{i}\, \tau_{i} \qquad (10)$$

The optical thickness is a complex function of cloud condensate, phase, and effective radius; phase and effective radius are in turn functions of temperature. For a given amount of cloud condensate, the optical thickness is lower for a cloud that consists of ice crystals than for a cloud of liquid water droplets: the ice crystals are larger, and therefore there are fewer scattering particles in the cloud. Hence the fraction of ice in a cloud has a strong impact on the optical thickness, and the parameterization of τ in any model is very sensitive to the way liquid water and ice clouds are distinguished.
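The chain from the per-level water paths to the in-cloud column optical thickness can be sketched as follows. The function and array names are illustrative, and the use of per-level in-cloud water paths follows the description above; treat it as a sketch rather than the RCA2 source.

```python
import numpy as np

G = 9.81  # acceleration due to gravity (m s-2)

def column_tau(qc, c, dp, f_ice, re_liq, re_ice, cov2d,
               rho_liq=1000.0, rho_ice=917.0):
    """In-cloud column optical thickness.
    qc: grid-box mean condensate mixing ratio (kg/kg) per level,
    c: cloud fraction per level, dp: layer pressure thickness (Pa),
    f_ice: diagnosed ice fraction per level, re_*: effective radii (m),
    cov2d: column cloud cover from the overlap assumption."""
    c_safe = np.maximum(c, 1e-12)           # avoid division by zero in clear layers
    wp = qc * dp / (G * c_safe)             # in-cloud total water path per level
    lwp = (1.0 - f_ice) * wp                # liquid water path
    iwp = f_ice * wp                        # ice water path
    tau_i = (3.0 * lwp / (2.0 * rho_liq * re_liq)
             + 3.0 * iwp / (2.0 * rho_ice * re_ice))   # per-level tau
    if cov2d <= 0.0:
        return 0.0                          # cloud-free column
    # "smear out" each level's tau over the column cloud area cov2d
    return float(np.sum(c * tau_i) / cov2d)
```

The weighting of each level by its cloud fraction and the division by cov2d implement the "smearing out" of the optical thickness over the column cloud area described above.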

4. Cloud Cover and Optical Thickness Threshold

[28] Why do we find such a large variation in averaged cloud cover between different instruments (Figure 1)? Our working hypothesis is that these differences are caused by the different sensitivities of the instruments. Each instrument has a threshold to separate clear from cloudy skies, and some instruments may detect thin clouds where, for the same conditions, other instruments find clear sky. We do not claim that cloud observing instruments actually have a specific threshold for the optical thickness (some instruments may work better under certain conditions and worse under others), but all instruments have some form of a threshold to separate clouds from clear sky. Clouds with too low a water/ice content, too little extinction, or too small an areal extent will not be counted as clouds but as clear sky. This is in contrast to the clouds in a climate model, where only the numerical precision sets a limit for how small clouds can be. The tiniest amount of cloud condensate will be counted as a cloud, and it is therefore likely that models will yield higher cloud cover under the same conditions. For the validation against observations, it is thus necessary to remove the thinnest clouds in the model before doing a comparison. To distinguish between detectable and non-detectable clouds, we need a measure for “thin” or “faint” clouds, and we choose the optical thickness as a proxy. For climate studies, the optical thickness is directly related to the extinction of radiation and is therefore well suited to distinguish between thin and thick clouds. Using this approach we can subsequently evaluate the radiative importance of thin clouds. For other applications, for example air quality, the number of cloud droplets or the amount of condensate may provide a better guideline to distinguish between clouds and clear sky.

[29] We will study the effect of filtering clouds as a function of τ, using cloud data from the RCA2 model. The model simulates a full spectrum of clouds and has no explicit lower limit for cloud water/ice or cloud cover, and it can therefore produce optically extremely thin clouds. We compute the monthly mean cloud cover for the SHEBA year, but remove optically thin clouds before averaging. This filtering of clouds as a function of τ is done to mimic the imperfection of instruments that are not able to measure thin clouds. We set cov2d = 0 in those columns whose integrated optical thickness is smaller than a given threshold (τthresh). Note that this filtering procedure uses the in-cloud value for τ as given by (10). We vary the threshold optical thickness between 0 and 2 and compute the average cloud cover for the different settings (Figure 3). During summer, the average cloud cover is rather insensitive to the value of τthresh, but during winter cloud cover remains high for a low value of τthresh and decreases dramatically for higher values of the optical thickness threshold. Under cold conditions, there is less water vapor in the atmosphere and clouds have a lower water content. This reduces their optical thickness and, consequently, there are fewer clouds with τ above the threshold. The lower temperatures in winter also lead to more ice clouds that consist of few but large ice crystals, which lowers the optical thickness even further. Hence the average cloud cover is found to be very sensitive to the value of τthresh during winter. During summer, most clouds are optically thick and removing the few (if any) columns with τ below the threshold hardly affects the average cloud cover. From this we contend that cloud observing instruments, with a finite detection limit for clouds, tend to underestimate cloud cover during winter when there are more optically thin clouds, but not during summer when most clouds are sufficiently thick to be above the detection limit.

Figure 3.

Monthly mean cloud cover from RCA2, filtered with optical thickness τ. When the column integrated optical thickness is below τthresh the cloud cover is set to 0 before averaging over a month. The values for τthresh are listed in the legend.
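The filtering step behind Figure 3 can be written compactly; `cov2d` and `tau` are hypothetical time series of column cloud cover and in-cloud optical thickness for one month.

```python
import numpy as np

def filtered_mean_cover(cov2d, tau, tau_thresh):
    """Monthly mean cloud cover after removing optically thin clouds:
    columns whose in-cloud tau falls below tau_thresh are set to clear
    (cov2d = 0) before averaging."""
    return float(np.mean(np.where(tau >= tau_thresh, cov2d, 0.0)))

cov = np.array([1.0, 1.0, 1.0, 1.0])
tau = np.array([0.1, 0.5, 2.0, 3.0])
print(filtered_mean_cover(cov, tau, 0.0))   # 1.0 (no filtering)
print(filtered_mean_cover(cov, tau, 1.0))   # 0.5 (thin clouds removed)
```

The toy example shows the mechanism directly: a sky that is overcast at every time step can still yield a low monthly mean if half the clouds fall below the detection threshold.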

[30] A potential source of uncertainty in the model computations is the sub-grid scale variation of clouds that has an effect on τ. To estimate the impact of this variability, we have computed the optical thickness with a variety of overlap assumptions. Although the frequency distribution of τ changes when the overlap assumption is varied, we find only a minor effect on the filtering of clouds with low τ. Independent of the overlap assumption, there is always a substantial proportion of optically thin clouds during winter, and removing these clouds before computing the monthly average always reduces the cloud cover.

[31] When plotting the cloud cover from RCA2 together with the various observations, we find a good agreement between the unfiltered model results (τthresh = 0) and the combined lidar/radar cloud cover (Figure 4). On the other hand, the cloud cover from satellite (APP) or human observers (SYNOP) better matches the model results that have been filtered with τthresh = 1. We would like to stress that the RCA2 model was not tuned to reproduce any of the observed cloud covers. The value of τthresh for the filtering has been chosen to illustrate that a lower average cloud cover is obtained when some thin clouds are removed from the model data before computing the average. This does not imply that the model generates too many thin clouds; instead, it expresses the fact that some instruments may not be able to detect the thinnest clouds. For a fair comparison between the model and a specific set of observations, we need to remove model clouds below the sensitivity of the instrument. It is in that sense that, for certain observation platforms, a cloud really is not a cloud.

Figure 4.

As in Figure 1 but additionally with monthly mean cloud cover from RCA2, with all clouds and with thin clouds removed before averaging (τthresh = 1).

[32] Figure 4 strongly supports our hypothesis that the differences in observed cloud cover can be explained by the sensitivity of different instruments. The lidar/radar system is able to detect even small amounts of cloud water/ice, while the satellite instrument or a human observer only registers clouds with sufficient optical thickness.

4.1. Implications for Radiation

[33] As outlined above, there is a large sensitivity of the average cloud cover to the detection limit for clouds. We may expect that the difficulties with the separation of cloudy and clear columns have an impact on the derived cloudy and clear-sky radiative fluxes and the implied cloud radiative forcing. Modelers tend to tune the cloud radiation interaction to reproduce the satellite-derived cloud radiative forcing; if the observational cloud forcing is in error, the model is tuned toward an inaccurate value. Monthly mean clear-sky fluxes are computed from a simulation of the SHEBA year. We use the optical thickness of the cloud in a column to identify clear sky: if the optical thickness is below a given threshold, we include the radiative flux at that instant when computing the monthly mean clear-sky flux. In the climate modeling community, cloud radiative forcing is often computed by using the same atmospheric column twice, with and without clouds. Instruments, however, cannot measure cloud radiative forcing this way, as the sensor's field of view is either cloudy or clear and the influence of clouds cannot be removed. From a time series of all-sky data, the cloud forcing can instead be extracted by identifying the times with clear sky and computing an average clear-sky flux that is then subtracted from the average all-sky flux. To mimic this behavior, we compute the average clear-sky flux by selecting only those columns from our model in which the optical thickness is below a threshold (τthresh). With this method it is sufficient to look at the changes of the clear-sky radiative flux, as the all-sky flux is unaffected. Varying τthresh results in different clear-sky fluxes.
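The clear-sky compositing can be sketched as follows; `flux` and `tau` are hypothetical time series of LW flux and column optical thickness.

```python
import numpy as np

def clear_sky_mean(flux, tau, tau_thresh):
    """Mean 'clear-sky' flux: average only over time steps whose column
    optical thickness is below tau_thresh. Returns NaN when no time step
    qualifies, as happens for the cloudy summer months."""
    clear = tau < tau_thresh
    if not np.any(clear):
        return float("nan")
    return float(np.mean(flux[clear]))
```

Raising `tau_thresh` admits more cloud-contaminated time steps into the "clear-sky" average, which is exactly the sensitivity explored in Figure 5.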

[34] We study the sensitivity of the clear-sky flux to different values of τthresh. The previous results for the mean cloud cover indicate that a different accounting of cloudy and clear sky should have an effect during winter, whereas during summer most clouds are optically thick and removing thin clouds has little effect. Since solar radiation is absent during a large part of the winter, we focus on the sensitivity of the clear-sky longwave (LW) fluxes to filtering by an optical thickness threshold. We study the effects on the LW flux at the surface (SFC) and at the top of the atmosphere (TOA).

[35] Figure 5 shows the sensitivity of the clear-sky LW flux at the SHEBA site to the value of the optical thickness threshold. The higher the value of τthresh, the more columns with optically thin clouds are included in the clear-sky average, and the greater the underestimate of the cloud radiative forcing. There are not enough true clear-sky columns (with τ = 0) to compute a clear-sky average in any month of the SHEBA year. During summer, there are not even enough columns with τ < 2 to compute monthly averages of clear-sky fluxes.

Figure 5.

Sensitivity of the monthly mean clear-sky LW flux over the SHEBA site at (top) TOA and (bottom) SFC when thin clouds contaminate the clear-sky average. Clouds and radiation have been computed with the RCA2 model. A column is considered to be “clear” if its optical thickness is below a threshold value as shown in the legend.

[36] Despite the absence of a pure clear-sky flux, the figure clearly shows the trend when varying the threshold for clouds. The more optically thin clouds are included in the "clear-sky" average, the less "clear-sky" LW radiation escapes to space, and the more "clear-sky" LW radiation reaches the SFC. The most significant difference between the true and the cloud-contaminated clear-sky LW flux is found at low values of the optical thickness threshold. This difference can be interpreted as the sensitivity of a cloud-sensing instrument to the optically thinnest clouds, and it is a measure of how large the potential error can be when clouds are erroneously counted as clear sky. At TOA the difference between the curves with τthresh = 0.5 and τthresh = 1 is usually below 3 W m−2 and only reaches 5 W m−2 in March. On the other hand, the same difference in mean fluxes is about 10 W m−2 at SFC. The clear-sky LW flux at SFC is thus 2 to 3 times more sensitive to contamination by thin clouds than the corresponding flux at TOA. This implies that low clouds, the predominant cloud type over the SHEBA site, have a strong impact on the radiation balance at the SFC but much less so at TOA. This might be expected given the generally close correspondence between SFC temperature and low cloud top temperature.
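The threshold sweep underlying a sensitivity curve such as the one in Figure 5 can be sketched as follows. The arrays are hypothetical stand-ins for one month of per-column model optical thickness and LW flux; the function name and default thresholds are illustrative only.

```python
import numpy as np

def clear_sky_sensitivity(tau, lw_flux, thresholds=(0.5, 1.0, 2.0)):
    """Mean 'clear-sky' LW flux for several optical-thickness thresholds.
    For the downwelling LW flux at SFC, values that grow with the
    threshold reveal contamination of the average by thin clouds."""
    tau = np.asarray(tau, dtype=float)
    lw_flux = np.asarray(lw_flux, dtype=float)
    result = {}
    for t in thresholds:
        clear = tau < t
        # NaN marks months without enough columns below the threshold.
        result[t] = float(lw_flux[clear].mean()) if clear.any() else float("nan")
    return result
```

The spread between, say, the τthresh = 0.5 and τthresh = 1 entries of the result is the instrument-sensitivity measure discussed above.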

5. Conclusions

[37] Cloud cover from different observational data sets during the SHEBA campaign shows substantial differences during wintertime, even on a monthly basis. This poses a problem for the validation of modeled cloud cover: against which data set should a model simulation be compared? When is a simulated cloud cover deemed sufficiently close to reality, and when are improvements to parameterizations warranted because the model truly fails to simulate cloud cover? We relate the differences at least partly to instrumental problems in distinguishing between clear sky and optically thin clouds; some instruments may find clouds where others detect clear sky. The problem appears mainly in wintertime, when temperature and humidity are low and clouds contain only small amounts of condensate. Furthermore, many wintertime clouds are ice clouds, which are more difficult to detect than liquid water clouds because the concentration of ice crystals is much lower than that of water droplets. Darkness during the Arctic winter makes it difficult for human observers to distinguish between thin clouds and clear sky. Models, on the other hand, count even the tiniest amount of cloud condensate as cloud, and are therefore likely to be on the high side in their estimates of cloud cover.

[38] We suggest that filtering of model cloud data, by removing the thinnest clouds that are not visible to a particular instrument, improves the chances of a fair comparison between observations and model. To test our hypothesis we removed clouds with a visible optical thickness below a given threshold before computing the monthly averaged cloud cover, thereby improving the agreement with certain observations and degrading it with respect to others (e.g., the ETL combined lidar/radar). However, the threshold optical thickness below which clouds should be disregarded depends on the particular instrument. For the SHEBA data in our comparison, our model simulation agrees well with the lidar/radar without filtering, but a good agreement with the AVHRR cloud cover requires removing clouds with τ < 1. Note that this result does not imply that a particular sensor has a specific limit in visible optical thickness. We use visible optical thickness only as a proxy to separate thin and thick clouds; the actual detection capabilities of a cloud instrument may be more complicated and depend on many other factors, for example, surface albedo and temperature, the spectral signature of clouds and background, and so forth. However, irrespective of the exact nature of cloud detection in an instrument, there will always be clouds too faint, weak, or thin to be detected, and one goal of this study was to estimate the effects of clouds that are not registered as clouds by certain instruments.
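The filtering of modeled cloud cover described above amounts to counting a column as cloudy only if its visible optical thickness reaches the instrument-dependent threshold. A minimal sketch, with a hypothetical function name and illustrative inputs:

```python
import numpy as np

def filtered_cloud_cover(tau, tau_thresh=1.0):
    """Cloud fraction after discarding clouds thinner than tau_thresh,
    mimicking an instrument that misses optically thin clouds.
    The unfiltered model cover would correspond to (tau > 0)."""
    tau = np.asarray(tau, dtype=float)
    return float((tau >= tau_thresh).mean())
```

Applied to a winter-like population dominated by thin clouds, the resulting cover drops sharply as the threshold is raised; for summer-like, optically thick clouds it is nearly insensitive, consistent with the seasonal behavior reported above.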

[39] The difficulties associated with distinguishing between clear air and thin clouds have implications beyond the validation of cloud cover. Clear-sky radiation is an important term in assessing the climate forcing of clouds. We have studied the sensitivity of the monthly mean clear-sky radiation to different thresholds for the optical thickness of clouds. The effect of a contamination of the clear-sky flux by thin clouds appears in the LW radiation mainly in winter. When optically thin clouds are counted as clear sky, the monthly mean LW flux at SFC increases by 10 W m−2 or more, depending on the value of the optical thickness threshold that separates clear sky from clouds. Effects at TOA are somewhat smaller, but still on the order of a few W m−2. If the satellite-derived clear-sky fluxes really are in excess by this amount, it has consequences for the computation of the cloud radiative forcing. Cloud forcing is a key quantity used to tune a climate model to reproduce a realistic climate.

[40] The overestimated clear-sky LW at SFC is substantial during wintertime, when the energy balance is dominated by the contribution from LW. An error in the LW flux can have grave consequences in a model, especially in coupled models. To illustrate this point we show an example from a multiyear coupled run of the RCAO model [Döscher et al., 2002] over the Baltic Sea, where a small change in the treatment of cloud overlap in the radiation scheme adds about 10 W m−2 to the downwelling LW at SFC (Figure 6). This change leads to a subsequent reduction of the Baltic Sea ice extent by about 30% (and a better agreement with observations). Some climate parameters, like sea-ice area and thickness, are very sensitive to small changes in the radiation balance. Even if an error in the separation of clear sky and clouds does not have the same magnitude as the change in the radiation budget in this example, we would still like to emphasize the importance of a realistic description of cloud radiative forcing.

Figure 6.

Mean LW radiation and sea-ice cover in the Baltic Sea from a coupled regional climate model. The standard version of the model (STD) produces too much sea-ice during most of the winters. A small change in the radiation scheme increases the LW flux at SFC which reduces the ice extent considerably (MOD). Data are courtesy of R. Döscher.

[41] The data from the SHEBA field station are extreme, as they stem from the Arctic pack ice and from a single year only. Darkness, cold temperatures, and low specific humidity during winter make it a very special environment in which optically thin clouds can form. These thin clouds have a small but by no means negligible effect on radiation, and it is difficult to identify their signal in satellite images, for example. Active sensors like ETL's combined lidar/radar may be better suited to retrieve clouds even under these adverse conditions. For most parts of the globe the traditional remote sensing instruments may be sufficient, as there is a distinct difference between the optical thickness of clouds and clear sky. A possible extension of this work concerns thin cirrus clouds, which occur frequently and cover large areas of the globe. A reliable detection of these cirrus clouds, and their successful representation in climate models, is a considerable challenge for modelers and experimentalists alike.


Acknowledgments

[42] We would like to thank A. Ullerstig for assistance with setting up the RCA2 model. This work has benefited from discussions with J. Intrieri, K. G. Karlsson, and U. Willén. Observation data were provided by the SHEBA Project Office, University of Washington. Support for this study was granted by the European Union through project GLIMPSE (EVK2-CT-2002-00164).