Combined CloudSat-CALIPSO-MODIS retrievals of the properties of ice clouds

Authors


Abstract

[1] In this paper, data from the spaceborne radar, lidar and infrared radiometers on the “A-Train” of satellites are combined in a variational algorithm to retrieve ice cloud properties. The method allows a seamless retrieval between regions where both radar and lidar are sensitive and regions where only one of them detects the cloud. We first implement a cloud phase identification method, including identification of supercooled water layers, which uses the lidar signal and temperature to discriminate ice from liquid. We also include a rigorous calculation of the errors assigned in the variational scheme. We estimate the impact of the microphysical assumptions on the algorithm when radiances are not assimilated by evaluating the effect of changing the area-diameter and density-diameter relationships on the retrieved cloud properties. We show that changes to these assumptions affect the radar-only and lidar-only retrievals more than the radar-lidar retrieval, although the lidar-only extinction retrieval is only weakly affected. We also show that using the molecular lidar signal beyond the cloud as a constraint on optical depth, when ice clouds are sufficiently thin to allow the lidar signal to penetrate them entirely, improves the retrieved extinction. When infrared radiances are available, they provide an extra constraint and allow the extinction-to-backscatter ratio to vary linearly with height instead of being constant, which improves the vertical distribution of retrieved cloud properties.

1. Introduction

[2] On 28 April 2006, two satellites, CloudSat, carrying a 94 GHz cloud profiling radar [Stephens et al., 2002], and CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) [Winker et al., 2003], were launched. They joined Aqua, which hosts MODIS (the Moderate Resolution Imaging Spectroradiometer) and several other radiometers, as part of the “A-Train”. Instruments on board these satellites can be used separately or together to derive the properties of clouds and aerosols. A number of single-instrument cloud products have been released; for example, the CloudSat Radar-Only Cloud Water Content Product (2B-CWC-RO) contains retrieved estimates of cloud liquid and ice water content, effective radius, and related quantities for each radar profile. For CALIPSO, the NASA Langley Research Center Atmospheric Science Data Center provides ice cloud properties such as optical depth, visible extinction, ice water path and ice particle size. MODIS can be used to estimate effective radius and optical depth from visible or infrared channels [King et al., 1998].

[3] Although each instrument individually can be used to retrieve ice cloud properties, each has weaknesses. For instance, radar alone cannot accurately estimate particle size and is relatively insensitive to small particles. On the other hand, the lidar is more sensitive to optically thin clouds but suffers from attenuation. Radiometers only sense integrated properties; moreover, infrared radiances are mainly sensitive to the top of moderately thick ice clouds.

[4] The subject of this paper is the retrieval of ice cloud properties exploiting radar, lidar and infrared radiometer synergy. If we combine these instruments we have access to the vertical distribution of detailed cloud properties since the radar and lidar backscatter are proportional to very different powers of particle size, so in principle the combination provides accurate particle size with height. Furthermore, infrared radiances ensure that the retrieved profiles can be used for radiative transfer studies: use of a single infrared channel provides information on extinction near cloud top and a pair of infrared channels gives particle size information near cloud top [Inoue, 1985; Chiriaco et al., 2004].

[5] Delanoë and Hogan [2008a] (hereafter DH08) proposed a variational method exploiting the synergy of radar, lidar and infrared radiometer to retrieve ice cloud water content, visible extinction coefficient and effective radius from ground-based measurements. This algorithm overcame some of the limitations of previous radar-lidar ice retrieval schemes [Wang and Sassen, 2002; Donovan et al., 2001; Okamoto et al., 2003; Mitrescu et al., 2005; Tinel et al., 2005], which only work in regions of cloud detected by both radar and lidar, and which are difficult or impossible to adapt to use other measurements, e.g. passive radiances.

[6] Early work on combining radar with visible optical depths derived from passive measurements in an optimal estimation framework was carried out by Benedetti et al. [2003], for instance, but their method does not use the lidar signal and is only applicable during daytime.

[7] The DH08 algorithm retrieves cloud properties seamlessly between regions of the cloud detected by both radar and lidar and regions detected by just one of these two instruments. For instance, when the lidar signal is unavailable (e.g. due to strong attenuation), the variational framework ensures that the retrieval tends toward an empirical relationship between radar reflectivity factor and temperature [e.g., Liu and Illingworth, 2000; Hogan et al., 2006b; Protat et al., 2007], and when the radar signal is unavailable (such as in optically thin cirrus), accurate retrievals are still possible from the combination of lidar and infrared radiometer. In this paper we demonstrate the application of this algorithm to the A-Train. We have added the following improvements to the DH08 algorithm: (1) adaptation to the CloudSat/CALIPSO instrument characteristics, (2) rigorous calculation of forward model errors, including errors in the temperature profile, (3) retrieval of a height-varying extinction-to-backscatter ratio, (4) use of the lidar molecular signal beyond the cloud as a constraint on optical depth, and (5) a phase classification adapted to spaceborne instruments.

[8] The structure of the paper is as follows. In section 2, the preprocessing of the radar, lidar and radiometer data set is described, including a brief presentation of the method used to discriminate cloud phase from the lidar. We outline the DH08 algorithm and some improvements to it in section 3. In a variational scheme, the specification of measurement and forward model error plays a key role and is described in section 4. Some examples of application of the variational method on A-Train data are shown in section 5.

2. Radar, Lidar, and Radiometer Data Set and Preprocessing

[9] In this section we describe the data set used and its preprocessing. The first step is to merge radar and lidar measurements onto the same grid, followed by classification of the targets into liquid droplets, ice particles, aerosol, insects, melting ice and rain. The data used are described in section 2.1 and the target classification in section 2.2. This work leads to an intermediate product, similar in role to that produced from ground-based instruments in Cloudnet [Illingworth et al., 2007], which facilitates the implementation of subsequent synergistic algorithms.

2.1. Radar, Lidar, and Radiometer Data Set

[10] The first step is to merge all the available data, ensuring that the radar and lidar are coordinated such that they observe the same column of the atmosphere, and that the nearest infrared radiometer pixel to each radar and lidar footprint is selected. This is achieved using the official CloudSat and CALIPSO products. The 94 GHz radar reflectivity, Z, is obtained from the CloudSat “2B-GEOPROF” product (including correction for gaseous attenuation, which is available in the same data set) and the lidar backscatter coefficient at 532 nm, β, from the CALIPSO Lidar Level 1B profile data [Anselmo et al., 2006]. The “MODIS-AUX” product (in the CloudSat archive) provides three infrared channels at 8.55 μm, 11.0 μm and 12.0 μm from MODIS, subset to each CloudSat ray. The MODIS-AUX radiances and uncertainties are obtained from a subset of the original MODIS level 1B radiance product, consisting of a 3 × 5 grid of MODIS pixels for each CloudSat profile. In order to obtain a unique value for each CloudSat profile, a simple mean is taken of the radiance and uncertainty values in the 3 × 5 grid. We could also use the infrared radiances provided by the Infrared Imaging Radiometer (IIR) at 8.65 μm, 10.6 μm and 12.05 μm on board CALIPSO. We assume that all instruments have been accurately calibrated and that the nature of the random errors in the measurements is known. The thermodynamic variables needed (temperature, pressure, specific humidity, ozone, skin temperature and surface pressure) are given by the “ECMWF-AUX” data set, another product of the CloudSat Data Center, which contains ECMWF (European Centre for Medium-Range Weather Forecasts) variables interpolated to each CloudSat cloud profiling radar bin.

[11] Unfortunately, the CloudSat and CALIPSO products are not on the same vertical and horizontal grid. We colocate the lidar and IIR products with the radar beam using the geolocation data from the “2B-GEOPROF” product as the reference data set; we calculate the separation between each footprint, and if the distance is greater than 1 km the colocated profile is not used. A 60 m vertical grid is used to regrid both radar and lidar products; radar measurements and ECMWF variables are interpolated, and lidar measurements are averaged horizontally within the CloudSat footprint (typically 3 lidar beams are averaged). The result is lidar, radar, MODIS, IIR and ECMWF data interpolated or averaged onto the same grid.
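As an illustration of this merging step, the sketch below shows one simple way of colocating and averaging the lidar profiles onto the radar grid. The function names, the use of a haversine distance and the vertical interpolation are our own illustrative choices rather than the operational CloudSat/CALIPSO processing.

```python
import numpy as np

EARTH_RADIUS_M = 6.371e6

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat, dlon = p2 - p1, np.radians(np.asarray(lon2) - np.asarray(lon1))
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2.0 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

def merge_lidar_onto_radar(radar_lat, radar_lon, lidar_lat, lidar_lon,
                           lidar_beta, lidar_alt, new_alt):
    """For each radar profile, average the lidar profiles whose footprints lie within
    1 km (typically ~3 beams), after putting them on the common 60 m vertical grid."""
    n_rays = len(radar_lat)
    merged = np.full((n_rays, len(new_alt)), np.nan)
    usable = np.zeros(n_rays, dtype=bool)
    for i in range(n_rays):
        d = haversine_m(radar_lat[i], radar_lon[i], lidar_lat, lidar_lon)
        close = np.where(d < 1000.0)[0]          # 1 km colocation criterion
        if close.size == 0:
            continue                             # alignment cannot be trusted: profile unused
        usable[i] = True
        # interpolate each nearby lidar profile to the new grid, then average them
        profiles = [np.interp(new_alt, lidar_alt, lidar_beta[j]) for j in close]
        merged[i] = np.mean(profiles, axis=0)
    return merged, usable
```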

2.2. Target Categorization and Cloud Phase

[12] An important issue, once the data are merged and colocated, is to determine the nature of the targets in each observed pixel. The variational algorithm is currently restricted to retrieving ice cloud properties (liquid water properties will be included in the future). The retrieved visible extinction for ice particles (see section 3) can be contaminated by the presence of supercooled droplets, whose concentration and optical properties are very different from those of ice. Therefore, lidar measurements in and below supercooled layers must be excluded because of the lidar's sensitivity to very small liquid droplets. However, the radar can still be used, as its return is dominated by ice particles, and hence in such situations the retrieval tends toward a radar-only retrieval. The cloud phase information is also crucial for the radiance calculation (see section 5.4). The presence of liquid layers, which are optically thick in the infrared, can lead to a significant reduction in the top-of-atmosphere infrared radiance compared to when only ice is present. Thus, simulated infrared radiances will be overestimated if we do not take account of the liquid part of the profile, and this effect is largest for optically thin ice clouds. When a profile contains liquid at any height we do not assimilate the infrared radiance. We therefore need to identify clearly the location of clouds and their phase.

[13] In order to facilitate the application of retrieval algorithms, both the CloudSat Data Center and the CALIPSO NASA Langley Center provide level-2 products giving some information on the nature of the targets, which we use as a starting point for our target categorization. A radar mask is available from 2B-GEOPROF [Mace, 2004] that contains a value between 0 and 40 for each range bin, with values greater than 5 indicating the location of likely hydrometeors. We interpolate this mask vertically to the new grid. The “Lidar Level 2 Vertical Feature Mask” [Anselmo et al., 2006] identifies, for each lidar pixel, five categories: clear air, cloud, aerosols, surface and no signal (totally attenuated). It is converted to the new horizontal grid by taking the closest value on the original grid.

[14] For the cloud pixels identified by radar or lidar (when the radar cloud mask is greater than 30 and/or when the lidar mask indicates cloud), we still have to distinguish the phase of the hydrometeors. To do so we follow the methodology implemented by Hogan and O'Connor [2004] in the Cloudnet project [Illingworth et al., 2007]; Hogan et al. [2004] used a similar method to estimate the global distribution of supercooled liquid water clouds using the spaceborne lidar LITE. First we define the “cold” pixels within the profile, namely those where the wet-bulb temperature Tw is less than 0°C. Tw is calculated from the model temperature, pressure and humidity, and the Tw = 0°C level marks where ice particles falling through sub-saturated air will melt. We initially assume that all cloud pixels classed as “cold” are ice and the rest liquid water. As described by Hogan and O'Connor [2004], we cope with inversions in the vicinity of the melting layer by considering all pixels above the highest 0°C isotherm in the profile to be “cold”, i.e. it is assumed that falling melted ice is unlikely to refreeze. The next step is to locate any supercooled liquid in the region of the profile where Tw < 0°C and T > −40°C.
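As a minimal sketch of this step, the functions below compute the wet-bulb temperature from the model fields by solving the psychrometric equation and then apply the “cold pixel” rule described above; the saturation vapour pressure formula and the bisection solver are our own illustrative choices, not necessarily those used operationally.

```python
import numpy as np

def wet_bulb_temperature(T, p, q):
    """Wet-bulb temperature (K) from temperature T (K), pressure p (Pa) and specific
    humidity q (kg/kg), found by bisection on the psychrometric equation
    e = e_s(Tw) - (cp p / (eps Lv)) (T - Tw)."""
    T, p, q = (np.asarray(a, dtype=float) for a in (T, p, q))
    eps, Lv, cp = 0.622, 2.501e6, 1004.0
    e = q * p / (eps + (1.0 - eps) * q)                 # vapour pressure
    gamma = cp * p / (eps * Lv)                         # psychrometric coefficient
    es = lambda TK: 611.2 * np.exp(17.67 * (TK - 273.15) / (TK - 273.15 + 243.5))
    lo, hi = T - 60.0, T.copy()
    for _ in range(40):                                 # bisection on every gate at once
        mid = 0.5 * (lo + hi)
        f = es(mid) - gamma * (T - mid) - e
        hi = np.where(f > 0.0, mid, hi)
        lo = np.where(f > 0.0, lo, mid)
    return 0.5 * (lo + hi)

def cold_pixel_mask(Tw):
    """Gates treated as 'cold' (potentially ice): Tw < 0 C, extended so that every gate
    above the highest 0 C wet-bulb level is cold and every gate at or below it is not
    (falling melted ice is assumed not to refreeze). Profile ordered surface upward."""
    cold = np.asarray(Tw) < 273.15
    warm = np.where(~cold)[0]
    if warm.size > 0:
        cold[:warm.max() + 1] = False
        cold[warm.max() + 1:] = True
    return cold
```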

[15] In principle this could be achieved using the lidar depolarization ratio [Sassen, 1991]. Unfortunately, even though the depolarization ratio is available from CALIPSO in contiguous layers [Hu et al., 2009], the signal is too noisy to allow a confident identification of supercooled droplets at each radar-lidar pixel. We therefore prefer to use the lidar backscatter signal, since to the lidar the top of a liquid cloud appears as a strong echo that is typically confined to only a few hundred meters [Hogan et al., 2003b, 2004].

[16] We exploit this behavior to distinguish supercooled water within the profile. Figures 1a and 1b show a latitude-height representation of a cloud sampled by the CloudSat radar and the CALIPSO lidar, respectively, over the Atlantic Ocean (from Iceland toward the Azores). The overplotted isotherms from ECMWF in Figure 1c show that almost all the cloud was colder than −10°C, implying that this cloud is mainly composed of ice particles. However, the red boxes highlight probable supercooled layers, where a strong lidar echo is observed while the radar echo is weak. These microphysical differences lead to different instrument signals. At the lidar wavelength of 532 nm, cloud particles scatter in the geometric optics regime, where the backscattered intensity is approximately proportional to the square of particle diameter (D), and is consequently dominated by liquid droplets due to their large number concentration. On the other hand, the radar reflectivity factor is, in the Rayleigh approximation, proportional to a much higher moment (proportional to D6, assuming solid spheres), and so is dominated by ice particles due to their large diameter. A simple supercooled water identification method was originally proposed for spaceborne lidar by Hogan et al. [2004]; we describe it briefly, step by step. This method identifies supercooled layers in close agreement with those identified subjectively from examination of the backscatter images, but independent verification is still required using other sources of cloud phase information.

Figure 1.

Latitude-height representation of an ice cloud observed by both (a) the CloudSat radar and (b) the CALIPSO lidar on 13 October 2006 between 03:52 and 03:58 UTC. The presence of supercooled layers is indicated by red boxes, where a strong lidar echo is observed while the radar echo is very weak. (c) The result of our categorization.

[17] Each lidar ray is examined in turn, and we search for one or more liquid layers. In order to identify the first liquid layer, we first find the highest pixel where simultaneously:

[18] 1. Attenuated backscatter β > 2 × 10−5 m−1 sr−1.

[19] 2. The β value falls by at least a factor of ten within the 240 m below this peak.

[20] 3. Tw < 0°C and T > −40°C; we consider that supercooled liquid water cannot persist outside this range.

[21] These criteria are subjectively defined from examination of the CALIPSO backscatter images. Once this “pivot” pixel is identified, we estimate the vertical extent of the supercooled layer by defining Δβtop as the maximum gate-to-gate increase in β in the 180 m above the pivot, and Δβbase as the maximum gate-to-gate decrease in β in the 300 m below the pivot. The supercooled layer top is defined as the highest pixel within the 180 m above the pivot where the difference in β between it and the pixel below exceeds Δβtop/4. The supercooled layer base is the lowest pixel within the 300 m below the pivot where the decrease of β from the pixel above exceeds Δβbase/4, or where the lidar return falls to 0 m−1 sr−1 within the 300 m below the pivot. All pixels within these limits are classified as containing supercooled water. Then, we search the remainder of the ray below the layer to see if any more pixels satisfy the three criteria above. When the lidar is totally extinguished (or attenuated such that no liquid could be detected) the pixel is flagged as one where it is impossible to determine whether or not liquid is present (even though ice may still be identified by the radar).
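The sketch below illustrates the pivot-pixel search defined by the three criteria above; for brevity it omits the Δβtop/4 and Δβbase/4 refinement of the layer boundaries, and the thresholds and window lengths are taken directly from the text.

```python
import numpy as np

def find_supercooled_pivots(beta, Tw, T, dz=60.0):
    """Return indices of 'pivot' pixels likely to mark supercooled layers.
    Profiles are ordered from the top of the atmosphere downward; beta is attenuated
    backscatter (m-1 sr-1), Tw and T are wet-bulb and dry-bulb temperature (K)."""
    beta = np.asarray(beta, dtype=float)
    n_below = int(round(240.0 / dz))                 # 240 m window below the candidate peak
    skip = max(1, int(round(300.0 / dz)))            # continue the search below the layer
    pivots, i = [], 0
    while i < beta.size:
        window = beta[i + 1:i + 1 + n_below]
        if (beta[i] > 2e-5                           # criterion 1: strong echo
                and window.size > 0
                and np.nanmin(window) < beta[i] / 10.0   # criterion 2: factor-10 drop within 240 m
                and Tw[i] < 273.15 and T[i] > 233.15):   # criterion 3: Tw < 0 C and T > -40 C
            pivots.append(i)
            i += skip
        else:
            i += 1
    return pivots
```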

[22] Figure 1 shows the result of the supercooled detection algorithm applied to A-Train data, together with the cloud phase identification and categorization. Here the strong lidar echo is predominantly due to the supercooled liquid droplets, while the radar signal comes from larger ice particles. Note that layers containing only supercooled droplets can be identified where the radar does not detect any cloud (indicated in light green).

[23] Note that convective clouds are an issue in our categorization. Owing to the presence of abundant supercooled liquid water, the cores of deep convective clouds can contain rimed graupel and hail particles of much higher density than the particles usually found in cirrus, which makes the usual assumptions in ice-cloud retrievals invalid. It is therefore desirable to diagnose these situations. In the future we intend to develop an approach to identify such clouds using the horizontal gradient of reflectivity, since they have a very small horizontal extent compared to their vertical extent.

3. Variational Method Combining Radar, Lidar, and Radiometers

[24] In this section we briefly describe the method used to retrieve ice properties by combining CALIPSO attenuated lidar backscatter, CloudSat radar reflectivity factor and infrared radiances from IIR or MODIS. This scheme was originally proposed by DH08, and readers needing more detail are referred to that paper, where the methodology is fully described. Here, the description is restricted to the essential parts and to the changes required to apply the algorithm to A-Train data.

3.1. Formulation of the State Vector and Observation Vector

[25] The algorithm used here is restricted to ice clouds; the lidar and radiometer are not used where ice clouds are obscured by liquid drops. With that in mind, in this variational scheme we must decide which variables to use to describe ice cloud properties. Following DH08, these variables are retrieved and represented as the state vector, x:

\[ \mathbf{x} = \left( \ln \alpha_{v,1}, \ldots, \ln \alpha_{v,n},\; \ln N'_{b,1}, \ldots, \ln N'_{b,m},\; \ln S \right)^{\mathrm{T}} \qquad (1) \]

[26] The visible extinction coefficient, αv, in the geometric optics limit, is directly linked to the lidar measurements and to the optical depth of the cloud, and in (1) is represented by a value at each of the gates where ice is detected. DH08 included in the state vector the extinction-to-backscatter ratio, S, which was assumed constant with height. However, this strong constraint can be relaxed when the number of independent measurements allows it, for instance when an independent estimate of αv is available (e.g. from a Raman or high spectral resolution lidar), providing information on the height dependence of S. Unfortunately, CALIPSO does not have such channels, but when infrared radiances are available we have enough independent information to allow S to vary with height in the retrieval. Platt et al. [2002] showed that ln(S) varies linearly with temperature. We assume that altitude can be used instead of temperature, provided temperature varies approximately linearly with altitude. In that case, ln(S) is expressed as a linear function of height:

\[ \ln S(z) = a_{\ln S}\,(z - z_{\mathrm{mid}}) + b_{\ln S} \qquad (2) \]

where z is the altitude and zmid the height of the middle of the cloud. The coefficients aln S and bln S are, respectively, the slope and the value of ln(S) at the middle of the cloud sampled by the lidar, and they replace the single constant ln(S) in the state vector. When no radiances are assimilated (e.g. when liquid in the profile prevents the radiance from being forward-modeled using ice properties alone), aln S is removed from the state vector to revert to the original assumption of DH08.

[27] Following DH08, we represent size information by Nb, which is a basis-function representation of the ratio of the “normalized number concentration parameter”, N*0, described by Delanoë et al. [2005], to αv^0.67, hereafter N′. N′ has a good temperature-dependent a priori. This choice of variable leads to an efficient algorithm due to the reduced size of the state vector, and simple 1D-lookup tables in the forward model.

[28] The observation vector, y, contains the measurements Z (the CloudSat radar reflectivity factor), β (the CALIPSO apparent lidar backscatter), Iλ (the infrared radiance at wavelength λ) and ΔI (the difference between two infrared radiances) from MODIS or IIR.

[29] Valid lidar and radar measurements (detected as ice) are not necessarily both available at each ice pixel, owing to the radar's insensitivity to thin cloud and the lidar's inability to penetrate thick cloud. Moreover, in the case of the lidar signal it is advantageous to include in y any gates beyond the far end of the cloud where molecular echoes are detectable. This enables any molecular return measured there to be used automatically as a constraint on optical depth [Young, 1995; Cadet et al., 2005], explored further in section 5.2. Nevertheless, this requires a confident identification of molecular signals, as misclassification of cloud or noise as molecular can lead to large errors in retrieved optical depth. If radar or lidar measurements are missing at any ice pixel they are simply excluded from y. Since the lidar signal is strongly attenuated by liquid water, when a supercooled layer is detected the lidar signal in and below the liquid is not used, even if the layer is identified as also containing ice. In such regions we assume that the radar echo is dominated by the ice and that liquid attenuation of the radar in supercooled clouds can be neglected [Hogan et al., 2003a]; hence, in such a situation the retrieval reverts to one using reflectivity only. Since liquid clouds are not included in the forward models, it is not possible to simulate the infrared radiances accurately when there is any liquid water within the profile. Thus, we do not assimilate the infrared radiances when any liquid is detected. The radiances are only introduced into the retrieval after the radar-lidar part of the algorithm has been run to convergence.

3.2. Formulation of the Optimal Estimation and Forward Model

[30] The aim is to find the state vector that minimizes the difference between the observations and the forward model in a least-squares sense. This is achieved by minimizing a cost function using Gauss-Newton iteration, as described fully by DH08. A key input is the observational error covariance matrix (R), which includes both instrument and forward-model errors, as discussed in section 4.
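To make this machinery concrete, the following generic Gauss-Newton iteration minimizes a cost function of the standard variational form J(x) = (y − H(x))ᵀR⁻¹(y − H(x)) + (x − xₐ)ᵀB⁻¹(x − xₐ). This is a bare-bones sketch; the DH08 scheme adds further constraints, convergence tests and retrieval error diagnostics.

```python
import numpy as np

def gauss_newton_retrieval(x0, y, forward_model, jacobian, R_inv, B_inv, x_a, n_iter=10):
    """Generic Gauss-Newton minimization of a variational cost function.
    forward_model(x) returns the simulated observations H(x); jacobian(x) returns the
    matrix of partial derivatives of H with respect to the state x."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        H = jacobian(x)                            # forward-model Jacobian at the current state
        dy = y - forward_model(x)                  # observation departures
        A = H.T @ R_inv @ H + B_inv                # approximate Hessian of the cost function
        b = H.T @ R_inv @ dy - B_inv @ (x - x_a)   # negative half-gradient of the cost function
        x = x + np.linalg.solve(A, b)              # Gauss-Newton step
    return x
```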

[31] The forward model used in the scheme is described by DH08, and produces an estimate of the observations y from the state vector x. This is achieved by first calculating N*0 using N′, then using one-dimensional look-up tables to relate the ratio αv/N*0 to either an intensive variable y, or to Y/N*0, where Y is an extensive variable. To create the one-dimensional look-up tables, we assume a shape of the particle size distribution [Delanoë et al., 2005]:

\[ N(D) = N_0^*\, F\!\left( \frac{D}{D_m} \right) \qquad (3) \]

where N*0 is the normalized number concentration parameter, given by

\[ N_0^* = \frac{4^4}{\Gamma(4)}\, \frac{M_3^5}{M_4^4} \qquad (4) \]

where Mn is the nth moment of the ice particle size distribution. Particle size in (3) is normalized by Dm, a measure of the mean size of the distribution and defined as

\[ D_m = \frac{M_4}{M_3} \qquad (5) \]

[32] The function F in (3) is the “unified” size distribution shape given by Delanoë et al. [2005], and has been found to fit measured size distributions when they are appropriately normalized (i.e. F fits N/N*0 versus D/Dm).

[33] To generate the look-up tables, we cycle through a wide range of values of Dm and for each calculate αv/N*0, y and Y/N*0 (where y and Y represent all intensive and extensive variables of interest). The ice particle mass is assumed to follow the Brown and Francis [1995] mass-size relationship derived from aircraft data in mid-latitude ice clouds. The corresponding area-size relationship is taken from Francis et al. [1998], who used the same aircraft data set as Brown and Francis [1995].

[34] Geometric optics is used to calculate αv via the area-size relationship above.

[35] The radar reflectivity factor, Z, is derived using Mie theory, assuming the particles to be homogeneous ice-air spheres of diameter D and mass m. Similarly, the ice water content, IWC, is simply the particle mass integrated across the size distribution. The intensive variable effective radius, re, is derived following Foot [1988]:

\[ r_e = \frac{3\,\mathrm{IWC}}{2 \rho_i \alpha_v} \qquad (6) \]

where ρi is the density of solid ice.
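A minimal sketch of the look-up-table construction is given below. The PSD shape, the mass- and area-size coefficients and the use of a Rayleigh approximation for Z (instead of the Mie calculation described above) are illustrative assumptions only.

```python
import numpy as np

RHO_ICE = 917.0               # kg m-3, density of solid ice
K2_RATIO = 0.174 / 0.93       # indicative dielectric factor of ice relative to water

def unified_shape(x, mu=2.0):
    """Stand-in for the 'unified' normalized PSD shape F(D/Dm) of Delanoe et al. [2005];
    a modified-gamma form is used here purely for illustration."""
    return x ** mu * np.exp(-(4.0 + mu) * x)

def trapz(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def build_lookup_tables(a_m=0.0121, b_m=1.9,   # mass-size m = a_m D^b_m (indicative SI values)
                        a_A=0.0045, b_A=2.0,   # area-size A = a_A D^b_A (illustrative values)
                        n_dm=100):
    """For each mean size Dm, integrate the normalized PSD (with N0* = 1) to obtain
    alpha_v/N0*, Z/N0* (Rayleigh approximation), IWC/N0* and the intensive variable
    r_e from equation (6); the results form 1-D tables keyed on alpha_v/N0*."""
    D = np.linspace(1e-6, 2e-2, 5000)            # particle maximum dimension (m)
    tables = {"alpha": [], "Z": [], "IWC": [], "re": []}
    for Dm in np.logspace(-5, -2.5, n_dm):
        N = unified_shape(D / Dm)                # PSD per unit N0*
        mass = a_m * D ** b_m
        area = a_A * D ** b_A
        alpha = 2.0 * trapz(area * N, D)         # geometric-optics extinction
        iwc = trapz(mass * N, D)
        # Rayleigh-equivalent reflectivity of ice spheres of the same mass (mm^6 m-3)
        z = 1e18 * K2_RATIO * (6.0 / (np.pi * RHO_ICE)) ** 2 * trapz(mass ** 2 * N, D)
        tables["alpha"].append(alpha)
        tables["Z"].append(z)
        tables["IWC"].append(iwc)
        tables["re"].append(3.0 * iwc / (2.0 * RHO_ICE * alpha))   # equation (6)
    return {k: np.array(v) for k, v in tables.items()}
```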

[36] The fast multiple-scattering model of Hogan [2006] is used to derive the lidar attenuated backscatter; it requires as input the “equivalent-area radius” ra, i.e. the radius of a sphere with the same cross-sectional area as the mean area of the entire size distribution. A look-up table is used to convert αv/N*0 to ra.

[37] The individual radiance calculations employ the “two-stream source function technique” of Toon et al. [1989]. For each infrared radiometer channel, the radiance forward model takes as input the scattering and absorption properties derived from lookup tables using the relevant cloud variables from the state vector (profiles of visible extinction coefficient αv and N′) and estimates of other variables (profiles of temperature, pressure, humidity, O3 and CO2 concentrations, as well as skin temperature and emissivity). To build the lookup tables, the scattering and absorption properties of an individual ice particle at each radiometer wavelength λ are taken from the database of Baran [2003], which assumes aggregates. For each value of αv/N*0 we derive the extinction coefficient αλ, the single-scatter albedo ω̃λ and the asymmetry factor gλ at each radiometer wavelength λ.

[38] As a priori, we use an N′(T) relationship derived from the same in situ database as used by Delanoë et al. [2005]. DH08 showed that ln N′ varies approximately linearly with temperature, and we therefore use a relationship of the form:

\[ \ln N' = a_{N'}\, T + b_{N'} \qquad (7) \]

where T is the temperature in degrees Celsius and aN′ and bN′ are the coefficients of the linear fit to the in situ data. When one of the measurements is missing, N′ takes the value of the a priori, which allows N*0 to be retrieved knowing αv; all the forward model inputs can then be deduced from the lookup tables.

[39] A large amount of ancillary information is required for each component of the forward model. This includes the thermodynamic state of the atmosphere (in particular, profiles of temperature, pressure, humidity and ozone concentration), the properties of the surface (skin temperature and emissivity at the radiometer wavelengths), as well as the properties of the instruments themselves (in particular the lidar field-of-view to calculate the contribution from multiple scattering). Such information can be obtained with adequate accuracy from the standard ECMWF analysis and forecast products that are archived within the CloudSat database introduced in section 2.1.

4. Errors Assigned in the Variational Scheme

[40] As stated in section 3.2, when a variational approach is used we need to estimate R, the error covariance matrix of the observations, and the retrieved properties can be very dependent on the values used. For the retrieval error to be realistic it is important that R includes the errors in the forward model. Consequently, it may be written R = O + M [Cooper et al., 2006], where O is the error covariance due solely to instrumental error and M is the forward model error covariance. We assume R to be a diagonal matrix, where the diagonal elements are simply the sum of the error variances of each element of y and the corresponding forward model error variances. It is reasonable to assume that errors in the measurements are uncorrelated, but forward model errors can be slightly correlated, due, for example, to errors in the ancillary data affecting several forward-modeled values in the same sense. However, in this work we assume that all these errors are uncorrelated. This section describes what these errors are and how we compute them; this work is relevant to any variational scheme that uses radar, lidar or infrared radiometers from the A-Train.
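For clarity, the diagonal assembly of R described above amounts to the following sketch; the function and variable names are ours.

```python
import numpy as np

def observation_error_covariance(instrument_variance, forward_model_variance):
    """Diagonal R = O + M: both inputs are 1-D arrays of error variances, one element
    per entry of the observation vector y (radar, lidar and radiance elements), and
    all off-diagonal correlations are neglected as assumed in the text."""
    return np.diag(np.asarray(instrument_variance) + np.asarray(forward_model_variance))
```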

4.1. Radar

4.1.1. Radar Forward Model Error

[41] As shown by DH08, the forward model error in Z is a combination of the representation of the size distribution and mass-size variability, which leads to a random error in Z due to microphysical assumptions of around ΔZmicro = 1 dB. This is used to form the diagonal elements of M corresponding to the radar measurements.

4.1.2. CloudSat Instrumental Error

[42] We distinguish two categories of error in the radar measurement: systematic error and random error. Systematic error is typically a bias due to a calibration error, but we assume that the CloudSat radar calibration has been corrected [Tanelli et al., 2008]. However, we have found that running our algorithm with a change in calibration of 1 dB and 2 dB would lead, respectively, to a 10% and 20% error in IWC. The variational formalism is not well suited to calibration errors because of their high correlation.

[43] Random measurement error is due to a combination of a finite number of samples and the background instrument noise and can be computed following Hogan et al. [2005] via:

\[ \Delta Z_{\mathrm{dB}} = \frac{4.343}{\sqrt{M}} \left( 1 + \frac{1}{\mathrm{SNR}} \right) \qquad (8) \]

where ΔZdB is the one-standard-deviation random error in dB, and M and SNR are respectively the number of pulses averaged and the linear signal-to-noise ratio of CloudSat. It is valid to assume that, due to the motion of the satellite, each pulse is independent. M is reported for each ray but SNR is not, so we have to compute it using the level 1 echo power (containing noise) and the level 2 reflectivity (with noise subtracted). We estimate that the background noise (in dBZ) is given by the following relationship:

\[ Z_{\mathrm{noise}}(r) = Z_{\mathrm{noise}}(r_{\mathrm{ref}}) + 20 \log_{10}\!\left( \frac{r}{r_{\mathrm{ref}}} \right) \qquad (9) \]

where r is the distance in meters of each radar gate from the satellite and Znoise(rref) is the noise-equivalent reflectivity estimated empirically at a reference range rref. The linear signal-to-noise ratio is then easily obtained:

\[ \mathrm{SNR} = 10^{\left( Z - Z_{\mathrm{noise}} \right)/10} \qquad (10) \]

As mentioned in section 2.1, radar and lidar measurements are placed on the same vertical grid; since the radar reflectivity factor is interpolated vertically to 60 m, its random error must be interpolated in the same way.
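Putting (8)–(10) together, the per-gate random error can be computed as in the sketch below, which uses the reconstructed forms of the equations above; the function and variable names are ours.

```python
import numpy as np

def cloudsat_random_error_db(Z_dbz, Z_noise_dbz, n_pulses):
    """One-standard-deviation random error in CloudSat reflectivity (dB): the linear
    SNR follows equation (10) and is combined with the number of independent pulses
    averaged following equation (8)."""
    snr = 10.0 ** ((np.asarray(Z_dbz) - np.asarray(Z_noise_dbz)) / 10.0)   # equation (10)
    return 4.343 / np.sqrt(n_pulses) * (1.0 + 1.0 / snr)                   # equation (8)
```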

4.2. Lidar

4.2.1. Lidar Forward Model Error

[44] When infrared radiances are not assimilated, the error in the lidar forward model is dominated by the fact that the lidar ratio S is assumed constant with height (exemplified by the fact that we retrieve just a single value for the whole profile). In reality this may vary with height in which case the results of Hogan et al. [2006a] indicate that a radar-lidar algorithm will retrieve approximately a mean value for the profile with the local error in extinction proportional to the local error in lidar ratio. However, the contribution of this error will be reduced when infrared radiances are assimilated because the extra information enables us to relax the assumption of constant lidar ratio. The lidar forward model is also susceptible to errors in our ability to represent multiple scattering, but these are believed to be smaller than those due to variations in S.

[45] In this section we estimate the impact of assuming S constant, first on the simulated backscatter and then on the extinction retrieval. To do so, we used simulated profiles of lidar backscatter and visible extinction derived from aircraft size spectra obtained during the European Cloud Radiation Experiment (EUCREX), in the same way as Hogan et al. [2006a]. Apparent lidar backscatter profiles were computed by combining the visible extinction profile with different extinction-to-backscatter ratio profiles, including the effects of multiple scattering and attenuation using the model of Hogan [2006]. The lidar characteristics of CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) are used: a wavelength of 532 nm, with the full-angle beam divergence and field of view set to 0.1 and 0.13 mrad, respectively.

[46] A wide range of values of S has been reported in the literature for ice clouds; according to Platt et al. [1987] and Chen et al. [2002], S typically varies between 20 sr and 60 sr. Platt et al. [2002] show that for very cold ice clouds, S can reach 100 sr. However, there has been less study of how S varies with height, and we test the sensitivity of the retrieval to different shapes of the S profile. The dashed lines in Figure 2a show two different profiles of S, considered as “truth” in the simulations. Both vary with height as expected in real ice clouds. Profile 1 varies over around a factor of 2, similar to the range found by Ansmann et al. [1992], from 60 sr at cloud top to 30 sr at cloud base. Profile 2 represents an extreme case of S variation (essentially two orders of magnitude), corresponding to specular reflection [Thomas et al., 1990; Sakai et al., 2006] (we return to this effect below) below 7 km; S is set to 50 sr above this height and 0.5 sr below.

Figure 2.

Simulated profiles used to estimate the random error due to assuming the extinction-to-backscatter ratio (S) constant with height: (a) true profiles of S described in the text (dashed lines) and the corresponding retrieved constant profiles (solid lines), (b) visible extinction profile simulated from particle size distributions collected during EUCREX (thick dashed black line) and retrieved visible extinction coefficients using the DH08 algorithm using constant S when the true S has the shapes shown in Figure 2a (solid lines).

[47] Two apparent backscatter profiles are computed, combining the extinction profile (Figure 2b, dashed black line) and the two extinction-to-backscatter ratio profiles described above, using the multiple scattering model of Hogan [2006]. Note that these apparent attenuated backscatter profiles are simulated without instrument noise in order to avoid contaminating the comparison between varying S and constant S. The fractional error of using a constant S to simulate these two apparent backscatter profiles is 0.48 and 2.6 (in a root-mean-squared difference sense) for profiles 1 and 2, respectively. Therefore, we assume that the error in forward modeling the natural logarithm of the attenuated backscatter is 0.5 (which corresponds to 50% in attenuated backscatter).

[48] These two profiles, with varying S, are used to estimate the error in the retrieved extinction using the DH08 algorithm. The two extinction profiles retrieved assuming S constant are displayed in Figure 2b. For profile 1, the root-mean-squared difference in ln S between the retrieved constant profile and the truth is 0.24, corresponding to a 24% error in the retrieved S. The fractional root-mean-squared error in retrieved extinction is 46%.

[49] The black dashed line (profile 2) represents an extreme case of S variation, corresponding to specular reflection below 7 km. Specular reflection occurs when horizontally oriented ice crystals reflect light like a mirror, strongly increasing the lidar signal while leaving the visible extinction unchanged [Sakai et al., 2006] compared with aggregates of the same projected area. This occurs particularly when the lidar is pointed directly at nadir. Once the DH08 algorithm is applied to profile 2, the root-mean-squared difference in ln S between the retrieved constant profile and the truth is 1.7 above 7 km and 2.9 below. Visible extinction is therefore underestimated by up to 2 orders of magnitude. The fractional errors in retrieved extinction are 242% overall, 25% below 7 km and 330% above.

[50] To avoid the problem of specular reflection, the CALIPSO lidar pointing was changed from 0.3° to 3.0° off nadir at the end of November 2007.

4.2.2. CALIPSO Instrumental Error

[51] As for CloudSat, there are two major types of instrumental lidar error: systematic error (such as calibration error) and random error. Hogan et al. [2006a] showed that lidar calibration has no effect on the retrieval since only relative changes in β are used in the algorithm. Therefore the absolute value is not important and bias in the CALIPSO calibration will not affect the retrieval. The CALIPSO signal is currently calibrated using nighttime profiles by normalizing the high-altitude return signal to a molecular model and daytime calibrations are interpolated from adjacent nighttime calibrations [Winker et al., 2009].

[52] In order to estimate the error due to noise in the measurement, we use the approach of Liu et al. [2006]. Random errors in the measured backscatter (m−1 sr−1) at 532 nm due to shot noise can, following Liu et al. [2006], be derived from:

\[ \Delta\beta = \frac{r^2}{C} \left[ \mathrm{NSF}^2\, V + (\Delta V_b)^2 + (\Delta \bar{V}_b)^2 \right]^{1/2} \qquad (11) \]

where NSF is the Noise Scale Factor, which represents the excess noise of the 532 nm photomultiplier tubes above that expected purely from Poisson statistics, r is the distance in meters of each lidar gate from the satellite and C is the lidar calibration constant, such that the signal power is V = Cβr−2. All these quantities are included in the CALIPSO level 1B product. ΔVb and ΔV̄b are, respectively, the standard deviation of the background signal power, estimated by us using N samples above 30 km, and the standard error of the mean background signal, which is given by

\[ \Delta \bar{V}_b = \frac{\Delta V_b}{\sqrt{N}} \qquad (12) \]

Δβ is first obtained at the lidar's native 30 m vertical resolution; when backscatter values are averaged onto a lower-resolution grid (60 m in this paper), the low-resolution random errors are derived from:

\[ \Delta\beta_{\mathrm{low}} = \frac{1}{n} \left( \sum_{i=1}^{n} \Delta\beta_{30,i}^{\,2} \right)^{1/2} \qquad (13) \]

where Δβ30,i is the random error in measured backscatter at 30 m vertical resolution and n is the number of gates averaged. When Δβ30,i is constant with height, the high-resolution error is simply divided by the square root of the number of gates averaged. The main effect of applying (11) is a much larger error during the day, when the solar background increases the background noise.
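The background and averaging steps of (12)–(13) are summarized in the sketch below; this is an illustrative implementation and the names are ours.

```python
import numpy as np

def background_errors(v_background_samples):
    """Standard deviation of the background signal power and the standard error of its
    mean (equation (12)), estimated from the N samples above 30 km."""
    dv_b = np.std(v_background_samples, ddof=1)
    return dv_b, dv_b / np.sqrt(len(v_background_samples))

def average_shot_noise_error(delta_beta_30, n=2):
    """Combine 30 m shot-noise errors onto an n-gate coarser grid (equation (13));
    for height-independent errors this reduces to division by sqrt(n)."""
    db = np.asarray(delta_beta_30, dtype=float)
    db = db[:db.size - db.size % n].reshape(-1, n)   # group gates n at a time
    return np.sqrt(np.sum(db ** 2, axis=1)) / n
```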

4.3. Infrared Radiance Errors

4.3.1. Radiance Forward Model Errors

[53] In this section we show how errors in the forward-modeled infrared radiances are estimated. The error in the forward-modeled radiance depends on cloud thickness, surface temperature and errors in meteorological parameters such as the temperature profile. The infrared radiance forward model used here is detailed in DH08, where each individual radiance calculation employs the “two-stream source function technique”. Comparisons with the 16-stream DISORT code (Discrete Ordinates Radiative Transfer program [Stamnes et al., 1988]) demonstrated that for zenith radiances our code is accurate to better than 1% (see DH08). However, this comparison did not include uncertainties in the input parameters to the radiance code. In the literature, different sources of uncertainty have been explored, including the error due to different particle habit assumptions (1.5 K according to Cooper et al. [2003]) and errors in humidity and ozone profiles. Errors due to other input parameters (skin temperature, surface emissivity and air temperature) must also be taken into account.

[54] Skin and air temperature errors have an effect on the observed top-of-atmosphere radiances that depends on the optical depth of the intervening cloud, and consequently need to be considered carefully. The radiative model first uses a two-stream calculation to estimate the upwelling and downwelling monochromatic fluxes F±, which are then used as the source function in a calculation of the radiance measured by the MODIS or IIR infrared channels. Unfortunately, this model is too complicated to rigorously work out the radiance error associated with a particular temperature error. Therefore, a much simpler model of infrared radiative transfer is assumed for the purpose of calculating error propagation, although we stress that in the subsequent forward modeling of radiances the full two-stream model is used. In the absence of scattering and gaseous absorption, and assuming a single layer of physically thin cloud overlying a surface with an emissivity of unity, we may write the zenith radiance as

\[ I_\lambda = \varepsilon_c B_\lambda(T_c) + (1 - \varepsilon_c) B_\lambda(T_s) \qquad (14) \]

where εc is the emissivity of the cloud, B is the Planck function, and Tc and Ts are respectively cloud and skin temperatures. For a radiance in the zenith direction, cloud emissivity can be calculated from the infrared absorption optical depth (τλ):

\[ \varepsilon_c = 1 - \exp(-\tau_\lambda) \qquad (15) \]

Infrared absorption optical depth can be well approximated as half of the visible optical depth and the cloud emissivity becomes:

\[ \varepsilon_c = 1 - \exp\!\left( -\frac{\tau_v}{2} \right) \qquad (16) \]

[55] Since the radiances are only introduced into the retrieval after the radar-lidar part of the algorithm has been run to convergence, we may use the visible optical depth derived from radar and lidar here. In practice it is found that the radar and lidar provide an optical depth that is close to the value obtained using all three instruments, and therefore the use of this optical depth does not introduce substantial uncertainty into the calculation of the error in Iλ. We take the partial derivatives of (14) with respect to Tc and Ts, and by assuming each error is independent we may sum the squares of the results to obtain the error variance of the radiance:

\[ (\Delta I_\lambda)^2 = \left[ \varepsilon_c \left.\frac{\partial B_\lambda}{\partial T}\right|_{T_c} \Delta T_c \right]^2 + \left[ (1 - \varepsilon_c) \left.\frac{\partial B_\lambda}{\partial T}\right|_{T_s} \Delta T_s \right]^2 \qquad (17) \]

where ΔTs is the error in skin temperature, which is assumed to be 3 K for ECMWF forecasts [Morcrette, 2001], and ΔTc is the error in the cloud temperature; we use a value of 0.6 K, which was the error estimated for ECMWF temperature forecasts by Benedetti [2005]. The gradients of the Planck function are straightforward to calculate at Ts and Tc (the latter taken to be the cloud-top temperature detected by the lidar). Note that it is not necessary to consider random error in cloud emissivity, since we are calculating the error in radiance due to parameters in the forward model that are held constant during the subsequent retrieval process. Since the cloud optical depth, and hence the cloud emissivity, will be varied in order to better match the observed radiance in the subsequent retrieval, there is no need to include this error in (17).
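The simplified error model of (15)–(17) can be implemented in a few lines; the Planck-function routine and the finite-difference derivative below are our own illustrative choices.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23        # Planck constant, speed of light, Boltzmann (SI)

def planck(wavelength, T):
    """Planck spectral radiance B_lambda(T) in W m-2 sr-1 m-1."""
    return (2.0 * H * C ** 2 / wavelength ** 5) / np.expm1(H * C / (wavelength * KB * T))

def dplanck_dT(wavelength, T, dT=0.01):
    """Finite-difference estimate of dB/dT."""
    return (planck(wavelength, T + dT) - planck(wavelength, T - dT)) / (2.0 * dT)

def radiance_error_variance(wavelength, tau_visible, T_cloud, T_skin,
                            dT_cloud=0.6, dT_skin=3.0):
    """Error variance of the forward-modelled zenith radiance: emissivity from half the
    visible optical depth (equations (15)-(16)), then independent cloud- and
    skin-temperature errors summed in quadrature (equation (17))."""
    emissivity = 1.0 - np.exp(-0.5 * tau_visible)
    err_cloud = emissivity * dplanck_dT(wavelength, T_cloud) * dT_cloud
    err_skin = (1.0 - emissivity) * dplanck_dT(wavelength, T_skin) * dT_skin
    return err_cloud ** 2 + err_skin ** 2
```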

[56] It is clear that the errors in radiance are strongly dependent on the visible optical depth retrieved in the first part of the algorithm: optically thin clouds let through a substantial amount of radiation from the surface, and therefore the surface temperature error contributes significantly to the error in the radiance forward model. For optically thick clouds, εc is close to unity, so almost all the measured radiation is emitted by the cloud, and hence the errors in forward modeling the radiance arise almost entirely from the smaller error in the temperature profile.

4.3.2. Infrared Measurement Errors

[57] An error estimate for the radiance measurements at each MODIS wavelength is given as a percentage by the “CloudSat MODIS-AUX Auxiliary Data”, with a typical value of 0.5%. This error estimate is combined with the errors due to the microphysical model [Cooper et al., 2003].

4.4. Radar-Lidar Colocation Errors

[58] Another source of error arises from the mismatch of the radar and lidar beams. Illingworth et al. [2000] investigated this effect and estimated an error of 0.1 dB when the lidar samples through the middle of the radar footprint, increasing to 0.7 dB for a separation of radar and lidar footprints of 1 km. These values correspond to the RMS difference in reflectivity that would be found if the lidar measured the same quantity as the radar. Accordingly, we consider data acceptable only when the separation distance is less than 1 km; otherwise the alignment cannot be trusted and no retrieval is performed. Fortunately, this distance does not exceed 1 km 97% of the time.

5. Results

[59] In this section, the algorithm is applied to case studies of A-Train data. The target categorization is obtained using the methodology described in section 2.2, and error calculations needed for the variational approach are performed as explained in the previous section.

5.1. Radar and Lidar Retrieval

[60] Figures 3a and 3e show an ice cloud sampled by the CALIPSO lidar and the CloudSat radar on 22 September 2006 at around 15:29 UTC. This is a good illustration of the complementarity between the lidar and the radar, since only the lidar detects the second part of the cloud between 9.5 and 12.5° latitude, but it cannot completely penetrate the cloud between 6 and 7.5°, while the radar can. The two instruments work together when the cloud becomes thick enough to be detected by the radar down to the level at which the lidar is completely extinguished. Figure 3b shows the categorization obtained using the method described in section 2.2, where the ice phase is represented in light blue and purely supercooled liquid in dark blue; we can see that most of this cloud is composed of ice, without any supercooled water layers detected (green). Since the DH08 algorithm works only for the ice phase, the retrieved cloud properties are restricted to the part of the cloud represented in light blue, although in mixed-phase regions a radar-only retrieval is performed. We can also see that the lidar signal is completely extinguished from 6 to 7.5° and that there is no molecular echo detected below most of the cloud.

Figure 3.

Example of observations and retrieved ice cloud properties using the radar-lidar algorithm (the molecular lidar return beyond the cloud is not used) on 22 September 2006 at around 15:29 UTC: (a) CALIPSO lidar observations, (e) CloudSat radar observations of the same scene, (b) the categorization obtained using the method described in section 2.2, (c) the lidar forward modeled attenuated backscatter signal at the final iteration of the algorithm, (g) the radar forward modeled signal, (d) retrieved extinction coefficient of ice, (f) retrieved ice water content, and (h) retrieved effective radius.

[61] First we run the algorithm with only radar and lidar (no MODIS data assimilated) and without using lidar molecular scattering beyond the cloud as a constraint on optical depth. The lidar forward-modeled attenuated backscatter signal at the final iteration of the algorithm is shown in Figure 3c and is in good agreement with the measured signal. The molecular signal is also simulated (but not assimilated). Figure 3g shows the radar forward-modeled signal, once again in good agreement with the measurements, indicating that the retrievals are well constrained by observations throughout the depth of the cloud. Figures 3d, 3f, and 3h show, respectively, the retrieved visible extinction for ice, the ice water content and the ice effective radius. There are no obvious discontinuities between the regions where both instruments detect the cloud and the regions detected by only the radar or the lidar. Although there are no validation data available (MODIS can provide only a partial validation; see section 5.2), the retrieved properties lie in plausible ranges; for instance, the effective radius lies between 10 and 90 μm, tends to decrease toward the cloud top, and does not exceed 30 μm at cloud top. However, the effective radius appears to be very dependent on the altitude of the cloud, especially when only one of the instruments is available. This is partially because, when only one instrument is available, the particle size information originates primarily from the a priori constraint on N′, which is temperature dependent. Nevertheless, ice water content and visible extinction coefficient are much less dependent on temperature, and the structures of the cloud observed by the instruments are preserved in the retrieved variables.

5.2. Using the Lidar Molecular Signal Beyond the Cloud

[62] When ice clouds are sufficiently thin to allow the lidar signal to penetrate them entirely, the molecular signal can be detected beyond the cloud and used automatically as a constraint on optical depth [Young, 1995; Cadet et al., 2005]. The attenuation due to the ice cloud affects the molecular return, so comparing the measured molecular signal to that expected in the absence of cloud gives information on the cloud optical thickness. To exploit this in a variational framework we simply add a few gates (about 10) beyond the far end of the cloud to y. The lidar forward model is able to simulate the molecular scattering, including the effect of multiple scattering, so these gates can be included in the assimilation process.
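For intuition, a single-scattering estimate of the optical depth implied by the molecular return is sketched below; in the variational scheme itself this information enters through the lidar forward model rather than through an explicit formula, and the multiple-scattering factor eta is an illustrative parameter.

```python
import numpy as np

def optical_depth_from_molecular_return(beta_obs_beyond_cloud, beta_molecular_expected,
                                        eta=1.0):
    """Cloud optical depth from the two-way attenuation of the molecular signal beyond
    the cloud: tau = -ln(observed/expected) / (2 eta), where eta < 1 crudely accounts
    for multiple scattering reducing the apparent attenuation."""
    ratio = np.mean(beta_obs_beyond_cloud) / np.mean(beta_molecular_expected)
    return -np.log(ratio) / (2.0 * eta)
```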

[63] Figure 4 shows the effect of adding this information for the same ice cloud described in section 5.1, but selecting the optically thin part where the molecular return is detectable. This cloud was sampled during nighttime and hence the attenuated backscatter signal is not too noisy, as exemplified by Figure 4a. Figure 4b shows the visible extinction retrieved by the algorithm without using the molecular signal, while in Figure 4c the molecular signal is used. This cloud is not well detected by the radar, and therefore we would expect the retrieval to be susceptible to Hitschfeld-Bordan instability [Hitschfeld and Bordan, 1954] if no molecular signal is detected and the cloud is too optically thick. It can be seen that in some locations the retrieved extinction is much lower when the molecular signal is used. Figure 4d shows that the optical depth is often much lower when the molecular signal is used, and that this constraint tends to stabilize the retrieval, especially around 11° and between 9.2° and 10° latitude. An independent validation is carried out in Figure 4e, where infrared radiances at 10 μm are simulated and compared to MODIS. The extinction around 11° and between 9.2° and 10° is too large when the molecular signal is not used, and results in a radiance much lower than that observed by MODIS. This example shows that making use of the molecular signal significantly improves the retrieved extinction. However, this new constraint is very dependent on the quality of the target categorization; we need to know accurately where cloud base lies and which pixels contain only molecular scattering. During the daytime, sunlight increases the noise in the lidar signal and the small molecular signal is swamped by the larger solar background. Therefore, we use this constraint only at night.

Figure 4.

Illustration of the effect of including the molecular return beyond the cloud on retrieved visible extinction, for part of the ice cloud represented in Figure 3. (a) Measured lidar attenuated backscatter. (b and c) Retrieved visible extinction respectively without and with using the molecular signal. (d) Retrieved optical depths. (e) Top-of-atmosphere radiance at 10 μm: measured by MODIS and forward modeled both with and without molecular assimilated (note that the MODIS radiances were not used as a part of the retrieval).

5.3. Impact of the Choice of Mass-Size Relationship

[64] As mentioned in section 3.1, our forward model requires lookup tables that are derived assuming the Brown and Francis [1995] relationships for spherical aggregates. These relationships were derived using midlatitude in situ aircraft data, and the radar reflectivity they predict is very close to observations [Hogan et al., 2006b], although they may not be entirely suitable for tropical or polar clouds. Hogan et al. [2006a] estimated the impact of changing the relationship from Brown and Francis [1995] to Mitchell [2006] for two different radar-lidar algorithms, and found that the visible extinction was not affected, while IWC and re changed by about 30%.

[65] In this section we estimate this impact for our radar-lidar algorithm. Validation campaigns, where in situ measurements are available under the track of the satellites, could be used to partially estimate this error. However, owing to the complexity of this task and the considerable uncertainty in mass-size relationships even from aircraft data, coincident in situ data are not used in this paper. Instead, we quantify the impact of a change in the assumed area-diameter and mass-diameter relationships on the retrieval of cloud properties.

[66] To do so, we created new lookup tables assuming the mass- and area-diameter relationships appropriate for bullet rosettes with 5 branches proposed by Mitchell [2006], instead of spherical aggregates. Bullet rosettes are common in midlatitude and polar cirrus clouds [Schmitt et al., 2006]. We compare the ice cloud properties retrieved using the DH08 algorithm assuming that ice particles are either entirely bullet rosettes or entirely aggregates (except for the smallest particles, which are solid ice spheres in both cases). We apply our algorithm to the case presented in Figure 1, using radar and lidar. However, in some areas lidar and radar are not simultaneously available. The results of this comparison are shown in Figure 5, where we split the analysis into three instrument configurations: pixels detected by the lidar only, pixels detected by the radar only, and pixels where radar and lidar are used simultaneously. A single profile can contain all of these configurations; note therefore that each region of the retrieval is affected by the others, because the a priori error covariances spread number concentration information in height (see section 2.4 of DH08).

Figure 5.

Scatterplots of retrieved visible extinction and ice water content assuming spherical aggregates [Brown and Francis, 1995] versus bullet rosettes. The results are shown (a and d) for pixels detected by lidar only, (b and e) for pixels detected by radar only, and (c and f) for pixels detected by both radar and lidar.

[67] Figure 5a shows the retrieved extinction assuming bullet rosettes (hereafter αBR) as a function of the retrieved extinction assuming Brown and Francis [1995] (hereafter αBF), for lidar-only pixels; Figure 5b shows radar-only pixels, and Figure 5c radar plus lidar. It can be seen that αBR values are larger than αBF, with an overall bias of 45% and a spread of over one order of magnitude. The numerical results are summarized in Table 1 and show a root-mean-squared spread of 60%. As shown by Figure 5b, the overall bias is mostly due to the radar-only contribution (−101%), for which the retrieved extinction is very sensitive to the choice of ice particle type. Radar measurements alone are not enough to accurately retrieve extinction, since the reflectivity is not directly linked to extinction; another variable is needed, such as number concentration, which is obtained through an a priori relationship when lidar is not available. Figure 5a reveals that for lidar-only retrievals, visible extinction is affected by the change in particle type by only −20 ± 46%. In theory it should not be affected by the area-diameter and mass-diameter assumptions at all, but this retrieval is influenced by the radar-only and radar-lidar parts of the extinction profile. This has been verified by using only the lidar for the entire retrieval (i.e. the radar is not used at all): the visible extinction retrieval does not need to assume any particle shape once the extinction-to-backscatter ratio is fixed (although the extinction-to-backscatter ratio itself depends implicitly on particle shape), since in the geometric optics limit extinction is directly linked to the lidar measurement (not shown).

Table 1. Mean Relative Difference and Root-Mean-Squared Difference for Retrieved Visible Extinction, IWC, and Effective Radius Between Brown and Francis [1995] Aggregates and Bullet Rosettes [Mitchell, 2006] for the Case Shown in Figure 5

Instruments Used                                   α             IWC            re
Radar only                                         −101 ± 69%    −47 ± 117%     28 ± 17%
Lidar only                                         −20 ± 46%     42 ± 37%       52 ± 7%
Radar and lidar                                    −28 ± 41%     9 ± 73%        30 ± 18%
Radar and lidar, radar only, lidar only            −45 ± 60%     −0.1 ± 37%     32 ± 7%

[68] The regions detected by both radar and lidar are also affected by the rest of the profile, as shown by Figure 5c, for which the mean relative difference is about 28%. Since the visible extinction is retrieved simultaneously using radar and lidar, the radar contribution makes the retrieval dependent on the particle shape assumption. Hogan et al. [2006a] found no effect on the extinction retrieval because the Donovan et al. [2001] and Tinel et al. [2005] algorithms that they analyzed used the radar signal only to assist in the correction for lidar attenuation. In our case the radar is not only used to stabilize the retrieval; it also contributes significantly to the retrieved extinction. However, this is largely related to the ability of the DH08 algorithm to act as a radar-only algorithm within the same profile, something not possible with the two earlier algorithms, which only work where both instruments detect the cloud.

[69] The general behavior is slightly different for ice water content (IWC). We use the lookup tables to derive IWC, visible extinction and number concentration. The overall mean relative difference is almost zero, as shown in Table 1. As shown in Figure 5d, the lidar-only IWC assuming Brown and Francis [1995] (hereafter IWCBF) is greater than the IWC retrieved using bullet rosettes (hereafter IWCBR); the opposite holds for radar-only. When radar and lidar are used simultaneously, the difference in IWC is less apparent, although IWCBF is slightly larger than IWCBR for the smallest IWC. The lidar-only IWC retrieval is more sensitive to the mass-diameter assumptions (42 ± 37%), since IWC then depends more on the a priori assumption and the lookup table is mass-diameter dependent. Radar-only is also sensitive to the density assumptions (−47 ± 117%) for the same reason. In the radar-lidar region (Figure 5f) the impact of the choice of particle habit is small, with a mean relative difference that does not exceed 10%.

[70] We do not plot the results for effective radius, but Table 1 shows that they are similar to those found by Hogan et al. [2006a], i.e., the mean relative difference is about 30% for radar-lidar and radar-only pixels. The difference for lidar-only is higher, greater than 50%. The effective radius is proportional to the ratio of IWC to extinction; for radar-only, since the extinction and IWC retrievals are both density dependent and shift in the same direction (IWCBR > IWCBF and αBR > αBF), the effective radius is less affected. For lidar-only, the retrieved extinction is much less dependent on the density assumptions than IWC, so the change in IWC maps almost directly into the change in effective radius.
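This behavior follows from the standard definition of effective radius in terms of IWC and visible extinction (with ρi the density of solid ice), which to first order gives

\[ r_e = \frac{3\,\mathrm{IWC}}{2 \rho_i\, \alpha_v} \quad\Longrightarrow\quad \frac{\Delta r_e}{r_e} \approx \frac{\Delta \mathrm{IWC}}{\mathrm{IWC}} - \frac{\Delta \alpha_v}{\alpha_v}, \]

so shifts of the same sign in IWC and αv partly cancel in re (radar-only), while a shift in IWC alone transfers almost entirely to re (lidar-only).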

[71] To conclude this section, the assumed area-diameter and mass-diameter relationships affect the radar-only and lidar-only parts of the retrieval more than the radar-lidar parts, although the lidar-only extinction retrieval is only weakly sensitive. Note that all remote-sensing retrievals are subject to similar sensitivities. Fortunately, other constraints, particularly radiances, should help to reduce these uncertainties.

5.4. Retrieval Using Radar, Lidar, and Infrared Radiometer

[72] When infrared radiances are available, from MODIS or IIR, they can be used as an extra constraint. To demonstrate the impact of assimilating infrared radiances together with the radar and lidar measurements, we run the algorithm on an ice cloud sampled on 8 July 2006 by CloudSat, CALIPSO and MODIS. Figures 6a, 6b, and 6m show the attenuated lidar backscatter, the radar reflectivity, and the infrared radiance, respectively. This ice cloud is relatively optically thin and can be fully penetrated by the lidar, except after −69° latitude, where ice is precipitating and only the radar can fully penetrate the cloud. Between −70° and −60°, only the lidar can detect the top of the cloud. Since this ice cloud is over the sea, the forward-modeled radiance is not affected by surface emissivity variations. However, supercooled water layers have been identified in the boundary layer after −60°, and the radiances are therefore not assimilated at these times.
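A minimal sketch of this screening step, under the assumption that the phase categorization is available as a per-gate flag; the function name and the flag value are hypothetical placeholders, not the categorization used by the algorithm.

```python
import numpy as np

# Hypothetical flag value for supercooled/liquid layers in the phase mask.
SUPERCOOLED = 2

def radiance_assimilation_mask(phase_profile):
    """Return True for profiles in which infrared radiances may be assimilated.

    phase_profile is a 2-D array (profile, gate) of categorization flags.
    Radiances are skipped whenever a supercooled or liquid layer is present,
    since the measured radiance would then not be explained by the ice cloud
    alone.
    """
    has_liquid = np.any(phase_profile == SUPERCOOLED, axis=1)
    return ~has_liquid
```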

Figure 6.

Illustration of the impact of assimilating infrared radiances in the retrieval. Latitude-height representation of an ice cloud observed by (a) the CALIPSO lidar and (b) the CloudSat radar on 8 July 2006. (c, f, and i) Visible extinction, number concentration parameter (N*0), and effective radius, respectively, retrieved using radar and lidar only. (d, g, and j) The same quantities when the 10 μm infrared radiance and the difference in radiance between 8 μm and 12 μm are also assimilated with S assumed constant with height. (e, h, and k) The same when S is allowed to vary linearly with height. (l) Retrieved visible optical depth. (m) Observed 10 μm radiance and the corresponding forward-modeled radiances for the three experiments.

[73] Retrieved ice cloud properties are shown in Figure 6, where Figures 6c, 6f, and 6i are, respectively, the visible extinction, number concentration parameter (N*0) and effective radius retrieved using only radar and lidar (we also make use of the molecular signal beyond the cloud, as described in section 5.2). As illustrated by Figures 6d, 6g, and 6j, the effect of assimilating the infrared radiance at 10 μm and the difference in radiance between 8 μm and 12 μm is clearly to increase αv and N*0 and to reduce effective radius at cloud base. To match the colder brightness temperature measured by MODIS, the algorithm must increase the extinction somewhere in the profile, and hence the optical depth. We would expect radiance assimilation to increase extinction and number concentration at cloud top rather than at cloud base, since cloud top is the part of the cloud to which the radiances are most sensitive. This does not occur because we do not allow S to vary with height: the lidar signal strongly constrains the extinction at cloud top, so the only way the algorithm can move toward the measured radiance is to increase αv at cloud base, where it is less constrained by the lidar (it also needs to increase αv much more than it would if αv were being increased at cloud top, and it still cannot reduce the forward-modeled radiances all the way to the observations).
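To see why the radiances are most sensitive to cloud top, the sketch below implements a deliberately crude non-scattering infrared forward model in which each layer attenuates the radiance from below and adds its own emission. The conversion tau_ir ≈ αv·Δz/2, the profile values, and the 10.8 μm wavelength are rough illustrative assumptions, not the radiance forward model used in the algorithm.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

def planck(T, wavelength=10.8e-6):
    """Planck spectral radiance [W m-2 sr-1 m-1] at temperature T [K]."""
    return (2 * H * C**2 / wavelength**5 /
            (np.exp(H * C / (wavelength * KB * T)) - 1.0))

def brightness_temperature(I, wavelength=10.8e-6):
    """Invert the Planck function to a brightness temperature [K]."""
    return H * C / (wavelength * KB *
                    np.log(1.0 + 2 * H * C**2 / (wavelength**5 * I)))

def toa_radiance(alpha_vis, T, dz, T_below=280.0):
    """Crude non-scattering IR model: the radiance from below the cloud is
    attenuated by each layer and partly replaced by that layer's emission.
    tau_ir ~ alpha_vis * dz / 2 is a rough approximation for large ice
    particles; gaseous absorption and scattering are ignored."""
    I = planck(T_below)
    for a, t in zip(alpha_vis, T):          # loop from cloud base to cloud top
        trans = np.exp(-0.5 * a * dz)
        I = I * trans + (1.0 - trans) * planck(t)
    return I

# Illustrative profile: the same extra extinction added near cloud top changes
# the brightness temperature far more than when added near cloud base.
dz, n = 250.0, 20
alpha = np.full(n, 2e-4)                    # visible extinction [m-1]
T = np.linspace(245.0, 215.0, n)            # temperature, base to top [K]
for where, idx in [("base", 0), ("top", n - 1)]:
    a = alpha.copy()
    a[idx] += 2e-3
    print(where, brightness_temperature(toa_radiance(a, T, dz)))
```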

[74] To solve this problem, we allow the extinction-to-backscatter ratio to vary linearly with height (as mentioned in section 3). Figures 6e and 6h show that αv and N*0 then increase not only at cloud base but also at cloud top, and this also results in a lower optical depth than in the constant-S case. Effective radius (Figure 6k) decreases at cloud top compared with both the radar-lidar-only retrieval and the radiance assimilation with constant S. Figure 6m shows that the forward-modeled radiance with variable S (blue) is closer to the observed radiance (grey). Note that when S is allowed to vary linearly with height, it changes by about 20% relative to the constant-S case.
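One possible way to express such a parameterization is shown below, in which only two coefficients describing S(z) are added to the state vector, so that the radiance constraint is sufficient to determine them; the functional form, names and numbers are illustrative assumptions rather than the scheme actually used.

```python
import numpy as np

def lidar_ratio_profile(S_top, dS_dz, z, z_top):
    """Height-varying extinction-to-backscatter ratio, linear in height:
    S(z) = S_top + dS_dz * (z_top - z).  Only S_top and dS_dz would need to
    be retrieved; all values here are placeholders."""
    return S_top + dS_dz * (z_top - z)

z = np.arange(8000.0, 12000.0, 250.0)        # in-cloud heights [m]
S = lidar_ratio_profile(S_top=30.0, dS_dz=2e-3, z=z, z_top=z.max())
beta = 1e-4 / S                              # backscatter for alpha_v = 1e-4 m-1
print(S[:3], beta[:3])
```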

[75] This section has shown how infrared radiances can be assimilated in the radar-lidar algorithm. However, to evaluate whether or not assimilating radiances improves the retrieval, we would need to compare the results with independent data. Since the use of radiances should improve the retrieved properties, we plan to carry out such an evaluation by using our retrievals to compute longwave and shortwave fluxes at the top of the atmosphere and comparing them with measurements from the Clouds and the Earth's Radiant Energy System (CERES).

6. Conclusion

[76] In this paper, the DH08 algorithm has been modified and applied to A-Train data, using spaceborne radar, lidar and infrared radiometers to retrieve ice cloud properties. The first step is to merge the radar and lidar profiles, since the two instruments are not on the same platform and their measurements are not reported on the same grid. Since the algorithm works only on ice clouds, we first implemented a cloud phase identification method similar to Hogan and O'Connor [2004], including identification of supercooled water layers using the lidar signal and temperature. In terms of the variational algorithm itself, one change to the original algorithm was to add the capability for the extinction-to-backscatter ratio to vary with height when infrared radiances are available. We also included calculation of both observational and forward-model errors.

[77] The algorithm has recently been run on a large volume of A-Train data and these retrieved cloud properties are currently being used to evaluate the clouds in the forecast models of the UK Met Office and ECMWF (T. Stein et al., A comparison between different retrieval methods for ice cloud properties using data from the CloudSat and A-Train satellites, manuscript in preparation, 2010; J. Delanoë et al., Evaluation of ice cloud representation in ECMWF and UK Met Office models using CloudSat and CALIPSO data, manuscript in preparation, 2010).

[78] The flexible algorithm presented here is well suited to the future EarthCare mission [ESA, 2004], which will include a High Spectral Resolution Lidar [Shipley et al., 1983] that can observe the molecular signal even within cloud. A Doppler radar will add the capability to characterize vertical motions in clouds and to provide information on particle size via the measured fall speed [Matrosov et al., 2002; Delanoë et al., 2007]. It is relatively straightforward to add forward models for these additional measurements within the same retrieval framework, and we have recently done this for the High Spectral Resolution Lidar of the forthcoming EarthCare satellite [Delanoë and Hogan, 2008b].

Acknowledgments

[79] The input data were obtained from the NASA Langley Research Center Atmospheric Science Data Center and the NASA CloudSat project. This work was supported by NERC grant NE/C519697/1 and European Space Agency grant 20990/07/NL/EL.
