 Measurements from the microwave sounding unit (MSU) on board successive NOAA polar-orbiting satellites have been used extensively to detect atmospheric temperature trends during the last several decades. However, temperature trends derived from these measurements remain under significant debate, mostly because of calibration errors. This study recalibrates the MSU channel 2 observations at level 0 using postlaunch simultaneous nadir overpass (SNO) matchups and then provides a well-merged new MSU 1b data set for climate studies. The calibration algorithm consists of a dominant linear response of the MSU raw counts to the Earth-view radiance plus a smaller quadratic term. Uncertainties are represented by a constant offset and by errors in the coefficient of the nonlinear quadratic term. A SNO matchup data set for nadir pixels, with simultaneity criteria of less than 100 s in time and less than 111 km in ground distance, is generated for all overlaps of NOAA satellites. The simultaneous nature of these matchups eliminates the impact of orbital drift on the calibration. A radiance error model for the SNO pairs is developed and then used to determine the offsets and nonlinear coefficients through regressions of the SNO matchups. It is found that the SNO matchups can accurately determine the differences in both the offsets and the nonlinear coefficients between satellite pairs, thus providing a strong constraint that links the calibration coefficients of different satellites together. However, SNO matchups alone cannot determine the absolute values of the coefficients because of a high degree of colinearity between satellite SNO observations. Absolute values of the calibration coefficients are obtained through sensitivity experiments, in which the percentage of variance in the brightness temperature difference time series that can be explained by the warm target temperatures of overlapping satellites is treated as a function of the calibration coefficients.
By minimizing these percentages of variance for overlapping observations, a new set of calibration coefficients is obtained from the SNO regressions. These new coefficients differ significantly from the prelaunch calibration values, but they result in bias-free SNO matchups and near-zero contamination of the calibrated brightness temperatures by the warm target temperatures. Applying the new calibration coefficients to the level 0 MSU observations, a well-merged MSU pentad data set is generated for climate trend studies. To avoid errors caused by the small SNO sampling between NOAA 10 and NOAA 9, only observations from NOAA 10 onward are used. In addition, only ocean averages are investigated so that the diurnal cycle effect can be ignored. The global ocean-averaged intersatellite biases for the pentad data set are between 0.05 and 0.1 K, an order of magnitude smaller than those obtained with the unadjusted calibration algorithm. The ocean-only anomaly trend for the combined MSU channel 2 brightness temperature is found to be 0.198 K decade−1 during 1987–2003.
 The microwave sounding unit (MSU) on board the National Oceanic and Atmospheric Administration (NOAA) polar-orbiting satellite series was designed to measure atmospheric temperature from the surface to the lower stratosphere under all weather conditions, excluding precipitation. The MSU has four channels with center frequencies at 50.30, 53.74, 54.96, and 57.95 GHz. The weighting functions of these channels peak roughly near the surface, the middle troposphere, the upper troposphere, and the stratosphere, respectively [e.g., Christy et al., 1998], and each channel's signal roughly corresponds to temperature changes near where its weighting function peaks. In addition to the channel frequency, however, the weighting function also depends on the Earth incidence angle: at the extreme viewing angle, the pressure level at which the weighting function peaks can be reduced by 25%.
 The first MSU was launched in 1978 on the Television and Infrared Observation Satellite-Next generation (TIROS-N). Since then, NOAA has launched eight other satellites carrying MSU instruments: NOAA 6, 7, 8, 9, 10, 11, 12, and 14 (NOAA 13 was launched in August 1993, but no data are available because of a circuit failure). These satellites circle the Earth in sun-synchronous orbits at heights of around 830–870 km. Of them, NOAA 6, 8, 10, and 12 were launched so that their equator crossing times occur at 0730 and 1930 LT and are thus called “morning satellites,” while TIROS-N, NOAA 7, 9, 11, and 14 cross the equator at 0230 and 1430 LT and are referred to as “afternoon satellites.” During the life cycle of a given satellite, the actual local equator crossing time (LECT) gradually changes because of orbital drift. Although the lifetime of a given MSU is generally only a few years, each of these satellites overlaps others, forming a continuous time series of observations.
 Beginning in 1998, the MSU was replaced by the advanced microwave sounding unit (AMSU) on board the NOAA K, L, and M series. However, the last MSU, on board NOAA 14, is still operating in parallel with the AMSU satellites. The AMSU contains more channels than the MSU, and some AMSU channels have frequencies similar to those of the MSU. As such, the MSU and AMSU provide a continuous time series of observations from the past 27 years to the present and onward. Although the MSU and AMSU instruments were primarily designed to provide atmospheric temperatures on a daily basis, their continuity and long-term stability make them equally applicable to monitoring climatic temperature change.
 During the last 15 years, various studies have sought to determine the long-term atmospheric temperature trend from MSU measurements. However, the trends derived from these measurements are under significant debate, with different groups obtaining different trends. Spencer and Christy [1992a, 1992b] investigated the tropospheric temperature trend by analyzing monthly mean MSU channel 2 (53.74 GHz) observations from the 12-year period spanning the TIROS-N to NOAA 11 satellites. They constructed two temperature products from the MSU channel 2 observations, T2 and TLT, using different combinations of viewing angles. T2 is an average of the MSU channel 2 data over the near-nadir views, and it measures the mean temperature of the midtroposphere from the surface to about 15 km. TLT, an abbreviation for Temperature Lower Troposphere, is a weighted difference between the near-limb and near-nadir views. The weighting function of TLT peaks in the lower troposphere, which contains much less contribution from the stratosphere than the T2 product. The trends for T2 and TLT were found to be 0.02 and 0.04 K decade−1, respectively, for the 1979–1990 period in the work by Spencer and Christy [1992a, 1992b]. In merging multiple MSU satellites to construct a single time series, however, only constant offsets due to calibration errors were removed in their study. This initial study was followed by a series of modifications in which several processes were recognized to be important in the analyses [Christy et al., 1998, 2000, 2003]. In particular, they found that the brightness temperature difference between overlapping satellite pairs was empirically related to variations in the blackbody target temperature used for calibration. Therefore, in addition to the offset, a second empirical coefficient was needed to multiply the instrument's target temperature. Both coefficients were obtained using overlapping satellite measurements.
However, prior to obtaining these calibration adjustments, the overlapping MSU measurements had to be adjusted for changes in LECT due to satellite drift. This adjustment was obtained using MSU measurements at different scan positions, corresponding to different local clock times. After applying all of these adjustments (calibration offset, target temperature effect, and changes due to LECT), the updated global trends for T2 and TLT were still small, being 0.02 and 0.06 K decade−1, respectively, for the 1979 to 2002 period [Christy et al., 2003]. These small trends were inconsistent with the larger trend of about 0.17 K decade−1 obtained from surface observations [Folland et al., 2001]. To help explain the inconsistency, Fu et al. and Fu and Johanson suggested an alternative approach that removes the stratospheric cooling contribution in T2 by combining MSU channel 2 with channel 4 (57.95 GHz) observations. In this approach, Fu et al. estimated the stratospheric cooling effect to be about 0.08 K decade−1, yielding a channel 4-adjusted T2 trend much closer to the surface trend.
 In a different study, Prabhakara et al. also found a significant correlation between the LECT evolution and the warm target temperature trend for all satellites, suggesting that the target temperature effect in the MSU brightness temperature time series is related to orbital drift. They used a constant slope to determine the relationship between the brightness temperature differences and the warm target temperature differences for all satellite pairs. After removing the target temperature effect on the basis of this relationship, they obtained a trend of 0.13 K decade−1 for a monthly mean MSU channel 2 time series from 1980 to 1999 [Prabhakara et al., 2000].
 In another independent study, Mears et al. reexamined the MSU channel 2 trend using the same empirical equation developed by Christy et al. [2000, 2003] to account for calibration errors. However, rather than using MSU measurements at different scan positions to adjust for changes in the satellite LECT, they used a simulated diurnal cycle climatology from the National Center for Atmospheric Research (NCAR) Community Climate Model (CCM3) to convert observations made at different times at different pixels to a common observation time, removing the diurnal cycle effect. The global trend in their study is about 0.10 K decade−1 for T2 from 1979 to 2001. Shortly afterward, Mears and Wentz conducted a parallel study examining the trend differences between the Mears et al. and Christy et al. studies. They found that for the T2 data set, most of the differences can be explained by the different target temperature coefficients used for the NOAA 9 and NOAA 11 instruments. For the TLT data, however, more than half of the differences can be attributed to the different treatments of the diurnal cycle correction.
Vinnikov and Grody obtained the largest global trend, 0.26 K decade−1 from 1978 to 2002, from the MSU channel 2 observations when using only a constant offset to intercalibrate the different satellites. Shortly afterward, Grody et al. developed a nonlinear correction approach to account for the warm load target effects. This nonlinear correction was quite different from the approaches used by Christy et al. and Prabhakara et al. After applying the Grody et al. nonlinear adjustment, Vinnikov et al. revised their trend to 0.20 K decade−1 for the T2 data from 1978 to 2004. In their studies, however, pixels observed at different times were not converted to a common time before being used to generate the gridded data set.
 All the studies mentioned above started essentially from the NESDIS operational 1b data [Kidwell, 1998, section 1.4], although different quality control procedures may have been applied. The NESDIS 1b data are calibrated independently for each satellite on the basis of prelaunch chamber test data sets [e.g., Mo et al., 2001]. Because of this independence, the prelaunch calibration left biases on the order of 0.5 K between some satellites, and these biases, along with nonlinear changes, manifested themselves in the gridded time series. The calibration adjustment approaches developed by Christy et al. and Grody et al. were designed to remove these biases. However, the diurnal cycle correction had to be made before conducting the bias adjustments in those studies. Errors in the diurnal cycle correction were therefore carried into the calibration adjustment procedure, which may amplify the trend uncertainties. For instance, Mears and Wentz showed that when using the same target factors but different diurnal cycle corrections, the TLT trend difference for the 1979–2001 period for the tropical region (20°S to 20°N) is about 0.085 K decade−1. When both different diurnal cycle corrections and different bias adjustment procedures were applied, the trend difference rose to 0.15 K decade−1 for the same period and region. It is therefore desirable to develop an intercalibration procedure that is independent of diurnal cycle corrections, to reduce these errors.
 Recently, Cao et al. developed a method to predict the time and location of simultaneous nadir overpasses (SNO) for the different NOAA satellites. Such observations, simultaneous in time and space between different satellites, are free of errors caused by differences in local measurement times and thus offer a unique opportunity for intercalibration that is independent of diurnal cycle corrections. This study exploits the potential of the SNO matchups and uses them to develop a procedure that accurately calibrates the MSU observations at level 0 without depending on the diurnal cycle correction. The calibration algorithm is physically based, and the procedure generates new sets of calibration coefficients which differ from the prelaunch calibration coefficients but result in well-merged and well-calibrated new 1b data sets. The anomaly trend of the time series generated from the new 1b data is investigated. In addition, how the calibration removes the warm target temperature contamination in the brightness temperature differences between satellite pairs is also examined.
 Our main focus in this study is on the calibration procedure. However, we also intend to link the resulting trend to the actual tropospheric temperature change. To avoid uncertainties in the calibrations caused by the small SNO sampling involving NOAA 9, we use observations only from NOAA 10 onward. In addition, only global-ocean averages are investigated so that the diurnal cycle effect can be ignored [Mears et al., 2003]. We focus on MSU channel 2 calibrations, but the procedure is equally applicable to other channels, such as channels 3 and 4.
 The next section describes the calibration algorithm that converts MSU raw counts to radiance. Section 3 describes the characteristics of the SNO data sets while section 4 describes the detailed calibration methodology and procedure. Section 5 discusses the relationship between the calibration and trends of the resulting time series and provides the final calibration coefficients and trend results. Section 6 contains a summary and conclusion.
2. Nonlinear Calibration of the MSU Measurement
 The calibration of a satellite radiometer at level 0 converts the output of the instrument (raw counts) to the Earth emitted radiance seen by the instrument. This process is expressed by the transfer function (calibration algorithm)
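The display equation itself is missing from this extraction; from the surrounding text, equation (1) is presumably just the generic transfer function mapping the Earth-view counts to the Earth radiance:

```latex
R_e = f(C_e) \tag{1}
```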
where Re represents the Earth radiance and Ce the digitized raw counts. Since the accuracy of this relationship can have a large impact on the derived temperature trend, the calibration algorithm used in this study is described first. We begin by describing the internal blackbody target used for calibration.
2.1. Internal Blackbody Target
 It is a design goal to make radiometer instruments as linear as possible. In a perfectly linear case, only two calibration points would be needed to uniquely determine the calibration equation (1). The MSU radiometer responds nearly linearly to the incoming thermal energy, so two calibration points are used. These consist of the onboard “warm blackbody target” and the cosmic space “cold target.” In each scan cycle, the MSU antenna looks at cold space, the blackbody target, and the radiation emitted by the underlying Earth. To convert the Earth-view counts to the Earth radiance, the physical temperatures of cold space and the blackbody target need to be determined a priori. The cold space radiation temperature is fixed at 4.78 K, which comprises the actual cold-space temperature of 2.73 K plus an increase of about 2 K due to stray radiation entering the antenna sidelobes from active and passive sources on board the satellites. The physical temperature of the internal blackbody target is measured by platinum resistance thermometers (PRTs) embedded in the target. There are two PRTs in the blackbody, and the average of their two measurements is used to reduce noise and improve the accuracy of the resulting temperature. Outputs from the two PRTs are stored as digital counts in the MSU 1b data files. These counts are converted to physical temperature using an algorithm described in NOAA technical report NESS 107. The blackbody target radiance measured by each MSU channel is then computed from the temperature measurements using the Planck function. Details of the conversion are provided in Appendix A, and the parameters required for this calculation are contained in the NOAA Polar Orbiter Data User's Guide [Kidwell, 1998, section 1.4].
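As an illustration only, the PRT-to-radiance chain described above can be sketched as follows. The polynomial coefficients for the count-to-temperature conversion are instrument-specific and come from NESS 107 tables; the values used in the test below are hypothetical, not the actual NOAA coefficients.

```python
import math

# Planck radiance in mW (sr m^2 cm^-1)^-1, the unit convention of the
# NOAA 1b data, with wavenumber nu in cm^-1 and temperature T in kelvin.
C1 = 1.1910427e-5   # first radiation constant, mW (sr m^2 cm^-4)^-1
C2 = 1.4387752      # second radiation constant, cm K

def planck_radiance(nu_cm, T):
    """Planck radiance at wavenumber nu_cm (cm^-1) and temperature T (K)."""
    return C1 * nu_cm**3 / (math.exp(C2 * nu_cm / T) - 1.0)

def prt_temperature(counts, coeffs):
    """Convert PRT digital counts to physical temperature (K) with a
    polynomial whose coefficients come from the instrument's prelaunch
    calibration tables (hypothetical values are used in examples)."""
    return sum(c * counts**i for i, c in enumerate(coeffs))

def warm_target_radiance(prt_counts_pair, coeffs, nu_cm):
    """Average the two embedded PRT readings, then apply the Planck
    function to get the warm target radiance seen by one channel."""
    t_w = sum(prt_temperature(c, coeffs) for c in prt_counts_pair) / 2.0
    return t_w, planck_radiance(nu_cm, t_w)
```

For MSU channel 2 at 53.74 GHz, the wavenumber is f/c ≈ 53.74e9 / 2.9979e10 ≈ 1.793 cm−1.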
2.2. Calibration Algorithm in This Study
 In this study, we adopt the calibration algorithm suggested by Mo  which takes into account the instrumental nonlinearity. This algorithm is written as
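The display equation is missing from this extraction. Based on the symbol definitions given immediately below and the K and Z terms introduced in the error analysis (which satisfy K(K − 1) = Z/(Rw − Rc)²), Mo's nonlinear algorithm plausibly has the form:

```latex
R_e = R_c + S\,(C_e - C_c) + \mu Z, \qquad
S = \frac{R_w - R_c}{C_w - C_c}, \qquad
Z = S^2\,(C_e - C_c)(C_e - C_w) \tag{2}
```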
where R and C are the respective radiance and raw counts and the subscripts e, w, and c refer to the Earth-view scene, the onboard warm blackbody target, and cold space, respectively; μ is the nonlinear coefficient and S = (Rw − Rc)/(Cw − Cc) is the slope. This form of the calibration equation can be derived by considering an “imperfect” square-law detector or a nonlinear gain in the instrument [Mo, 1995].
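For illustration only, the two-point nonlinear calibration described above can be sketched as below. The quadratic term is written here as μS²(Ce − Cc)(Ce − Cw), one common form of the nonlinear correction; the exact expression should be taken from Mo [1995].

```python
def calibrate_earth_radiance(ce, cw, cc, rw, rc, mu):
    """Two-point nonlinear calibration sketch (after Mo, 1995).

    ce, cw, cc: Earth-view, warm-target, and cold-space counts
    rw, rc:     warm-target and cold-space radiances
    mu:         nonlinear coefficient
    Returns the Earth-view radiance.  The quadratic term vanishes at
    both calibration points (ce == cc and ce == cw), so the two-point
    anchors are preserved for any value of mu.
    """
    slope = (rw - rc) / (cw - cc)        # linear gain from the two anchors
    linear = rc + slope * (ce - cc)      # linear part of the response
    nonlinear = mu * slope**2 * (ce - cc) * (ce - cw)
    return linear + nonlinear
```

Note the design property exploited later in the paper: because the quadratic term is zero at both calibration points, errors in μ affect only scenes between the anchors.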
 The cold space radiance Rc is specified to be 9.6 × 10−5 mW (sr m2 cm−1)−1 for all scan lines. This is the same value given by Kidwell [1998, section 1.4], corresponding to a brightness temperature of 4.78 K. Using a single value for the cold space radiance is equivalent to assuming that the cold space measurement contains no errors. In reality, however, different scan lines may yield slightly different cold space temperatures, so using a fixed, single cold space radiance transfers all the cold space measurement errors to the Earth-view radiance. As will be demonstrated later, this treatment does not cause problems for climate analyses, since we will eventually calibrate the Earth-view radiance using the SNO matchups.
 The calibration equation (2) may contain errors in the form of a bias as well as random errors (noise) due to random fluctuations of the instrument characteristics. In this study, we correct the bias errors using the SNO matchup data so that only random errors remain. The error sources leading to a bias are errors in the calibration targets, ΔRc and ΔRw, and in the nonlinearity, Δμ. The bias in the Earth radiance (2) is therefore given by
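The display equation for the bias is missing from this extraction. Differentiating the calibration equation with respect to the target radiances and the nonlinear coefficient (neglecting second-order products), and writing K = (Ce − Cc)/(Cw − Cc), the bias plausibly reads:

```latex
\Delta R_e = \Delta R_c + K\,(\Delta R_w - \Delta R_c) + Z\,\Delta\mu \tag{3}
```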
Subtracting (3) from (2), one obtains the error-free calibration equation, which contains both the K and Z terms. It is desirable to obtain the unknown coefficients of the K and Z terms from regressions using the SNO matchups or other data sets. However, as discussed in detail later, one must consider the colinearity between the K and Z terms in order to obtain the coefficients reliably from regressions. By definition, the K and Z terms have the quadratic relationship K(K − 1) = Z/(Rw − Rc)². In practice, Cc is much smaller than Ce, and Ce is close to but smaller than Cw, so the K values vary between 0.7 and 0.9 [Grody et al., 2004]. In this range, the K–Z relationship is dominated by linear terms plus a weak nonlinear correction (the expression can be obtained using a Taylor expansion around K = 0.85 and Tw = 280 K [Grody et al., 2004]), i.e.,
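The linearized expression itself is missing from this extraction. Consistent with the forms of δR and δμ quoted below, equation (6) presumably expresses K as approximately linear in Z, with constants a and b coming from the Taylor expansion about K = 0.85 and Tw = 280 K:

```latex
K \approx a + b\,Z + \text{higher-order terms} \tag{6}
```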
After using (6) in (3), the colinearity problem in the K–Z relationship will be removed. We will further show later that the effect of the higher-order terms is small so only the linear terms in (6) need to be kept.
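The resulting display equation is missing here; substituting (6) into (3) and collecting terms gives the bias in the two-coefficient form referenced below:

```latex
\Delta R_e = \delta R + \delta\mu\,Z \tag{7}
```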
where δR = ΔRc + a(ΔRw − ΔRc) and δμ = Δμ + b(ΔRw − ΔRc) (plus higher-order terms, which are neglected) represent the offset and the error of the nonlinear coefficient, respectively. The estimation of their magnitudes will be discussed in detail later, when the MSU observations are calibrated using the SNO data sets.
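A quick numerical sketch (not from the paper) of why the K–Z relation is nearly linear over the MSU operating range: fitting a straight line to z = K(K − 1) for K between 0.7 and 0.9 leaves only a small residual, the "weak nonlinear correction" discussed above.

```python
import numpy as np

# K varies over roughly 0.7-0.9 for MSU Earth views (Grody et al., 2004).
K = np.linspace(0.7, 0.9, 201)
z = K * (K - 1.0)          # z = Z / (Rw - Rc)^2, the quadratic K-Z relation

# Least-squares linear fit z ~ a + b*K over the operating range.
b, a = np.polyfit(K, z, 1)
resid = z - (a + b * K)

# The maximum residual is only a few percent of the total variation of z,
# so a linear term plus a weak quadratic correction captures the K-Z
# relation in this range, as stated in the text.
rel = np.abs(resid).max() / (z.max() - z.min())
```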
 Subtracting (7) from (2), the error free measurement, denoted as R′e = Re − ΔRe, takes the form,
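The display equation is missing here. Writing (2) as Re = RL + μZ and using (7) for the bias, the error-free measurement presumably reads, consistent with the coefficients δR and μ − δμ referenced in the text below:

```latex
R'_e = R_L - \delta R + (\mu - \delta\mu)\,Z \tag{8}
```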
where the linear part of the calibration algorithm is
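The linear part is missing as a display equation; from the slope definition it is presumably:

```latex
R_L = R_c + S\,(C_e - C_c), \qquad S = \frac{R_w - R_c}{C_w - C_c} \tag{9}
```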
Equations (5), (8), and (9) constitute the basic calibration algorithm used in this study. The linear part of the Earth-view radiance, RL, is computed from the MSU raw counts together with the warm load temperature computation described in the previous sections, while the coefficients δR and μ − δμ in (8) are determined using the SNO matchup data set as described in the following sections. Once the radiance is known, the brightness temperature is computed using the Planck function. In the microwave region, however, the Planck function is well approximated by the Rayleigh-Jeans limit, which gives a linear relationship between radiance and brightness temperature. Therefore, in the following analyses, the results are expressed in brightness temperature units rather than radiance. For example, we use Tb to represent the brightness temperature corresponding to Re′ and TL to represent the brightness temperature corresponding to RL.
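A small sketch (illustrative, not the paper's code) of the radiance-to-brightness-temperature step: inverting the Planck function exactly and comparing it with the linear Rayleigh-Jeans conversion at the channel 2 frequency shows that the two differ by a nearly constant offset of about 1.3 K, so radiance and brightness temperature are indeed linearly related to a very good approximation. The radiation constants follow the NOAA 1b unit conventions.

```python
import math

C1 = 1.1910427e-5                 # mW (sr m^2 cm^-4)^-1
C2 = 1.4387752                    # cm K
NU2 = 53.74e9 / 2.99792458e10     # MSU channel 2 wavenumber, ~1.793 cm^-1

def planck(nu, T):
    """Planck radiance at wavenumber nu (cm^-1), temperature T (K)."""
    return C1 * nu**3 / (math.exp(C2 * nu / T) - 1.0)

def tb_exact(nu, R):
    """Exact inversion of the Planck function for brightness temperature."""
    return C2 * nu / math.log(1.0 + C1 * nu**3 / R)

def tb_rayleigh_jeans(nu, R):
    """Linear radiance-to-temperature conversion in the RJ limit."""
    return C2 * R / (C1 * nu**2)

# At MSU frequencies the two conversions differ by a nearly constant
# offset of roughly C2*nu/2 (about 1.3 K for channel 2), so the
# radiance-temperature relation is linear to a very good approximation.
```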
3. Error Analysis of the MSU SNO Matchups
3.1. SNO Data Set
 As mentioned earlier, the NOAA polar-orbiting satellites include TIROS-N and NOAA 6, 7, 8, 9, 10, 11, 12, and 14. Each of these satellites has a limited operating time and overlaps with other satellites over limited periods to form a continuous series of Earth observations. Figure 1 shows the LECT for these satellites, from which one can see the overlapping periods between satellite pairs. During the overlapping periods, two satellites may periodically cross each other, viewing the same location on Earth at nadir at nearly the same time. When that occurs, a SNO event is found and a SNO data pair is generated. A schematic view of a SNO event, in which two satellites approach each other, is shown in Figure 2. In practice, however, the frequency of occurrence of SNO events depends on the simultaneity criteria. Although SNO events within a few seconds of each other occur regularly for successive satellites [Cao et al., 2005], the number of nadir-viewing samples can be small, depending on the temporal range and spatial distance criteria.
 The method for finding SNO events involves two steps. The first is to determine the orbits that contain satellite intersections using Cao et al.'s method, in which the orbits and times when two satellites meet are predicted using orbital perturbation models and the satellites' two-line elements. The second step uses the time and location information in the 1b file for each pixel in these orbits to determine their separation in time and space. In this study, the spatial distance is set equal to an Earth ground distance of 111 km, which is the same as the MSU footprint, or nadir field of view. The impact of this distance setting on the error characteristics of the SNO data set is discussed in detail in the following subsections. Also note that the footprint size refers to the main-beam area, which contributes about 90% of the radiation measured at the MSU sensor [Mo et al., 2000]. The remaining 10% sidelobe radiation may come from an area a few hundred kilometers away from the main beam. However, as will be shown later, the relative positions of the SNO matchups are randomly distributed for most satellite pairs, so the sidelobe effect on the bias analyses is expected to be small.
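The second step, screening pixel pairs against the simultaneity criteria, can be sketched as below. This is an illustration under assumed inputs (lists of nadir pixel times and positions from the candidate orbits found in step 1), not the operational matchup code.

```python
import math

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine ground distance between two points (degrees), in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2.0 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def find_sno_matchups(pixels_j, pixels_k, max_dt_s=100.0, max_dist_km=111.0):
    """Collect nadir pixel pairs meeting the simultaneity criteria.

    pixels_j, pixels_k: lists of (time_s, lat_deg, lon_deg) nadir pixels
    from the two satellites' candidate orbits.  A pair qualifies when the
    time difference is within max_dt_s and the ground distance is within
    max_dist_km (the MSU nadir footprint).
    """
    matchups = []
    for tj, latj, lonj in pixels_j:
        for tk, latk, lonk in pixels_k:
            if abs(tk - tj) <= max_dt_s and \
               great_circle_km(latj, lonj, latk, lonk) <= max_dist_km:
                matchups.append(((tj, latj, lonj), (tk, latk, lonk)))
    return matchups
```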
 Once the allowed spatial distance is specified, an “optimal” time window for simultaneous observation can be found. Here “optimal” means that the time difference should be small enough that the two observations can be considered simultaneous, yet the window must be large enough to allow a sufficient number of SNO events. For the 111 km spatial distance, it is found that when the time window is smaller than about 100 s, the total SNO number increases as the time window increases (Figure 3). When the time window exceeds 100 s, however, the total SNO number remains almost the same, suggesting that 100 s represents the maximum time window needed in our analysis. Furthermore, it will be shown later that the brightness temperature errors caused by this time difference are small.
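The choice of the 100 s window can be illustrated with a toy count-versus-window sweep (the time differences below are synthetic, purely for illustration): once the trial window passes the largest time difference present among qualifying pairs, the matchup count saturates, mirroring the behavior seen in Figure 3.

```python
# Synthetic time differences (s) of candidate SNO pixel pairs that
# already satisfy the 111 km distance criterion.
candidate_dt = [3, 8, 15, 22, 40, 55, 61, 78, 90, 97]

def counts_by_window(dts, windows):
    """Number of qualifying pairs for each trial time window (s)."""
    return {w: sum(1 for dt in dts if dt <= w) for w in windows}

windows = [25, 50, 100, 150, 200]
counts = counts_by_window(candidate_dt, windows)
# The count grows up to the 100 s window and is flat afterward; the
# smallest window at which it saturates is the one worth using.
```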
 Using this space/time criterion, all pixel pairs from the two overpassing satellites are collected for each of the overlapping periods of the NOAA satellites. The NOAA MSU instruments view the Earth at 11 beam positions in each scan line. In this study, however, only the 6th beam position, which is at nadir, is used in the intersatellite calibration. Table 1 lists the overlap period for each satellite pair and the total number of nadir-pixel SNOs found during each period. Because of the orbital characteristics, most of these SNO data pairs are located in the 70°N to 90°N or 70°S to 90°S latitude bands [Cao et al., 2004].
Table 1. Overlap Periods and SNO Numbers of the Nadir Pixels for Different Overlap Satellitesa
a The time difference is 100 s and the spatial distance is 111 km for the simultaneity criteria.
3.2. Error Structure of the SNO Matchups
 Before calibrating the MSU observations using the SNO matchups, the error structure of the SNO data needs to be analyzed. For any SNO data pair, the radiances observed by the two overlapping satellites, denoted as satellites j and k, can be expressed as
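The display equations are missing from this extraction; from the definitions in the next line, equations (10) and (11) presumably read:

```latex
R_j(t_j, X_j) = R'_j(t_j, X_j) + \varepsilon_j \tag{10}
R_k(t_k, X_k) = R'_k(t_k, X_k) + \varepsilon_k \tag{11}
```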
where R(t, X) is the satellite-observed Earth radiance and R′(t, X) is the calibrated radiance at time t and spatial location X; ɛ is the instrumental random noise. By definition, R′(t, X) may contain errors (e.g., calibration errors) other than the instrumental random noise, since the latter is explicitly expressed by ɛ. Since the observations are not taken at the same time for the same location, tk and Xk differ from tj and Xj. Taking the difference between equations (11) and (10) and expanding Rk′(tk, Xk) in a Taylor series about the point (tj, Xj), one obtains
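The resulting display equation is missing; with the differences defined in the next line, equation (12) presumably reads:

```latex
\Delta R = \Delta R'(t_j, X_j) + \Delta R(\Delta t, \Delta X) + \varepsilon_k - \varepsilon_j \tag{12}
```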
where ΔR = Rk(tk, Xk) − Rj(tj, Xj), ΔR′ = Rk′(tj, Xj) − Rj′(tj, Xj), Δt = tk − tj, ΔX = Xk − Xj, and ΔR(Δt, ΔX) is the radiance difference caused by the temporal and spatial differences in the observations. This is expressed as
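The Taylor-expansion expression is missing from this extraction; consistent with the truncated form quoted later in section 3.4, equation (13) presumably reads:

```latex
\Delta R(\Delta t, \Delta X) = \frac{\partial R'_k}{\partial t}\,\Delta t
 + (\nabla R'_k)\cdot\Delta X + O(\Delta t^2, \Delta X^2, \Delta t\,\Delta X) \tag{13}
```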
where O(Δt2, ΔX2, ΔtΔX) represents the higher-order terms. For consistency, all differences will be defined as satellites k minus satellite j in the following. Note that the calibrated radiance R′(t, X) of both satellites are now written with reference to the same time and spatial location, so ΔR′(tj, Xj) represents the calibration bias between two satellites.
 Assuming that the instrumental noise has zero mean and all the errors are independent of each other, the covariance is zero, so that the bias and standard deviation (unbiased) of the SNO matchups can be expressed as
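The display equations are missing here; under the stated independence assumptions, the bias and standard deviation plausibly take the following forms (consistent with the √2 × noise value quoted in section 3.3):

```latex
\overline{\Delta R} = \Delta R'(t_j, X_j) + \overline{\Delta R(\Delta t, \Delta X)} \tag{14}
\sigma^2 = \sigma^2_{\Delta R(\Delta t, \Delta X)} + \sigma^2_{\varepsilon_j} + \sigma^2_{\varepsilon_k} \tag{15}
```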
where N is the total SNO number and an overbar represents an average over the available SNO observations (i.e., for any quantity x, x̄ = (1/N) Σi xi, where i indexes the SNO matchups).
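The sample statistics themselves are straightforward; a minimal sketch (illustrative only) of computing the matchup bias and unbiased standard deviation from a list of ΔTb values:

```python
import math

def sno_bias_and_std(delta_tb):
    """Bias and unbiased standard deviation of SNO matchup differences
    (satellite k minus satellite j) over the N available matchups."""
    n = len(delta_tb)
    bias = sum(delta_tb) / n                     # sample mean difference
    var = sum((d - bias) ** 2 for d in delta_tb) / (n - 1)  # unbiased
    return bias, math.sqrt(var)
```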
3.3. Instrumental Random Noise
 The MSU instruments have a random noise equivalent brightness temperature of 0.2–0.3 K (e.g., MSU Instrument Guide Document, available at http://ghrc.msfc.nasa.gov:5721/sensor_documents/msu_instrument.html). This noise has zero mean, since systematic errors are absorbed in the calibration equation (8). In an idealized situation where the space-time observational errors and calibration errors are zero, (14) and (15) would give a zero bias and a standard deviation of √2 × (0.2–0.3 K) ≈ 0.28–0.42 K. This is, theoretically, the smallest standard deviation one can expect for the SNO matchups.
3.4. Nonsimultaneously Observational Errors
 Since ΔR(Δt, ΔX) contains errors from both the time and the spatial differences, it is difficult to distinguish them by looking directly at the SNO data. However, the errors from the time differences can be estimated from weather observations. For most weather phenomena, which have life cycles longer than a few hours, two observations within 100 s can be considered simultaneous, since the structure of these weather systems does not change much over such a short period. To estimate this error, we assume a 10 K temperature drop over a 12-hour period (between day and night). This variation represents an extreme situation, since MSU channel 2 roughly represents the averaged midtropospheric temperature, whose variation is much less than at the surface (where a 20 K temperature drop may occur over a 12-hour period in the polar regions). Under this assumption, the average temperature difference over 100 s is ΔTb(Δt) = (10 K/12 h) × 100 s ≈ 0.02 K. This error is much smaller than the tolerance adopted in this study (0.1 K; see the next section for discussion) and thus can be ignored. However, as shown in the next section, temperature differences caused by the spatial differences are on the order of 0.5 K to 5 K. This suggests that the error in ΔR(Δt, ΔX) in (13) is dominated by the terms associated with ΔX, i.e., ΔR(Δt, ΔX) ≈ (∇Rk′) • ΔX + O(ΔX2). Therefore, in the following analyses ΔR(Δt, ΔX) will be replaced by the symbol ΔR(ΔX).
3.5. Errors in Spatial Displacement
 Assuming the center distance of the two SNO pixels is ∣ΔX∣ = d, the resulting brightness temperature difference is approximately ΔTb(ΔX) ≈ (∇Tb • n̂) d, where ∇Tb is the combined atmospheric and surface temperature gradient and n̂ is a unit vector in the ΔX direction. Figure 4 shows the scatter relationship between ΔTL and the center distance of the SNO nadir pixels for all available satellite overlaps. Note that the brightness temperatures in ΔTL are obtained using only the linear calibration equation (equation (9)), since the coefficients for the nonlinear calibration are not yet known at this point. Also note that in order to identify the relationship between ΔTL and d more clearly, the distance criterion for the SNO data sets has been extended to 222 km, although the time window is still set to 100 s. It is seen from Figure 4 that the maximum ΔTL is on the order of 4 K when d is near 200 km, so the corresponding maximum gradient is about 2 K/100 km. In addition, ΔTL is distributed nearly symmetrically around a mean line close to zero.
 The temperature gradient shown in Figure 4 is “transient”: it combines different types of weather and climate phenomena. When no large- or small-scale systems are present, ∇Tb may represent the spatial temperature gradient of the seasonal climate. Typically, the climatic temperature gradient is small. In particular, the zonally averaged, vertically integrated annual-mean meridional gradient in the northern hemisphere is about 20 K/10,000 km = 0.2 K/100 km [Peixóto and Oort, 1992]. The meridional gradient at southern high latitudes is usually larger, but the zonal gradient of the climatology in both hemispheres is usually considered small. In addition, MSU channel 2 contains about a 10% contribution from the surface, so variability in surface temperature and emissivity may also contribute to the temperature gradients. For the larger temperature gradients, such as the 2 K/100 km observed in Figure 4, we speculate that they come from baroclinic weather systems. For these systems, scale analysis (the thermal wind relationship) gives temperature gradients of about 0.5 K/100 km to 5 K/100 km, depending on the strength and vertical position of the storm system. However, future case studies are needed to identify the actual causes of these temperature gradients.
 To further understand the statistical characteristics of ΔTL, Figure 5 shows a scatterplot of the latitudinal versus longitudinal differences in spatial distance for the SNO data sets of all satellite overlaps. Unlike Figure 4, only the SNO data within a spatial distance of 111 km are used in Figure 5. It is seen that most SNO data pairs are uniformly distributed about the origin. However, the SNOs of NOAA 10 and NOAA 12 clearly exhibit a nonuniform distribution. Examination of the SNO events (plots not shown) indicates that this occurs because, a few times during their overlap period, the orbits of NOAA 10 and NOAA 12 stayed close for relatively long stretches, so that many SNO data pairs were found for those particular orbits (the small Right Ascension of Ascending Node case [Cao et al., 2005]). For the same reason, the SNO data pairs between NOAA 11 and NOAA 14 are also not spatially uniform. A random distribution in the SNOs' relative positions may help to eliminate sidelobe effects. This is because the sidelobes of a crossing SNO can be a few hundred kilometers away from the main beams, potentially causing larger temperature differences in the SNOs. A random distribution may help these temperature differences cancel out.
Figures 6a–6d show the relationship between the mean biases of ΔTL and the separation distance d, as well as between the STD and d, for all available SNO pairs of different satellite overlaps. Note that the STD and bias computations at any distance include all SNO data within that distance. Figure 6e shows the corresponding SNO numbers used for computing the biases and STD. Figures 6b and 6d show that when d is larger than about 40–50 km, the STD increases approximately linearly with distance. When d is near 50 km, the STD approaches 0.4–0.5 K for all the SNOs, which is close to the instrumental noise; this marks the distance at which instrumental noise begins to dominate over the distance effect. Note that the biases may also affect the STD values in the case of linear calibration. For nonlinear calibration, however, the biases can be reduced to near zero for d > 111 km and to around 0.1 K for d < 111 km (see Figure 10), so they do not contribute significantly to the STD. For this reason, only the random noise and spatial distance are considered important for the STD here. For d < 50 km, the satellite pairs in Figures 6b and 6d show either smaller or larger STD than at d = 50 km. However, as discussed below, the STD values may be inaccurate for most pairs because of the smaller SNO numbers.
 The STD values greatly affect how we select a distance in the statistical analyses. In practice, we use two criteria to choose the distance: the first is to require the sample size to be larger than a minimum value for a given tolerance and confidence level, and the second is to require the biases to stabilize once the distance is large enough. In particular, the minimum sample number, Nm, required for the bias statistics to be significant in a normal-curve test can be expressed as Nm = (zη/2σ/Ω)², where σ = STD is the standard deviation, zη/2 is the confidence coefficient corresponding to the significance level η, and Ω is a tolerance (allowed maximum error) for the bias statistics. For 95% confidence (η = 0.05), zη/2 = 1.96. We assume a tolerance of 0.1 K throughout this study, which is significantly smaller than the requirement for weather analyses (0.5–1 K). A tolerance smaller than 0.1 K may be desirable for climate trend analyses; however, with the available SNO sampling, it would result in lower confidence in the analyses.
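The sample-size criterion can be checked numerically; the sketch below implements Nm = (zη/2σ/Ω)² with rounding up to a whole sample (the function name and the small table of z values are ours, not from the paper):

```python
import math

def min_samples(sigma, tolerance=0.1, confidence=0.95):
    """Minimum SNO sample size N_m = (z * sigma / Omega)**2 for the bias
    estimate to stay within `tolerance` (K) at the given confidence level."""
    z = {0.95: 1.96, 0.99: 2.576}[confidence]   # two-sided normal quantiles
    return math.ceil((z * sigma / tolerance) ** 2)

# sigma ~ 0.48 K near d = 50 km gives N_m ~ 89, as quoted for Figure 6e;
# sigma ~ 0.76 K at d = 111 km gives N_m ~ 222 (the text quotes ~221)
print(min_samples(0.48), min_samples(0.76))
```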
 The variation of Nm with d is shown in Figure 6e by the red curve, where an STD averaged over all satellite pairs shown in Figures 6a and 6b is used in computing Nm. At d = 50 km, σ ≈ 0.48 K and Nm approximately equals 89 for a 95% confidence level. At this distance the SNO numbers for all satellite pairs shown in Figure 6b, except N11–N14, satisfy the minimum sampling requirement. On the other hand, none of the SNO pairs shown in Figure 6d satisfies this requirement. We note from Figure 6e that the SNO numbers increase exponentially with d, while Nm is proportional to the square of σ and thus increases quadratically with d; therefore, when d is large enough, the SNO numbers eventually catch up with Nm. For instance, at d = 111 km, the averaged σ equals 0.76 K, so Nm is around 221. At this distance, the SNO numbers for N6–N7, N7–N8, N6–N9, and N11–N14 catch up with Nm and thus approximately satisfy the minimum sampling requirement (Table 1). However, the SNO numbers for the satellite pairs TN–N6, N8–N9, and N9–N10 are still much smaller than Nm at d = 111 km. In fact, these satellite pairs do not satisfy the minimum sampling requirement anywhere in the distance range (10–222 km) considered in this study (Figure 6e), which would result in much lower confidence in the analyses if they were used. In principle, the SNO numbers for those satellite pairs may eventually catch up with Nm when d becomes much larger than 222 km. However, this study does not use these satellite pairs for the considered distance range. As a result, we only calibrate the satellites after NOAA 10, since the connection to earlier satellites is broken when the N8–N9 and N9–N10 overlaps are not used.
 Although the SNO numbers for most satellite pairs after NOAA 10 satisfy the minimum sampling requirement when d > 50 km, examination of Figure 6c shows that the biases become stable only after 110 km for most satellite pairs, the exception being N10–N12. The unstable N10–N12 biases are probably related to the asymmetric spatial distribution of their SNOs observed in Figure 5. This asymmetric spatial distribution may result in an asymmetric sidelobe effect, causing the biases to depend on the relative positions of the satellite pair and thus making them difficult to stabilize. For this reason, as well as others discussed in the next section, we do not use the N10–N12 SNO matchups.
 From the above analyses, we select d = 111 km as the distance criterion in our analyses. At this distance, both requirements, minimum sampling and stable SNO biases, are satisfied for most of the SNO pairs after NOAA 10. For distances larger than this value, the statistical biases should be caused mainly by the calibration errors of the instruments, not by the spatial distance. This holds because, as observed in Figure 4, the quantity ΔR(ΔX) obeys a symmetric distribution with respect to zero, so its expected mean is zero within a specified spatial distance. This observation is referred to as the spatial error symmetry assumption in this study, and it serves as the foundation for successful MSU intercalibration using SNOs.
4. MSU Calibration Using the SNO Matchups
4.1. Ordinary Least Squares Regression With High Degree of Colinearity Between Satellite Pairs
 We consider cross calibration between two satellites. Introducing the MSU calibration equation (8) into the right-hand side of the SNO radiance error equation (12), one obtains
where ΔRL = RL,k − RL,j, ΔδR = δRk − δRj, U = μ − δμ for each satellite, and E = ɛk − ɛj + ΔR(ΔX) is the residual error. In (17), ΔRL, Zk, and Zj are functions of the measurements, computed for the SNO data pairs, while ΔδR, Uk, and Uj are unknown coefficients. In this study, these coefficients are obtained from the SNO matchups using ordinary least squares regression (OLR), in which the sum of the squared residual errors, Σi Ei², is minimized. On the basis of the spatial error symmetry assumption on ΔR(ΔX), as well as the assumptions on the instrumental random noise, the residual error E is treated as uncorrelated noise with an expected mean of zero. It is therefore omitted when solving for the coefficients [e.g., Neter and Wasserman, 1974].
 Although it is straightforward to use the OLR procedure to solve for the unknown coefficients, the reliability of these coefficients largely depends on whether the predictors Zk and Zj have a high degree of colinearity [Neter and Wasserman, 1974]. When there is a high degree of colinearity between Zk and Zj, the covariance matrix for the OLR solutions will be ill conditioned, though nonsingular. In this situation, many different linear combinations of Zk and Zj may work almost equally well in fitting the observed predictand ΔRL, and different sample data sets can give quite different sets of regression coefficients [Neter and Wasserman, 1974]. To understand this issue, we first examine whether there is a high degree of colinearity between Zk and Zj. Figure 7 shows a scatterplot of the Z variables between NOAA 10 and NOAA 11 for their SNO matchups, from which one can see that the correlation coefficient between ZN10 and ZN11 reaches 0.95, suggesting a high degree of colinearity. This high degree of colinearity occurs because the two MSU instruments have similar nonlinearity and view nearly the same position at nearly the same time. Similarly high degrees of colinearity are found in the SNO matchups of the other satellite pairs. Given this colinearity, a linear relation can be constructed to express the relationship between Zk and Zj,
where α and β are OLR regression coefficients obtained from the SNO matchups and ζ is a residual error. By definition, this residual error has zero mean for the SNO matchups and is uncorrelated with Zk (Figure 7b). Table 2 provides the values of α and β for N10–N11, N11–N12, and N12–N14 obtained from regressions of their SNO data pairs. The values of β are all close to 1, further showing the high degree of colinearity of the Z predictors between different satellites.
Table 2. Linear Regression Coefficients for the Z–Z Relationship Between Different Satellites Obtained From Regression of the SNO Matchups
 The linear relation of equation (18) does not change the regression solutions, so the coefficients obtained from equations (17) and (19) are the same. However, equation (19) provides more insight into the characteristics of the regression coefficients under a high degree of colinearity between Zk and Zj. To see this, we explicitly write down the OLR solutions for equation (19). For symbolic simplicity, equation (19) is rewritten as
where a0 = ΔδR + αUj, a1 = −Uk + βUj, and a2 = Uj. OLR obtains the coefficients by minimizing the sum of the squared errors
where i is an index representing each of the SNO data pairs and N is the total SNO number.
 From the discussion of Figure 7b we see that ζ has zero mean, i.e., Σi ζi = 0. Also, ζ is not correlated with Zk, so their cross terms vanish. With these considerations, equations (22)–(24) become
 The solutions of these equations have the following features. First, a2 (= Uj) is determined only by equation (27), Uj = Σi ΔRL,i ζi / Σi ζi². It can be shown that this is the same as directly solving equation (17) by multiple regression. Because of the high degree of colinearity between the Z terms in equation (17), this solution is not reliable [Neter and Wasserman, 1974].
 Second, the solutions for a0 (= ΔδR + αUj) and a1 (= βUj − Uk) do not depend on the solution for a2; they are completely determined by the regression between ΔRL and Zk. This regression does not suffer from the colinearity problem, so a0 and a1 can be reliably determined. This is equivalent to eliminating the residual term ζ in equation (18), so that introducing the linear relationship between Zk and Zj serves to reduce the number of independent variables in equation (17). The result is that only one of the Z terms is needed to fit the predictand ΔRL.
 Finally, the two parameters a0 and a1 are linear combinations of the two offsets and the two nonlinear coefficients of satellites k and j. Since α is small and β is close to 1, they roughly represent the differences in the offsets and nonlinear coefficients between the two satellites. However, these two parameters alone are insufficient to determine the four coefficients of the two satellites. Therefore one of the two satellites must be chosen as a reference satellite, and its δR and U values must be determined from independent information.
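The behavior described above can be illustrated with synthetic SNO data. In the sketch below, all coefficient values and units are hypothetical, chosen only so that the Z–Z relation mimics equations (18)–(19): a0 and a1 are recovered precisely, while a2 (= Uj) is far less well constrained.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 500

# Synthetic SNO predictors with a near-perfect Z-Z relation (hypothetical units)
Zk = rng.uniform(1.0e-5, 5.0e-5, N)
alpha_t, beta_t = 1.0e-6, 1.02           # assumed Z-Z relation coefficients
zeta = 1.0e-8 * rng.standard_normal(N)   # tiny residual -> high colinearity
Zj = alpha_t + beta_t * Zk + zeta

# Assumed "true" calibration coefficients (illustration only)
dDeltaR, Uk_t, Uj_t = 2.0e-5, 5.0, 7.0
dRL = dDeltaR + Uj_t * Zj - Uk_t * Zk + 1.0e-7 * rng.standard_normal(N)

# Regress on Zk and the residual of the Z-Z fit, mimicking equations (18)-(19)
beta_hat, alpha_hat = np.polyfit(Zk, Zj, 1)
zeta_hat = Zj - (alpha_hat + beta_hat * Zk)
A = np.column_stack([np.ones(N), Zk, zeta_hat])
a0, a1, a2 = np.linalg.lstsq(A, dRL, rcond=None)[0]

print(a0 - (dDeltaR + alpha_t * Uj_t))   # a0 recovered precisely
print(a1 - (beta_t * Uj_t - Uk_t))       # a1 recovered precisely
print(a2 - Uj_t)                         # a2 (= Uj) far less precise
```

The point of the demo is that the errors in a0 and a1 are orders of magnitude smaller than the error in a2, consistent with the argument that only the a2 solution suffers from the colinearity.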
4.2. Offset and Nonlinear Coefficient for the Reference Satellite
 Since a constant offset does not affect the trend analyses, we simply assume δR = 0 for the reference satellite, as in previous studies [e.g., Grody et al., 2004]. However, the nonlinear coefficient U has a large impact on the trend analyses; therefore its value must be examined carefully.
 To obtain an appropriate estimate of U for the reference satellite, we examine the U values obtained by Mo et al. from the prelaunch calibration, in which U values at three given body temperatures of the MSU instrument are provided for 8 NOAA satellites on the basis of prelaunch chamber test data sets. The instrumental body temperature of the MSU is called the Dicke temperature; it is provided in the MSU 1b file and is different from the blackbody target temperature. On the basis of the U values given by Mo et al. at the three Dicke temperatures, the U value at any Dicke temperature in the SNO data set is computed by interpolation from the Dicke temperatures provided in the MSU 1b file. Table 3 provides the range of U computed from these reference values for NOAA 10, 11, 12, and 14 using their SNO matchup data sets. Table 3 suggests that U for NOAA 10 varies from 4.9 to 5.05 (sr m2 cm−1) (mW)−1, a 3% variation, whereas U for NOAA 11 ranges from 6.7 to 7.7 (sr m2 cm−1) (mW)−1, a 14% variation. Since both satellites have a Dicke temperature variation range of about 2.5 K (not shown), this comparison indicates that the U value of NOAA 10 is less dependent on the MSU body temperature variations. Similarly, the U values for NOAA 12 and NOAA 14 each have about a 6% variation, both larger than that of NOAA 10.
Table 3. Nonlinear Coefficients Obtained From Interpolating the Reference Values of the Prelaunch Calibration Given by Mo et al. a
Unit is (sr m2 cm−1) (mW)−1.
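The interpolation step described above can be sketched as follows; the three Dicke temperatures and U values used here are placeholders, not the actual Mo et al. chamber-test numbers:

```python
import numpy as np

# Hypothetical prelaunch table: U at three instrument (Dicke) temperatures,
# standing in for the Mo et al. chamber-test values (not the real numbers).
dicke_ref = np.array([283.0, 293.0, 303.0])   # K
u_ref = np.array([4.90, 5.00, 5.10])          # (sr m^2 cm^-1) (mW)^-1

def u_at(dicke_temp):
    """Linearly interpolate U to the Dicke temperature recorded in the
    MSU 1b file for each SNO sample."""
    return np.interp(dicke_temp, dicke_ref, u_ref)

print(u_at(288.0))
```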
 The variation of the Dicke temperature is not the only source of variations in U. It is seen from equation (7) that the error in U is composed of the term Δμ, the error caused by the change of the instrumental body temperature, plus terms associated with the errors in the warm load target and cold space, of which b(ΔRw − ΔRc) is the leading one; the remaining term, proportional to (ΔRw − ΔRc)/(Rw − Rc), is on the order of 0.1% (ΔTw and ΔTc are on the order of 0.2 K) and is much smaller than Δμ. The magnitude of b(ΔRw − ΔRc) can be estimated. On the basis of the SNO matchups between NOAA 10 and NOAA 11, we obtained regression coefficients a = 1.041 and b = 2.819 × 104 (sr m2 cm−1)2 (mW)−2 for the K–Z relationship equation (6) when only the linear terms are retained. Figure 8 provides the scatterplot between K and Z for NOAA 10 and shows what the linear fit looks like. The same procedure is applied to all SNO data pairs, and the obtained values of a and b are similar for all satellites. Using these values in equation (7), we find δR = ΔRc + a(ΔRw − ΔRc) = ΔRc + 1.041 × (ΔRw − ΔRc) ≈ ΔRw, suggesting that the offset is mainly due to the calibration error of the blackbody target. In addition, assuming the combined error of the brightness temperatures from the warm load target and cold space is on the order of 0.5 K, then (ΔRw − ΔRc) is on the order of 1.0 × 10−5 mW (sr m2 cm−1)−1. This leads to b(ΔRw − ΔRc) ≈ 0.3 (sr m2 cm−1) (mW)−1, which is about 5 to 10% of the U values shown in Table 3.
 We also examined a quadratic fit of the K–Z relationship in Figure 8. The fitting coefficients in this case are a = 0.956, b = 0.519 × 104 (sr m2 cm−1)2 (mW)−2, and c = −1.507 × 109 (sr m2 cm−1)4 (mW)−4. These coefficients provide a much better fit than the linear relationship. In this case, the error term b(ΔRw − ΔRc) is replaced by (b + cZ)(ΔRw − ΔRc). For the given range of Z, we obtain cZ(ΔRw − ΔRc) ≈ 0.06–0.18 (sr m2 cm−1) (mW)−1, only about a 5% variation relative to the prelaunch U values. Thus ignoring the higher-order terms in the K–Z relationship will not significantly affect the calibration.
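The magnitudes quoted in the last two paragraphs can be verified with back-of-envelope arithmetic; the Z range used for the quadratic term below is inferred from the quoted 0.06–0.18 result, not taken from the paper:

```python
# Back-of-envelope check of the error-term magnitudes quoted in the text.
b = 2.819e4              # linear K-Z fit coefficient, (sr m^2 cm^-1)^2 (mW)^-2
dRw_minus_dRc = 1.0e-5   # mW (sr m^2 cm^-1)^-1, for a ~0.5 K combined error

linear_term = b * dRw_minus_dRc
print(linear_term)       # ~0.28, i.e. roughly 5-10% of the Table 3 U values

c = 1.507e9              # |c| from the quadratic K-Z fit
# Z range assumed ~4e-6 to ~1.2e-5, inferred from the quoted 0.06-0.18 result
for Z in (4.0e-6, 1.2e-5):
    print(abs(c) * Z * dRw_minus_dRc)
```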
 At this point, it is not clear which satellite has smaller calibration errors in the blackbody target and cold space, so we simply use the variation of U with Dicke temperature as the criterion for choosing the reference satellite. On the basis of the comparisons in Table 3, NOAA 10 has the smallest range of U variation and is therefore chosen as the reference satellite. Because of this small range, we simply set UN10 = 5 (sr m2 cm−1) (mW)−1 in the following.
 The above analyses suggest that the blackbody target calibration errors may cause U to change by about 10%, while the Dicke temperature causes U to change by 3% to 14%, depending on the satellite. However, as will be shown later, the nonlinear coefficients obtained from the SNO intercalibration can differ by 100% from the prelaunch values for some satellites. Neither the blackbody target calibration errors, ΔRw, nor the Dicke temperature variations can explain such a large difference. It is likely that sensor decay causes the large differences in the nonlinear coefficients between this study and the prelaunch calibration of Mo et al. In this sense, choosing NOAA 10 as the reference satellite is somewhat arbitrary, since there is no guarantee that its nonlinear coefficient did not change significantly after launch. However, a sensitivity study will be conducted to address how the nonlinear coefficient of the reference satellite can be obtained by evaluating whether it effectively removes the contamination of the warm target temperature from the calibrated brightness temperature time series.
4.3. Calibration Results
 Given δR and U for NOAA 10 and ignoring the E term, the regression equation (17) for NOAA 10 and 11 can be written as
Equation (28) contains only two unknown coefficients, δRN11 and UN11, which can be reliably obtained from the regression procedure. One may also solve equations (25) and (26) for a0 and a1 first and then, from the definitions of a0 and a1, obtain δRN11 = a0 + δRN10 − αUN10 and UN11 = βUN10 − a1. The results of this procedure are the same as directly solving equation (28) using OLR. Once δRN11 and UN11 are obtained from the SNO regression, δRN12 and UN12 can be obtained in a similar fashion using the SNO data pairs between NOAA 11 and 12. Finally, the coefficients for NOAA 14 are obtained using the SNOs between NOAA 12 and NOAA 14. This is referred to as a sequential adjusting process. Note that the SNO matchups between NOAA 10 and NOAA 12 and between NOAA 11 and NOAA 14 are not used in this study. To use these overlaps, one would need to solve multiple least squares equations that treat multiple overlaps simultaneously. Mears et al. used such schemes to solve the multiple-overlap problem in their bias removal process. However, to keep the relationship of coefficients between different satellites easy to follow, this study uses the sequential adjusting process, which handles a single overlap at a time. Also, as mentioned earlier, one needs to be cautious about using the N10–N12 SNOs, since their biases do not stabilize with the spatial distance criterion, possibly because of the asymmetry in their spatial distribution.
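The sequential adjusting process can be sketched as a chain of single-overlap updates. The update formulas follow from the definitions a0 = ΔδR + αUj and a1 = βUj − Uk; the numerical (a0, a1, α, β) values below are placeholders for illustration, not the Table 2 and Table 5 numbers:

```python
def next_satellite_coeffs(dR_j, U_j, a0, a1, alpha, beta):
    """One step of the sequential adjusting process: given (dR, U) of the
    already-calibrated satellite j and the SNO regression quantities
    (a0, a1, alpha, beta) of the (j, k) pair, return (dR, U) for satellite k.
    Uses a0 = (dR_k - dR_j) + alpha*U_j and a1 = beta*U_j - U_k."""
    U_k = beta * U_j - a1
    dR_k = a0 + dR_j - alpha * U_j
    return dR_k, U_k

# Chain from the NOAA 10 reference (delta R = 0, U = 5); the (a0, a1, alpha,
# beta) values below are placeholders, not the Table 2 / Table 5 numbers.
coeffs = {"N10": (0.0, 5.0)}
pairs = [("N10", "N11", (1.0e-5, -1.5, 1.0e-6, 1.01)),
         ("N11", "N12", (-5.0e-6, -2.0, 2.0e-6, 0.99)),
         ("N12", "N14", (2.0e-5, -1.0, 1.5e-6, 1.02))]
for j, k, (a0, a1, alpha, beta) in pairs:
    coeffs[k] = next_satellite_coeffs(*coeffs[j], a0, a1, alpha, beta)
print(coeffs)
```

Because each step only needs the previous satellite's coefficients plus the pairwise regression quantities, the whole chain is fixed once the reference satellite's (δR, U) are specified.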
Table 4 provides the values of δR and U for these satellites obtained from this sequential adjusting process; these are referred to as the case 1 coefficients. We first show how the nonlinear calibration with these coefficients improves the SNO matchups relative to the linear calibration, using NOAA 10 and 11 as an example. Figure 9a shows a scatterplot of the SNO matchups between TL(N10) and ΔTL = TL(N11) − TL(N10), where TL represents the brightness temperature from the linear calibration. The bias and RMS error for all 560 available SNO matchups are −0.30 K (NOAA 11 minus NOAA 10) and 0.84 K, respectively. In addition to this mean bias, there exists a temperature-dependent nonuniformity in ΔTL. In particular, the absolute statistical errors in ΔTL for a specific brightness temperature TL(N10) tend to be larger when TL(N10) is larger. This nonuniformity in ∣ΔTL∣ is best illustrated by a linear fit to the scatterplot, which has a slope of −0.0206.
Table 4. Calibration Coefficients for Different Satellites Obtained by the Sequential Adjusting Process Using the SNO Matchups When the Nonlinear Coefficient for NOAA 10 Is Set to Its Prelaunch Valuea
Case 1: U for NOAA 10 Is Set to Its Prelaunch Value
Units for δR and U are 10−5 (mW) (sr m2 cm−1)−1 and (sr m2 cm−1) (mW)−1, respectively.
 Applying the calibration coefficients in Table 4 to equation (8) for NOAA 10 and 11, a nonlinearly corrected SNO data set for NOAA 10 and 11 is obtained. Figure 9b shows the scatterplot between Tb(N10) and ΔTb = Tb(N11) − Tb(N10) for the SNO matchups, where Tb represents the brightness temperature from the nonlinear calibration. Since the regression is done in radiance space, the mean bias of R′(N11) − R′(N10) is, by construction, zero. Because of the highly linear relationship between R′ and Tb, the mean bias of Tb(N11) − Tb(N10) is also zero, as shown in Figure 9b. The RMS is now 0.83 K, slightly improved from the linear calibration of the SNO data pairs.
 The most striking feature of the nonlinear calibration is that the temperature-dependent nonuniformity in ∣ΔTL∣ has been very well corrected by the nonlinear term. The larger absolute errors in ΔTL at higher brightness temperatures in Figure 9a nearly disappear in Figure 9b, as illustrated by the linear fit, whose slope is −0.00614, more than a factor of 3 smaller in magnitude than in Figure 9a. Figure 9c further shows a scatterplot between ΔTb − ΔTL and TL(N10), from which one can clearly see how the nonlinear correction grows with brightness temperature.
 It is important to note that the set of coefficients in Table 4 (case 1) satisfies the strong constraints a0 = ΔδR + αUj and a1 = βUj − Uk, where a0 and a1 are provided in Table 5. As mentioned earlier, the solutions for a0 and a1 in equations (25) and (26) do not depend on the given value of Uj. In other words, whatever the value of Uj, a0 and a1 always remain the same and are determined only by the regression between ΔRL and Zk. With these nonvariant quantities, together with the values of α and β in Table 2, all calibration coefficients for the subsequently adjusted satellites are completely determined once the calibration coefficients for the reference satellite are given. Different sets of coefficients that satisfy the a0 and a1 constraints work equally well in removing the mean biases and the temperature-dependent nonuniformity observed in Figure 9a. Therefore, in the following trend and sensitivity studies, all that matters is the specification of the offset and nonlinear coefficient for the reference satellite. In other words, after using the SNOs, the calibration problem has been reduced to the reference satellite problem.
Table 5. Nonvariant Quantities a0 = ΔδR + αUj and a1 = βUj − Uk Obtained From Regression of SNO Matchupsa
Units for a0 and a1 are 10−5 (mW) (sr m2 cm−1)−1 and (sr m2 cm−1) (mW)−1, respectively.
 Another interesting point is that the U values in Table 4 for NOAA 12 and 14 are nearly double the prelaunch values given in Table 3, showing that there are large differences in U between the prelaunch and postlaunch calibrations. This large difference, which cannot be explained by the factors discussed in the last subsection, is very likely caused by MSU sensor decay. However, the mechanism by which sensor decay changes the nonlinear coefficient is not yet clear. Since the postlaunch U values remove the biases and nonuniformity found in the linear calibration of the SNO data sets, they should be regarded as more reliable. This holds, however, only under the assumption that the U value for the reference satellite is accurate. The U values of NOAA 11, 12, and 14 will certainly be different if a different U is assumed for NOAA 10. The impact of these uncertainties on the calibrated time series will be discussed in a sensitivity study.
 Finally, as a counterpart to Figure 6c, Figure 10 shows the variation of the mean biases of the SNO matchups for the nonlinearly calibrated satellites versus the pixel center distance between satellite pairs. Compared to Figure 6c, the biases are now close to zero for the nonlinearly calibrated satellite pairs when d is larger than 111 km. This suggests that the calibration coefficients obtained with d = 111 km are robust; that is, similar coefficient values should be obtained if a distance larger than 111 km were selected. It also suggests that the requirements used for selecting the spatial distance in our analyses (minimum sampling numbers and stable biases) are adequate.
5. Trend of the MSU Channel 2 Brightness Temperature
5.1. Trend With Linear Calibration
 Since the nonlinear calibration can be considered a small correction to the linear calibration, the trend associated with the linear calibration may serve as a baseline for understanding the impact of including the nonlinear calibration term. Therefore, for the purpose of the sensitivity study, we first compute the trend associated with the linear calibration. To obtain the trend, a 5-day averaged (pentad) data set is generated from the MSU 1b data with level 0 calibration by the linear algorithm, equation (9). The pentad data are generated in grid boxes with a spatial resolution of 2.5° in both the longitudinal and latitudinal directions. There are different ways of computing the 5-day averages in each grid box with regard to the ascending and descending orbits. For instance, one may compute the 5-day averages of the ascending and descending orbits separately and then obtain grid box means from the averages of the pentad ascending and descending data sets. In this study, however, averages containing both ascending and descending pixels are computed first for each grid box, and global averages are then computed from the grid box values. We use pixels 5, 6, and 7 in computing the averages. A typical distribution of the pentad data is shown in Figure 11.
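The gridding step can be sketched as simple equal-angle binning; this is an illustration only, omitting details such as quality control and the restriction to pixels 5–7:

```python
import numpy as np

def pentad_grid(lat, lon, tb, res=2.5):
    """Average brightness temperatures into res-degree grid boxes for one
    5-day window, mixing ascending and descending pixels as in the text."""
    nlat, nlon = int(180 / res), int(360 / res)
    total = np.zeros((nlat, nlon))
    count = np.zeros((nlat, nlon))
    iy = np.clip(((lat + 90.0) / res).astype(int), 0, nlat - 1)
    ix = np.clip(((lon % 360.0) / res).astype(int), 0, nlon - 1)
    np.add.at(total, (iy, ix), tb)   # accumulate pixel sums per box
    np.add.at(count, (iy, ix), 1)
    with np.errstate(invalid="ignore"):
        return np.where(count > 0, total / count, np.nan)

# two pixels falling in the same 2.5-degree box average together
grid = pentad_grid(np.array([0.3, 0.9]),
                   np.array([10.1, 10.4]),
                   np.array([250.0, 252.0]))
print(grid[36, 4])
```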
 Note that, in this study, pixels at different local times were not converted to a common time using a diurnal cycle climatology during the averaging process. Since at this point we focus only on a sensitivity study to understand the role of the nonlinear coefficients in the trend, ignoring the diurnal cycle correction will not affect our analyses. Figure 12 shows the time series and trends of the global ocean-averaged pentad MSU channel 2 brightness temperatures for the individual satellites NOAA 10, 11, 12, and 14 with the linear calibration algorithm. The time series for these satellites show large biases, up to 0.6–0.7 K, during their overlap periods. The mean intersatellite biases are listed in Table 6. Biases with the NESDIS operational calibration algorithm are of the same order as those of the linear calibration (not shown). Note that these biases are obtained by first computing the global ocean average and then computing time averages from the ocean mean time series (spatial-first and time-second). Our experiments suggest that this is close to the biases obtained by computing the spatial and time averages simultaneously. However, if one first computes the time averages for each grid box and then computes the global mean from these time-averaged grid boxes (time-first and spatial-second), the biases differ from those listed in Table 6. In this case, the afternoon satellites (NOAA 11 and 14) are warmer than the morning satellites (NOAA 10 and 12) by an additional 0.2 K or so. For instance, the bias between NOAA 11 and 10 in Table 6 is −0.605 K, while the bias computed by the time-first and spatial-second method is −0.416 K. These differences arise because of the missing data (gaps) in the pentad data set (Figure 11). To understand this, one may examine a simple matrix containing a missing datum (flagged as −99): averaging the rows first and the columns second can give a mean value of 1.75, while averaging the columns first and the rows second yields a mean value of 2.
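The order-of-averaging effect can be reproduced numerically; the 2 × 2 matrix below is an illustrative example constructed to give the quoted means of 1.75 and 2, not necessarily the paper's matrix:

```python
import numpy as np

# 2x2 example with one missing value (NaN standing in for the -99 flag),
# constructed so the two averaging orders give different global means.
m = np.array([[3.0, 2.0],
              [1.0, np.nan]])

rows_first = np.nanmean(np.nanmean(m, axis=1))   # average each row, then combine
cols_first = np.nanmean(np.nanmean(m, axis=0))   # average each column, then combine
print(rows_first, cols_first)
```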
Table 6. Mean Biases of the Pentad Global Ocean Averages Between Two Satellites During Their Overlap Periods for Both the Linear Calibration and the Case 1 Nonlinear Calibration
Columns: Pentad Data Number; Bias (K) for Linear Calibration (k − j); Bias (K) for Case 1 Nonlinear Calibration (k − j).
Biases are obtained with the spatial-first and time-second method in computing the global mean.
 It is seen from Figure 12 that the trends of the individual satellites differ significantly, from −0.35 K decade−1 for NOAA 10 to 1.05 K decade−1 for NOAA 11, a range of 1.4 K decade−1. To obtain the combined trend, the biases between different satellites have to be removed. Different techniques have been developed in the past for removing such biases [e.g., Christy et al., 2000; Prabhakara et al., 2000; Mears et al., 2003; Grody et al., 2004]. In this study, however, only the simplest constant-correction method is applied to remove the satellite biases; we justify this choice later on. In this approach, the mean bias values in Table 6 are used directly for the adjustment. Using NOAA 10 as the reference satellite and the symbols next to the bias values listed in Table 6 to represent the biases between different satellites (e.g., b1 for the bias between NOAA 11 and NOAA 10), the adjusted time series are
where T represents the time series before the adjustment and T′ denotes the adjusted time series. After the adjustment, a single time series is obtained by simply averaging all available observations from the different satellites. Note that this bias removal does not guarantee zero biases over every overlap period between two satellites, both because of the nonlinear nature of the time series and because not all overlap data were used. For instance, the mean bias between NOAA 12 and NOAA 10 during the period June 1991 to August 1991 is not zero because this value is not used.
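The constant-correction adjustment can be sketched as chaining the overlap biases back to the reference satellite; the chaining logic below and the N14–N12 bias value are our assumptions for illustration, while the N11–N10 and N11–N12 values are the Table 6 linear-calibration biases:

```python
def remove_constant_biases(series, pair_biases, reference="N10"):
    """Constant-correction bias removal: chain the mean overlap biases back
    to the reference satellite and subtract the resulting offset from each
    time series. A sketch of one plausible chaining, not the paper's exact
    adjustment equations."""
    offset = {reference: 0.0}            # systematic offset relative to N10
    for (k, j), b in pair_biases:        # b = mean of (k - j) over the overlap
        if j in offset and k not in offset:
            offset[k] = offset[j] + b
        elif k in offset and j not in offset:
            offset[j] = offset[k] - b
    return {sat: [t - offset[sat] for t in ts] for sat, ts in series.items()}

# N11-N10 and N11-N12 are the Table 6 linear-calibration values;
# the N14-N12 value is a hypothetical placeholder.
biases = [(("N11", "N10"), -0.605),
          (("N11", "N12"), 0.646),
          (("N14", "N12"), 0.30)]
adjusted = remove_constant_biases(
    {"N10": [250.0], "N11": [249.4], "N12": [250.0], "N14": [250.3]}, biases)
print(adjusted["N11"][0])
```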
 With this bias removal, the trend of the combined anomaly time series is 0.362 K decade−1, where the anomalies are computed by removing the 17-year averaged seasonal cycle of the combined time series. This trend may serve as an upper limit since, as shown below, the nonlinear calibration always reduces it.
5.2. Trend With Nonlinear Calibration
 Although the SNO matchups are confined to the polar regions, their brightness temperature range (200–250 K, Figure 9) covers most of the global range of the MSU channel 2 brightness temperature except for a small range in the Tropics (250–260 K, Figure 11). Therefore the calibration coefficients in Table 4 are assumed to represent the MSU nonlinearity over the entire observational range, and they are applied to calibrate all the MSU channel 2 data at level 0 for NOAA 10, 11, 12, and 14. After the calibration (applying the coefficients to equation (8)), a new 1b data set and the associated pentad data set are generated as in the linear calibration. The global ocean biases of the pentad data between different satellite pairs for this nonlinear calibration are also shown in Table 6. The nonlinear calibration reduces the linear-calibration biases of −0.605 K (NOAA 11 minus NOAA 10) and 0.646 K (NOAA 11 minus NOAA 12) to 0.062 K and 0.059 K, respectively, an order-of-magnitude improvement. The new bias between NOAA 14 and 12 is also significantly reduced compared to the bias before the correction, though it is twice as large as the bias between NOAA 10 and 11. The bias between NOAA 12 and 10 becomes only marginally smaller, and the bias between NOAA 11 and 14 becomes even larger. This occurs most likely because the coefficients obtained from the SNOs may only represent averages over the overlapping period of the SNO matchups. If sensor decay causes changes in the nonlinear coefficients, then these coefficients may have different values at different periods during the life cycle of a satellite. The nonlinear coefficients obtained from our selected overlaps do not necessarily represent the values during the shorter overlapping periods between NOAA 10 and NOAA 12 and between NOAA 11 and NOAA 14, and thus do not necessarily reduce the biases of the pentad data sets between them.
Figure 13a shows the time series for the individual satellites. Because of the small biases, these time series stay much closer to each other than in Figure 12 for the linear calibration. In addition, the nonlinear calibration significantly affects the trends of individual satellites. All individual trends with the nonlinear calibration, except for NOAA 12, decrease relative to the linear-calibration trends. For instance, the trend of NOAA 10 is −0.35 K decade−1 for the linear calibration, while the nonlinearly calibrated trend is −0.39 K decade−1. The same situation occurs for NOAA 11 and NOAA 14, but with much larger decreases. For NOAA 12, however, the nonlinear calibration increases the trend to 0.43 K decade−1 from 0.16 K decade−1 in the linear calibration. Note that if we compare to the NESDIS operational calibration algorithm, in which the nonlinear coefficients from the 1b files and the coefficients for NOAA 12 from Mo were used, the impact would not be so large. The NESDIS operational algorithm uses a calibration algorithm different from equation (2) [Mo, 1995; Mo et al., 2001], but its time series should be similar to that obtained using equation (2) with the prelaunch calibration coefficients shown in Table 3, since they are all based on prelaunch chamber test data sets. However, these coefficients do not satisfy the nonvariant constraints of Table 5 and thus result in nonzero biases and temperature-dependent nonuniformity in the SNO matchups. In this sense, the NESDIS operational algorithm is treated as similar to the linear calibration, although a certain amount of nonlinearity has already been included in it.
 The changes in the individual satellite trends are closely related to the trends of the nonlinear term Z, shown in Figure 13b. The Z trends do not depend on the calibration coefficients, but only on the raw counts and blackbody target temperature. From the nonlinear calibration equation (8), we see that the trend with the nonlinear calibration should be the trend of the linear calibration plus the trend of the Z term multiplied by the coefficient U, i.e., Trend(Tb) = Trend(Tb,linear) + U · Trend(Z). This suggests that when U is positive, including the nonlinear term results in a decrease of the trend when the Z trend is negative (NOAA 10, 11, and 14) and an increase of the trend when the Z trend is positive (NOAA 12).
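As a quick arithmetic check of this decomposition (a sketch using the NOAA 10 numbers quoted above; the U·Trend(Z) contribution is inferred from those numbers, not taken from the data):

```python
# Trend decomposition implied by equation (8): the nonlinearly
# calibrated trend equals the linear-calibration trend plus U times
# the trend of Z.  Solving for the U*Trend(Z) contribution with the
# NOAA 10 trends quoted in the text:
trend_linear = -0.35       # K/decade, NOAA 10, linear calibration
trend_nonlinear = -0.39    # K/decade, NOAA 10, nonlinear calibration
u_times_trend_z = trend_nonlinear - trend_linear
print(round(u_times_trend_z, 2))   # -0.04: negative, since U > 0 and the
                                   # NOAA 10 Z trend is negative (Figure 13b)
```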
 The trend for the combined monthly mean anomaly time series in Figure 13a is 0.214 K decade−1; here the small constant biases between satellite pairs have been removed, and again the anomalies are computed using the 17-year averaged seasonal cycle of the combined time series.
5.3. Sensitivity Experiments
 As mentioned earlier, with UN10 taken from its prelaunch calibration, the “optimal” UN12 and UN14 obtained from the SNO regression are nearly double their prelaunch calibration values. In addition, UN11 obtained from the SNO regression is also slightly larger than its prelaunch value. If another satellite is assumed to be the reference satellite, the calibration coefficients and trend values will be completely different. For instance, if NOAA 12 is assumed to be the reference satellite and its nonlinear coefficient is fixed at the prelaunch calibration value, 3.2 (sr m2 cm−1) (mW)−1 (Table 3), then on the basis of the coefficient constraints in Table 5, the nonlinear coefficient for NOAA 11 is UN11 = (UN12 + 2.254)/βN11,N12 = (3.2 + 2.254)/0.941 = 5.80 (sr m2 cm−1) (mW)−1. Similarly, UN10 and UN14 are obtained as 2.556 and 4.267 (sr m2 cm−1) (mW)−1, respectively. This leads to a smaller nonlinear coefficient for each satellite than in case 1 (Table 4). This process is also equivalent to keeping NOAA 10 as the reference satellite but with a smaller nonlinear coefficient, UN10 = 2.556 (sr m2 cm−1) (mW)−1. Therefore, instead of selecting a different satellite as the reference satellite, we conduct a series of sensitivity studies by changing the NOAA 10 nonlinear coefficient from 0 to 12.5 (sr m2 cm−1) (mW)−1, with the interval initially set to 2.5 (sr m2 cm−1) (mW)−1 (50% of its prelaunch value). The interval is not limited to this fixed value; where needed to understand the trend behavior, additional experiments are conducted between the intervals. Again, the coefficients for different satellites must satisfy the nonvariant constraints in Table 5, so with the given NOAA 10 coefficient for each experiment, the coefficients for all other satellites are obtained from Tables 5 and 3 in a successive adjustment process.
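The constraint chain in the example above can be written out explicitly. The sketch below uses only the numbers quoted in the text (offset 2.254 and slope 0.941 for the N11–N12 pair); the helper name is hypothetical, and offset/slope values for other satellite pairs would come from Table 5.

```python
def propagate_coefficient(u_ref, offset, beta):
    """One step of the Table 5 constraint chain: U_j = (U_ref + offset) / beta.

    The functional form follows the worked example in the text; the
    offset and beta for each satellite pair come from the SNO regressions.
    """
    return (u_ref + offset) / beta

u_n12 = 3.2                                        # prelaunch value (Table 3)
u_n11 = propagate_coefficient(u_n12, 2.254, 0.941)
print(round(u_n11, 2))                             # 5.8, matching the text
```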
As an example, the coefficients for UN10 = 6.25 (sr m2 cm−1) (mW)−1 (25% larger than NOAA 10's prelaunch value) are shown in Table 7. We will show shortly that these coefficients provide a trend that most likely represents the real trend of the MSU channel 2 brightness temperature time series.
Table 7. Calibration Coefficients for Different Satellites for the Case When UN10 = 6.25 (sr m2 cm−1) (mW)−1a
This set of coefficients minimizes the contaminations by the warm load temperatures in the brightness temperature difference time series during overlapping observations. Units for δR and U are 10−5 (mW) (sr m2 cm−1)−1 and (sr m2 cm−1) (mW)−1, respectively.
 With the different set of coefficients for each experiment, new MSU 1b and pentad data sets are generated as discussed in section 5.2. The intersatellite biases for the ocean-only averaged pentad time series are as small as in the case of UN10 = 5 (sr m2 cm−1) (mW)−1. Note that for the case of UN10 = 0 in the sensitivity experiments, the coefficients for the other satellites are not zero, since they are obtained from Tables 5 and 3. These coefficients also result in small intersatellite biases for the ocean-only averaged pentad time series, as in case 1. This is different from the linear calibration discussed in section 5.1, where the nonlinear coefficients for all satellites are zero and the intersatellite biases are large.
 The combined ocean-only anomaly trends for these experiments are shown in Figure 14. Again, only the mean constant intersatellite biases were removed in constructing the single time series in all cases. The anomaly trend peaks at UN10 = 0 with a value of 0.290 K decade−1 and decreases nearly linearly to 0.101 K decade−1 at UN10 = 12.5 (sr m2 cm−1) (mW)−1. The trends for the time series with the seasonal cycle included are about 0.03 K decade−1 smaller than the corresponding anomaly trends in all cases (not shown). This behavior of the trends is related to the trends of the Z terms: since the Z trends for most of the satellites are negative, the combined trend becomes smaller as UN10, and thus the U values of all other satellites, increases.
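A rough check on this near-linearity (a sketch; the endpoint trends are the values quoted above, and an exactly linear fall-off is an assumption):

```python
def interp_trend(u_n10, u_max=12.5, trend_at_0=0.290, trend_at_max=0.101):
    """Linearly interpolate the combined anomaly trend (K/decade) between
    the two endpoint values quoted in the text."""
    return trend_at_0 + (trend_at_max - trend_at_0) * u_n10 / u_max

# At U_N10 = 6.25 the straight line gives about 0.196 K/decade, within a
# few thousandths of the 0.198 value quoted in section 5.3, consistent
# with the near-linear behavior of the trend curve in Figure 14.
estimate = interp_trend(6.25)
```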
 In order for the obtained trend to be capable of representing atmospheric climate signals, the MSU brightness temperature time series should not contain signals from the warm load temperature, Tw [Christy et al., 2000; Prabhakara et al., 2000]. This is because Tw has its own trend, similar but opposite in sign to that of the nonlinear Z term (Figure 13b), which may contaminate the Tb trend if it is not removed from the Tb time series. This is the reason that Christy et al. developed empirical formulas in their bias removal processes. In their studies, a best fit of the warm load temperatures of two satellites, denoted as Tw(j) and Tw(k), to the intersatellite temperature differences, represented by ΔTj,k, was obtained from overlapping observations; the best fit was then removed from the entire time series to eliminate possible target temperature contaminations. In our sensitivity experiments, their methods are not used. Instead, we examine this issue by computing the coefficient of determination, r2, which is the square of the correlation coefficient between ΔTj,k and Tw(j) or Tw(k). This quantity measures how much of the variance in ΔTj,k can be explained by Tw(j) or Tw(k) in the overlapping period. It is also an indicator of the strength of the correlation between ΔTj,k and Tw(j) or Tw(k). The variation of r2 between ΔTj,k and Tw(j) with changes of UN10 for the satellite pairs N10–N11, N11–N12, and N12–N14 is plotted together with the anomaly trend in Figure 14. The averages over these individual coefficients of determination are also shown in Figure 14. Note that the r2 between ΔTj,k and Tw(k), the warm load temperature of the k satellite in each satellite pair, is not included. The reason is that the k satellite is at the beginning of its life cycle, during which the variation of Tw(k) is generally much smaller than that of Tw(j) (Figure 15 shows the example for NOAA 11 and 10).
Large Tw contamination occurs mainly during the later period of the life cycle of a satellite, when orbital drifts have the largest effect. We find that the percentage of variance in ΔTj,k that can be explained by Tw(k) (not shown) is much smaller than that explained by Tw(j). Therefore the Tw(k) are not good predictors for examining how effectively the nonlinear terms remove the Tw contaminations. Ignoring Tw(k) does not affect the discussion below, and they are thus not included in Figure 14. In addition, the N10–N12 and N11–N14 overlaps are not included either, because their ocean-averaged pentad data numbers are not large enough (only about 5 degrees of freedom) for a correlation test to be significant. On the other hand, the degrees of freedom for the plotted satellite pairs are about 30, so the correlation is significant at the 95% confidence level when r2 > 13%.
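The r2 diagnostic and its significance threshold are straightforward to reproduce. The sketch below uses a pure-Python Pearson correlation; the t value is the standard two-tailed 95% critical value for 30 degrees of freedom.

```python
def r_squared(x, y):
    """Coefficient of determination: squared Pearson correlation of x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Critical r^2 for 95% (two-tailed) significance at df = 30:
# r = t / sqrt(t^2 + df), so r^2 = t^2 / (t^2 + df).
t_crit = 2.042                    # Student's t, df = 30, alpha = 0.05
r2_crit = t_crit**2 / (t_crit**2 + 30)
print(round(100 * r2_crit))       # 12 (percent), i.e. roughly the
                                  # ~13% threshold quoted in the text
```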
 It is seen from Figure 14 that r2 basically follows a parabolic shape. For each satellite pair, r2 is large at UN10 = 0 and at UN10 = 12.5 (sr m2 cm−1) (mW)−1 and reaches a minimum close to zero between them. Each curve has a different UN10 at which its r2 reaches the minimum. For the obtained trend to represent real climate signals, we require the contamination by the warm target temperatures to be as small as possible, e.g., less than 10%, for all satellite pairs. Figure 14 suggests that only for a very narrow range of UN10, from 6.25 to 7.5 (sr m2 cm−1) (mW)−1, do all r2 values satisfy this requirement. The anomaly trend corresponding to this range of UN10 is from 0.198 to 0.177 K decade−1. Furthermore, the average of r2 over the three curves behaves similarly and reaches its minimum (0.04) at UN10 = 6.25 (sr m2 cm−1) (mW)−1; therefore we consider the corresponding anomaly trend of 0.198 K decade−1 as most likely representing the actual trend of the MSU channel 2 time series.
 To further understand this behavior of r2, Table 8 shows the correlation coefficients between ΔTj,k and Tw(j) for some selected UN10 values for the satellite pairs examined in Figure 14. The case of the linear calibration is also included. The correlation between ΔTj,k and Tw(j) shifts from large and negative to large and positive as UN10 increases from 0 to 12.5 (sr m2 cm−1) (mW)−1 for all the satellite pairs examined, showing a complete reversal in the correlation between the two variables. As an example, Figure 15 shows the time series of ΔTN10,N11 and Tw(N10) for their negative, near-zero, and positive correlations. These figures provide visual support for this reversal of the correlations. The Tw(N11) time series is also shown for comparison. Note that these Tw and ΔTN10,N11 are scaled so that the different curves are close to each other for easy viewing. As mentioned earlier, the variance of Tw(N11) is much smaller than that of Tw(N10), and its correlation barely changes with the UN10 values, so it is not a good predictor for selecting the best UN10 value in the trend studies.
Table 8. Correlation Coefficients Between ΔTj,k and Tw(j) for Global Ocean-Averaged Pentad Time Series During Overlap Observations, for Selected Sensitivity Experiments (Columns: UN10 = 0, 5, 6.25, 10, and 12.5 (sr m2 cm−1) (mW)−1)
 The sign changes of the correlation coefficients between ΔTj,k and Tw(j) occur because the balance between the linear and nonlinear terms in the calibration algorithm, equation (8), changes as the U value increases. For the linear calibration or small U values, the linear term dominates the correlation behavior between ΔTj,k and Tw(j), which results in a negative correlation. When U becomes large enough, the effect of the nonlinear term surpasses that of the linear term and then dominates the correlation behavior, causing a reversal in the correlations between ΔTj,k and Tw(j).
5.4. Comparisons With Other Studies
 Our main purpose in this study is to provide a procedure to recalibrate the MSU observations using SNOs. At this point, we have essentially reached our goal. In particular, we have developed a procedure through which a unique set of calibration coefficients is obtained. This set of coefficients, shown in Table 7, has the following features. First, they satisfy the strong nonvariant constraints given by Table 5. These constraints, obtained from SNO regressions, link the calibration coefficients of different satellites together. Coefficients that satisfy these constraints remove the biases and the temperature-dependent nonuniformity in the SNO matchups, thus resulting in a well-merged new 1b data set. These constraints reduce the calibration process to a reference-satellite problem. Second, this set of coefficients maximally removes the contamination by the warm target temperatures in the ΔTj,k time series for the satellite overlaps examined in this study, so that the percentage of variance in ΔTj,k that can be explained by the warm target temperatures is close to zero. This feature ensures the uniqueness of the reference satellite coefficient, which cannot be determined from the SNO regressions alone. In other words, it is the combination of the SNO bias removal with the elimination of the contamination from the warm target temperatures that determines a unique set of calibration coefficients at level 0. Because there is nearly no contamination from the warm load temperatures for this particular set of calibration coefficients, constant intersatellite bias removal is sufficient for constructing a single time series for climate trend studies. The corresponding ocean-only anomaly trend for this set of coefficients is 0.198 K decade−1 for the period 1987–2003.
Note that for nonlinear coefficients outside the range UN10 = 6.25–7.5 (sr m2 cm−1) (mW)−1, the trend obtained from the constant bias removal cannot be used to explain the climate signals, since there are relatively large contaminations from the warm target temperatures in the brightness temperature time series. Third, the values of this set of nonlinear coefficients for all satellites, including the reference satellite, are consistently larger than their prelaunch values, suggesting a common feature in the nonlinear gains of the MSU instruments as they age. Because of these features, when the diurnal cycle correction is ignored, the trend obtained with this set of coefficients can be considered as most likely representing the actual trend of the MSU channel 2 ocean-only time series.
 It is interesting to compare this trend result with previous investigations to see the impact of the SNO calibration. Specifically, the Remote Sensing Systems (RSS) group [Mears et al., 2003; Mears and Wentz, 2005] and the University of Alabama in Huntsville (UAH) group [Christy et al., 2003] have made their MSU channel 2 products, T2 and TMT, freely available to the public. The construction of the MSU temperature time series in this study is similar to that of the RSS T2 and UAH TMT (Temperature Middle Troposphere) products, though different numbers of near-nadir pixels were used. The UAH TMT product is similar to the T2 product, except that observations from AMSU channel 5, which has a frequency very close to that of MSU channel 2, have been combined with MSU channel 2 in deriving the product. These temperature products are available globally from 1978 to the present in gridded format with 2.5° × 2.5° spatial resolution. For a parallel comparison, the RSS T2 and UAH TMT are averaged only over oceans, and the 1987–2003 anomalies are computed using the seasonal cycle of the same period. Note that both the RSS T2 and UAH TMT products include diurnal corrections. However, Mears et al. showed that the diurnal cycle correction may cause a trend difference of only about 0.01 K decade−1 for the ocean-only time series. Therefore not including a diurnal cycle correction in this study will not significantly affect the comparison.
Figure 16 shows the comparison of the anomaly time series among the RSS T2, UAH TMT, and this study. There is an overall similarity among the three products in describing many individual climate events. For instance, all three products show a large temperature jump in 1998, corresponding to the 1998 El Niño event, with nearly identical magnitude and phase. The anomaly differences between this study and the RSS T2 as well as the UAH TMT also show similarities between these products.
 Despite the similarity in describing individual events, the climate trends of the three studies are quite different. The UAH TMT has the smallest trend, 0.069 K decade−1, the RSS T2 has a moderate trend, 0.114 K decade−1, while this study yields a larger trend of 0.198 K decade−1. These trend differences are most likely caused by differences in calibration procedures, since the diurnal cycle effect on the ocean-only averages is small [Mears et al., 2003]. These different values may suggest different scenarios for the tropospheric changes relative to the surface.
 The trend value obtained in this study suggests a warming of the tropospheric temperature during the 17-year period from 1987 to 2003. Fu et al. suggest that stratospheric cooling may cause the T2 product to underestimate the tropospheric temperature trend by 0.08 K decade−1. Spencer et al. argue that this number is subject to large uncertainties because the overlap of the weighting functions of MSU channels 2 and 4 is insufficient to provide an accurate estimate of the stratospheric cooling effect in the T2 product using channel 4 observations. In any case, if the stratospheric cooling effect is accounted for in T2, the tropospheric warming is expected to be larger than that obtained in this study, and larger than the observed surface temperature trend of 0.17 K decade−1. This warming occurs mainly during the later part of 1987–2003, since the trend of the first ten years within this period is small even after the adjustment for the stratospheric cooling effect obtained by Fu et al. In an Intergovernmental Panel on Climate Change (IPCC) project on model comparisons, Santer et al. found tropospheric warming faster than the surface warming over the tropical oceans in all 19 models involved in their study. Thermodynamic theory also suggests tropospheric warming larger than that at the surface in the tropical region [Santer et al., 2005]. To compare with these model results, regional climate trends need to be derived from the new 1b data set developed in this study. This will be a subject of future studies.
6. Summary and Conclusions
 This study has developed a procedure to recalibrate the level 0 MSU channel 2 observations using the SNO matchups. The procedure is independent of the diurnal cycle corrections and results in a set of calibration coefficients that are different from prelaunch calibrations. The new calibration coefficients yield well-merged multisatellite MSU channel 2 data at both the 1b level and gridded level.
 The calibration algorithm consists of a dominant linear response of the MSU radiometer raw counts to the Earth-view radiance plus a weak quadratic nonlinear term caused by an "imperfect" square-law detector. By analyzing the error structure of this calibration equation, a constant offset and an error in the nonlinear coefficient for the quadratic term are incorporated to represent the uncertainties of the calibration algorithm. The SNO matchup data set used in this study contains simultaneous observations for the nadir pixels over the Polar Regions for all overlaps of nine NOAA satellites. To ensure a large number of SNO events for reliable statistical analyses, the criteria of simultaneity for the SNOs are chosen to be less than 100 s and within a ground distance of 111 km. A radiance error model for the SNO pairs is developed and then used to determine the offsets and nonlinear coefficients through regressions of the SNO matchups. Since the SNO matchups are confined to the Polar Region, their temperature range is about 10 K short of the global range of 200–260 K in the MSU channel 2 brightness temperatures. Nevertheless, the calibration coefficients obtained from the SNO regressions are applied to calibrate the entire set of MSU observations for NOAA 10, 11, 12, and 14 to generate a well-merged 1b data set. Using beam positions 5, 6, and 7 of this 1b data set, a gridded pentad data set with a resolution of 2.5° latitude by 2.5° longitude is generated for climate trend studies. Analyses of the SNO matchups, the newly calibrated MSU 1b data set, and the trend results have led to the following conclusions:
 1. The new calibration with the SNO matchups has resulted in a well-merged MSU 1b data set. The mean biases for the SNO matchups between satellites are zero, and the temperature-dependent nonuniformity found in the unadjusted brightness temperature differences of the linear calibration also disappears. In addition, the pentad data set generated from this new 1b data set has global ocean-averaged intersatellite biases between 0.05 and 0.1 K for the calibrated satellites NOAA 10, 11, 12, and 14. This is an order of magnitude smaller than that obtained using the unadjusted calibration algorithm (sections 4.3 and 5.2).
 2. MSU channel 2 brightness temperature differences in the SNO matchups increase with the spatial distance between the overpass pixels of different satellites. However, the mean biases tend to stabilize when this spatial distance is larger than about 110–120 km for the satellite pairs involving NOAA 10, 11, 12, and 14. Though the calibration coefficients are derived from a SNO data set with specified distance and simultaneity criteria, the coefficients appear to be robust, since they also remove the biases of SNO matchups with larger spatial distances (sections 3.5 and 4.3).
 3. The SNO matchups can accurately determine the differences of the offsets as well as of the nonlinear coefficients between satellite pairs, providing a strong constraint that links the calibration coefficients of different satellites together. However, because of a high degree of colinearity between SNO observations, SNO matchups alone cannot determine the absolute values of the calibration coefficients. The absolute values are obtained through sensitivity experiments, in which the percentage of variance in the brightness temperature difference time series that can be explained by the warm target temperatures of overlapping satellites is a function of the calibration coefficients. These percentages of variance measure the degree of contamination by the warm target temperatures in the brightness temperature differences between overlapping satellites. By minimizing these contaminations for overlapping observations, a unique set of calibration coefficients is obtained from the SNO regressions. Essentially, the SNO regressions reduce the calibration procedure to a reference-satellite problem, and minimizing the contamination from the warm target temperatures in the brightness temperature differences of overlapping satellites determines the reference satellite coefficient uniquely. It is the combination of these two processes that determines the final set of calibration coefficients (sections 4.1 and 5.3).
 4. For this particular set of calibration coefficients, because of near-zero contaminations by the warm load temperatures, using constant intersatellite bias removals is sufficient in constructing single time series for climate trend analyses. The MSU channel 2 trend obtained in this study is 0.198 K decade−1 for 1987–2003 for a merged pentad, ocean-only anomaly time series containing NOAA 10, 11, 12, and 14 (sections 5.3 and 5.4).
 In summary, with accurate intercalibration of the MSU channel 2 observations at level 0 using the SNO matchups, a well-merged brightness temperature time series is obtained at both the 1b and gridded levels with near-zero contamination by the warm target temperatures. This yields high confidence in the trend result for the MSU channel 2 global ocean-averaged anomaly time series in explaining climate signals.
 Though MSU channel 2 is used to demonstrate the calibration procedure developed in this study, the approach is expected to be equally applicable to other channels, such as MSU channels 3 and 4. This will be the focus of future studies.
Appendix A: Blackbody Target Temperature Calibration Algorithm
 There are two in-flight calibration targets for each MSU. Target number 1 is viewed by channels 1 and 2, and target number 2 is viewed by channels 3 and 4. Each target has two embedded PRTs (PRT A and PRT B) to measure its temperature, which is computed as the average of the two PRTs' measurements. The PRT measurements are reported as count values in the 1b file. There are two steps to convert the count values into temperature: the first converts the counts to resistance (R), and the second converts resistance to temperature (K). The algorithm for the first step is
where the subscript X equals A or B, referring to PRT A or B for each target, respectively; R is the resistance, CX is the count value, and K0 and K1 are the resistance conversion coefficients supplied in the NOAA Polar Orbiter Data User's Guide [Kidwell, 1998, section 1.4]. The quantities TX(low) and TX(high) are the low and high calibration reference points for PRT A and B, respectively, and they are provided in the 1b file.
 The second conversion algorithm is
where T is the temperature of the blackbody target as derived from the resistance R, and the ei are the conversion coefficients for each PRT. Values of ei are also supplied by Kidwell [1998, section 1.4].
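The two-step conversion can be sketched as follows. The exact formulas are those in Kidwell [1998]; the linear count-to-resistance form and the polynomial resistance-to-temperature form below are assumptions about their structure, for illustration only.

```python
def counts_to_resistance(c_x, k0, k1):
    """Step 1 (assumed linear form): PRT counts to resistance in ohms."""
    return k0 + k1 * c_x

def resistance_to_temperature(r, e):
    """Step 2 (assumed polynomial form): T = sum_i e[i] * R**i, with the
    coefficients e supplied per PRT [Kidwell, 1998, section 1.4]."""
    return sum(ei * r**i for i, ei in enumerate(e))

def target_temperature(c_a, c_b, k0, k1, e_a, e_b):
    """Blackbody target temperature as the average of PRT A and PRT B."""
    t_a = resistance_to_temperature(counts_to_resistance(c_a, k0, k1), e_a)
    t_b = resistance_to_temperature(counts_to_resistance(c_b, k0, k1), e_b)
    return 0.5 * (t_a + t_b)
```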
 The authors would like to thank Robert Iacovazzi for reviewing the paper and providing helpful comments. The authors also thank Carl Mears for his useful comments during the studies. The authors thank the three anonymous reviewers whose comments greatly helped improve the manuscript. Some programming support from Mei Gao and Chuangyu Xu is gratefully acknowledged. The views, opinions, and findings contained in this report are those of the author(s) and should not be construed as an official NOAA or U.S. Government position, policy, or decision.