Journal of Geophysical Research: Atmospheres

Archival precipitation data set for the Mississippi River Basin: Algorithm development

Authors


Abstract

[1] The goals of the Global Energy and Water Cycle Experiment Continental-Scale International Project (GCIP) point to the need for high-resolution data sets on all elements of the land surface and atmospheric hydrologic cycle. A high-resolution precipitation data set has been derived from radar reflectivity observations taken from the National Weather Service WSR-88D radars in the continental U.S. The data set is available for a continuous five-year period (1996–2000) at an hourly, 4 × 4 km2 resolution for the Mississippi River Basin. Development of the data set involved data management and quality control of input radar-reflectivity, parameter estimation for radar-reflectivity transformation, and product accumulation and quality control of the precipitation product. Quality control algorithms for the input radar-reflectivity included procedures to deal with radar calibration differences, an especially important problem in developing a long-term, continental-scale data set for diverse hydroclimatological applications. Rainfall estimation was based on a Z-R conversion algorithm that involved an optimization technique to determine the parameters for the transformation of radar-reflectivity to rainfall. Rainfall accumulation involved integrating to hourly, 4 × 4 km2 resolution and then visually inspecting the final product. Some limitations of the algorithm are presented and suggestions are proposed for improving the development of a long-term, large-scale precipitation product. Initial comparisons of the radar-based product with a rain gauge based product after a quality analysis of both products show good agreement in the Mississippi River Basin.

1. Introduction

[2] As part of the Global Energy and Water Cycle Experiment (GEWEX) Continental-Scale International Project (GCIP), we have developed an archival precipitation data set for use in a wide range of hydroclimatological analyses. The overall goal of GCIP, which is based in the Mississippi River Basin (MRB), is to demonstrate skill in predicting changes in water resources on seasonal and annual timescales [Coughlan and Avissar, 1996]. To aid in achieving this goal, we have developed this precipitation data set at a high resolution, 4 × 4 km2 spatial and hourly temporal, for a 5-year period (1996–2000). The principal observations for the development of the archival precipitation data set are radar reflectivity data from the network of WSR-88D (Weather Surveillance Radar – 1988 Doppler) radars in the continental United States (CONUS).

[3] Quantitative precipitation estimation algorithms based on radar reflectivity data have been in existence for some years. The spatial and temporal resolutions of the output from these algorithms, as well as the scale of the products, have undergone adaptations based on operational and research experience. The WSR-88D network provides almost continuous spatial coverage over the CONUS, and algorithms implemented at each radar site have existed since their deployment. For example, the Precipitation Processing System (PPS) develops rainfall products at varying resolutions by incorporating the volume scan radar-reflectivity information and in-situ measurements for the specific radar coverage area, known as stage I [Fulton et al., 1998]. On a wider spatial scale, the River Forecast Centers (RFC) produce a stage II multisensor product that is adjusted locally [Seo, 1998] and a stage III product that is a mosaic of the stage II product on a regional scale. The stage IV composite, generated by the National Centers for Environmental Prediction (NCEP), is a mosaic of the stage III products from the RFCs. Each of these products has its advantages and disadvantages with respect to developing a precipitation product at 4 × 4 km2 spatial and hourly temporal resolution for a 5-year period over the MRB. For instance, the stage I, II, and III products undergo rigorous development and evaluation of their algorithms with several steps for quality control, bias adjustment, rain-rate conversion, and accumulation. However, they are regional products and have not been continuously available until recently. The stage IV product is a national product but only became operational in 2002. Other national products developed commercially have a variety of resolutions, both temporal and spatial, but do not undergo rigorous algorithm development. The availability of a national radar-reflectivity product that is a composite of the WSR-88D radars in the CONUS and has fine spatial and temporal resolution has allowed us to develop a quantitative precipitation estimation algorithm to produce precipitation estimates at 4 × 4 km2 spatial and hourly temporal resolution for the 5-year period over the MRB.

[4] To implement an algorithm for estimating precipitation at high spatial and temporal resolutions, it is necessary to organize and manage the input data efficiently. We presented the issues related to data format, organization, management, and visualization in Nelson et al. [2003]. In this paper, we describe the algorithm for developing a high-resolution precipitation data set for the MRB. First we address the quality of the radar-based input composite data. The input data are contaminated with erroneous values due to physical effects such as anomalous propagation of the radar beam, highly biased radar returns from the melting layer, and differences in calibration from one radar to another. We then improve the quality of the input data statistically. The statistical analysis involves a reflectivity adjustment technique and a small window averaging. Next we calibrate the Z-R power law relation using several high-quality, high-density rain gauge networks in the CONUS. Other issues we address are the effect that advection has on accumulating precipitation and the difficulty in dealing with snow. We discuss the limitations we identified while developing this data set and suggest ideas for an improved estimation algorithm. Finally, we present an initial comparison of the radar-based product with a rain gauge based product. We report a comprehensive error assessment of the product in a separate publication. The product we have developed is available through the University Corporation for Atmospheric Research's (UCAR) Joint Office for Scientific Support (JOSS). The final distribution of the precipitation data set, a precipitation data browser, and a Programmer's Application Interface are available through the JOSS GCIP website (http://www.joss.ucar.edu/gcip/legacy.html).

2. Data Sources and Methods

2.1. Input Radar-Reflectivity

[5] The principal inputs for the development of the precipitation data set are the national composite radar reflectivity maps produced by Weather Services International (WSI) Corporation based on data from the national Next Generation Weather Radar (NEXRAD) network operated by the National Weather Service, the Federal Aviation Administration, and the Air Force Air Weather Service and Naval Oceanography Command. These composite maps form a national product termed NOWRad. We describe the product in Nelson et al. [2003]. The highlights of this product are its 15-minute temporal and 2 × 2 km2 spatial resolution over the CONUS and its continuous record for the 5-year period.

[6] The algorithm used to produce these composite NOWRad maps of reflectivity for the CONUS is deemed proprietary, but we were able to deduce some information about the product. The mosaicked maps of reflectivity are produced from the 142 WSR-88D radar sites in the CONUS. The Level III data for each radar [Fulton et al., 1998] are collected by WSI and used to produce a composite reflectivity map at each radar site as the maximum reflectivity from the four lowest antenna elevation angles. The NOWRad product is a mosaic of all the radars in the CONUS, and in many places there are several radars “covering” the same location. The deduced method for assigning a reflectivity value in these locations is to take the highest value from any of the radars and assign it to a fixed-Earth 2 × 2 km2 grid, most likely using the nearest neighbor method. The reflectivity value for each pixel in each map is then binned into a small number of categories. The data levels range from 0 to 64, but only the first 16 are used to represent the values of the measured reflectivity. Table 1 details the meaning of each level; the basis is that levels 0–15 each represent a 5 dBZ increment of reflectivity, and levels 16–64 carry meta-information. The result is a mosaicked map of reflectivity for the U.S. at a 2 × 2 km2 resolution. The specific steps in the above algorithm and their order may differ somewhat, as we were not able to obtain detailed documentation despite numerous attempts and inquiries with the company.

Table 1. Levels of Data in NOWRad Reflectivity Maps
Level   Information
0       0–5 dBZ
1       5–10 dBZ
2       10–15 dBZ
3       15–20 dBZ
4       20–25 dBZ
5       25–30 dBZ
6       30–35 dBZ
7       35–40 dBZ
8       40–45 dBZ
9       45–50 dBZ
10      50–55 dBZ
11      55–60 dBZ
12      60–65 dBZ
13      65–70 dBZ
14      70–75 dBZ
15      >75 dBZ
16–31   Present site indicator over modulus(level, 16)
32–47   Absent site indicator over modulus(level, 16)
48–63   Indeterminate data box border over modulus(level, 16)
64      Indeterminate data box inside over level 0

[7] Figure 1 shows the overlapping of the radar coverage areas for the CONUS. The range rings extend 230 km from each radar site. In many locations of the CONUS, there are 2, 3, and 4 radars “covering” the same location, and in a few places there are as many as 7 or 8 radars “covering” the same location. So the reflectivity maps are a merged product of the maximum reflectivity from any of the radars covering a certain pixel. The above procedure has certain implications with respect to the quality of the product and our goals of converting it to a precipitation product. We will subsequently refer to this product as the merged-maximum reflectivity.

Figure 1.

NEXRAD radar overlap coverage (230 km from radar site) for the CONUS and rain gauge network locations used in this study; (a) Oklahoma Mesonet; (b) Georgia Automated Environmental Monitoring Network; (c) Iowa City Airport Piconet; (d) Goodwin Creek Research Network.

2.2. Rain Gauge Data

[8] We used rain gauge networks in the U.S. for calibration of the parameters for the transformation of radar reflectivity to rainfall and for evaluation of the precipitation product (section 3.3). One network we used was the Oklahoma Mesonet, which for the period 1996–2000 consisted of 111 weather stations measuring several meteorological variables. The data are maintained and quality controlled by the Oklahoma Mesonet [Shafer et al., 2000]. Precipitation measurements are available every 5 minutes at almost every gauge for the study period. Another network we used is the Automated Environmental Monitoring Network in Georgia, with 47 rain gauges providing 15-minute data for 1996–2000. We also used two smaller networks for calibration: the Goodwin Creek Research network [Steiner et al., 1999] and the Iowa City Airport Piconet [Krajewski et al., 1998]. The Goodwin Creek Research network has 32 rain gauges available for 1996–2000 in a research basin in northern Mississippi, and the Iowa City Airport Piconet has 10 rain gauges with one-minute data for 1999–2000. The locations of the rain gauge networks are shown in Figure 1, and Table 2 provides information about these networks. For initial comparisons of the radar-rainfall product, we used rain gauge data from first order stations from the Surface Land Daily Cooperative Summary of the Day Data set TD3200 [National Climatic Data Center, 2000].

Table 2. Information on Rain Gauge Networks Used for Z-R Parameter Calibration
Gauge Network   Number of Gauges   Type                  Temporal Resolution   Years
Oklahoma        111                Tipping Bucket        5-minute              1996–2000
Georgia         49                 Tipping Bucket        15-minute             1996–2000
Iowa City       10                 Dual Tipping Bucket   1-minute              1999–2000
Goodwin Creek   32                 Tipping Bucket        15-minute             1996–2000

3. Estimation Algorithm

3.1. Input Data Quality

[9] The radar reflectivity data that are used at WSI to produce the NOWRad product originate at the specific NEXRAD sites. At each site, reflectivity data are computed and calibrated, and clutter signals are suppressed. Clutter suppression involves automated removal of non-meteorological targets based on clutter maps and manual removal of clutter where it is not normally performed [Chrisman et al., 1994]. Data are then transmitted to WSI for processing and undergo some quality control at WSI, which consists of identifying radar artifacts and removing them before distribution of the product. Still, the input merged-maximum reflectivity data have many limitations that can affect the accuracy of the final estimates of precipitation. Because the reflectivity values are binned into levels each representing 5 dBZ of reflectivity, it is difficult to assign a specific dBZ to a data point. The options are to assign the data point to the low end or high end of the bin, or somewhere in between. After some initial investigations into the transformation of dBZ to rain rate (mm/hr), we decided to assign the dBZ to the low end of the bin. Our decision is based mainly on the fact that the merged-maximum reflectivity is likely to overestimate the true reflectivity due to the method used for its development. The overestimation arises from the likely inclusion of bright band, ground echoes, radar calibration differences, etc. This procedure is reminiscent of the bi-scan maximization used in an early version of the Precipitation Processing System for NEXRAD, which Smith et al. [1996] demonstrated leads to overestimation of rainfall and which was abandoned in 1996 [Baeck and Smith, 1998]. Thus our decision results in reflectivity values in increments of 0, 5, 10, …, 75 dBZ. This increment of 5 dBZ represents nearly a factor of three uncertainty in Z (mm6/m3) and the potential for a factor of two error in rainfall.
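The factor-of-three and factor-of-two figures follow directly from the 5 dBZ bin width; the short derivation below makes this explicit, assuming a typical Z-R exponent of b ≈ 1.6 for illustration.

```latex
% A 5 dBZ bin corresponds to a multiplicative spread in Z of
\frac{Z_\mathrm{high}}{Z_\mathrm{low}} = 10^{5/10} \approx 3.2,
% and, since R \propto Z^{1/b}, a typical exponent b \approx 1.6 gives
\frac{R_\mathrm{high}}{R_\mathrm{low}} = \left(10^{5/10}\right)^{1/1.6} \approx 2.1.
```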

[10] The next step we apply is to threshold the input reflectivity data. Reflectivities in the range of 0–10 dBZ are predominantly ground clutter or produce no measurable rainfall [Steiner and Smith, 2002; Krajewski and Vignal, 2001]. Fulton et al. [1998] and Baeck and Smith [1998], as well as others, place the hail threshold between 51 and 55 dBZ. Further, we investigated the frequency of occurrence of values greater than 55 dBZ and found it to be very low: in most cases, values greater than 55 dBZ were measured only between 1 and 5 times over the 5-year period. We therefore apply a lower threshold of 10 dBZ and an upper threshold of 55 dBZ, which reduces the number of reflectivity categories to 9.

[11] The first diagnostic we apply to the input merged-maximum reflectivity data is a study of the probability of detection. We define the probability of detection here as the estimated probability that the radar signal return falls within a certain range of values. In the absence of radar artifacts the probability of detection should represent the probability of rainfall. In fact, an analysis of the probability of rain (not shown) for the final precipitation data set yields maps with patterns similar to the maps of climate normals shown in a later figure (Figure 6a). However, we know that radar artifacts exist in the input data set, and the idea behind using this probability of detection is to identify these artifacts in the input radar reflectivity data set. On the basis of the initial thresholds applied to the data, we define the probability of detection (POD) as:

\mathrm{POD} = \dfrac{\text{number of 15-minute maps with } 10~\mathrm{dBZ} \le Z \le 55~\mathrm{dBZ}}{\text{total number of available 15-minute maps}} \qquad (1)

We show the POD map for the five-year period in Figure 2a. In this figure we present the POD for each pixel in the MRB along with the distribution of this estimated probability over all pixels. The figure demonstrates the non-uniformity of the POD for the input reflectivity data. Figure 3 shows regions illustrating these data quality problems over the five-year period. There are clear instances of partial beam blockages and radar calibration differences. Most of these problems in the data are well known. Partial beam blockages are due to structures and mountains, as in Figure 3a [Young et al., 1999]. Areas of high POD centered at a radar can be due to anomalous propagation (AP) of the radar beam and bright band (BB), as in Figure 3b [Baeck and Smith, 1998; Smith et al., 1996]. Discontinuous POD close to the radar (within ∼35 km), as in Figure 3c, can be due to the hole effect caused by the range-dependent tilt selection [Smith et al., 1996], the cone of silence resulting from the radar's inability to scan directly overhead, or clutter suppression that fails to remove clutter in some instances or removes more intense precipitation in others.
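As a concrete illustration of equation (1), the following Python sketch computes the per-pixel POD from a stack of thresholded NOWRad level maps. The array layout, function name, and synthetic example are our own assumptions and are not part of the operational processing.

```python
import numpy as np


def probability_of_detection(levels: np.ndarray,
                             low_level: int = 2,
                             high_level: int = 10) -> np.ndarray:
    """levels: integer array of shape (n_maps, ny, nx) holding NOWRad data
    levels 0-15 (meta-information already reduced via modulus(level, 16)).
    Returns an (ny, nx) array with the fraction of maps whose level falls in
    the 10-55 dBZ detection window (levels 2-10 after thresholding)."""
    detected = (levels >= low_level) & (levels <= high_level)
    return detected.sum(axis=0) / float(levels.shape[0])


# Example with a synthetic stack of 1000 level maps on a 10 x 10 pixel grid.
rng = np.random.default_rng(0)
stack = rng.integers(0, 16, size=(1000, 10, 10))
pod = probability_of_detection(stack)
print(pod.shape, float(pod.mean()))
```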

Figure 2.

Probability of detection of reflectivity for MRB; (a) from the NOWRad product; (b) after applying reflectivity adjustment technique.

Figure 3.

Examples of data quality problems in NOWRad product; (a) beam blockage from mountains; (b) high POD due to dominant AP centered at a radar; (c) hole effect due to range-dependent tilt selection; (d) radar calibration differences in central United States.

[12] However, the most important problem in this data set is the increase in the probability of detection in areas of multiple radar coverage. This problem is due mostly to two effects. First, by selecting the merged-maximum reflectivity there is a higher probability that radar artifacts such as AP and BB are included in the estimates. Second, radar calibration differences can cause abrupt changes in the POD in areas of adjacent radars. Radar calibration differences arise when the parameters at one radar are set differently than at another radar, or from hardware problems (e.g., a failing component). Such differences between adjacent radars lead one radar to overestimate or underestimate a precipitation field relative to another radar observing the same field. Smith et al. [1996] showed significant differences in rainfall estimates for the overlapping coverage area between different radar sites and attributed these differences to systematic differences in radar calibration. Ulbrich and Lee [1999] showed that a factor of 2 or more underestimation by some WSR-88D radars cannot be explained by variations in the Z-R law parameters, and they suggest the cause is radar calibration differences. Similarly, Anagnostou et al. [2001] report systematic differences between radar systems from +2 to −7 dB. They compared several WSR-88D radars in the southern U.S. with the TRMM precipitation radar (PR) and concluded that systematic differences in calibration cause this bias. We observe these systematic differences in radar calibration in the input merged-maximum reflectivity product as well. The POD in Figure 3d shows the marked differences between adjacent radars. These differences result in “circles” at the edges of radar coverage areas and abrupt changes in POD. The implication of these “circles” is that biased values of precipitation will be produced at short timescales in climatologically similar areas, and these biases will be manifested in climatological analyses of the radar product for the MRB.

3.2. Reflectivity Level Adjustment

[13] We analyzed the merged-maximum reflectivity data and found differing distributions of the data depending on the number of radars covering a given pixel. We determined the frequency of each reflectivity level in each radar overlap region and then normalized this frequency by the area (km2) of the corresponding overlapping region. Figure 4 shows an example of the increase in this value (frequency/km2) with the number of radars covering a region. In the figure we show only the values for the coverage areas of 2–5 radars, and there are significant increases in the frequency of each reflectivity level per km2 as the number of radars covering a region increases. The reason for this increase is that in areas covered by one radar, only that radar's value of reflectivity can be assigned to the pixel, as opposed to taking the merged-maximum reflectivity from any of the several radars covering the pixel. By taking the merged-maximum reflectivity, there is a greater chance that erroneous values of reflectivity due to AP, bright band, etc., are inadvertently incorporated into the pixel value in regions covered by more than one radar. The larger increases at the higher reflectivity levels are due to the fact that these levels occur infrequently, so a slight increase in the frequency/km2 produces a large percent increase.

Figure 4.

Increase (%) in the frequency of reflectivity level per km2 from the coverage of one radar for overlap regions of 2, 3, 4, and 5 radars. The frequency of reflectivity level is determined in each overlap region (1, 2, 3, 4, or 5). This frequency is normalized by the area (km2) of each coverage region, and the percent increase is determined for the coverage regions of 2, 3, 4, and 5 radars.

[14] The significant increase in the reflectivity frequency with the number of radar coverages led us to search for a method to adjust the input reflectivity levels. We investigated methods to adjust the reflectivity levels in areas covered by more than one radar to match the reflectivity levels in areas covered by only one radar. Previous studies have applied a probability matching technique to determine the optimal parameters for an R-Z relation [Haddad and Rosenfeld, 1997; Crosson et al., 1996; Calheiros and Zawadzki, 1987]. These studies attempted to match the probability density function of Z (mm6/m3) to the probability density function of R (mm/hr) and then applied a best-fit equation to determine the relation between Z and R. Krajewski and Smith [2002], Ciach et al. [2000], and Ciach and Krajewski [1999] discuss the consequences of such approaches. We investigated matching the probability distribution of the reflectivity levels in areas of more than one radar to the probability distribution of the reflectivity levels in areas of only one radar, based on our hypothesis that the selection of values in the merged-maximum reflectivity product leads to overestimation. We found that probability matching of the reflectivity levels, as described by Calheiros and Zawadzki [1987] and others, led to a large adjustment of reflectivity levels at the high end. We then implemented an ad hoc method to adjust the reflectivity levels based on the probabilities of nonexceedence to see if we could reduce the effect of the adjustment factor at the high-end reflectivities. The method applies a factor to the reflectivity level based on the ratio of the probability of nonexceedence for each radar coverage region as compared to the one-radar coverage region:

RL_{\mathrm{adj}} = RL \times \dfrac{P_i(RL)}{P_1(RL)} \qquad (2)

where RL is a reflectivity level from 2–10, i is the number of radars in the overlap region (from two to eight), and P_i(RL) denotes the probability of nonexceedence of level RL in the region covered by i radars, with P_1 corresponding to single-radar coverage. Figure 5 shows the differences between the two methods for the same example as in Figure 4. The figure shows the original reflectivity level and the corresponding adjusted reflectivity level for both methods. The difference in the methods occurs mainly at the high-end reflectivities, where the probability matching method adjusts the reflectivity level more than the ratio method does. We investigated both methods and found that adjusting the reflectivity levels based on the ratio of the probability of nonexceedence provided the best results.

Figure 5.

Original reflectivity level versus adjusted reflectivity level for probability matching (open circle) and ratio of probability of nonexceedence (+).

[15] The reflectivity adjustment based on the ratio of the probability of nonexceedence has five main steps: (1) Determine the histogram of data levels in each overlap region. (2) Separate the histograms by climate zone and season. (3) Fit a distribution to the histogram of data levels. (4) Determine a new data level based on the fitted distribution of data in each overlap region, climate zone, and season. (5) Apply small window averaging for smoothing. To separate the histograms by climate zone, we first mapped the 30-year climate normals for the period 1961–1990 for the CONUS, and then we overlaid the radar locations on the climate normals to determine which radars lie in each climate zone (Figure 6). Figure 6a shows the climate normals and Figure 6b shows the radars that fall in each zone as determined by the mapping. The shades of gray in Figure 6b show the radars and their coverage in the corresponding climate zones of Figure 6a. Next we fit a cumulative distribution function to the histograms of the frequency of reflectivity level per km2 for each overlap region within the specific climate zone. Finally, we adjust the reflectivity level based on the ratio of the probability of nonexceedence for these cumulative distribution functions.
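The sketch below illustrates steps (1), (3), and (4) for a single climate zone and season. It is a simplified illustration under our own assumptions: empirical nonexceedence probabilities stand in for the fitted cumulative distribution functions, the adjustment follows the ratio form of equation (2), and the function names and synthetic histograms are hypothetical.

```python
import numpy as np

LEVELS = np.arange(2, 11)               # reflectivity levels kept after thresholding (10-55 dBZ)


def nonexceedence_cdf(level_counts: np.ndarray) -> np.ndarray:
    """level_counts: frequency (per km2) of each level in LEVELS for one
    overlap region. Returns P(Z <= level) evaluated at each level."""
    return np.cumsum(level_counts) / level_counts.sum()


def adjust_level(level: int, cdf_overlap: np.ndarray, cdf_single: np.ndarray) -> int:
    """Scale a level observed in an i-radar overlap region by the ratio of
    nonexceedence probabilities (overlap region vs. single-radar coverage),
    as in equation (2), and round back to a valid level."""
    idx = level - LEVELS[0]
    adjusted = level * cdf_overlap[idx] / cdf_single[idx]
    return int(np.clip(np.rint(adjusted), LEVELS[0], LEVELS[-1]))


# Example: synthetic level histograms (frequency per km2) for single coverage
# and for a 3-radar overlap region; the overlap region has a heavier tail.
single = np.array([900., 500., 260., 120., 50., 18., 6., 2., 0.5])
overlap3 = np.array([880., 520., 300., 160., 80., 35., 15., 6., 2.])
cdf1, cdf3 = nonexceedence_cdf(single), nonexceedence_cdf(overlap3)
print([adjust_level(lev, cdf3, cdf1) for lev in LEVELS])
```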

Figure 6.

(a) 30-year climate normals from 1961–1990 (NCDC). (b) Radar coverage areas that fall in the specified climate region.

[16] Each step of the reflectivity adjustment technique attempts to identify and alleviate natural and non-natural effects in the input data. Generating the histograms of data levels in each overlap area reveals the effect of the merged-maximum reflectivity: as the number of overlapping radars increases, the frequency of measurements in each data level increases. Separating the histograms by climate zone accounts for the effect of climate differences from the Rocky Mountains to the Appalachians. Fitting the cumulative distribution function to the data provides the probabilities used in the adjustment technique. Ultimately, the reflectivity adjustment is applied on a map-by-map basis to reduce or eliminate the bias associated with taking the merged-maximum reflectivity in areas covered by more than one radar. The performance of the technique can be seen in Figure 2b, which shows the POD over the MRB; certain artifacts apparent in Figure 2a are reduced. Similarly, the histogram of the POD for the MRB after applying the technique shows less spread and a shifted distribution, indicating the reduction of the artifacts.

3.3. Parameter Estimation

[17] The determination of a Z-R (reflectivity factor-rainfall rate) relation that can be used to estimate precipitation is not straightforward. Researchers have used different approaches for determining a Z-R relation; see Krajewski and Smith [2002] for a discussion of these approaches. Some relations have been determined by studies that measured drop-size distributions [Jameson and Kostinski, 2002; Battan, 1973], some studies used the probability matching methods described in the previous section, and some methods involve optimization procedures [Anagnostou and Krajewski, 1998; Ciach et al., 1997]. For the sake of simplicity we use here a two-parameter power law,

Z = A R^{b} \qquad (3)

where Z is reflectivity in mm6/m3 and R is rain rate in mm/hr, and we apply an optimization procedure to determine the parameters. We performed the optimization on radar-rainfall (Rr), rain gauge (Rg) pairs for the rain gauge networks presented in section 2.2.

[18] Although rain gauge data have well-recognized problems [Ciach and Krajewski, 1999; Zawadzki, 1975], they represent unbiased estimates of the grid values. The Rr estimates are taken from the 15-minute, 2 × 2 km2 probability-shifted reflectivity maps, and the Rg estimates are taken from the one-minute, five-minute, or 15-minute measurements of the specific network, as described in section 2.2. The optimization criterion we used was the Root Mean Square Error (RMSE) between Rr and Rg at the 15-minute timescale for the available time period for the given network. The conditional bias (CB) is another optimization criterion that can be used to determine the parameters of (3). The behavior of the RMSE and CB criteria can differ and even oppose each other in certain ranges. Ciach et al. [2000] presented an analytical model to study the behavior of the CB. They compared the CB with the RMSE and showed that removing the CB from estimates increases the RMSE, whereas minimizing the RMSE results in a large CB that manifests itself as underestimation of strong rainfall. We investigated the CB criterion and found similarly conflicting behavior. Therefore we selected the RMSE criterion for our optimization procedure, as it appeared to be more stable in our case.

[19] We first determined the A parameter that corresponds to a given b value. The A parameter accounts for the bias in the pairs and is determined from:

A = \left( \dfrac{\sum_{i=1}^{N} R_{r,i}}{\sum_{j=1}^{n} R_{g,j}} \right)^{b} \qquad (4)

where R_{r,i} = Z_i^{1/b}, and R_g is the rain rate from the gauge measurement. The summation is taken over N radar pixels and n rain gauges; in this formulation, N = n, as the optimization is carried out on radar, rain gauge pairs. We then determine the RMSE as a function of the b parameter:

\mathrm{RMSE}(b) = \sqrt{ \dfrac{1}{N} \sum_{i=1}^{N} \left( R_{r,i} - R_{g,i} \right)^{2} } \qquad (5)

where R_g are the rain gauge measurements, R_r = (Z/A)^{1/b}, and A is obtained from (4).
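The following Python sketch shows how equations (4) and (5) combine in a simple grid search over the exponent b; the synthetic radar-gauge pairs and the function name are placeholders for illustration only.

```python
import numpy as np


def optimize_zr(Z: np.ndarray, Rg: np.ndarray, b_grid=np.arange(1.1, 2.6, 0.05)):
    """Z: reflectivity factors (mm^6/m^3) at the gauge pixels; Rg: matched
    15-minute gauge rain rates (mm/hr). Returns (A, b, rmse) at the minimum."""
    best = None
    for b in b_grid:
        A = (np.sum(Z ** (1.0 / b)) / np.sum(Rg)) ** b    # equation (4): no-bias A for this b
        Rr = (Z / A) ** (1.0 / b)                         # radar rain rate
        rmse = np.sqrt(np.mean((Rr - Rg) ** 2))           # equation (5)
        if best is None or rmse < best[2]:
            best = (A, b, rmse)
    return best


# Example with synthetic pairs drawn from Z = 270 R^1.7 plus multiplicative noise.
rng = np.random.default_rng(1)
Rg = rng.gamma(shape=0.8, scale=4.0, size=5000) + 0.1
Z = 270.0 * Rg ** 1.7 * rng.lognormal(0.0, 0.3, size=Rg.size)
print(optimize_zr(Z, Rg))
```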

[20] Figure 7 shows the result of the optimization on the Rr, Rg pairs for each rain gauge network. The figure shows the RMSE as a function of the parameter b, with contours for the corresponding values of the A parameter. The contours correspond to the multiplicative bias values determined from (4). For each of the rain gauge networks shown in the figure, the RMSE decays rapidly up to a b value of about 1.0, reaches a minimum near b = 2.0, and is relatively flat for b greater than about 1.5; the optimization behaves similarly for the different networks. This allows some flexibility in determining the b parameter. We selected b = 1.7 and the corresponding no-bias condition of A = 270. This relation is similar to the traditional Marshall-Palmer relation of b = 1.6 and A = 200, but is different from the operational NEXRAD parameters of b = 1.4 and A = 300. As discussed by Morin et al. [2002], Jameson and Kostinski [2002], and Ciach et al. [2000], the higher value of b is a consequence of the RMSE criterion's adjustment to errors resulting from Z measurement, quantization, inadequate sample size, etc.

Figure 7.

Optimization of Z-R parameters based on radar-rainfall (Rr), rain gauge (Rg) pairs for; (a) Oklahoma Mesonet; (b) Georgia Automated Environmental Network; (c) Iowa City Airport Piconet; (d) Goodwin Creek Research Network. The figure shows the b parameter value versus the RMSE and the corresponding A parameter value for each radar rain gauge pair (multiple gray lines) for the 5-year period. For each network, the contoured value of the bias (0.5, 1.0, 2.0) is shown for the b parameter and resulting A parameter. The parameters we selected (A = 270, b = 1.7) and the default NEXRAD Z-R parameters (A = 300, b = 1.4) are shown on the figure.

3.4. Rainfall Accumulation

[21] We accumulated the precipitation estimates from the 15-minute, 2 × 2 km2 adjusted maps of reflectivity transformed by the Z-R relation using the optimum parameters from section 3.3. We then produced an hourly, 4 × 4 km2 precipitation estimate for the entire MRB for the study period. If any of the 15-minute reflectivity maps are missing for an hour, we substitute a null reflectivity map and list the affected hour in a quality control file. There are only 100 missing hours for the five-year period: approximately 0.2%.
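A minimal sketch of this accumulation step is given below. It assumes the selected Z-R parameters (A = 270, b = 1.7), a 2 km grid with even dimensions, and that missing 15-minute maps contribute zero rain (the null-map convention above); the function names are ours.

```python
import numpy as np

A, B = 270.0, 1.7                        # selected Z-R parameters (section 3.3)


def dbz_to_rate(dbz: np.ndarray) -> np.ndarray:
    """Rain rate (mm/hr) from reflectivity (dBZ) via R = (Z / A)^(1/b)."""
    Z = 10.0 ** (dbz / 10.0)             # dBZ -> Z in mm^6/m^3
    return (Z / A) ** (1.0 / B)


def hourly_4km(maps_15min_dbz):
    """maps_15min_dbz: the four 2-km dBZ maps of one hour; None entries stand
    for missing scans and contribute zero rain (the null-map convention).
    Returns the hourly depth (mm) on the 4-km grid, or None if all are missing."""
    valid = [m for m in maps_15min_dbz if m is not None]
    if not valid:
        return None                      # whole hour missing; flagged in the QC file
    depth_2km = sum(dbz_to_rate(m) * 0.25 for m in valid)   # mm per 15 minutes, summed
    ny, nx = depth_2km.shape
    blocks = depth_2km.reshape(ny // 2, 2, nx // 2, 2)
    return blocks.mean(axis=(1, 3))      # 2 x 2 block mean preserves areal depth in mm


# Example: four synthetic 20 dBZ maps on a 4 x 4 (2 km) grid -> 2 x 2 (4 km) grid.
maps = [np.full((4, 4), 20.0)] * 4
print(hourly_4km(maps))
```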

[22] Some studies have shown the need to correct rainfall estimates for the advection of storms [Anagnostou and Krajewski, 1999; Liu and Krajewski, 1996; Fabry et al., 1994; Asem, 1992]. An advection correction technique can account for temporal sampling effects of the radar. The WSR-88D radars scan every 5–6 min in precipitation mode, and in many instances part of a storm will be missed. A method that estimates the velocity of the storm and uses this information to interpolate estimates to fill in the gaps can be applied before accumulating precipitation. Fabry et al. [1994] showed that interpolating 15-minute estimates to 5-minute intervals provides better accuracy in the precipitation estimates (∼25% error reduction). Liu and Krajewski [1996] showed that an advection correction method had the smallest errors when the storm velocity was high, as compared with simple averaging and space-time kriging, and Anagnostou and Krajewski [1999] showed that correction for rain field advection moderately improved estimation accuracy. We therefore investigated the need to apply an advection correction method to the 15-minute, 2 × 2 km2 precipitation estimates before accumulating to hourly precipitation maps. The problem we came across is that the input merged-maximum reflectivity maps are an integrated and coupled product of all the radars in the CONUS, with unresolved timing issues. In order to apply an advection correction method, we would need reflectivity maps that are decoupled from this integrated product. As the input reflectivity maps for our algorithm are produced from several radars, and the specific time of each radar's contribution to the composite is unknown, it is impossible to decouple the reflectivity maps. Therefore, we did not apply an advection correction method before accumulating to hourly, 4 × 4 km2 precipitation estimates.

3.5. Additional Quality Control

[23] After applying the initial thresholding, reflectivity adjustment, spatial windowing, parameter optimization, and estimation of precipitation at an hourly, 4 × 4 km2 resolution, examples of bad data still remained. These bad data escaped the initial quality control performed by WSI and by us. The early quality control is hampered by the huge volumes of data and by the difficulty of interpreting reflectivity values physically. The additional quality control is performed on values that have physical meaning with respect to hourly rainfall. We performed a manual inspection of each hourly file for the entire 5-year study period, using the data browser detailed in Nelson et al. [2003] to look at each map of precipitation. With only 2-dimensional maps of precipitation, we could identify areas of bad precipitation only through animation and geographic location. Without the use of a third dimension, or volume scan [Klazura and Imy, 1993] of reflectivity, it is difficult to determine conclusively whether areas of precipitation are estimated from false echoes.

[24] Figure 8 shows examples of precipitation that we considered bad or questionable based on the manual inspection of the data. For instance, areas of precipitation that did not “move” for several time steps were highlighted as possible bad data. Further, if these areas were centered at the location of a radar, this provided a stronger indication that the area of precipitation was most likely AP. Figure 8a shows one area of precipitation that is centered on a radar and did not advance spatially for 4 hours. Other areas of bad precipitation can be caused by BB echoes. For example, the arc-shaped region of high precipitation estimates near the radar in Figure 8b is an indication of BB, which we then checked against the original finer resolution input reflectivity data.

Figure 8.

Examples of data quality in final precipitation estimates: (a) anomalous propagation at KNQA (Memphis, TN) radar on 27 June 2000; (b) bright band contamination at KUDX (Rapid City, SD) radar on 4 October 1998.

3.6. Limitations and Suggested Improvements

[25] The radar-based precipitation data set provides researchers with high-resolution estimates of precipitation for a continuous 5-year period. However, we experienced limitations during the development of the estimation algorithm. A major limitation was working with the NOWRad composite reflectivity product. One of the major problems with the NOWRad product is that we did not know the specific algorithm used to create the composite reflectivity maps. We did determine that the composite is a merged maximum of reflectivity from any of the radars covering an area, but we did not know the degree of quality control performed on each of the input reflectivity maps. Further, the 15-minute composites of reflectivity do not allow for investigation at smaller scales, as information from different sources and scales is included in their development. Finally, the NOWRad product is quantized in levels of 5 dBZ. This coarse resolution initially allows for only 16 values of reflectivity, which would allow for only 16 values of precipitation using the direct Z-R transformation and could result in a factor of two error in rainfall, as mentioned previously.

[26] Accounting for snow in the algorithm is not straightforward. Dedicated research into the use of WSR-88D radars for the estimation of precipitation from snow was conducted at the U.S. Bureau of Reclamation [Hunter and Holroyd, 2002]. That research illustrated the problems associated with estimating the parameters of the snowfall rate power law even during a dedicated research project. We expected even more problems for this data set, as we did not have in-situ measurements of snow for the determination of snow density or snow water equivalents. Further, we did not have information in the input data set beyond the composite merged-maximum reflectivity. Thus we were unable to determine precipitation type, correct for range dependencies, or correct for higher tilt selection. Therefore we did not apply a specific power law relation for snow.

[27] One option for producing a national precipitation product could be to obtain all of the stage II (HDP array) products for each radar and merge and adjust the products from all of the radars in the MRB. The advantage of this approach is that the precipitation product is already developed based on a tested algorithm. A disadvantage is that the amount of input data that would need to be organized, quality controlled, and managed would be extensive. Further, merging data from each of the radars in the MRB would be a large task, and the resulting product would still contain many of the problems that we came across while developing our algorithm. Adjusting the rainfall products with rain gauge data for each stage II product at a specific radar, similar to the PPS algorithm, is a possibility, but the result would not be much different from the stage III operational product. Mosaicking a regional stage III product is also a possibility, but the resulting product would still have problems similar to those we have presented. Recently the stage IV product, a mosaic of the stage III products, has become available and could be a possible starting point for developing a new national product.

[28] Although the input NOWRad reflectivity data set has limitations, it is, as of this publication, the only high-resolution radar-reflectivity data set with national coverage that provides continuous spatial and temporal coverage of the MRB for the 5-year period 1996–2000. The development of the composite radar-reflectivity is not a trivial task. There are some 140 NEXRAD radars in the CONUS, the calibration of each radar site can result in different parameters of the radar equation, and the data are not always synchronized in time. With this in mind, we believe that certain algorithms should be implemented in the development of the composite reflectivity maps. The addition of an AP detection algorithm would greatly enhance the quality of the radar-reflectivity data. In addition, algorithms that detect bright band and correct for range could be implemented. Some studies propose the implementation of these algorithms for estimating precipitation as well [White et al., 2002; Gourley et al., 2002], but incorporating them into the development of composite reflectivity maps is a large task. This can only be done with access to the volume scan data for each NEXRAD site and with the aid of powerful tools such as Geographic Information Systems (GIS) and supercomputing facilities. The volume scan data provide three-dimensional information at the highest spatial and temporal resolution currently available, which could be used to improve the quality of a precipitation product by implementing these algorithms. Efforts to make the large volumes of available radar data easier to obtain and use are continuing. For example, the Collaborative Radar Acquisition Field Test (CRAFT) project [Droegemeier et al., 2002] demonstrates the possibility of obtaining radar data in real time through data compression and transmission for multiple radars.

4. Initial Evaluation

[29] Initially we compare the precipitation product for the MRB with the first order stations from the Surface Land Daily Cooperative Summary of the Day Data set TD3200 [National Climatic Data Center, 2000]. We will present detailed comparisons of the MRB data set with rain gauge networks, gridded precipitation data sets, and specific NEXRAD radar locations in a future publication. Here we provide an overview of the MRB data set development and its initial comparison with the NCDC data.

[30] The TD3200 data set consists of summary-of-the-day meteorological elements including precipitation. Data from first order stations in the TD3200 data set are compiled from National Weather Service stations and other federal agencies. The stations are mainly climate stations and are maintained by experts. We identified 112 first order stations that are located in the MRB and have a continuous daily record for 1996–2000 (Figure 9), and we initially compare the 5-year warm season (April–October) accumulation of the 4 × 4 km2 precipitation product with the rain gauge estimate. Figure 10a shows the 5-year accumulation of precipitation for each NCDC rain gauge versus the 5-year precipitation accumulation for the corresponding 4 × 4 km2 pixel. We identified data pairs that correspond to overestimation or underestimation in the radar product caused by physical effects; they are indicative of problems that may still exist in the MRB precipitation product. However, other sources of error are still possible in both the radar-based product and the rain gauge estimates. Figure 11 shows examples of the cases causing the over- or underestimation by the radar. We used the probability of detection map described in section 3.1 to determine the physical effect causing the over- or underestimation. In Figure 11a, the gauge(s) is located in an area of beam blockage, causing underestimation by the radar. Figure 11b shows the location of a gauge that is far from the radar; at far ranges, the radar does not scan the full vertical extent of the cloud and thus underestimates precipitation. Figure 11c shows the location of a gauge that is close to the radar; instances of ground clutter close to the radar that we were not able to filter cause a systematic overestimation by the radar. The data points labeled in Figure 10a correspond to the rain gauge labels in Figure 9, and Table 3 lists these gauges and the cause of the over- or underestimation.

Figure 9.

First order rain gauges from TD3200 NCDC data set overlaid on probability of detection map for Mississippi River Basin. Labeled gauge locations correspond to labeled data points in Figure 10 and Table 3.

Figure 10.

Comparisons of first order rain gauge versus corresponding 4 × 4 km2 pixel value: (a) accumulation for the 5-year warm season (April–October) period. Labeled data points (white circles) correspond to labeled gauge locations in Figure 9 and Table 3; black circles correspond to the remaining data points used in the analysis; (b) yearly warm season accumulation for the 5-year study period, with the labeled gauges from Figure 10a not included; (c) monthly warm season accumulation for the 5-year study period, with the labeled gauges from Figure 10a not included.

Figure 11.

Rain gauge location for 3 cases of over or underestimation of radar; (a) Beam blockage as compared to rain gauge(s) locations (KMRX, Knoxville, TN; KGSP, Greer, SC); (b) Rain gauge located far from radar (KMBX, Minot, ND; KGGW, Glasgow, MT); (c) Rain gauge located close to the radar (KIND, Indianapolis, IN).

Table 3. Rain Gauge Locations and Causes for Radar Over or Underestimation
Rain Gauge   WBAN ID   Cause (Figure 11)a

a Cause A indicates beam blockage as compared to rain gauge locations (KMRX, Knoxville, TN; KGSP, Greer, SC). Cause B indicates a rain gauge located far from the radar (KMBX, Minot, ND; KGGW, Glasgow, MT). Cause C indicates a rain gauge located close to the radar (KIND, Indianapolis, IN).

1    3812    A
2    3859    A
3    3872    A
4    3928    C
5    12930   C
6    13729   A
7    13872   A
8    13897   C
9    13964   C
10   13966   C
11   14895   A
12   14897   B
13   23047   C
14   23065   C
15   24012   B
16   24025   B
17   24028   B
18   24052   B
19   24090   A
20   93817   B
21   93819   C
22   93876   A
23   94014   B
24   94967   B
25   94984   C
26   3947    C
27   3952    C
28   3983    C
29   4834    C
30   12916   C
31   13963   C
32   13985   C
33   13995   C
34   13996   C
35   14842   C
36   14923   C
37   24011   C
38   53813   C
39   94823   C

[31] In Figure 10 we also show the comparisons of the yearly warm season accumulations (Figure 10b) and the monthly warm season accumulations (Figure 10c) for the rain gauge, radar pixel pairs after removing the pairs identified in Figure 9 and Table 3. Table 4 lists the bias (ΣRr/ΣRg), the range of values for the rain gauges, the root mean squared difference (rmsd), the relative rmsd (rmsd/μg), and the correlation between the rain gauge, radar pixel pairs for the 5-year, yearly, and monthly analyses. The bias and rmsd for the 5-year accumulation before removal of the pairs identified in Figure 9 are higher, and the correlation is lower, as we expect. After removal of these pairs, the comparison of the radar, rain gauge pairs is essentially unbiased at the monthly, yearly, and 5-year timescales; the rmsd decreases and the correlation increases. Further, the comparison of the temporally integrated radar, rain gauge pairs shows an increase in the correlation. The slight decrease in the correlation of the yearly comparison relative to the monthly comparison is due to the smaller sample size of the yearly radar, rain gauge pairs and to the fact that we use the Pearson product-moment correlation coefficient to estimate the population coefficient, which has the limitation that it is influenced by outliers and skewed distributions [Habib et al., 2001].
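For reference, the evaluation statistics in Table 4 can be computed from matched accumulations as in the short Python sketch below; the function name and the synthetic example values are ours, for illustration only.

```python
import numpy as np


def comparison_stats(Rr: np.ndarray, Rg: np.ndarray) -> dict:
    """Rr: radar-pixel accumulations (mm); Rg: collocated gauge accumulations (mm)."""
    bias = Rr.sum() / Rg.sum()                      # multiplicative bias
    rmsd = np.sqrt(np.mean((Rr - Rg) ** 2))         # root mean squared difference
    rel_rmsd = rmsd / Rg.mean()                     # relative rmsd (rmsd / mu_g)
    corr = np.corrcoef(Rr, Rg)[0, 1]                # Pearson correlation
    return {"bias": bias, "rmsd": rmsd, "relative_rmsd": rel_rmsd,
            "correlation": corr, "range_Rg": (Rg.min(), Rg.max())}


# Example with synthetic warm-season accumulations at 100 stations.
rng = np.random.default_rng(2)
Rg = rng.uniform(200, 1200, size=100)
Rr = Rg * rng.normal(1.0, 0.1, size=100)
print(comparison_stats(Rr, Rg))
```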

Table 4. Bias (ΣRr/ΣRg), Range of Gauge Values (Rg), Root Mean Squared Difference (rmsd), Relative rmsd (rmsd/μg), and Correlation for Rain Gauge, Radar-Rainfall Pixel Pairs for Increasing Temporal Scales
Data set                 Bias    Range Rg, mm   RMSD, mm   Relative RMSD   Correlation
5-Year Accum. (a)        1.020   1432–5021      575.3      0.18            0.79
5-Year Accum. (b)        0.994   1746–5021      250.7      0.08            0.94
Yearly Accum. (ws) (c)   0.994   183–1233       97.2       0.15            0.84
Monthly Accum. (ws) (d)  0.994   0–570          30.6       0.33            0.86

a 5-year accumulation from the warm season (April–October) before the quality control analysis of Table 3.
b 5-year accumulation from the warm season after the quality control analysis.
c Yearly accumulations from the warm season after the quality control analysis.
d Monthly accumulations from the warm season after the quality control analysis.

[32] The comparison of the radar-based product and the rain gauge estimates shows good agreement on a basin-wide basis after removal of the radar, rain gauge pairs identified in Figure 9 and Table 3. The comparison of the 4 × 4 km2 precipitation product with the rain gauge estimates provides an initial look at many locations in different climate zones. By selecting first order gauges that fall in the MRB and have a continuous record for the 5-year period, and applying no other selection criteria, we effectively compare locations at random. Initially, the comparisons are good, but sources of error may still exist. Random effects can still be prevalent in the radar-based product, instances of poor radar data quality may remain, and rain gauges have errors associated with gauge undercatch due to wind, measurement of rain versus snow, and uncertainty in gauge location (e.g., the locations of the TD3200 gauges are given only to the nearest 1 minute of latitude and longitude).

5. Conclusions

[33] We have developed a large precipitation data set for the MRB. The data set is produced at 4 × 4 km2 spatial and hourly temporal resolution for a 5-year period, and it should serve as an important resource for hydroclimatological analyses in the MRB. We developed the precipitation estimates through an algorithm that starts from composite maps of radar-reflectivity derived from the WSR-88D sites in the CONUS. Our algorithm improves the quality of the input radar-reflectivity product through a reflectivity adjustment method and a small window averaging. We then determined the parameters of the Z-R relation through an optimization technique comparing rain gauge data and radar-reflectivity estimates. Browsing the data showed the need for a quality control check to flag areas that may still be of questionable data quality, and finally, we provide estimates of precipitation at 4 × 4 km2 spatial and hourly temporal resolution.

[34] The development of this data set required management of large amounts of data and the use of specific tools for visualizing and storing data. We identified and presented these issues in Nelson et al. [2003]. In this paper we have presented the algorithm used to develop the precipitation estimates, but we also want to suggest further developments in hydroclimatological analysis using weather radar observations. A higher quality input data set is essential. More information is needed to identify areas of suspect data quality. In particular, AP is a problem in the input radar-reflectivity data, and information from the volume scan of the radar would aid in determining areas of AP. Incorporating the volume scan data from all of the radars in the U.S. would be a daunting task; however, projects such as the Collaborative Radar Acquisition Field Test (CRAFT) could be used to test the possibility of using the volume scan data for the development of a future national precipitation data set. With almost 60 radars in the CONUS disseminating data via the Unidata Local Data Manager, the opportunity to obtain the volume scan (Level II) data and create a precipitation product is only now becoming a possibility.

Acknowledgments

[35] Thanks are due to Dr. Gerrit Hoogenboom of The University of Georgia for providing us with the Georgia Automated Environmental Monitoring Network data. We also appreciate the thoughtful comments of Dr. Grzegorz Ciach and three anonymous reviewers. We gratefully acknowledge the support of NOAA Office of Global Programs through grant NA96GP0417.
