
Keywords:

  • human-automation interaction;
  • forecaster judgement;
  • weather radar;
  • severe weather;
  • warning decision;
  • confidence

Abstract


Experimental weather radars are being developed that could enhance the severe weather warning process by providing higher resolution data sensed closer to the ground and with faster update rates. Because wind speed is an important criterion in the issuance of severe thunderstorm warnings, this research investigates the impact of adding these new data to the forecaster decision-making process. In a static case review setting, 30 National Weather Service (NWS) forecasters evaluated six convective weather cases under two conditions: (1) using (conventional) WSR-88D weather radar data, and, (2) using both WSR-88D and additional data from an experimental four-radar network. Forecasters' predictions of ground level wind gusts, 2–5 min into the future, were compared to measurements from ground-based wind sensors. When provided with the additional radar data participants significantly improved the accuracy of their wind speed assessments (absolute error reduced from 5.9 m s−1 to 4.0 m s−1; p < 0.001), increased their assessment confidence ratings (p < 0.001), forecasted significantly greater wind speeds (20.4 m s−1 as opposed to 17.1 m s−1; p < 0.001), and increased the number of affirmative decisions to warn from 15 to 35 (p = 0.001). While the addition of high resolution, low altitude, rapidly updating radar data is shown to have both qualitative and quantitative benefits, training and warning policy implications for the incorporation of new technology must also be carefully considered as increased accuracy, confidence and higher wind speed estimates may lead to more warnings. Copyright © 2011 Royal Meteorological Society


1. Introduction


Severe weather such as hail, high winds, flooding and tornadoes threatens lives and property nearly every day. In 2008 alone, hail caused $464 million in property damage, high winds from thunderstorms caused 28 fatalities and $1.26 million in property damage, and flash floods caused 58 fatalities and $1267 million in property damage (NOAA, 2009). Losses from weather hazards can be reduced if the public is given sufficient warning to take protective action.

Forecasters use remote sensing technologies including radar, satellite and ground sensors to assess and predict hazards, and then to issue and cancel related weather warnings. In the United States, the National Weather Service (NWS) operates 159 Doppler weather radars called Weather Surveillance Radar 1988 Doppler or WSR-88D (Klazura and Imy, 1993) in order to supply data including reflectivity (which relates to precipitation rate) and velocity (which indicates radial wind speed). When operating in a severe storm environment, WSR-88D radars generally perform a complete multiple vertical tilt scan every 4–5 min with a spatial resolution of 1–4 km (Klazura and Imy, 1993). These radars generate both reflectivity and velocity products out to a 230 km radius in range but have more sparse coverage below 2 km (above ground level or AGL (Maddox et al., 2002)).

NWS forecasters make weather hazard assessments and warning decisions using weather products and procedures that help them to maintain a ‘big picture’ awareness, to build conceptual models and to update them with small scale details from radar product interpretation (Andra et al., 2002). Forecasters primarily rely on WSR-88D radar products for real-time weather hazard assessment (Quoetone and Huckabee, 1995; Andra et al., 2002). For example, a forecaster can determine whether a storm is severe based solely on radar products. In the United States, a storm is considered severe when at least one of three conditions is met: surface wind gusts exceed 25.7 m s−1 (50 kt) (determined via interpreting and integrating velocity data), hail exceeds 1.9 cm (3/4 in.) diameter (determined via interpreting and integrating reflectivity data), or tornado production (determined via interpreting and integrating reflectivity and velocity data plus developing a mental picture of storm structure and evolution) (Galway, 1989).
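The three radar-inferable severe criteria reduce to a simple predicate. The sketch below is illustrative only (the function and constant names are ours, not an NWS product), applying the thresholds quoted above:

```python
def is_severe(wind_gust_ms: float, hail_cm: float, tornado: bool) -> bool:
    """Return True when a storm meets at least one US severe criterion.

    Thresholds from the text: surface wind gusts exceeding 25.7 m/s (50 kt),
    hail exceeding 1.9 cm (3/4 in.) diameter, or tornado production.
    """
    SEVERE_WIND_MS = 25.7  # 50 kt
    SEVERE_HAIL_CM = 1.9   # 3/4 in.
    return wind_gust_ms > SEVERE_WIND_MS or hail_cm > SEVERE_HAIL_CM or tornado

print(is_severe(20.4, 0.0, False))  # False: gust below the 50 kt criterion
print(is_severe(26.0, 0.0, False))  # True: gust exceeds 50 kt
```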

Ideally, forecasters use conceptual models to identify precursors in the radar data and so provide proactive warnings. These conceptual models, along with past experience and knowledge of storm physics, allow a forecaster to project winds ‘seen’ in radar data down to the surface and into the near future. Using radar to assess severe surface winds can be challenging because of inherent limitations in data availability and precision. Due to sampling methods and radar spacing, the data are not available uniformly in space and, in some cases, not at all. Radar beams travel in straight lines, so the curvature of the Earth limits coverage to the region above each radar's horizon. Because radar beams are pointed at angles (tilts) above the horizon, the atmosphere low to the ground and far from the radar is not sampled. The radar beam also spreads as it travels, resulting in lower spatial resolution with increasing distance from the radar. With respect to velocity, Doppler radars can only detect wind speed from the motion of water droplets or other airborne objects along the radar beam. Thus, velocity data show radial wind speed, which depends on the wind-to-beam intersection angle: radial wind speed is negative (towards) or positive (away) relative to the radar, while winds travelling perpendicular to the radar beam are not detected.
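These geometric limitations can be made concrete with the standard 4/3-effective-Earth-radius beam-height model and a cosine projection for radial velocity. The following sketch is an illustration under that standard-refraction assumption (function names are ours):

```python
import math

EFFECTIVE_EARTH_RADIUS_KM = (4.0 / 3.0) * 6371.0  # standard refraction model

def beam_height_km(range_km: float, tilt_deg: float) -> float:
    """Height of the beam centre above the radar (km) at a given range,
    using the 4/3-Earth-radius model for standard atmospheric refraction."""
    ae = EFFECTIVE_EARTH_RADIUS_KM
    theta = math.radians(tilt_deg)
    return math.sqrt(range_km**2 + ae**2 + 2 * range_km * ae * math.sin(theta)) - ae

def radial_speed(wind_speed: float, wind_to_beam_angle_deg: float) -> float:
    """Component of the wind along the radar beam: full speed at 0 degrees,
    zero for winds perpendicular (90 degrees) to the beam."""
    return wind_speed * math.cos(math.radians(wind_to_beam_angle_deg))

# At the WSR-88D 230 km maximum range, even the lowest (0.5 degree) tilt
# overshoots the lowest ~2 km of the atmosphere:
print(round(beam_height_km(230.0, 0.5), 1))  # ~5 km above the radar
# A wind blowing perpendicular to the beam is invisible to Doppler sensing:
print(round(abs(radial_speed(25.0, 90.0)), 6))  # 0.0
```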

Technological advances, such as the introduction of WSR-88D itself, have already made positive impacts on the probability of detection (POD), the false alarm rate (FAR), the critical success index (CSI), and on lead times (Polger et al., 1994; Bieringer and Ray, 1996). Outcome measures such as these are also influenced by procedures and definitions within the NWS. For example, severe thunderstorm warnings, which may cover multiple county areas, may be verified as accurate by a single point report of hail or wind anywhere within the warning area. Also, verification reports coming from the public are more likely to occur in heavily populated areas. Therefore, it can be a challenge to determine whether a warning is accurate.

New approaches to radar design and deployment, and new data dissemination techniques, could enhance the warning process by providing more precise data that are also more accurate. For example, increases in processing power should allow for more effective signal processing, which can create higher quality data with lower cost transmitters. The recently deployed WSR-88D ‘Super-Resolution’ upgrade provides a two- to four-fold improvement in output resolution without a change in transmitter (National Weather Service Radar Operations Center, 2009). Smaller antenna designs and low cost transmitters can allow multiple radar nodes to overlap coverage of an area, thereby helping to fill gaps in coverage and determine true wind velocities (McLaughlin et al., 2009). Also, phased-array antenna technology creates electronically directed beams with few or no moving parts, allowing for faster scanning and therefore higher data update rates (Heinselman et al., 2008).

While such advances have the potential to improve the weather hazard assessment and warning process, their exact impacts should be quantified in order to influence training, decision support tool design, normative decision making processes and procedures as well as policy. With respect to resolution, Brown et al. (2005) indicate that radars with greater spatial resolution will report radial wind velocities with greater (absolute) magnitudes for the same volume of the atmosphere and will depict severe storm signatures more clearly than their lower resolution counterparts. With respect to lower troposphere observations, new weather features, such as misocyclones, downbursts, or rear inflow jets can be observed (Bluestein et al., 2007; Brotzge et al., 2010). This combination of changes in sampling may lead to higher forecaster wind speed assessments overall and differences in the number of wind-related warnings.

No systematically designed studies have determined the impact of lower troposphere observations on NWS forecaster decision-making, let alone provided detailed analyses of how specific weather features that develop and form in the lower atmosphere would impact the warning process. Thus, there is a need for radar system analyses that investigate the quantitative impact of improved design features, such as spatial resolution and lower troposphere observations, on forecaster decision making (Heideman et al., 1993; Doswell, 2004).

To evaluate the impact on the forecaster decision making process, quantitative outcome and process measures should be considered. For example, to evaluate hazard assessments, judgements can be compared to ground truth where available to determine accuracy. Qualitative measures, such as confidence (Murphy and Winkler, 1984; Nadav-Greenberg and Joslyn, 2009) can also provide insight into how data are affecting a forecaster's decision process. While accuracy is a straightforward measure, previous studies have shown that the additional information does not always increase skill (Stewart et al., 1992; Heideman et al., 1993). Also, many researchers have demonstrated overconfidence in self ratings (Oskamp, 1965; Einhorn and Hogarth, 1978; Fischhoff and MacGregor, 1982) but this has been shown to be a very complex issue (Klayman et al., 1999) and there is some evidence that forecasters are a group of experts who are better calibrated than most (Murphy and Winkler, 1977).

The Engineering Research Center for the Collaborative Adaptive Sensing of the Atmosphere (CASA) is creating a new paradigm for radar systems based on dense networks of low-cost Doppler radars (McLaughlin et al., 2009). CASA radars are designed with a shorter range than WSR-88D (40 vs 230 km) and they can be deployed with overlapping regions of coverage (30 km spacing). When compared to WSR-88D, these technological changes result in increased spatial resolution (median 0.5 vs 2.5 km), increased temporal resolution (update rates of 60 s vs 4–5 min), and more complete coverage at lower elevations (100% coverage below 1 km AGL vs 35% (McLaughlin et al., 2009)). To address challenges related to velocity determination, radars can be deployed closer together, thereby creating conditions where multiple radars can scan the same portion of the atmosphere. In addition to the physical design and layout, a network of CASA radars automatically detects weather features, generates scanning priorities, and allocates sensor resources across the coverage domain (Pepyne et al., 2008; Zink et al., 2008). The dense network of sensors concept from CASA increases the opportunity for a variety of wind-to-beam intersection angles, further improving velocity detections.

CASA is currently operating a four node radar testbed in southwest Oklahoma (McLaughlin et al., 2009) (Figure 1). The cursor readouts of Figure 2 illustrate the increase in resolution and decrease in height coverage: 0.5 Kft (152 m) and 53.14 kt (27.3 m s−1) as opposed to 5.9 Kft (1.8 km) and 20.41 kt (10.5 m s−1) respectively. The effective view of Figure 2 is shown by a small rectangle, ∼1.1 km across, in Figure 1. Improvements in data fidelity alone are expected to improve performance (Stewart and Lusk, 1994) and by design, data from this testbed can be described as ‘more relevant’ and ‘high quality’ (due to filling the current sensing gap), attributes which are predicted to increase accuracy and reliability (or consistency) in forecasts (Stewart, 2001). Also, the data contain additional cues, such as very small-scale rotations and strong low-level winds (Brotzge et al., 2010), that are important to severe thunderstorm or tornado warning decisions.


Figure 1. A 250 km wide map of southwest Oklahoma with county borders. Mesonet stations labelled in lower case with small square markers. Radar sites labelled in upper case. Grey shading indicates urban areas including Norman (near koun) and Oklahoma City (near kokc). A small rectangle approximates the viewing window used in Figure 2. CASA radar 40 km range rings also shown



Figure 2. Radial velocity data from CASA KSAO (2° tilt) view (a) and from WSR-88D KFDR (0.50° tilt) view (b) for scenario 5. NINN and CHIC markers are OK Mesonet ground based sensors


As no studies have measured the impact of such gap filling radar on NWS forecaster severe storm warnings, this study focuses on one weather hazard: high winds. Other studies have investigated some aspects of performance or the impact of new data. However, they have not systematically controlled information sources and measured process and outcome measures of experienced practitioners. Doswell (2004) goes so far as to say, ‘To date, the process of weather forecasting by humans has not been subjected to a thorough and comprehensive study’. The present study measures the impact of the addition of CASA radar data (with its greater temporal and spatial resolution and lower troposphere coverage) on forecasters' assessment of near future winds (on the order of minutes) and related warning decisions. This is an extension of a pilot study (Rude et al., 2009) to include a total of 30 NWS forecasters. In a static part-task setting using a case review paradigm, impacts are measured via forecaster accuracy of predictions of ground level wind gusts, magnitude of these wind assessments, forecaster confidence, and the number of warning decisions.

Based on subject matter expert interviews, both operational and experimental observations of forecasters, and a review of NWS training materials, warning decisions are based on both assessment information (such as visual signatures, current understanding of storm structure, and expected trajectory) and forecaster confidence. When CASA data are provided, this research hypothesizes that surface wind speed assessments will be higher, assessment error will be lower, and forecaster confidence will be higher. This research also hypothesizes that the higher wind speed assessments paired with increased confidence will lead to more affirmative decisions to issue warnings.

2. Methods


2.1. Participants

A convenience sample of 30 operational NWS forecasters (25 male, 5 female) with experience ranging from 3.5 to 34 years (M = 14.0, SD = 7.0) participated in the experiment. Sixteen participated at the 33rd Annual meeting of the National Weather Association (NWA) in Louisville, KY, USA, 13–15 October 2008. Fourteen participated as part of NOAA's Spring 2009 Experimental Warning Project (Stumpf et al., 2008) at the Hazardous Weather Testbed (HWT) in Norman, OK between 4 May and 5 June 2009.

2.2. Apparatus and materials

2.2.1. Weather scenario selection

The experimental task was to analyse radar data from archived weather events and to assess the near-future maximum wind speed at the ground. Weather radar data were selected from among 17 thunderstorm days archived during CASA operations in spring 2008 (Philips et al., 2008) on which high winds were detected by ground-based sensors. The NWS issued severe thunderstorm warnings in or near the CASA testbed on all 17 of these days. The Oklahoma Mesonet, a network of ground-based sensors, provides data from a variety of sensors every 5 min. While mesonet coverage (one or two stations per county) is much denser than that of NWS ground stations, only 10 stations (ACME, APAC, CHIC, FTCB, KETC, MEDI, MINC, WALT, and WASH) lie within range of the CASA radars. The station variable for wind gust measurements is WMAX, defined as ‘the maximum (or peak) 3 s wind speed observed during a 5 min interval at a height of 10 m above ground’ (University of Oklahoma Board of Regents, 2009).

In order to support forecaster understanding of the large scale weather environment, synoptic scale data and candidate scenarios were selected from the same day. Synoptic scale weather data are the broader context within which localized storms develop. Items in the packet included soundings, pressure and temperature maps, wind profiles, satellite and surface observations. On 7 May 2008 a convective cold pool generated discrete but similar thunderstorms which then produced straight line winds in the 10–26 m s−1 (20–51 kt) range.

For each candidate thunderstorm, a 12–15 min scenario time window was created such that: (1) one CASA radar with 1 min updates and one WSR-88D with three 5 min updates at the lowest (0.5°) tilt were available, and, (2) the final WSR-88D and CASA updates occurred 2–5 min before the station reading.

A single CASA radar was used for each scenario. In areas of overlapping coverage the CASA radar closest to the target was used, except in scenario 3 where KLWE was selected to reduce the number of radar changes between scenarios (e.g. KLWE was used in both the second and third scenarios). The Frederick, OK WSR-88D (KFDR) was used in all trials so that the WSR-88D radial velocity data interpretation was consistent across trials.

In the six scenarios, WSR-88D Level-II data from KFDR were available at 14 standard tilts in the 0.5–19.5° range (a storm mode called ‘VCP 12’). CASA data from radar nodes (KCYR, KLWE, KRSP or KSAO, depending on the scenario) were available at seven tilts: 1°, 2°, 3°, 5°, 9°, 11°, 14°. All CASA radars provide a full (360° azimuth) scan at the 2° tilt, while other tilts were partial sectors dynamically configured by automation (Philips et al., 2008). These CASA sources, on average, provided three to five times more data samples per square kilometre than KFDR for each target point. CASA sources provided even higher sampling for the fourth and fifth scenarios. Minimum height coverage for CASA sources was 0.9–1.9 km lower than WSR-88D, except for the second scenario which had very similar coverage.

The target mesonet readings are presented in the second column of Table I. A simple heuristic was used to estimate possible participant performance. Under the assumption that the lowest and most recent data closest to the area of interest could be simply ‘read off’ the radar display, the ‘max on display’ number was recorded. The six selected scenarios yielded a lower average ‘max on display’ value for the CASA data (14.1 m s−1 in Table I, column 7) as compared to WSR-88D (16.6 m s−1 in Table I, column 4). In general, the CASA data were further from the ground truth than WSR-88D (see Table I column 8 where the sum of absolute values of the differences for all weather scenarios was 48.4 m s−1 for the CASA data source while column 5 shows that the sum was only 27.4 m s−1 for KFDR).

Table I. Weather scenario sensor data summary

| Scenario number | Ground truth (m s−1) | WSR-88D min. height at target (km) | WSR-88D max. on display (m s−1) | WSR-88D abs. diff. to ground truth (m s−1) | CASA min. height at target (km) | CASA max. on display (m s−1) | CASA abs. diff. to ground truth (m s−1) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 16.5 | 1.74 | 21.1 | 4.6 | 0.67 | 9.3 | 7.3 |
| 2 | 11.6 | 0.67 | 6.7 | 4.9 | 0.58 | 16.5 | 4.9 |
| 3 | 22.0 | 1.46 | 20.6 | 1.4 | 0.58 | 20.6 | 1.4 |
| 4 | 17.8 | 1.80 | 13.4 | 4.4 | 0.12 | 4.6 | 13.2 |
| 5 | 24.0 | 1.95 | 19.0 | 5.0 | 0.06 | 26.8 | 2.7 |
| 6 | 26.1 | 2.62 | 19.0 | 7.0 | 0.79 | 7.2a | 18.9 |
| Mean | 19.7 | 1.7 | 16.6 | 4.6 | 0.5 | 14.1 | 8.1 |
| Sum | | | | 27.4 | | | 48.4 |

‘Max. on display’ is the maximum value shown within 1.9 km of the target. a Max on display within 5 km from target in this case due to missing data.
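The column statistics of Table I can be checked directly from the tabulated per-scenario values. The snippet below (our own illustration, not the authors' analysis code) reproduces the means and sums; individual rows can differ from the table by 0.1 m s−1 because the published cells are rounded from unrounded data.

```python
# Per-scenario values from Table I:
# (ground truth, WSR-88D max on display, CASA max on display), all in m/s
scenarios = [
    (16.5, 21.1,  9.3),
    (11.6,  6.7, 16.5),
    (22.0, 20.6, 20.6),
    (17.8, 13.4,  4.6),
    (24.0, 19.0, 26.8),
    (26.1, 19.0,  7.2),
]

truth = [s[0] for s in scenarios]
wsr   = [s[1] for s in scenarios]
casa  = [s[2] for s in scenarios]

mean = lambda xs: sum(xs) / len(xs)
wsr_abs_diff  = [abs(w - t) for w, t in zip(wsr, truth)]
casa_abs_diff = [abs(c - t) for c, t in zip(casa, truth)]

print(round(mean(wsr), 1))            # 16.6 (WSR-88D mean 'max on display')
print(round(sum(wsr_abs_diff), 1))    # 27.4 (WSR-88D column sum)
print(round(sum(casa_abs_diff), 1))   # 48.4 (CASA column sum)
print(round(mean(truth), 1))          # 19.7 (ground-truth mean)
```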

Subject matter experts (SMEs) were consulted to interpret and evaluate the scenarios and provide their detailed understanding. SMEs had extensive experience with the local climatology as well as the interpretation of CASA radar data, having done previous case analyses with the authors. SMEs analysed the same weather scenarios first with WSR-88D only and then again with both CASA and WSR-88D. These were the same data sources as used by study participants. Additional details, including SME feedback, are presented with the experiment results.

2.2.2. Display and data collection software

As the current NWS operational forecasting decision support suite (AWIPS (Raytheon Company, 2009)) was unable to display archived CASA data, WDSS-II display software (Lakshmanan et al., 2007) was used to render CASA and WSR-88D data in a case-review mode, i.e. no simulation clock. Reflectivity and radial velocity data from each radar were provided with matching time windows for each scenario. No other forecast or sensor data were provided.

WDSS-II renders data (polar or Cartesian co-ordinate systems) into an Earth-centric (spherical co-ordinate) view window. The view window was in a ‘plan view’ (overhead) orientation; side-view and virtual cross-section functionality was not used. Each window panel displayed multiple weather products from a single radar over geo-political map backgrounds. When multiple panels were in use (e.g. one for a CASA radar and one for WSR-88D), they were synchronized with respect to the virtual camera view and product time-steps. Synchronization used the top-left-most panel as the set-point for all other panels, and any secondary panel could be exchanged with the ‘main’ panel via a context menu (right mouse click). Other pertinent features include cursor point sampling of data, which simplifies the identification of specific data values, and the continuous display of data source names and time stamps as a text overlay.

The WDSS-II display window was maximized with the control widgets hidden providing approximately 994 cm2 of display surface for radar data (Figure 2). The WDSS-II default colour tables were adjusted to ‘black out’ velocity data in the ± 2.6 m s−1 ( ± 5 kt) range, an ambiguous range for CASA sources in this data set. This custom colour table was used for both CASA and WSR-88D radial velocity products. Desktop visuals, mouse movements, and audio recordings were captured by ‘recordMyDesktop’ (Varouhakis and Nordholts, 2008), an open source software package.

Custom software was created to generate all required WDSS-II data indices and configuration files and to automate the experimental procedure. WDSS-II requires an XML index file which lists all product types, data files, and time-date stamps available from a data source. Scripts processed the data source directory tree on disk (for all radar sources) and generated the XML index limited by a configurable date range for each weather scenario. Shell scripts initialized each experimental task by selecting a display configuration file and the data source indices appropriate to each weather scenario. Next, the scripts started the desktop and audio recording package immediately prior to launching the WDSS-II radar display.
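The index-generation step described above can be sketched as follows. This is a schematic illustration only: the file-naming convention and XML element names below are invented for the example and do not reproduce the actual WDSS-II index schema.

```python
import os
import xml.etree.ElementTree as ET
from datetime import datetime

def parse_name(filename):
    """Parse PRODUCT_YYYYmmdd-HHMMSS.* file names (an invented convention
    for this sketch) into (product, timestamp); None if it does not match."""
    stem, _, _ = filename.partition(".")
    product, sep, stamp = stem.rpartition("_")
    if not sep:
        return None
    try:
        return product, datetime.strptime(stamp, "%Y%m%d-%H%M%S")
    except ValueError:
        return None

def build_index(data_dir, start, end):
    """Walk a radar data directory tree and emit an XML index of the
    products whose timestamps fall within a scenario's date range."""
    root = ET.Element("index")
    for dirpath, _, filenames in os.walk(data_dir):
        for name in sorted(filenames):
            parsed = parse_name(name)
            if parsed and start <= parsed[1] <= end:
                item = ET.SubElement(root, "item",
                                     product=parsed[0],
                                     time=parsed[1].isoformat())
                item.text = os.path.join(dirpath, name)
    return root
```

A per-scenario shell wrapper would then select the generated index and a display configuration before starting the recording package and launching WDSS-II, as described above.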

2.2.3. Workstation

Five identical workstations were placed in a dedicated room at the NWA annual meeting hotel. Each HP® brand desktop workstation ran Ubuntu® Linux® 64-bit on an Intel® Core 2 Duo 2.0 GHz CPU with 3 GB RAM and an 80 GB hard drive. An NVIDIA® GeForce® 8400-GS based video card was used for OpenGL® acceleration with a common 19″ (48 cm) LCD monitor running at 1280 × 1024 pixel resolution. In addition to a standard mouse and keyboard, each workstation was equipped with a small microphone for audio recording. Similar NWS computer hardware was used at the HWT in Norman; however, desktop and audio recordings were not collected. Instead, the proctor wrote down the participants' verbal comments.

2.3. Independent variables

2.3.1. Weather scenario

There were six weather scenarios (Table I).

2.3.2. Data source

Data source indicates if radar data were supplied from ‘WSR-88D only’ (W) or ‘CASA & WSR-88D’ (C) in a trial.

2.3.3. Task set

Participants experienced the six scenarios in their natural (time ordered) sequence. To address potential issues with meteorological differences between the scenarios, two sets of trials, called task sets, were created for counterbalancing purposes. If the trial in one task set included one data source, the same trial in the other task set included the other. In each task set, the data source alternated from trial to trial. Therefore, task sets were either (C,W,C,W,C,W) for those with both CASA and WSR-88D first or (W,C,W,C,W,C) for those with only WSR-88D first.
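The counterbalancing scheme amounts to alternating the data source across the six time-ordered scenarios, with the two task sets in antiphase. A minimal sketch, using the C/W labels defined above:

```python
def task_set(casa_first: bool, n_trials: int = 6) -> list:
    """Alternate data sources across trials: 'C' = CASA & WSR-88D,
    'W' = WSR-88D only. The two task sets are complementary, so each
    scenario is seen under both conditions across the participant pool."""
    sources = ("C", "W") if casa_first else ("W", "C")
    return [sources[i % 2] for i in range(n_trials)]

print(task_set(True))   # ['C', 'W', 'C', 'W', 'C', 'W']
print(task_set(False))  # ['W', 'C', 'W', 'C', 'W', 'C']
```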

2.4. Dependent variables

2.4.1. Absolute wind speed assessment error

The error is the absolute value of the difference between the wind speed assessment and the automated ground sensor reading rounded to the nearest knot.

2.4.2. Assessment confidence

After providing their wind speed assessment, participants were asked ‘how confident are you in this estimate’ on a scale from 1-‘Not confident’ to 7-‘Very confident’. Responses marked between numbers on the scale were recorded as the lower integer.

2.4.3. Wind speed assessment

Wind speed assessment is the ground level wind gust speed forecasted for the target location by the participant to the nearest one knot. Most participants responded with a single integer value. When participants provided a range, the response was recoded as the mean of the range (e.g. 45–50 kt was recoded as 48 kt).

2.4.4. Warning decision

After providing their confidence rating participants were asked, ‘Do these radar based winds indicate a warning is needed?’ and a ‘Yes’ or ‘No’ response was recorded. Responses that did not include the exact term ‘Yes’ or ‘No’ were interpreted and recoded as ‘Yes’, ‘No’, or ‘Missing or ambiguous’.
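The recoding rules for the dependent variables above can be captured in a few lines. This sketch is ours (not the authors' analysis code) and simply applies the rules as stated:

```python
import math

def abs_error_kt(assessment_kt: float, ground_truth_kt: float) -> int:
    """Absolute wind speed assessment error, rounded to the nearest knot
    (half-up rounding via int(x + 0.5))."""
    return int(abs(assessment_kt - ground_truth_kt) + 0.5)

def recode_range_kt(low_kt: float, high_kt: float) -> int:
    """A range response is recoded as its mean, to the nearest knot
    (so 45-50 kt becomes 48 kt, as in the example above)."""
    return int((low_kt + high_kt) / 2.0 + 0.5)

def recode_confidence(mark: float) -> int:
    """Marks between scale points are recorded as the lower integer."""
    return math.floor(mark)

print(recode_range_kt(45, 50))  # 48
print(recode_confidence(4.5))   # 4
```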

2.5. Procedure

Each experimental session lasted approximately 90 min. Participants first received background information about the CASA organization and the four-node radar testbed: a brief lecture, including slides, followed by a demonstration of CASA and WSR-88D data. This provided an introduction to what data were provided by the radar and how they might appear in the display. After reading and signing the informed consent, participants then completed a demographics questionnaire.

Prior to working with data on the workstation, participants were given a packet of printed synoptic scale weather products to provide appropriate background information. These products were examined independently for several minutes, until participants felt they understood the synoptic situation. The participants were provided per-task procedures with associated questions. The per-task procedures began with a step-by-step guide to selecting, navigating and viewing data using the WDSS-II display. Operating the WDSS-II radar display tool required operations such as panning and zooming the data sources, using cursor value readouts, switching between radar sources or products, and stepping through the time-stamped products. Participants practiced with the display system for as long as they wanted.

To learn about the task, the radar display and the human-computer interface, each participant used a document containing a step-by-step guide to complete the same training weather scenario which provided both data sources. Each used the mouse or keyboard commands to view various reflectivity and radial velocity data time-stamped products at different tilts. Each participant wrote down a wind speed assessment, confidence rating and warning decision. The proctor then provided the relevant ground-truth reading as feedback.

After the training, each participant began the six experimental trials without guided instructions. Participants were asked to limit their radar interrogation to ‘about 12 min’ in order to represent the pressure of real-time events. However, no time limits were enforced due to procedure complexities. Participants interrogated the data and then wrote down their wind assessment, confidence rating, and warning decision. After completing a trial, they were shown the ground truth (measured wind speed) by the proctor.

2.6. Experimental design and data analysis

This study was a repeated-measures design with task set as the between-subjects factor and data source as the within-subjects factor. Fifteen participants were assigned to each task set group. Each participant completed three replicates (three different weather scenarios) under each of the two data source (within-subjects) conditions. All participants completed the entire experiment, yielding 180 completed trials.

Data source and task set effects on the wind speed assessment and the absolute wind speed assessment error were analysed using a repeated measures analysis of variance. Data source and task set effects on confidence were analysed using Friedman tests. Data source effects on warning decisions were analysed using a Pearson's chi-square test. Differences within individual weather scenarios were analysed using t-tests for wind speed assessment and absolute wind speed assessment error (degrees of freedom adjusted as necessary based on Levene's test for equal variance) and Pearson's chi-square tests for confidence ratings and warning decisions.

3. Results


Results are reported as significant for α = 0.05 and trending significant for α = 0.10. In total, there were 180 wind speed assessments ranging from 2.6 to 31.9 m s−1 (M = 18.7, SD = 5.9). On average, assessments were underestimates (M = − 1.1 m s−1, SD = 6.1). The absolute value of the difference between the 180 wind assessments and ground truth resulted in errors ranging from 0.0 to 16.5 m s−1 (M = 4.9, SD = 3.7). There were only five occurrences of zero measurable error, all of which occurred when participants were given both CASA and WSR-88D data. Confidence ratings ranged from 1-‘Not confident’ to 7-‘Very confident’ (Mode = 5, count = 58). However, five responses were missing, resulting in 175 ratings. One warning decision response was ambiguous and was removed, resulting in 179 responses: 129 ‘No’ and 50 ‘Yes’.

3.1. Absolute wind speed assessment error

The effect of data source on assessment error is significant (F1, 56 = 14.7, p < 0.001). Mean assessment error with both data sources was 4.0 m s−1 whereas WSR-88D only was 5.9 m s−1 (Figure 3). When given WSR-88D alone the mean response was an underestimate (M = − 2.7 m s−1, SD = 6.5) but when given both data sources the mean response was a slight overestimate (M = 0.6 m s−1, SD = 5.1). The main effect of task set was not significant. The data source-task set interaction was significant (F1, 56 = 9.7, p = 0.003). This interaction shows little change for the WSR-88D-first task set across data sources (5.6 m s−1 for WSR-88D only and 5.2 m s−1 for scenarios with both sources), and a large change across data sources for the CASA-first task set (6.2 m s−1 for the WSR-88D data source and 2.6 m s−1 for scenarios with both sources). The lowest error occurs in the CASA-first task set with both data sources available.


Figure 3. Mean and 95% confidence interval for absolute wind speed assessment error (m s−1) as a function of data source


3.2. Assessment confidence

For the 175 responses, the mode was 5 and the median was 4. Assessment confidence varied significantly between data sources (χ2 = 14.5, N = 86, df = 1, p < 0.001). The mode with CASA and WSR-88D data sources was 5 whereas WSR-88D only was 4 (Figure 4). Assessment confidence did not vary significantly between task sets.


Figure 4. Histogram of confidence (a) with both data sources, (b) with WSR-88D only


3.3. Wind speed assessment

The main effect of the data source on wind speed assessments was significant (F1, 56 = 19.8, p < 0.001). Mean wind speed assessment with both data sources was 20.4 m s−1 whereas WSR-88D only was 17.1 m s−1 (Figure 5).

Figure 5. Mean and 95% confidence interval for wind speed assessment (m s−1) as a function of data source

The main effect of task set was not significant. The data source-task set interaction was significant (F1, 56 = 14.1, p < 0.001): assessments changed little across data sources for the WSR-88D-first task set (18.0 m s−1 with WSR-88D only and 18.5 m s−1 with both) but changed markedly for the CASA-first task set (16.1 m s−1 with WSR-88D only and 25.4 m s−1 with both sources). The highest wind speed assessments occurred in the CASA-first task set with both data sources available.

3.4. Warning decision

Data source had a significant effect on the proportion of Yes/No warning decisions (χ2 = 11.4, N = 179, df = 1, p = 0.001): with both CASA and WSR-88D data, 35 of 89 responses were 'Yes', whereas with WSR-88D only, 15 of 90 were 'Yes' (Figure 6).
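
The reported statistic can be reproduced from these counts with a standard Pearson chi-square on the 2 × 2 contingency table; the hand-rolled function below is a sketch for illustration, not the software used in the study:

```python
# Pearson chi-square (no continuity correction) for a 2x2 table:
# sum over cells of (observed - expected)^2 / expected, where the expected
# count is (row total * column total) / grand total.
def chi2_2x2(table):
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

#            Yes  No
observed = [[35, 54],   # CASA + WSR-88D
            [15, 75]]   # WSR-88D only
print(round(chi2_2x2(observed), 1))  # prints 11.4, matching the reported value
```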

Figure 6. Response counts for warning decision ('No' vs 'Yes')

3.5. Weather signatures and storm understanding

In this section, weather scenarios are first described in general terms, with information about radar coverage and ground truth reports (Table I). Participant performance is then summarized (Table II) and followed by SME analysis.

Table II. Results summary (a: significant difference; b: difference trending toward significance)

Scenario | Data source | Mean (abs) error (m s−1) | Mean prediction (m s−1) | Mode confidence (1–7) | 'Yes' warning decisions
Pooled | W-only | 5.9a | 17.1a | 4a | 15a
Pooled | C&W | 4.0a | 20.4a | 5a | 35a
1. Marginal event | W-only | 4.3 | 18.3 | 4 | 5
1. Marginal event | C&W | 3.5 | 19.0 | 5 | 2
2. Non-event, no CASA advantage | W-only | 6.3 | 16.1 | 4a | 2
2. Non-event, no CASA advantage | C&W | 6.7 | 18.0 | 5a | 4
3. Warnable near but not at target | W-only | 3.7b | 20.3 | 5 | 5
3. Warnable near but not at target | C&W | 2.2b | 21.5 | 5 | 7
4. Non-event, no CASA advantage | W-only | 6.7 | 11.3 | 4 | 0
4. Non-event, no CASA advantage | C&W | 5.2 | 13.5 | 5 | 0
5. Warnable, CASA advantage | W-only | 8.8a | 15.3a | 4a | 0a
5. Warnable, CASA advantage | C&W | 2.3a | 26.3a | 6a | 13a
6. Warning event | W-only | 5.6 | 21.0b | 2b | 3a
6. Warning event | C&W | 3.8 | 24.1b | 5b | 9a

Scenario 1 began at 1937 UTC, with a small, moderate-strength, multi-cell structure passing over the targeted mesonet site (KETC) (Figure 7). Unknown to the participants, the NWS already had an active severe thunderstorm warning (SVR) in place. CASA (KRSP) data showed radial velocities in the 18–20 m s−1 range. However, at the targeted mesonet site the maximum winds captured by CASA were 9.3 m s−1 (see Table I column 7, 'max on display within 1.9 km from target'). At the target, CASA coverage was available down to 0.67 km (see Table I column 6, 'min height at target' for CASA), whereas the WSR-88D (KFDR) data were at 1.7 km and higher (see Table I column 3, 'min height at target' for WSR-88D). At 1955 UTC the mesonet reported a 16.5 m s−1 wind gust (see Table I column 2, 'Ground truth'). NWS archives indicate a verified hail report of 1.9 cm (3/4″) occurred at 1950 UTC to the southeast of the mesonet site.

Figure 7. Scenario 1 at 1948 UTC. CASA reflectivity (a) and radial velocity (b) data shown on the left, WSR-88D reflectivity (c) and radial velocity (d) data shown on the right

When given both data sources, participants reduced their mean error by 20%, increased their mode confidence from 4 to 5, and went from five affirmative warnings to two. SMEs agreed that this was a 'non-event' for the mesonet target. They stated that WSR-88D had a bad viewing angle for winds, but the CASA view was slightly better. However, the hail threat that existed outside the CASA range was of greater concern than severe winds and was well sampled by WSR-88D. The lower altitude CASA data did allow participants to increase their gust assessments slightly, from 18.3 to 19.0 m s−1. Lastly, it was noted that the faster updates increased confidence by showing greater time continuity of storm features even though there were very few classical visual cues.

Scenario 2, beginning at 2034 UTC, involved a broader storm cell that included high radial velocities (26–28 m s−1) south-southeast of the target (MEDI) and exhibited some bowing (Figure 8). Unknown to the participants, the NWS issued an SVR around 2041 UTC which included the mesonet station. The CASA radar shown was KLWE, with data as low as 0.58 km; WSR-88D provided data at 0.67 km and higher. At 2050 UTC the mesonet reported an 11.6 m s−1 wind gust. The NWS verified hail and wind reports around 2100 UTC in the Lawton area (i.e. near CASA).

Figure 8. Scenario 2 at 2045 UTC. CASA reflectivity (a) and radial velocity (b) data shown on the left, WSR-88D reflectivity (c) and radial velocity (d) data shown on the right

When given both data sources, participants had similar error, slightly higher predictions, and increased warnings from two to four. The mode of confidence ratings was 5 with CASA data added and 4 with WSR-88D only, a significant difference (χ2 = 11.8, N = 30, df = 3, p = 0.008). In written comments, several participants noted the storm cell to the southeast as a greater threat, but not to the area around the mesonet station. SMEs agreed that this was another 'non-event' at the target and would focus on the southeast storm cell for both wind and hail threats. However, they still made note of the high winds visible in the CASA data.

Scenario 3, at 2122 UTC, included 26–31 m s−1 radial velocities in both radar sources that came within 3–5 km of the target (ACME) as a small storm line began to bow out (Figure 9). Minimum height coverage at the target was 0.58 km for CASA (KLWE) and 1.46 km for WSR-88D. At 2135 UTC the mesonet reported a 22.0 m s−1 wind gust. No relevant storm warnings or verified reports were recorded in the NWS archives near the target in this time period.

Figure 9. Scenario 3 at 2133 UTC. CASA reflectivity (a) and radial velocity (b) data shown on the left, WSR-88D reflectivity (c) and radial velocity (d) data shown on the right

When given both data sources, assessments were similar, confidence remained the same, and warnings increased slightly (from five to seven). However, mean absolute error was lowered from 3.7 to 2.2 m s−1, trending toward significance (t = 3.6, df = 28, p = 0.076, two-tailed). SMEs found the WSR-88D view to show a very disorganized storm of non-severe cells. However, they found visual features such as 'a bow structure', 'signs of collapse', and 'a distracting hook feature to the west'. These positive indicators of severe weather, combined with the low level CASA data, convinced SMEs that this was a warnable situation even if the severe winds did not reach the target.

Scenario 4, at 2137 UTC, included a storm cell that had no obviously high radial velocities at the target but likely produced outflow-induced winds at the targeted mesonet site (NINN) (Figure 10). WSR-88D showed a large area of 0–2.6 m s−1 radial velocities adjacent to the target, which were filtered out by the colour table. This cell passed near the mesonet site and over the CASA (KSAO) radome, which provided data as low as 0.12 km, whereas WSR-88D data were only available starting at 1.8 km. In a 1 km2 area around the target, CASA provided approximately 16 times more data points than WSR-88D. At 2150 UTC the mesonet reported a 17.8 m s−1 wind gust. No relevant storm warnings or verified reports were recorded in the NWS archives near the target in this time period.

Figure 10. Scenario 4 at 2148 UTC. CASA reflectivity (a) and radial velocity (b) data shown on the left, WSR-88D reflectivity (c) and radial velocity (d) data shown on the right

When given both data sources, assessments were 2.2 m s−1 higher, error was 1.5 m s−1 lower, and mode confidence increased from 4 to 5. The fourth scenario had complete warning decision agreement across data sources (no warning needed). SMEs recognized a bowing feature in the WSR-88D data, with the heaviest reflectivity southeast of the target. The additional CASA data showed them that the storm cell missed the target site. SMEs therefore kept their estimates around 22.1 m s−1 but increased their confidence when given the additional data.

Scenario 5, at 2150 UTC, like scenario 4, had a low velocity area near the target in the WSR-88D data (Figure 11). However, CASA (KSAO) data included velocities in the 25.7–26.8 m s−1 range very close to the radar, due to storm outflow or a downburst. Again, the mesonet station (CHIC) was near the CASA radar, which allowed for readings at 0.06 km, whereas WSR-88D data were 1.95 km and higher. In a 1 km2 area around the target, CASA provided approximately 32 times more data points than WSR-88D. At 2205 UTC the mesonet reported a 24.0 m s−1 wind gust. No relevant storm warnings or verified reports were recorded in the NWS archives near the CASA radar in this time period.

Figure 11. Scenario 5 at 2202 UTC. CASA reflectivity (a) and radial velocity (b) data shown on the left, WSR-88D reflectivity (c) and radial velocity (d) data shown on the right

When given both data sources, mean absolute error was significantly lowered from 8.8 to 2.3 m s−1 (F = 17.4, p < 0.001; t = 5.6, adjusted-df = 16.4, p < 0.001, two-tailed), and mean assessments were significantly raised from 15.3 to 26.3 m s−1 (F = 14.7, p < 0.001; t = −9.3, adjusted-df = 17.3, p < 0.001, two-tailed). The mode of confidence ratings was 6 with CASA data added and only 4 with WSR-88D only, a significant difference (χ2 = 14.9, N = 27, df = 6, p = 0.021). Scenario 5 using WSR-88D only had all 15 'No' warning decisions, whereas using CASA and WSR-88D data there were 13 'Yes' responses, one 'No', and one missing response, a significant difference (χ2 = 25.2, N = 29, df = 1, p < 0.001).
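
The non-integer 'adjusted df' values are characteristic of an unequal-variance (Welch) t-test, whose degrees of freedom come from the Welch-Satterthwaite approximation. A sketch of that computation follows; the group standard deviations and sizes here are hypothetical (chosen near the reported means), since the per-group variances are not given in the text:

```python
import math

# Welch's t-test from summary statistics: t uses the separate per-group
# variances, and df is the Welch-Satterthwaite approximation, which is
# generally non-integer.
def welch(m1, s1, n1, m2, s2, n2):
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Hypothetical SDs (4.0 and 1.5) around the reported scenario 5 means.
t, df = welch(8.8, 4.0, 15, 2.3, 1.5, 15)
print(round(t, 2), round(df, 1))  # prints 5.89 17.9
```

The df always falls between min(n1, n2) − 1 and n1 + n2 − 2, which is why values like 16.4 and 17.3 appear for two groups of 15.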

SME feedback indicates that the WSR-88D view showed a weakening storm with heavy precipitation and no severe cues. While the SMEs' confidence was low, they committed to an estimate of 20.6 m s−1. When analysing the CASA data, it was immediately clear from the rapidly updating reflectivity data that the wind threat was enhanced by a (non-tornadic) rotation that was not evident in the WSR-88D data. This feature, an example of new information at low levels, completely changed the SMEs' storm understanding. In addition, CASA increased confidence by sampling at an altitude representative of surface winds and by sampling radial velocities with many individual beams (each at a slightly different angle). SMEs then assessed surface winds to be around 26.2 m s−1.

Scenario 6, at 2157 UTC, showed a cell approaching the eastern edge of CASA (KSAO) radar coverage that included a small but possibly strengthening area of winds over 25.7 m s−1 at a height of 0.79 km, related to a possible microburst (Figure 12). The WSR-88D view included a few gates (pixels) of data in this speed range, but without the continuity shown in the CASA data and at a much greater height, 2.62 km. Note that attenuation at the range limit of the CASA network required sampling further from the mesonet site (5 km) to acquire the 'Max on display' data (Table I column 7). At 2210 UTC the mesonet station (WASH) reported a 26.1 m s−1 wind gust. This is the only scenario where the ground truth value exceeds the severe winds warning criterion of 25.7 m s−1. No relevant storm warnings or verified reports were recorded in the NWS archives near the target in this time period. However, Oklahoma City (about 48 km to the north) was under an active tornado warning.

Figure 12. Scenario 6 at 2208 UTC. CASA reflectivity (a) and radial velocity (b) data shown on the left, WSR-88D reflectivity (c) and radial velocity (d) data shown on the right

When given both data sources, participants lowered error by 1.8 m s−1. Mean prediction went from 21.0 to 24.1 m s−1, trending toward significance (t = −1.8, df = 28, p = 0.080, two-tailed). The mode of confidence ratings was 5 with CASA data added and 2 with WSR-88D only, a difference trending toward significance (χ2 = 8.6, N = 29, df = 4, p = 0.072). Scenario 6 using WSR-88D only had three 'Yes' warning decisions, whereas using both CASA and WSR-88D data there were nine 'Yes' responses, a significant difference (χ2 = 5.0, N = 30, df = 1, p = 0.025).

SMEs indicated that the WSR-88D data were 'too vague' and there was 'nothing special in the upper levels'. They said the severe signal attenuation and range limit of the CASA radar made interpretation difficult ('complex visual interpretation'). However, they still felt the 0.79 km altitude data were more representative of surface winds than the 2.62 km WSR-88D data, and they increased wind estimates from 23 m s−1 to the upper 23–26 m s−1 range. In addition, the CASA data provided a more consistent visual signature of the high winds, rather than the two or three point samples in the WSR-88D data, which could be dismissed as noise.

4. Discussion


Because wind speed is a criterion in severe thunderstorm warnings, the purpose of this study was to measure the impact of adding high resolution, lower troposphere radar data on wind speed assessments. Operational forecasters made wind assessments under two data source conditions: WSR-88D only, and WSR-88D with CASA. Forecasters provided with the additional CASA radar data significantly increased wind speed estimates by 20%, reduced wind speed assessment error by 30%, and increased their confidence in wind speed assessments. In addition, 23 of 30 participants provided written feedback that the CASA data confirmed or refined their mental models of the atmosphere. High resolution lower troposphere radar data clearly had positive effects on forecaster performance. Further, it is very promising that forecasters with minimal training were able to integrate data effectively from an experimental radar system that does not have the same noise level and performance characteristics as a production WSR-88D. These results, and the evaluation method developed, are an important part of engineering a successful radar system.

The part-task case-review setting successfully engaged the interest and motivation of NWS forecasters. Even with a small number of cases, forecasters found 'real events' to be engaging. The controlled addition of the new radar data enabled comparison with the conventional system. Qualitative and quantitative measures taken during the decision process were linked to outcomes such as accuracy and warnings. The combined feedback gathered by this method is generated by practitioners and is specific to a genuine sub-task of their process.

The increase in mean wind speed assessments for forecasters using both WSR-88D and CASA data agrees with prior work (Brown et al., 2005). Together, the two research efforts show that increased sampling makes it possible to capture higher wind speeds, which leads to higher displayed wind speeds and, in turn, higher assessments of maximum winds. These higher values in the CASA data were visible in the display and observed by the forecasters, resulting in wind speed assessments higher than with WSR-88D data alone. However, some additional CASA data were available at lower altitudes where conditions may have differed. The mean of 'max on display' values across all scenarios was 14.1 m s−1 for the CASA source and 16.6 m s−1 for WSR-88D. This shows that the forecasters did more than report the latest displayed value at the lowest level over the target location (otherwise wind speed assessments would have been lower when given CASA data). Forecasters may have been looking at data values further from the target to compensate for storm motion and the 2–5 min forecast period.

These higher estimates were closer to the ground truth obtained from automated sensors, resulting in lower mean error. The results show that forecasters were able to reduce their wind speed assessment error using this additional data source. This implies that forecasters can sift through the extra data points from increased spatial resolution and find the data most informative to their mental model. Forecasters often commented that they trusted the CASA data more because they were sensed closer to the ground. While these results are promising, this research was conducted in simulated operations. Future work should investigate how these additional data affect on-the-job performance, where information overload could potentially harm performance. In addition, future work should provide a wider range of task and scenario combinations in order to identify the impact on individual forecaster performance.

The shift in warning decisions across all scenarios, from 17% 'Yes' with WSR-88D only to 39% 'Yes' with WSR-88D and CASA data, is interesting, especially because only two scenarios (the first and second) were covered by an actual warning according to NWS archives. This shift may be related to the increase in wind speed assessments when given CASA data, which were based on the higher radial velocity values in the display. However, whether this shift is beneficial is a matter of NWS policy, including the event verification process. Since most scenarios had near- but sub-severe winds, it seems appropriate that some but not all warning decisions were altered. This implies that the new data supported both negative and affirmative warning decisions.

Forecasters revealed confidence in their higher speed estimates both through higher confidence ratings and through their shift toward more warning decisions. To understand these confidence ratings better, Spearman's non-parametric correlation was used to investigate the relation between absolute wind speed assessment error and confidence rating. While this test is not strictly appropriate given the experimental design, its results may still be informative. As expected, the correlation is negative: increased confidence correlates with decreased error in this experiment (rs = −0.198, N = 175, p = 0.004, one-tailed), suggesting that overconfidence was not a major issue. Future investigation may be able to address this more rigorously and provide insights on the differences between 'familiar' and 'experimental' data sources.
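
Spearman's rs is simply a Pearson correlation computed on ranks, with ties assigned average ranks. A minimal sketch of the computation, on illustrative data rather than the study's responses:

```python
# Spearman rank correlation with average ranks for ties: rank both vectors,
# then compute the Pearson correlation of the rank vectors.
def spearman(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # Extend j over the run of tied values starting at i.
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

print(spearman([1, 2, 3, 4], [4, 3, 2, 1]))  # perfectly opposed ranks: prints -1.0
```

A negative rs, as reported above, indicates that higher confidence ranks tended to co-occur with lower error ranks.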

The increase in warnings for the same scenario when given additional radar data has implications for operational forecasters. They will need to adapt their mental models to incorporate the low altitude wind data and the increase in gust estimates. The results indicate an increase in warnings as previously missed low altitude events are now detected. With an increase in detections, the number of false alarms may increase even as more events are properly warned and skill increases. A policy decision may be needed regarding the threshold for severe winds, or the size and duration of warnings, that would incorporate these low altitude events and properly alert the public (Morss et al., 2010). System enhancements like WSR-88D 'Super Resolution' (National Weather Service Radar Operations Center, 2009) increase the spatial resolution of the data, while experimental radars like MPAR (Heinselman et al., 2008) offer similar spatial resolution with update rates of 1 min. The CASA system features even greater spatial resolution, rapid updates (1 min for the current testbed), and low altitude data. As systems such as CASA, WSR-88D 'Super Resolution', and MPAR come on-line, the forecast community may need to revisit the policies and thresholds for issuing warnings.

These results are promising given the limitations of the experimental design. The WSR-88D-first task set was inherently harder than the CASA-first task set, based on the difference between the source data 'max on display' and the ground truth (Table I): the WSR-88D-first task set had a larger total difference than the CASA-first task set (47.9 vs 27.8 m s−1). For the three scenarios with CASA data in the WSR-88D-first task set, the differences between the CASA 'max on display' and the ground truth summed to 36.9 m s−1, and for the three WSR-88D-only scenarios, 11.0 m s−1, yielding a total of 47.9 m s−1. For the three scenarios with CASA data in the CASA-first task set, the differences for CASA sources summed to 11.4 m s−1 and those with WSR-88D-only data summed to 16.4 m s−1, yielding a total of 27.8 m s−1.

Also, there were some interactions between task set and data source, which can be expected due to natural variations in the scenarios and the alternation of data sources across participants. This alternation effectively pairs half the scenarios against each other, and no two weather scenarios can ever be perfectly matched. For the 'max on display' heuristic across the four combinations of task set and data source, WSR-88D-first with CASA sources has the lowest wind speeds (mean 9.4 m s−1) and the greatest difference from the ground truth (mean of absolute values, 12.3 m s−1), whereas WSR-88D-first with the WSR-88D source has the highest wind speeds (mean 20.2 m s−1) and the smallest ground truth difference (mean of absolute values, 3.7 m s−1). While this is not a perfect match with the observed interactions, it was also shown that forecasters out-performed the simple 'max on display' heuristic.

As noted above, the scenarios were placed in time sequential order. This design choice could have influenced performance in later scenarios because of the information given in earlier scenarios. However, any potential effect is representative of real operational forecasting where weather events progress throughout the day.

WDSS-II (Lakshmanan et al., 2007), while an effective tool, would ideally be replaced by standard NWS operations software to remove additional confounds and allow detailed warning generation. This standard software, called AWIPS, provides data from many sensors in real time, allowing forecasters to interrogate them quickly, both visually and with built-in tools (Raytheon Company, 2009). Experienced forecasters have customized AWIPS display configurations as well as strongly developed motor and cognitive routines for accessing radar data in an orderly fashion to build their mental model of the storm. However, the interface control differences between the WDSS-II software used and AWIPS interfered with these routines. Further, radar rendering and display differences may have introduced additional interpretation error due to colouring or other visual differences. Future integration of CASA data into AWIPS would alleviate these issues and provide additional data sources (e.g. satellites) normally available during operations. This integration would allow for even more realistic test settings and possible reductions in assessment error.

Because of CASA design characteristics, the current study could be enhanced by the systematic control of radar beam attributes in various scenarios. Using a single WSR-88D throughout the experiment made radar coverage below 2 km more representative of nationwide coverage. However, radars in the CASA system differ from WSR-88D in more than just beam height or average resolution. CASA radars also update faster and scan regions with automated changes in azimuth coverage and number of tilts per volume (Philips et al., 2008). Additionally, with four radars to choose from in the current testbed, there is no lack of choice for wind observation angle. Update rate, beam height, wind-to-beam intersection angle and sampling fidelity may each influence forecaster performance in different ways. Future work could quantify the impact of these attributes individually regardless of radar source. To understand the impact on warnings fully, additional measures of performance will need to be collected, including the size of warnings, their duration and effective lead time. Future work will investigate the potential effect of increased spatio-temporal and low level data resolution on these severe weather warning attributes.

This work indicates that the addition of high resolution, low altitude, rapidly updating radar data has both qualitative and quantitative benefits. However, the training and policy implications of incorporating new technology into warning operations must be carefully considered. In particular, the paired increase in confidence and wind speed estimates, however much more accurate, may require changes to warning policy. The method developed here has been very effective for CASA: the use of part-task simulation paired with process and outcome measures has provided feedback vital to the radar system engineering process.

Acknowledgements


We thank the 30 forecasters who volunteered to participate in the study described herein. We also thank Jerry Brotzge, Patrick Marsh, and Ron Przybylinski for their help with scenario development and forecasting subject matter expertise. We thank the National Weather Association and NOAA for supporting the running of the experimental trials at the 2008 NWA annual meeting in Louisville, KY, and as part of the Experimental Warning Project, respectively. Liz Quoetone provided many informative materials from NOAA's Warning Decision Training Branch and supported recruiting participation from NWA attendees. She and two anonymous reviewers provided constructive feedback, and this manuscript benefits from those comments as well. Kevin Manross and Greg Stumpf helped with recruiting participants as part of the Experimental Warning Project. Brendan Hogan and Eric Knapp supported the data collection at the NWA annual meeting. This work was supported in part by the Engineering Research Centers Program of the National Science Foundation under NSF Award Number 0313747. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the National Science Foundation. The NWS is one of CASA's government partners.

References

  • Andra D, Quoetone E, Bunting W. 2002. Warning decision making: the relative roles of conceptual models, technology, strategy, and forecaster expertise on 3 May 1999. Weather and Forecasting 17: 559–566.
  • Bieringer P, Ray PS. 1996. A comparison of tornado warning lead times with and without NEXRAD Doppler radar. Weather and Forecasting 11: 47–52.
  • Bluestein HB, Weiss CC, French MM, Holthaus EM, Tanamachi RL, Frasier S, Pazmany AL. 2007. The structure of tornadoes near Attica, Kansas, on 12 May 2004: high-resolution, mobile, Doppler radar observations. Monthly Weather Review 135: 475–506.
  • Brotzge J, Hondl K, Philips B, Lemon L, Bass E, Rude D, Andra D. 2010. Evaluation of distributed collaborative adaptive sensing for detection of low-level circulations and implications for severe weather warning operations. Weather and Forecasting 25: 173–189.
  • Brown RA, Flickinger BA, Forren E, Schultz DM, Sirmans D, Spencer PL, Wood VT, Ziegler CL. 2005. Improved detection of severe storms using experimental fine-resolution WSR-88D measurements. Weather and Forecasting 20: 3–14.
  • Doswell CA. 2004. Weather forecasting by humans—heuristics and decision making. Weather and Forecasting 19: 1115–1126.
  • Einhorn HJ, Hogarth RM. 1978. Confidence in judgment: persistence of the illusion of validity. Psychological Review 85: 395–416.
  • Fischhoff B, MacGregor D. 1982. Subjective confidence in forecasts. Journal of Forecasting 1: 155–172.
  • Galway JG. 1989. The evolution of severe thunderstorm criteria within the weather service. Weather and Forecasting 4: 585–592.
  • Heideman KF, Stewart TR, Moninger WR, Reagan-Cirincione P. 1993. The weather information and skill experiment (WISE): the effect of varying levels of information on forecast skill. Weather and Forecasting 8: 25–36.
  • Heinselman P, Priegnitz D, Manross K, Smith T, Adams R. 2008. Rapid sampling of severe storms by the National Weather Radar Testbed Phased Array Radar. Weather and Forecasting 23: 808–824.
  • Klayman J, Soll JB, González-Vallejo C, Barlas S. 1999. Overconfidence: it depends on how, what, and whom you ask. Organizational Behavior and Human Decision Processes 79: 216–247.
  • Klazura GE, Imy DA. 1993. A description of the initial set of analysis products available from the NEXRAD WSR-88D system. Bulletin of the American Meteorological Society 74: 1293–1311.
  • Lakshmanan V, Smith T, Stumpf G, Hondl K. 2007. The warning decision support system–integrated information. Weather and Forecasting 22: 596–612.
  • Maddox RA, Zhang J, Gourley JJ, Howard KW. 2002. Weather radar coverage over the contiguous United States. Weather and Forecasting 17: 927–934.
  • McLaughlin D, Pepyne D, Chandrasekar V, Philips B, Kurose J, Zink M, Droegemeier K, Cruz-Pol S, Junyent F, Brotzge J, Westbrook D, Bharadwaj N, Wang Y, Lyons E, Hondl K, Liu Y, Knapp E, Xue M, Hopf A, Kloesel K, Defonzo A, Kollias P, Brewster K, Contreras R, Dolan B, Djaferis T, Insanic E, Frasier S, Carr F. 2009. Short-wavelength technology and the potential for distributed networks of small radar systems. Bulletin of the American Meteorological Society 90: 1797–1817.
  • Morss RE, Lazo JK, Demuth JL. 2010. Examining the use of weather forecasts in decision scenarios: results from a US survey with implications for uncertainty communication. Meteorological Applications 17: 149–162.
  • Murphy AH, Winkler RL. 1977. Reliability of subjective probability forecasts of precipitation and temperature. Journal of the Royal Statistical Society, Series C (Applied Statistics) 26: 41–47.
  • Murphy AH, Winkler RL. 1984. Probability forecasting in meteorology. Journal of the American Statistical Association 79: 489–500.
  • Nadav-Greenberg L, Joslyn SL. 2009. Uncertainty forecasts improve decision making among nonexperts. Journal of Cognitive Engineering and Decision Making 3: 209–227.
  • National Weather Service Radar Operations Center. 2009. Build10FAQ. NOAA Radar Operations Center: Norman, OK; http://www.roc.noaa.gov/NWS_Level_2/BuildInfo/Build10FAQ.aspx (accessed 4 March 2009).
  • NOAA. 2009. Summary of Natural Hazard Statistics for 2008 in the United States. Office of Climate, Water, and Weather Services. http://www.nws.noaa.gov/om/hazstats/sum08.pdf (accessed 5 November 2009).
  • Oskamp S. 1965. Overconfidence in case-study judgments. Journal of Consulting Psychology 29: 261–265.
  • Pepyne D, Westbrook D, Philips B, Lyons E, Zink M, Kurose J. 2008. Distributed collaborative adaptive sensor networks for remote sensing applications. American Control Conference, 2008, Seattle, WA; 4167–4172.
  • Philips B, Westbrook D, Pepyne D, Bass E, Rude D. 2008. Evaluation of the CASA system in the NOAA Hazardous Weather Test Bed. 24th Conference on IIPS, New Orleans, LA. p 9A.9. http://ams.confex.com/ams/88Annual/techprogram/paper_135360.htm (accessed 17 June 2011).
  • Polger PD, Goldsmith BS, Przywarty RC, Bocchieri JR. 1994. National Weather Service warning performance based on the WSR-88D. Bulletin of the American Meteorological Society 75: 203–214.
  • Quoetone E, Huckabee K. 1995. Anatomy of an effective warning: event anticipation, data integration, feature recognition. Preprints, 14th Conference on Weather Analysis and Forecasting. AMS: Dallas; 420–425.
  • Raytheon Company. 2009. Raytheon: advanced weather information processing system. http://awips.raytheon.com (accessed 5 January 2009).
  • Rude D, Bass E, Philips B. 2009. Impact of increased spatio-temporal radar data resolution on forecaster wind assessments. 2009 IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX; 349–354.
  • Stewart TR. 2001. Improving reliability of judgmental forecasts. In Principles of Forecasting, Armstrong JS (ed.). Kluwer Academic Publishers: Hingham, MA; 81–106.
  • Stewart TR, Heideman KF, Moninger WR, Reagan-Cirincione P. 1992. Effects of improved information on the components of skill in weather forecasting. Organizational Behavior and Human Decision Processes 53: 107–134.
  • Stewart TR, Lusk CM. 1994. Seven components of judgmental forecasting skill: implications for research and the improvement of forecasts. Journal of Forecasting 13: 579–599.
  • Stumpf G, Smith T, Manross K, Andra D. 2008. The experimental warning program 2008 spring experiment at the NOAA Hazardous Weather Testbed. 24th Conference on Severe Local Storms, American Meteorological Society, Savannah, GA. http://ams.confex.com/ams/24SLS/techprogram/paper_141712.htm (accessed 17 June 2011).
  • University of Oklahoma Board of Regents. 2009. Oklahoma Mesonet//Instruments//WMAX. http://www.mesonet.org/instruments/WMAX.php (accessed 5 January 2009).
  • Varouhakis J, Nordholts M. 2008. recordMyDesktop Version 0.3.7.3. http://recordmydesktop.sourceforge.net (accessed 21 October 2008).
  • Zink M, Lyons E, Westbrook D, Pepyne D, Philips B, Kurose J, Chandrasekar V. 2008. Meteorological command & control: architecture and performance evaluation. Geoscience and Remote Sensing Symposium, 2008, IGARSS 2008, IEEE International, Vol. 5; V-152–V-155.