Keywords:

  • probabilistic forecasts;
  • decision-making;
  • forecast biases

Abstract

The general public understands that there is uncertainty inherent in deterministic forecasts, and understands some of the factors that increase that uncertainty. This was determined in an online survey of 1340 residents of Washington and Oregon, USA. Understanding was probed with questions asking participants what they expected to observe when given a deterministic forecast with a specified lead time, for a particular weather parameter, during a particular time of year. It was also probed by asking participants to estimate the number of observations, out of 100, that they expected to fall within specified ranges around the deterministic forecast. Almost all respondents (99.5%) anticipated some uncertainty in the deterministic forecast. Furthermore, their answers suggested that they expected greater uncertainty for longer lead times and when the forecasted value deviated from climatic norms. Perhaps most noteworthy was that they expected specific forecast biases (e.g. over-forecasting of extremes), most of which were not borne out by an analysis of local National Weather Service verification data. In summary, users had well-formed uncertainty expectations, suggesting that they are prepared to understand explicit uncertainty forecasts for a wide range of parameters. Indeed, explicit uncertainty estimates may be necessary to overcome some of the anticipated forecast biases that may be affecting the usefulness of existing weather forecasts. Although these bias expectations are largely unjustified, they could lead to adjustment of forecasts that could in turn have serious negative consequences for users, especially with respect to extreme weather warnings. Copyright © 2010 Royal Meteorological Society


1. Introduction

Although at present most public forecasts are deterministic, providing a single value for parameters such as temperature, there is a movement afoot (NRC, 2006) to include more uncertainty information in them. Forecasts with uncertainty estimates are not only more in line with the current scientific understanding of future weather conditions; they are also potentially useful for everyday decision-making. There is some concern, however, that uncertainty forecasts will not be well understood by general-public end users. This concern has inspired a new surge of targeted social science research, and the early results are encouraging. Experimental research shows improved decision-making with uncertainty forecasts as compared to decisions based on deterministic forecasts (Roulston et al., 2006; Joslyn et al., 2007; Nadav-Greenberg and Joslyn, 2009), and survey research indicates that people anticipate uncertainty in deterministic forecasts (Morss et al., 2008; Lazo et al., 2009), suggesting that they are prepared to understand explicit uncertainty forecasts.

This line of research is important because any potential benefit of uncertainty forecasts will depend on end users understanding them, which will depend in turn on how they are communicated. Establishing the background knowledge with which non-expert end users approach such forecasts, including how they interpret deterministic forecasts, is key to successful communication. There is no doubt that users have substantial exposure to weather forecasts. A nationwide survey conducted in 2006 suggested that 96% of people use weather forecasts, consulting them an average of 115 times per month, or more than three times a day (Morss et al., 2008). Experience with daily forecasts and the ensuing weather could lead users to anticipate specific patterns of uncertainty which may, in turn, influence their understanding of uncertainty forecasts. In this respect, weather forecasts are unlike other kinds of risk communication because people may have well-formed expectations based on vast prior experience.

Indeed, it is clear that people do not expect single-value forecasts to verify exactly. In the survey of Morss et al. (2008), the vast majority of respondents (95%) selected a range of high temperatures to indicate what they expected to observe. Moreover, their confidence ratings indicated that they were sensitive to the fact that forecast uncertainty increases with lead time. Thus, users may have fairly sophisticated uncertainty expectations. They may understand some of the factors that increase forecast uncertainty and perhaps have expectations about systematic forecast bias (Morss et al., 2008). A full understanding of user expectations is important when considering how best to communicate forecast uncertainty in the future, because such communication may be interpreted through the lens of quite specific previously formed beliefs.

To that end, the study reported here takes up where previous research left off. We sought to probe the details of everyday users' understanding of the uncertainty inherent in deterministic forecasts for a specific geographic region, the Pacific Northwest of the United States. The focus was on participants from a small geographic area so that their responses could be interpreted against the background of the weather to which they were exposed. Questions were designed to uncover uncertainty expectations for several weather parameters, during various seasons of the year and for a range of forecast values, to determine the influence of these factors on uncertainty expectations. For instance, users may expect more uncertainty for forecasts that differ to a greater degree from climatology. They may expect specific biases, such as the over-forecasting of snowfall. In addition, questions were asked with both 1 day and 3 day lead times to determine whether users were sensitive to the impact of this factor on uncertainty. If significant differences in uncertainty expectations are observed due to these variables, it implies that people have complex expectations for forecast uncertainty. Furthermore, because a particular geographic area was targeted, it was possible to compare uncertainty expectations to forecast verification to determine whether the expectations were well founded. In addition, some questions were asked in the context of specific binary decisions to determine whether overarching decision goals influenced uncertainty expectations, as has sometimes been suggested (Weber, 1994). Finally, to determine whether specific uncertainty forecasts would be useful for various parameters, participants were asked about the probability thresholds at which they would take action against extreme conditions. If a variety of probability thresholds are reported, suggesting differing risk tolerances, a single-value forecast, or even a worst-case scenario forecast, may be inadequate to meet user needs. Only explicit uncertainty information would allow users to tailor the forecast to individual risk tolerances.

In the following sections the survey methodology, including the survey design and implementation, is first described. Next, findings concerning participants' understanding of the uncertainty inherent in deterministic weather forecasts are presented. Finally, the findings are discussed, along with how providing calibrated uncertainty forecasts to the general public can help to overcome unjustified expectations, narrow the range of users' expectations and provide them with information that can be targeted to individual risk tolerance.

2. Method

2.1. Participants

A total of 1526 participants were recruited through a local Seattle-area weather blog (www.cliffmass.blogspot.com) between 2 and 5 January 2009 and again between 5 and 12 March 2009. Forty-five participants were excluded because of missing data. Another group of 113 respondents was excluded because their answers suggested that they either did not understand the questions or were not taking the task seriously. Of these 113 participants, 93 were excluded because they reversed their responses for the 'would not be surprised' questions, providing a larger value for the 'as low as' question than for the 'as high as' question, and 20 others were excluded because they provided expected values that were more than 20 °F from the single-value forecast. A small group (18) with a professional atmospheric sciences background and 10 respondents from outside the states of Washington and Oregon were excluded because their responses would not be representative of the group targeted in this study. This left a total of 1340 participants. Fifty-five percent of respondents classified themselves as everyday weather users, 38% as interested amateurs, and 7% as having some education involving atmospheric sciences. Males and females were almost equally represented, with slightly more males (53%) than females (47%). Thirty-four percent of the respondents were between the ages of 18 and 40, 51% were between the ages of 41 and 60, and 15% were 61 years of age or older. Forty-one percent of respondents had advanced degrees, 50% had college degrees, and 9% indicated that a high school diploma or equivalent was their highest level of education. Overall, the respondents were similar to the general population with respect to gender, but older and more highly educated (U.S. Census Bureau, 2007).
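The exclusion rules above can be sketched as a simple filter. This is an illustrative sketch only: the field names and example records below are invented, not the survey's actual data format.

```python
# Hedged sketch of the exclusion criteria described above; field names and
# the example records are hypothetical.
def keep_respondent(r):
    # Reversed range: 'as low as' answer larger than the 'as high as' answer.
    if r["as_low_as"] > r["as_high_as"]:
        return False
    # Implausible answer: expected value more than 20 degrees F from the
    # single-value forecast.
    if abs(r["expected"] - r["forecast"]) > 20:
        return False
    # Professionals and respondents outside Washington/Oregon are excluded.
    if r["professional"] or r["state"] not in ("WA", "OR"):
        return False
    return True

sample = [
    {"as_low_as": 60, "as_high_as": 72, "expected": 67, "forecast": 67,
     "professional": False, "state": "WA"},   # retained
    {"as_low_as": 75, "as_high_as": 60, "expected": 67, "forecast": 67,
     "professional": False, "state": "WA"},   # reversed range
    {"as_low_as": 60, "as_high_as": 72, "expected": 95, "forecast": 67,
     "professional": False, "state": "OR"},   # 28 degrees from forecast
]
kept = [r for r in sample if keep_respondent(r)]   # only the first remains
```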

2.2. Questionnaire

There were 52 total questions (see Appendix A), although each participant responded to only a subset of them. The questions asked about forecast values and parameters that were selected to be representative of the local climate and to be relevant to weather concerns revealed by a preliminary survey asking Pacific Northwesterners about the decisions for which they used weather forecasts. The parameters tested were daytime high and night time low temperature, wind speed and precipitation. Precipitation is the one parameter for which uncertainty forecasts are available, although the expressions vary widely in public forecasts. Because of the variety of ways in which it is expressed, and because the research question concerned people's understanding of deterministic forecasts, precipitation forecasts were simplified to 'rain' or 'no rain' and 'snow' or 'no snow'. While this approach might be inadequate to evaluate people's understanding of forecast wording (e.g. showers likely), it provided information about people's general expectations in a manner comparable to the other parameters tested here.

There were two kinds of questions designed to probe uncertainty expectations: 42 asking about specific forecasts (see Appendix A, Question 1) and 10 asking participants to generalize over a group of forecasts for an individual parameter (see Appendix A, Question 13). The 'specific' questions provided an individual area forecast and asked participants what they expected to observe and the highest and lowest values they would not be surprised to observe. Several weather parameters, months of the year and forecast values were targeted (Table I). Each 'specific' question was asked twice, once with a 1 day lead time and once with a 3 day lead time. Most of the questions were also asked in the context of a binary decision. All participants were asked at least one question for each parameter and an equal number of questions about 1 day and 3 day lead times. However, no participant saw the same basic question twice.

Table I. Parameters tested in specific questions

          Daytime high                     Night time low     Wind speed                         Precipitation
Normal    June (67 °F), November (51 °F)   December (32 °F)                                      Rain (October)
Extreme   August (100 °F)                  January (20 °F)    October (40 mph sustained winds)   Snow (January)

In order to probe uncertainty expectations another way, respondents were asked to indicate what percent of the time (e.g. out of 100) they expected the generic forecast parameters described above (daytime high temperature, night time low temperature, wind speed, rain and snow) to be within various ranges of the deterministic forecast ('generalization' questions). Notice that contextual information is absent from this question format, which may well yield different responses. In addition, respondents were asked at what percent chance of observing extreme conditions (heat, cold, wind speed and precipitation) they would begin to take precautionary action, and what precautionary action they would take. All of the questions were pilot tested on a small group of participants (N = 76) to refine wording, remove ambiguity and increase response options. This resulted in a total of 52 final questions that were divided into six unique versions. Each version included nine questions, with the exception of one version that included only eight; one question appeared in two versions. The content of the six versions was similar in that each version included at least one question about each parameter, approximately equal numbers of 'specific' questions with 1 and 3 day lead times, and at least one of each of the question types described above. The order in which the parameters were mentioned was the same for all versions. In addition, for all versions the 'out of 100' and precautionary action threshold questions were asked last to avoid any influence on answers to previous questions. On average, it took participants approximately 13 minutes to complete the questionnaire.

2.3. Procedure

The survey was administered online and accessed with a link from the Seattle-area weather blog describing it as a study investigating 'how individuals interpret weather forecasts'. Participants were informed that they must be 18 years of age or older and that the information collected would remain anonymous. The questions were presented individually on the screen along with the instructions to 'read the forecast information below and answer the questions that refer to that forecast'. Participants responded to each question and then clicked a 'continue' button in the bottom right-hand corner of the screen to advance to the next question. Participants were not allowed to go back and change their answers to previous questions. For most questions a single response was required. Some questions, used to gather more detailed explanations, allowed free-format text responses; these are not discussed here. At the end of the questionnaire, participants were asked for their gender, age, level of education, background in atmospheric sciences, and zip code.

3. Results

Participants' understanding of the uncertainty inherent in deterministic weather forecasts was revealed in their responses to both the 'specific' questions and the 'generalization' questions described above. First, the range of expectations, between the 'as high as' and 'as low as' values that participants indicated they would 'not be surprised' to observe, was examined for each of the parameters tested. For temperature, the smallest expectation range was 7.57 °F (32 °F in December, 1 day lead time) and the largest was 12.44 °F (100 °F in August, 3 day lead time) (Figure 1(a)). Expectation ranges for wind speed (40 mph sustained winds) were wider, from 21.73 mph for a 1 day lead time to 25.01 mph for a 3 day lead time forecast (Figure 1(b)). Only seven of the 1340 participants (0.5%) entered the deterministic value for both the 'as high as' and 'as low as' questions, indicating that they had no uncertainty expectations. Therefore, it is clear that participants understand that there is uncertainty involved in deterministic forecasts for temperature and wind speed.
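The two statistics used throughout the Results can be made concrete with a single worked response; the numbers below are invented for illustration, not drawn from the survey data.

```python
# Illustrative single response to a 'specific' question (invented numbers).
forecast = 100      # deterministic forecast, degrees F (August extreme high)
as_high_as = 103    # highest value respondent would not be surprised to see
as_low_as = 92      # lowest value respondent would not be surprised to see

# Expectation range: total spread between the two 'not surprised' values.
expectation_range = as_high_as - as_low_as

# Asymmetry (as in Table III): room left above the forecast minus room left
# below it. A negative value means more room below, i.e. the respondent
# expected the forecast to have a high (over-forecasting) bias.
asymmetry = (as_high_as - forecast) - (forecast - as_low_as)

print(expectation_range, asymmetry)   # 11 -5
```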

Figure 1. (a) Mean expectation range for temperature: differences [positive (light gray) and negative (dark gray)] between the values people would not be surprised to observe and the deterministic forecast. (b) Mean expectation range for wind speed: differences [positive (light gray) and negative (dark gray)] between the values people would not be surprised to observe and the deterministic forecast

Participants also appeared to understand that increased lead time increases forecast uncertainty, as indeed it does (Murphy and Brown, 1984; Baars and Mass, 2005). Table II shows that for every forecast except the June forecast for 67 °F, the expectation range for the forecast with a 3 day lead time was significantly greater than for the next-day forecast.

Table II. Mean expectation range by forecast

Parameter         1 day range   3 day range   Difference   Std. error   t-stat    df    Sig. (2-tailed)
June 67 °F           11.149        11.112       −0.037       0.374      −0.100    869       0.920
November 51 °F        9.592        10.854        1.262       0.487       2.589    467       0.010
December 32 °F        7.564         8.731        1.167       0.272       4.282    869       0.000 a
August 100 °F        10.386        12.443        2.057       0.414       4.973    953       0.000 a
January 20 °F         9.109        10.100        0.991       0.292       3.397    869       0.001 a
October 40 mph       21.737        25.014        3.277       0.752       4.359    869       0.000 a

a Significant at the Bonferroni corrected value of p = 0.008.

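The Bonferroni-corrected thresholds cited in the table footnotes follow from dividing the family-wise alpha by the number of comparisons; a minimal sketch:

```python
# Bonferroni correction: per-comparison significance threshold is the
# family-wise alpha divided by the number of comparisons in the family.
def bonferroni_threshold(alpha, n_comparisons):
    return alpha / n_comparisons

# Six lead-time comparisons at a family-wise alpha of 0.05 give the
# p = 0.008 cutoff used for Table II; ten comparisons give p = 0.005.
print(round(bonferroni_threshold(0.05, 6), 3))   # 0.008
```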
An understanding of the impact of lead time was also evident in precipitation forecast expectations (Figure 2(a)). A logistic regression analysis revealed that participants were almost five times more likely to think the next-day rain forecast would verify as compared to a rain forecast with a 3 day lead time (Exp(β) = 4.755, p = 0.001). The results for the 'no rain' forecast were similar. There was a threefold increase in the odds of participants thinking the next-day forecast would verify compared to a forecast with a 3 day lead time (Exp(β) = 3.311, p = 0.000).
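The Exp(β) values reported here are odds ratios. With a single binary predictor (1 day versus 3 day lead time), the exponentiated logistic-regression coefficient reduces to a ratio of odds, sketched below with invented counts:

```python
# Odds ratio from a 2 x 2 table of responses (counts are invented, not the
# survey data). With one binary predictor, Exp(beta) equals this ratio.
def odds_ratio(verify_short, fail_short, verify_long, fail_long):
    # Odds that the forecast is thought to verify at each lead time.
    odds_short = verify_short / fail_short
    odds_long = verify_long / fail_long
    return odds_short / odds_long

# E.g. 80 of 100 respondents expecting a next-day forecast to verify versus
# 50 of 100 at a 3 day lead time corresponds to an odds ratio of 4.
print(odds_ratio(80, 20, 50, 50))   # 4.0
```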

Figure 2. (a) Percent of participants who believed the rain forecast would be correct. (b) Percent of participants who believed the snow forecast would be correct

Lead time also had a significant impact on expectations for the snow forecast (Figure 2(b)). Again there was a threefold increase in the odds of participants thinking a next-day forecast for snow would verify compared to a snow forecast with a 3 day lead time (Exp(β) = 2.935, p = 0.000). The forecast for 'no snow' was the only exception: most participants thought this forecast would be correct (94.55%) regardless of lead time (Exp(β) = 1.363, p = 0.494). Thus, for every parameter tested, there is some evidence that participants expect more uncertainty at longer lead times.

Among the most interesting discoveries of this research was the expectation among respondents of systematic bias in the deterministic forecasts. Participants correctly anticipated the over-forecasting of warning forecasts that target high winds and lowland snow. As Table III indicates, participants expected a 40 mph wind speed forecast to verify significantly lower than the forecasted value. Indeed, there is an average 50% false alarm rate for high wind warnings (sustained 40 mph or gusts to 58 mph) while the average miss rate is much lower, at 3%. In addition, participants indicated that they thought that a 'snow' forecast was more likely to be wrong than a 'no snow' forecast, suggesting an over-forecasting bias. The odds of a participant indicating that it would snow when snow was predicted were more than six times lower than the odds of indicating it would not snow when no snow was predicted at a 1 day lead time (Exp(β) = 6.616, p = 0.000), and more than 26 times lower at a 3 day lead time (Exp(β) = 26.446, p = 0.000). Again, there is about a 50% false alarm rate for winter storm warnings indicating snow, while the average miss rate is much lower, at 4%. The false alarm and miss rates reported above are based on an analysis of warning forecasts at the Seattle Weather Forecasting Office between 1989 and 2009.
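The warning verification rates quoted above follow standard contingency-table definitions; the counts in this sketch are illustrative, not the Seattle office's actual records.

```python
# False alarm rate: fraction of issued warnings that did not verify.
def false_alarm_rate(hits, false_alarms):
    return false_alarms / (hits + false_alarms)

# Miss rate: fraction of observed events for which no warning was issued.
def miss_rate(hits, misses):
    return misses / (hits + misses)

# Illustrative counts reproducing the rates quoted in the text:
print(false_alarm_rate(50, 50))   # 0.5  (50% false alarm rate)
print(miss_rate(97, 3))           # 0.03 (3% miss rate)
```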

Table III. Asymmetry in range of values above ('as high as') compared to below ('as low as')

Parameter                     N     Mean difference   Std. error   t-stat     df    Sig. (2-tailed)   Forecast biased?
June 67 °F 1 day-high        442       −0.281           0.196       −1.432    441       0.153         No
June 67 °F 3 day-high        429       −0.669           0.212       −3.155    428       0.002 a       Yes, high bias
November 51 °F 1 day-high    250       −1.744           0.276       −6.326    249       0.000 a       Yes, high bias
November 51 °F 3 day-high    219       −2.224           0.307       −7.245    218       0.000 a       Yes, high bias
December 32 °F 1 day-low     429        0.534           0.164        3.256    428       0.001 a       Yes, low bias
December 32 °F 3 day-low     442        1.269           0.197        6.460    441       0.000 a       Yes, low bias
August 100 °F 1 day-high     456       −7.925           0.386      −20.512    455       0.000 a       Yes, high bias
August 100 °F 3 day-high     499      −11.285           0.562      −20.075    498       0.000 a       Yes, high bias
January 20 °F 1 day-low      442        4.240           0.240       17.645    441       0.000 a       Yes, low bias
January 20 °F 3 day-low      429        5.480           0.281       19.475    428       0.000 a       Yes, low bias
October 40 mph 1 day         429      −12.888           0.684      −18.854    428       0.000 a       Yes, high bias
October 40 mph 3 day         442      −16.072           0.625      −25.699    441       0.000 a       Yes, high bias

a Significant at the Bonferroni corrected value of p = 0.004.

However, many of these anticipated biases are not borne out by analyses of National Weather Service verification for this region (Baars and Mass, 2005). On average, participants expected the high temperature to be lower than the forecasted value, suggesting they thought these forecasts had a high bias. In contrast, they expected the low temperature to be higher than the forecasted value, suggesting that they thought this forecast had a low bias (Table III). These comparisons were statistically significant for every forecast tested with the exception of the June temperature forecast of 67 °F. Actual temperature forecasts for the Pacific Northwest of the United States have a slight low bias (about half a degree Fahrenheit) for both night time low temperature and daytime high temperature. Notice that in the case of daytime high temperature, this bias is exactly opposite to user expectations.

Participants' responses also indicated that they thought precipitation forecasts were biased. The odds of participants indicating that it would rain the next day when rain was predicted were almost three times greater than the odds of participants indicating that it would not rain when no rain was predicted (Exp(β) = 2.888, p = 0.019). A similar pattern was observed for a 3 day lead time, with odds almost twice as great for indicating that it would rain when rain was predicted as compared to indicating that it would not rain when no rain was predicted (Exp(β) = 1.978, p = 0.007). This suggests that users expect an under-forecasting bias for rain, because they think that 'no rain' forecasts are more likely to be wrong than 'rain' forecasts. In fact, next-day precipitation forecasts tend to be quite accurate and the error at longer lead times tends to be unbiased (Baars and Mass, 2005).

Participants indicated that they expected more uncertainty in single-value extreme forecasts than in forecasts within the normal range. Moreover, they expected extreme forecasts to have significantly greater bias than forecasts within normal ranges, but in the same direction (Table IV). For example, for daytime high temperatures, participants anticipated a high bias for both the normal and the extreme forecast; however, the difference between the forecast and participants' expectations is significantly greater for the extreme August forecast of 100 °F than for the normal June temperature of 67 °F (see rows 1 and 2 of Table IV). A similar, but reversed, pattern can be seen for the night time low temperatures (see rows 3 and 4 of Table IV). In general, participants' responses suggested that they thought that extreme forecasts would be too extreme when compared to the observation. In other words, they thought the observation would be closer to the normal values for that time of year than what was indicated in the forecast. In fact, there is greater error in extreme forecasts as compared to forecasts in general (Baars and Mass, 2005). However, this error tends to be in the opposite direction to that anticipated by participants. Instead of being too extreme, such forecasts are on average too near the normal values, that is, not sufficiently extreme.

Table IV. Comparison of temperature biases for extreme and normal temperature forecasts

                        Extreme bias   Normal bias   Difference   Std. error   t-stat     df    Sig. (2-tailed)
1 day daytime high        −7.925         −0.281        −7.645       0.437     −17.482     896       0.000 a
3 day daytime high       −11.285         −0.669       −10.616       0.637     −16.655     926       0.000 a
1 day night time low       4.240          0.534         3.706       0.293      12.669     869       0.000 a
3 day night time low       5.480          1.269         4.211       0.342      12.330     869       0.000 a

a Significant at the Bonferroni corrected value of p = 0.00125.

Including a specific decision goal further influenced uncertainty expectations. Although expectations were similar with respect to anticipated biases and increased expectation ranges for longer lead times, the ranges of expectations tended to be wider overall when a decision goal was specified than when it was not. As Table V shows, this difference was statistically significant in six of the 10 conditions. Notice that the same forecast value and month were specified in both conditions, suggesting that having a specific decision goal alone tended to expand the range of values participants expected to observe with a given forecast. Two of the forecasts for which the range was not wider in the decision condition were among the widest overall (Wind Speed 1 day and Wind Speed 3 day) suggesting that the effect of extreme values may have overpowered the effect of decision goal in these cases.

Table V. Mean expectation ranges for forecasts involving no decisions and decisions

                           No decision   Decision   Difference   Std. error   t-stat    df    Sig. (2-tailed)
June 67 °F 1 day high          9.81        12.19       2.380        0.569      4.182    440       0.000 a
June 67 °F 3 day high         10.35        11.73       1.381        0.474      2.911    427       0.004 a
December 32 °F 1 day low       7.38         7.72       0.342        0.357      0.960    427       0.338
December 32 °F 3 day low       8.08         9.24       1.159        0.412      2.813    440       0.005 a
August 100 °F 1 day high       9.78        10.95       1.173        0.559      2.097    454       0.037
August 100 °F 3 day high      11.23        13.66       2.435        0.593      4.108    497       0.000 a
January 20 °F 1 day low        8.06         9.92       1.867        0.369      5.060    440       0.000 a
January 20 °F 3 day low       10.07        10.13       0.059        0.452      0.130    427       0.896
October 40 mph 1 day          22.828       20.852     −1.976        1.068     −1.850    427       0.065
October 40 mph 3 day          25.057       24.980     −0.077        1.071     −0.072    440       0.943

a Significant at the Bonferroni corrected value of p = 0.005.

Participants' uncertainty expectations were also investigated in a more general way. They were asked to indicate what percentage of forecasts they thought would verify within certain ranges. On average, participants thought that the daytime high temperature would verify within an 8 °F range 89% of the time (Figure 3(a)), and that night time lows would verify within an 8 °F range 90% of the time (Figure 3(b)). However, the biases for high and low temperature observed in participants' responses to the 'specific' questions above were not evident when the question was asked this way: the percentages of responses falling above and below the 'as predicted' temperature were within 1% of each other in both cases. More uncertainty was anticipated for wind forecasts, which participants thought would verify within 8 mph only 78% of the time (Figure 3(c)). Also notice that these responses indicate an anticipated bias in wind speed forecasts. Participants expected that observed wind speeds would be lower than the forecasted value more often (41% of the time) than they expected them to be higher (30% of the time).

Figure 3. (a) Percentage of daytime high temperature forecasts (expressed as a number ‘out of 100’) that participants indicated would verify within the specified ranges. (b) Percentage of night time low temperature forecasts (expressed as a number ‘out of 100’) that participants indicated would verify within the specified ranges. (c) Percentage of high sustained wind forecasts (expressed as a number ‘out of 100’) that participants indicated would verify within the specified ranges

Similarly, with the 'out of 10' questions asked about precipitation, the expected bias in snow forecasts is evident but the expected bias in rain forecasts is not. Notice in Figure 4(a) and (b) that participants thought the 'no snow' forecast would be accurate 85% of the time (8.5 times out of 10) whereas they thought the snow forecast would verify only 59% of the time (5.9 times out of 10). A paired t-test revealed that this difference in anticipated accuracy was significant (t(218) = 16.236, p = 0.000). The fact that participants thought snow forecasts would fail to verify almost half of the time, whereas 'no snow' forecasts would be accurate most of the time, suggests that they thought snow was over-forecasted. However, there is no evidence of the rain forecast bias when the question is asked this way. People thought that both a 'no rain' and a 'rain' forecast (Figure 4(c) and (d)) would be approximately equally likely to verify, about 69% of the time (t(191) = −0.534, p = 0.594).
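The paired t statistic used for the snow versus no-snow comparison can be sketched in a few lines; the per-respondent accuracy estimates here are invented for illustration.

```python
import math
import statistics

# Minimal paired t-test: t = mean(d) / (sd(d) / sqrt(n)), with n - 1 df,
# where d is the per-respondent difference between the two estimates.
def paired_t(xs, ys):
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)   # sample standard deviation
    return mean_d / (sd_d / math.sqrt(n))

# Invented 'times out of 10' accuracy estimates for no-snow vs. snow:
t = paired_t([8.5, 8.0, 9.0], [5.9, 6.0, 5.5])
```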

Figure 4. (a) Percentage of no snow forecasts (expressed as a number ‘out of 10’) that participants indicated would verify. (b) Percentage of snow forecasts (expressed as a number ‘out of 10’) that participants indicated would verify. (c) Percentage of no rain forecasts (expressed as a number ‘out of 10’) that participants indicated would verify. (d) Percentage of rain forecasts (expressed as a number ‘out of 10’) that participants indicated would verify

This suggests that people's estimates of forecast uncertainty are sensitive to the way in which the question is asked. The unjustified biases in temperature and rain forecasts were only evident when participants were asked to indicate their expectations with respect to specific forecasts, whereas the expected wind speed and snow biases were evident both in the specific context and when participants were asked to make generalizations over multiple unspecified forecasts.

However, these results clearly demonstrate that people anticipate uncertainty in deterministic forecasts, regardless of how the question is asked. This suggests that non-expert end users are psychologically prepared to understand uncertainty forecasts. To explore the potential need for uncertainty information among such users, a final question was asked, designed to reveal whether participants had individualized probability thresholds for taking precautionary action that are not well served by deterministic forecasts.

Five kinds of weather forecasts for which it is known that many Pacific Northwesterners take precautionary action were surveyed: snowfall, extremely high and low temperatures, extreme amounts of rainfall and high wind speeds. The results appear in Table VI. Notice that people have a wide range of thresholds at which they would take action; there is no parameter for which more than 24% of people share the same threshold. It is clear from these data that this is a very personalized issue and that people would benefit from having such information made explicit so that they could apply the forecast to their own risk tolerance.

Table VI. Percentage of participants taking precautionary action at probability thresholds

                      10%    20%    30%    40%    50%    60%    70%    80%    90%   100%
100 °F temperature   10.0    3.6   14.4    8.4   18.0   12.0   18.0    9.2    2.4    4.0
20 °F temperature     4.7    5.2    8.9    9.4   14.6   13.5   20.3   14.1    3.6    5.7
Extreme rain          2.1    3.1    6.2    5.2   11.9   13.5   20.7   16.1    8.3   13.0
Snow                  2.4    1.2    6.0   11.6   16.0   11.6   22.8   18.0    4.0    6.4
Wind speed            1.4    1.8    7.8   10.5   16.4   12.8   23.7   20.1    1.8    3.7

4. Conclusion

These results suggest that people have the background knowledge necessary to understand explicit uncertainty forecasts. They were aware of the uncertainty inherent in deterministic forecasts as well as of the factors that tend to increase uncertainty, including lead time and deviations from climatology. It is also clear from these results that explicit uncertainty information would benefit users. The expectation ranges for deterministic forecasts among the respondents surveyed here were quite wide for many parameters, especially when the forecasts were applied to specific decisions. Sufficiently sharp uncertainty forecasts could narrow these expectations, providing more precise and useful information for decision-making. In addition, respondents expected a number of unjustified biases that might well reduce the value of deterministic forecasts. Uncertainty forecasts, in the form of calibrated predictive intervals for instance, may override these invalid bias expectations and give users a better sense of how much uncertainty to expect. Finally, threshold probabilities for taking precautionary action varied widely among respondents for every parameter tested here, suggesting that explicit forecast uncertainty would allow users to tailor the forecast to their own tolerance for risk.
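A calibrated predictive interval of the kind suggested here can be estimated empirically from an archive of past forecast errors: the central quantiles of the historical (observed minus forecast) errors are added to the current deterministic forecast. The sketch below illustrates the idea with hypothetical next-day high-temperature errors; it uses a simple order-statistic quantile rather than any particular operational calibration method.

```python
def predictive_interval(forecast, past_errors, coverage=0.8):
    """Empirical central predictive interval: offset the deterministic
    forecast by the lower and upper quantiles of historical errors."""
    errs = sorted(past_errors)
    lo_q = (1 - coverage) / 2          # e.g. 0.1 for an 80% interval
    hi_q = 1 - lo_q                    # e.g. 0.9
    lo = errs[int(lo_q * (len(errs) - 1))]
    hi = errs[int(hi_q * (len(errs) - 1))]
    return forecast + lo, forecast + hi

# Hypothetical archive of next-day high-temperature errors (°F, observed - forecast):
errors = [-5, -3, -2, -1, -1, 0, 0, 1, 2, 2, 3, 4]
print(predictive_interval(67, errors))  # (64, 69)
```

An interval such as 64-69 °F around a 67 °F forecast communicates both the expected value and its sharpness, which is precisely the information the wide, bias-laden expectation ranges reported here suggest users are currently supplying for themselves.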

It was also found that people's estimates of forecast uncertainty were sensitive to the way in which the question was asked. With the exception of wind speed and snow, for instance, the anticipated biases revealed in the ‘specific’ questions were not evident when people made uncertainty estimates over a group of forecasts in the ‘generalization’ questions. There is a possible psychological explanation for this. It may be that people base uncertainty estimates on different factors when considering a single forecast than when considering a group of forecasts. A similar effect is reported with confidence estimates, in which people are over-confident about the accuracy of a single answer but less so when estimating the proportion of a group of answers that will be correct (Gigerenzer et al., 1991). In the context of weather forecasts, the difference may be due to the fact that people are consciously aware of some bias expectations but not others. The issue of bias may have been brought to the fore by the ‘generalization’ questions, which asked people to estimate the proportion of observations above and below the deterministic forecast and thus required a deliberate choice about whether or not to place a larger proportion on one side or the other. Perhaps people are aware of anticipating bias for high winds and snow, exceptional forecasts that may be regarded sceptically and for which their expectations are justified, so they indicate that they would expect greater proportions of observations with lower winds and no snow. However, people may not be aware of anticipating bias for rain or for temperatures within the normal range, leading them to estimate approximately equal proportions above and below the deterministic forecast when making this conscious choice. Nonetheless, such biases may influence the values participants expect to observe on a single occasion: a particular kind of error may be anticipated for an individual forecast even though the person does not consciously believe the forecast to be biased. When such choices are averaged over many cases and respondents, the bias is revealed. While these explanations are merely speculative at this point, future research may provide a more complete account of these effects.

Sensitivity to the form of the question was also revealed in the fact that participants anticipated wider ranges of values when they were assigned a specific decision goal. It is important to note that both the forecast values and the response mode were identical in the two question types, and yet the majority of estimates had wider ranges for the questions posed in the context of a specific decision goal. This may be due to a more general psychological effect in which the interpretation of uncertainty is influenced by the outcome it describes (Weber, 1994). The same probability (e.g. ‘30%’) applied to a more serious side effect (cancer) is considered greater than when applied to a less serious side effect (headache). The theoretical explanation for this inconsistency is that it protects the decision maker from the more serious decision error; the consequences of underestimating cancer are greater than the consequences of underestimating a headache. In the study reported here, uncertainty ranges may have been widened in the context of a decision goal for a similar reason, to account for a greater range of outcomes when consequences are involved.

It is important to note that some of the anticipated biases uncovered here were not valid: they did not arise directly from users' experience with the forecast and the ensuing weather. There are probably a variety of reasons for these expectations. It may be that people tend to remember some forecast errors better than others, and this, in turn, affects the estimated frequency. This is referred to as the availability heuristic (Tversky and Kahneman, 1973). For instance, perhaps when the daytime high temperature is lower than predicted it has a bigger impact on people's plans than when it is higher. This creates a memorable event that is easier to access later. When estimating forecast errors, such events may be recalled disproportionately often, inflating the subjective estimate of their frequency. In other words, some forecast errors may be more available to memory, causing them to seem more frequent.

However, this does not necessarily explain the fact that people expected extreme forecasts to verify more moderately than was predicted. This may have another explanation. Perhaps people have an implicit understanding of the notion of regression to the mean. Although some psychological evidence suggests that people tend to be insensitive to this principle, seeking causal explanations instead (Kahneman and Tversky, 1973), for weather forecasts the principle may make more sense. People may be aware of climatological norms and expect forecast error to be biased toward these values. This would explain many of the biases reported here: the extreme high temperatures were expected to verify lower and the extreme low temperatures higher. A bias toward climatological norms also explains the expectations for precipitation forecasts. Participants, all residents of the rainy Pacific Northwest, expected the rain forecast to verify more often than the ‘no rain’ forecast.

Although any explanation of these results is largely speculative at this point, the important issue is that these anticipated biases may be affecting the usefulness of existing weather forecasts. Despite the fact that these bias expectations are in some cases unjustified, they could lead to adjustment of forecasts that could in turn have serious negative consequences for users. This is especially problematic for extreme forecasts: participants systematically discounted all of these forecasts, potentially reducing the perceived urgency for precautionary action when it is most important. The bottom line is that when people are forced to estimate forecast uncertainty on their own, they may not be getting maximum benefit from the forecasts they are currently receiving. Calibrated uncertainty forecasts may overcome these unjustified expectations, narrow the range of expectations and provide people with information that can be targeted to individual risk tolerance.

Appendix A

  • 1.
    On a day in June, you notice that the predicted daytime high temperature for the next day is 67 °F.
    What do you think the daytime high temperature will be on the next day? _____ °F
    I would not be surprised if the daytime high temperature on the next day was as high as _____ °F.
    I would not be surprised if the daytime high temperature on the next day was as low as _____ °F.
  • 2.
    On a Wednesday in June, you notice that the predicted daytime high temperature for Saturday that week is 67 °F.
    What do you think the daytime high temperature will be on Saturday? _____ °F
    I would not be surprised if the daytime high temperature on Saturday was as high as _____ °F.
    I would not be surprised if the daytime high temperature on Saturday was as low as _____ °F.
  • 3.
    Imagine that it is June and you have an outdoor party planned for the next day. A daytime high temperature of 67 °F and clear skies are predicted for that day. Assume that you think that 67 °F is too cool for an outdoor party.
    Would you decide to plan for an indoor party instead? (Check one) Yes_____ No_____
    What do you think the daytime high temperature will be on the next day? _____ °F
    I would not be surprised if the daytime high temperature on the next day was as high as _____ °F.
    I would not be surprised if the daytime high temperature on the next day was as low as _____ °F.
  • 4.
    Imagine that you have an outdoor party planned for a Saturday in June. On the previous Wednesday you notice that a daytime high temperature of 67 °F and clear skies are predicted for Saturday. Assume that you think that 67 °F is too cool for an outdoor party.
    Would you decide to plan for an indoor party instead? (Check one) Yes_____ No_____
    What do you think the daytime high temperature will be on Saturday? _____ °F
    I would not be surprised if the daytime high temperature on Saturday was as high as _____ °F.
    I would not be surprised if the daytime high temperature on Saturday was as low as _____ °F.
  • 5.
    On a day in November, you notice that the predicted daytime high temperature for the next day is 51 °F.
    What do you think the daytime high temperature will be on the next day? _____ °F
    I would not be surprised if the daytime high temperature on the next day was as high as _____ °F.
    I would not be surprised if the daytime high temperature on the next day was as low as _____ °F.
  • 6.
    On a Wednesday in November, you notice that the predicted daytime high temperature for Saturday that week is 51 °F.
    What do you think the daytime high temperature will be on Saturday? _____ °F
    I would not be surprised if the daytime high temperature on Saturday was as high as _____ °F.
    I would not be surprised if the daytime high temperature on Saturday was as low as _____ °F.
  • 7.
    On a day in August, you notice that the predicted daytime high temperature for the next day is 100 °F.
    What do you think the daytime high temperature will be on the next day? _____ °F
    I would not be surprised if the daytime high temperature on the next day was as high as _____ °F.
    I would not be surprised if the daytime high temperature on the next day was as low as _____ °F.
  • 8.
    On a Wednesday in August, you notice that the predicted daytime high temperature for Saturday that week is 100 °F.
    What do you think the daytime high temperature will be on Saturday? _____ °F
    I would not be surprised if the daytime high temperature on Saturday was as high as _____ °F.
    I would not be surprised if the daytime high temperature on Saturday was as low as _____ °F.
  • 9.
    Imagine it is August and you have planned to take an elderly friend for a walk the next day. A daytime high temperature of 100 °F is predicted for that day. You know this could be dangerous for your elderly friend.
    Would you alter your plans? (Check one) Yes_____ No_____
    What do you think the daytime high temperature will be on the next day? _____ °F
    I would not be surprised if the daytime high temperature the next day was as high as _____ °F.
    I would not be surprised if the daytime high temperature the next day was as low as _____ °F.
  • 10.
    Imagine that you have planned to take an elderly friend for a walk on a Saturday in August. On the previous Wednesday you notice that a daytime high temperature of 100 °F is predicted for Saturday. You know this could be dangerous for your elderly friend.
    Would you alter your plans? (Check one) Yes_____ No_____
    What do you think the daytime high temperature will be on Saturday? _____ °F
    I would not be surprised if the daytime high temperature on Saturday was as high as _____ °F.
    I would not be surprised if the daytime high temperature on Saturday was as low as _____ °F.
  • 11.
    Imagine that it is August and you have a hike planned for the next day. A daytime high temperature of 100 °F is predicted for the next day. Assume that you think 100 °F is too hot for a hike.
    Would you alter your plans? (Check one) Yes_____ No_____
    What do you think the daytime high temperature will be on the next day? _____ °F
    I would not be surprised if the daytime high temperature on the next day was as high as _____ °F.
    I would not be surprised if the daytime high temperature on the next day was as low as _____ °F.
  • 12.
    Imagine that you have planned to take a hike on a Saturday in August. On the previous Wednesday you notice that a daytime high temperature of 100 °F is predicted for Saturday. Assume that you think 100 °F is too hot for a hike.
    Would you alter your plans? (Check one) Yes_____ No_____
    What do you think the daytime high temperature will be on Saturday? _____ °F
    I would not be surprised if the daytime high temperature on Saturday was as high as _____ °F.
    I would not be surprised if the daytime high temperature on Saturday was as low as _____ °F.
  • 13.
    Assume that every day you look at the daytime high temperature forecast for the next day. Out of 100 consecutive high temperature forecasts (make sure your answers to the following sum to 100) …
    How many times do you think the actual daytime high temperature will be what was predicted _____, 1–2 degrees higher _____, 1–2 degrees lower _____, 3–4 degrees higher _____, 3–4 degrees lower _____, 5 or more degrees higher _____, 5 or more degrees lower _____ ?
  • 14.
    If you knew that temperature was expected to reach 100 °F tomorrow (possible health hazard for vulnerable groups), would it help you to know what time the temperature was expected to reach 100 °F? In other words would it help you to plan in some way if you knew when the temperature would reach 100 °F? (Check one)
    Yes_____ No_____
  • 15.
    If you knew that very high temperatures (100 °F +) were predicted (possible health hazard for vulnerable groups)…
    For which of the following forecasts would you begin preparation (select one):
    Pull down list where 10% chance of 100 °F+ temperatures to 100% chance of 100 °F+ temperatures presented in increments of 10% chance
  • 16.
    On a day in December, you notice that the predicted night time low temperature for the next night is 32 °F.
    What do you think the night time low temperature will be on the next night? _____ °F
    I would not be surprised if the night time low temperature on the next night was as high as _____ °F
    I would not be surprised if the night time low temperature on the next night was as low as _____ °F
  • 17.
    On a Wednesday in December, you notice that the predicted night time low temperature for Saturday night that week is 32 °F.
    What do you think the night time low temperature will be on Saturday night? _____ °F
    I would not be surprised if the night time low temperature on Saturday night was as high as _____ °F.
    I would not be surprised if the night time low temperature on Saturday night was as low as _____ °F.
  • 18.
    Imagine that it is December and you have a favourite potted plant outside that would be damaged by freezing temperatures. A nighttime low temperature of 32 °F (freezing) is predicted for the next night.
    Would you bring the plant inside? (Check one) Yes_____ No_____
    What do you think the nighttime low temperature will be on the next night? _____ °F
    I would not be surprised if the nighttime low temperature on the next night was as high as _____ °F.
    I would not be surprised if the nighttime low temperature on the next night was as low as _____ °F.
  • 19.
    Imagine that you have a favourite potted plant outside that would be damaged by freezing temperature. You are going away for a weekend in December. On the previous Wednesday you notice that a nighttime low temperature of 32 °F (freezing) is predicted for Saturday.
    Would you bring the plant inside? (Check one) Yes_____ No_____
    What do you think the nighttime low temperature will be on Saturday night? _____ °F
    I would not be surprised if the nighttime low temperature on Saturday night was as high as _____ °F.
    I would not be surprised if the nighttime low temperature on Saturday night was as low as _____ °F.
  • 20.
    On a day in January, you notice that the predicted nighttime low temperature for the next night is 20 °F.
    What do you think the nighttime low temperature will be on the next night? _____ °F
    I would not be surprised if the nighttime low temperature the next night was as high as _____ °F.
    I would not be surprised if the nighttime low temperature the next night was as low as _____ °F.
  • 21.
    On a Wednesday in January, you notice that the predicted nighttime low temperature for Saturday that week is 20 °F.
    What do you think the nighttime low temperature will be on Saturday? _____ °F
    I would not be surprised if the nighttime low temperature on Saturday was as high as _____ °F.
    I would not be surprised if the nighttime low temperature on Saturday was as low as _____ °F.
  • 22.
    Imagine that it is January and a nighttime low temperature of 20 °F is predicted for the next night. Assume that you live in a house where your water pipes may freeze if temperature is that low.
    Assuming you had not done so already, would you do something to prevent frozen water pipes (e.g. leave dripping faucets, insulate outdoor faucets) (Check one) Yes_____ No_____
    What do you think the nighttime low temperature will be on the next night? _____ °F
    I would not be surprised if the nighttime low temperature on the next night was as high as _____ °F.
    I would not be surprised if the nighttime low temperature on the next night was as low as _____ °F.
  • 23.
    Imagine that it is a Wednesday in January and a nighttime low temperature of 20 °F is predicted for Saturday. Assume that you live in a house where your water pipes may freeze if temperature is that low.
    Assuming you had not done so already, would you do something to prevent frozen water pipes (e.g. leave dripping faucets, insulate outdoor faucets) (Check one) Yes_____ No_____
    What do you think the nighttime low temperature will be on Saturday? _____ °F
    I would not be surprised if the nighttime low temperature on Saturday was as high as _____ °F.
    I would not be surprised if the nighttime low temperature on Saturday was as low as _____ °F.
  • 24.
    Assume that every day you look at the nighttime low temperature forecast for the next night. Out of 100 consecutive nighttime low temperature forecasts (make sure your answers to the following sum to 100)…
    How many times do you think the actual nighttime low temperature will be what was predicted _____, 1–2 degrees higher _____, 1–2 degrees lower _____, 3–4 degrees higher _____, 3–4 degrees lower _____, 5 or more degrees higher _____, 5 or more degrees lower _____ ?
  • 25.
    If you knew that freezing temperatures were predicted for tomorrow (possible icy road conditions, frozen pipes), would it help you to know what time they were expected? In other words would it help you to plan in some way if you knew when the temperature would drop below freezing? (Check one)
    Yes_____ No_____
  • 26.
    If you knew that freezing temperatures were predicted (possible icy road conditions, frozen pipes)…
    For which of the following forecasts would you begin preparation (select one):
    Pull down list where 10% chance of freezing temperatures to 100% chance of freezing temperatures presented in increments of 10% chance
  • 27.
    On a day in October, you notice that 40 mph sustained winds are predicted for the next day.
    What do you think the sustained wind speed will be on the next day? _____ mph
    I would not be surprised if the sustained wind speed on the next day was as high as _____ mph.
    I would not be surprised if sustained wind speed on the next day was as low as _____ mph.
  • 28.
    On a Wednesday in October you notice that 40 mph sustained wind speeds are predicted for Saturday that week.
    What do you think the sustained wind speed will be on Saturday? _____ mph
    I would not be surprised if the sustained wind speed on Saturday was as high as _____ mph.
    I would not be surprised if sustained wind speed on Saturday was as low as _____ mph.
  • 29.
    Imagine that it is October and you have plans that involve driving across the Lake Washington floating bridge the next day. You notice that 40 mph sustained wind speed is predicted for the next day that could lead to closing the bridge.
    Would you alter your plans? (Check one) Yes_____ No_____
    Would you think about an alternative route? (Check one) Yes_____ No_____
    What do you think the sustained wind speed will be on the next day? _____ mph
    I would not be surprised if the sustained wind speed on the next day was as high as _____ mph.
    I would not be surprised if the sustained wind speed on the next day was as low as _____ mph.
  • 30.
    Imagine that you have plans that involve driving across the Lake Washington floating bridge on a Saturday in October. On the previous Wednesday you notice that 40 mph sustained wind speed is predicted for Saturday that could lead to closing the bridge.
    Would you alter your plans? (Check one) Yes_____ No_____
    Would you think about an alternative route? (Check one) Yes_____ No_____
    What do you think the sustained wind speed will be on Saturday? _____ mph
    I would not be surprised if the sustained wind speed on Saturday was as high as _____ mph.
    I would not be surprised if the sustained wind speed on Saturday was as low as _____ mph.
  • 31.
    Assume that every day you look at the sustained wind speed forecast for the next day. Out of 100 consecutive wind speed forecasts (make sure your answers to the following sum to 100) …
    How many times do you think the actual sustained wind speed will be what was predicted _____, 1–2 mph higher _____, 1–2 mph lower _____, 3–4 mph higher _____, 3–4 mph lower _____, 5 or more mph higher _____, 5 or more mph lower _____ ?
  • 32.
    If you knew that 40 mph sustained winds were predicted for tomorrow (possible power outages, downed trees, bridge closures), would it help you to know what time the high winds were expected? In other words would it help you to plan in some way if you knew when winds were thought to begin? (Check one)
    Yes_____ No_____
  • 33.
    If you knew that 40 mph sustained winds were predicted (possible power outages, downed trees, bridge closures, etc.)…
    For which of the following forecasts would you begin preparation (select one):
    Pull down list where 10% chance of 40 mph sustained winds to 100% chance of 40 mph sustained winds presented in increments of 10% chance
  • 34.
    On a day in October, you notice that rain is predicted for the next day.
    Do you think it will rain the next day? (Check one) Yes_____ No_____
  • 35.
    On a day in October, you notice that NO rain is predicted for the next day.
    Do you think it will rain the next day? (Check one) Yes_____ No_____
  • 36.
    On a Wednesday in October, you notice that rain is predicted for Saturday that week.
    Do you think it will rain on Saturday? (Check one) Yes_____ No_____
  • 37.
    On a Wednesday in October, you notice that NO rain is predicted for Saturday that week.
    Do you think it will rain on Saturday? (Check one) Yes_____ No_____
  • 38.
    Imagine that it is June and you have an outdoor picnic planned for the next day. You notice that rain is predicted for the next day. You do not want to picnic in the rain.
    Would you cancel the picnic? (Check one) Yes_____ No_____
    Would you think about an alternative plan? (Check one) Yes_____ No_____
  • 39.
    Imagine that you have an outdoor picnic planned for a Saturday in June. On the previous Wednesday you notice that rain is predicted for Saturday that week. You do not want to picnic in the rain.
    Would you cancel the picnic? (Check one) Yes_____ No_____
    Would you think about an alternative plan? (Check one) Yes_____ No_____
  • 40.
    Assume that every day in the spring, you look at the precipitation forecast for the next day. Out of 10 forecasts for NO rain (make sure your answers to the following sum to 10) …
    How many of them do you think will be correct (NOT rain when NO rain is predicted)? _____
    How many of them do you think will be wrong (rain when NO rain has been predicted)? _____
    Out of 10 forecasts for RAIN (make sure your answers to the following sum to 10) …
    How many of them do you think will be correct (rain when rain is predicted)? _____
    How many of them do you think will be wrong (NOT rain when rain has been predicted)? _____
  • 41.
    If you knew that rain was predicted for the following day, would it help you to know what time the rain was expected? In other words would it help you to plan in some way if you knew approximately when rainfall was thought to begin? (check one)
    Yes_____ No_____
  • 42.
    If you knew that extreme amounts of rain were predicted (possible urban flooding) for tomorrow, would it help you to know what time it was expected? In other words would it help you to plan in some way if you knew when extreme rainfall was expected to begin? (Check one)
    Yes_____ No_____
  • 43.
    If you knew that extreme amounts of rain were predicted (possible urban flooding)…
    For which of the following forecasts would you begin preparation (select one):
    Pull down list where 10% chance of extreme rain to 100% chance of extreme rain presented in increments of 10% chance
  • 44.
    On a day in January, you notice that snow in the lowlands is predicted for the next day.
    Do you think it will snow the next day? (Check one) Yes_____ No_____
  • 45.
    On a day in January, you notice that NO snow in the lowlands is predicted for the next day.
    Do you think it will snow the next day? (Check one) Yes_____ No_____
  • 46.
    On a Wednesday in January, you notice that snow in the lowlands is predicted for Saturday that week.
    Do you think it will snow on Saturday? (Check one) Yes_____ No_____
  • 47.
    On a Wednesday in January, you notice that NO snow in the lowlands is predicted for Saturday that week.
    Do you think it will snow on Saturday? (Check one) Yes_____ No_____
  • 48.
    Imagine that it is January and you have a long drive planned for the next day. Snow is predicted in the lowlands for the next day. Assume that you do not want to drive far in the snow.
    Would you cancel the drive? (Check one) Yes_____ No_____
    Would you think about an alternative plan? (Check one) Yes_____ No_____
  • 49.
    Imagine that you have a long drive planned for a Saturday in January. On the previous Wednesday you notice that snow is predicted in the lowlands for Saturday that week. Assume that you do not want to drive far in the snow.
    Would you cancel the drive? (Check one) Yes_____ No_____
    Would you think about an alternative plan? (Check one) Yes_____ No_____
  • 50.
    Assume that every day in a cold season, you look at the snow forecast for the next day. Out of 10 forecasts for NO snow in the lowlands (make sure your answers to the following sum to 10)
    How many of them do you think will be correct (NOT snow when NO snow is predicted)? _____
    How many of them do you think will be wrong (snow when NO snow has been predicted)? _____
    Out of 10 forecasts for SNOW in the lowlands (make sure your answers to the following sum to 10)
    How many of them do you think will be correct (snow when snow is predicted)? _____
    How many of them do you think will be wrong (NOT snow when snow has been predicted)? _____
  • 51.
    If you knew that snow was predicted in the lowlands for the following day, would it help you to know what time the snow was expected? In other words, would it help you to plan in some way if you knew approximately when snowfall was thought to begin and end? (Check one)
    Yes_____ No_____
  • 52.
    If you knew that snow was predicted in the lowlands (causing possible traffic problems and closures)…
    For which of the following forecasts would you begin preparation (select one):
    Pull down list where 10% chance of snow in the lowlands to 100% chance of snow in the lowlands presented in increments of 10% chance

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. ATM 0724721.

References

  • Baars JA, Mass CF. 2005. Performance of National Weather Service forecasts compared to operational consensus and weighted model output statistics. Weather and Forecasting 20: 1034–1047.
  • Gigerenzer G, Hoffrage U, Kleinbölting H. 1991. Probabilistic mental models: a Brunswikian theory of confidence. Psychological Review 98: 506–528.
  • Joslyn S, Pak K, Jones D, Pyles J, Hunt E. 2007. The effect of probabilistic information on threshold forecasts. Weather and Forecasting 22: 804–812.
  • Kahneman D, Tversky A. 1973. On the psychology of prediction. Psychological Review 80: 237–251.
  • Lazo JK, Morss RE, Demuth J. 2009. 300 billion served: sources, perceptions, uses, and values of weather forecasts. Bulletin of the American Meteorological Society 90: 785–798.
  • Morss RE, Demuth J, Lazo JK. 2008. Communicating uncertainty in weather forecasts: a survey of the U.S. public. Weather and Forecasting 23: 974–991.
  • Murphy AH, Brown BG. 1984. A comparative evaluation of objective and subjective weather forecasts in the United States. Journal of Forecasting 3: 369–393.
  • Nadav-Greenberg L, Joslyn S. 2009. Uncertainty forecasts improve decision-making among non-experts. Journal of Cognitive Engineering and Decision Making 2: 24–47.
  • National Research Council (NRC). 2006. Completing the Forecast: Characterizing and Communicating Uncertainty for Better Decisions Using Weather and Climate Forecasts. National Academy Press: Washington, DC.
  • Roulston MS, Bolton GE, Kleit AN, Sears-Collins AL. 2006. A laboratory study of the benefits of including uncertainty information in weather forecasts. Weather and Forecasting 21: 116–122.
  • Tversky A, Kahneman D. 1973. Availability: a heuristic for judging frequency and probability. Cognitive Psychology 5: 207–232.
  • U.S. Census Bureau. 2006. American Community Survey. Available online at http://www.census.gov/acs/www/ (accessed 2007).
  • Weber EU. 1994. From subjective probabilities to decision weights: the effect of asymmetric loss functions. Psychological Bulletin 115: 228–242.