Keywords:

  • risk communication;
  • flood risk management;
  • uncertainty;
  • civil protection;
  • EFAS;
  • Europe

Contents

  • Abstract
  • 1. Introduction
  • 2. Research design, data and methods
  • 3. Understanding ensembles and their operational value
  • 4. Communicating EPS
  • 5. Institutional politics of ensemble flood forecasting
  • 6. Summary and conclusions
  • Acknowledgements
  • References

Abstract

Following trends in operational weather forecasting, where ensemble prediction systems (EPS) are now increasingly the norm, flood forecasters are beginning to experiment with using similar ensemble methods. Most of the effort to date has focused on the substantial technical challenges of developing coupled rainfall-runoff systems to represent the full cascade of uncertainties involved in predicting future flooding. As a consequence, much less attention has been given to the communication and eventual use of EPS flood forecasts. Drawing on interviews and other research with operational flood forecasters from across Europe, this paper highlights a number of challenges to communicating and using ensemble flood forecasts operationally. It is shown that operational flood forecasters understand the skill, operational limitations, and informational value of EPS products in a variety of different and sometimes contradictory ways. Despite the efforts of forecasting agencies to design effective ways to communicate EPS forecasts to non-experts, operational flood forecasters were often sceptical about the ability of forecast recipients to understand or use them appropriately. It is argued that better training and closer contacts between operational flood forecasters and EPS system designers can help ensure that the uncertainty represented by EPS forecasts is communicated in ways that are appropriate and meaningful for their intended consumers, but some fundamental political and institutional challenges to using ensembles, such as differing attitudes to false alarms and to responsibility for the management of blame in the event of poor or mistaken forecasts, are also highlighted. Copyright © 2010 Royal Meteorological Society


1. Introduction

Certain of his faith in an almighty God, Noah had little trouble deciding how to respond to his flood warning. Hearing God promise to ‘bring the flood of waters upon the earth’ (Genesis 6:17), Noah built an ark and was saved while all around him perished. By contrast, the flood forecasts issued by mere mortals do not benefit from divine omniscience, and so their recipients must make difficult decisions about how much to rely on them. This, in turn, typically involves judgments about the skill and trustworthiness of the forecaster and about the costs that would be incurred by acting on the forecast, set against the losses that would follow if the forecast were ignored but proved correct.

Recent advances in the application of ensemble prediction systems (EPS) to flood forecasting promise to improve the basis for making these judgments. First operationally developed in the early 1990s to cope with the inevitable uncertainties in numerical weather prediction about initial conditions and the parameterization of complex, often stochastic, atmospheric processes (Molteni et al., 1996), EPS is now well established in operational weather forecasting (Park et al., 2008). Building on that success, there are a number of ongoing efforts across Europe, and beyond (Thielen et al., 2008), to use EPS to drive flood forecasting systems (Table I; Cloke et al., 2009). In Britain, the Pitt Review into the 2007 floods has urged the UK Met Office and Environment Agency to move quickly to develop an ensemble flood forecasting capacity so as to deliver ‘a step change in the quality of flood warnings’ (Pitt, 2008, p. vii).

Table I. From research to operational implementation: six examples of flood forecasting systems in Europe now using ECMWF EPS weather inputs to generate ensemble flood forecasts

  EU — European Flood Alert System (EFAS), Joint Research Centre
    Hydrological EPS inputs: ECMWF-EPS; COSMO-LEPS
    System description: EFAS with Lisflood hydrological model
    Pre-processing: height correction of temperature; precipitation correction using ECMWF reforecasts (in research phase)
    Post-processing: ARMAS (in research phase)
    Started EPS research and development: 1999
    Operational status: yes, ‘pre-operational’ since 2005

  Hungary — Environment and Water Management Research Institute (VITUKI)
    Hydrological EPS inputs: ECMWF-EPS; NWS-NCEP
    System description: national hydrological forecasting system (NHFS/OVSZ) with GAPI-type conceptual hydrological modules
    Pre-processing: global kriging utilizing and downscaling regional elevation dependence of meteorological elements
    Started EPS research and development: 2000
    Operational status: yes, pre-operational since 2006

  Sweden — Swedish Meteorological and Hydrological Institute (SMHI)
    Hydrological EPS inputs: ECMWF-EPS
    System description: Hydrologiska Byråns Vattenbalansavdelning Sweden (HBVSv) with HBV hydrological model
    Pre-processing: statistical downscaling according to sub-basins
    Started EPS research and development: 2001/2002
    Operational status: yes, since 2004

  Finland — Finnish Hydrological Service (SYKE)
    Hydrological EPS inputs: ECMWF-EPS
    System description: watershed simulation and forecasting system (WSFS) with hydrological model of conceptual HBV style
    Pre-processing: height correction on temperature and precipitation
    Post-processing: Gaussian adjustment for real-time hydrological maps
    Started EPS research and development: 2000
    Operational status: yes, since 2000 for 10 day EPS

  The Netherlands — Rijkswaterstaat
    Hydrological EPS inputs: ECMWF-EPS; COSMO-LEPS
    System description: Flood Early Warning System (FEWS NL) with hydrological model HBV and routing model SOBEK
    Started EPS research and development: 1999
    Operational status: yes, since 2009

  France — SCHAPI (French Hydrometeorological and Flood Forecasting Service)
    Hydrological EPS inputs: ECMWF EPS; Arpege EPS
    System description: SAFRAN-ISBA-MODCOU (SIM) with land surface model ISBA and hydrogeological model MODCOU
    Pre-processing: statistical and dynamical downscaling
    Started EPS research and development: 2006
    Operational status: no, but in test phase since 2008

  Note: for further details about the individual forecast systems, see Cloke and Pappenberger (2009).

Such efforts are designed to deliver two practical advantages promised by EPS over comparable deterministic flood forecasting set-ups. First, given the current inability of even the finest resolution operational numerical weather prediction models to resolve convective rainfall effectively (Golding, 2009), EPS rainfall predictions often exhibit greater skill, particularly over the medium term of 3–15 days, than deterministic ones (Richardson, 2000; Buizza, 2008; Bartholmes et al., 2009). Whether directly incorporated as inputs to drive a fully coupled hydrological EPS or used more qualitatively as another source of information to inform the interpretation of deterministic flood forecasts, the hope is that making greater use of EPS products will increase the skill and time horizon of flood forecasts. Second, generating a suite of forecasts, rather than a single deterministic prediction, also provides a way to quantify, and thereby communicate, the uncertainty about them. Accordingly, EPS promoters insist that probabilistic forecasts are more valuable than deterministic ones because they tell recipients not only what is most likely to happen but also the probability of its occurrence, as well as giving an indication of the potential for extreme events. In turn, this additional information can enable sophisticated forecast users to manage their exposure to risk through hedging or other behaviour that optimizes their cost-loss functions (e.g. Krzysztofowicz, 2001; Palmer, 2002; Buizza, 2008).
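The cost-loss reasoning invoked here can be sketched numerically: in the standard textbook model, a user should pay the cost C of protective action whenever the forecast probability p of the damaging event exceeds the cost-loss ratio C/L, where L is the loss avoided. The figures below are invented for illustration and are not drawn from the paper:

```python
def expected_expense(p, cost, loss, act):
    """Expected expense of protecting (pay cost for certain)
    versus not protecting (risk losing `loss` with probability p)."""
    return cost if act else p * loss

def should_act(p, cost, loss):
    """Act whenever the expected loss of inaction exceeds the cost
    of action, i.e. whenever p > C/L."""
    return expected_expense(p, cost, loss, act=False) > \
           expected_expense(p, cost, loss, act=True)

# Hypothetical numbers: sandbagging costs 10, flood damage would be 100,
# so acting pays off once the forecast probability exceeds C/L = 0.1.
print(should_act(0.05, 10, 100))  # False
print(should_act(0.30, 10, 100))  # True
```

A deterministic forecast offers no such p, which is why EPS promoters argue that probabilistic forecasts let sophisticated users tune their response to their own cost-loss ratio.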

To deliver on those promises, most of the research to date has focused on the substantial technical challenges of developing coupled rainfall-runoff systems and of representing the full cascade of uncertainties involved in predicting future flooding (for a recent review see Cloke and Pappenberger, 2009). Much less attention has been given to the challenges involved in communicating, interpreting and making operational use of novel forecast products based on EPS (but see Faulkner et al., 2007). Recent social science research suggests that the promises of innovative decision support technologies, such as EPS, are not always realized in operational practice. Rayner et al. (2005) and Morss et al. (2005) have both documented significant cultural and institutional constraints on water resource managers making the best use of innovative decision-support technologies. Other research has highlighted the institutional factors shaping the communication and reception of flood forecasts by civil protection officials and the general public (Pielke, 1999; Demeritt et al., 2007; Parker et al., 2009; Nobert et al., 2010).

In this paper the understanding, communication and use of novel EPS products in operational flood forecasting in Europe are explored. Sophisticated forecast products are of little value if they are misunderstood, used inappropriately, or simply ignored by their recipients. Drawing on interviews and other research with operational flood forecasters from across Europe, this paper highlights a number of challenges to applying new EPS products in operational flood forecasting. While good training in the communication and use of EPS is clearly essential, some fundamental political and institutional challenges to using EPS, such as differing attitudes to false alarms and to the management of blame in the event of poor or mistaken forecasts, are also highlighted. The scientific uncertainties about whether or not a flood will occur comprise only part of the wider ‘decision’ uncertainties faced by those charged with flood risk management, who must also consider questions about how their forecasts will subsequently be interpreted. By making those first order scientific uncertainties more explicit, ensemble forecasts can sometimes complicate, rather than clarify, the second order ‘decision’ uncertainties they are supposed to inform.

2. Research design, data and methods

This paper draws from a wider set of more than 50 interviews conducted with various forecasters, civil protection authorities (CPAs), and policy makers in 17 countries from across Europe, as well as on participant observation conducted during site visits to operational flood forecasting centres and at EPS training workshops, research meetings, and conferences. The data were collected as part of a wider project, funded by the UK Economic and Social Research Council, focused on the European Flood Alert System (EFAS) and its role in the ‘Europeanization’, or European-level reorganization, of institutional responsibility for flooding and its management.

From that broader study, the focus here is on operational forecasters and their reflections on the challenges of understanding, communicating and using EPS products in operational flood risk management. The interview sample focused largely on those concerned with flooding on the Rhine and the Danube, though interviews with forecasters working in France, Sweden and the United Kingdom, as well as those working at the EU level, are also drawn on. By disciplinary background, informants were largely hydrologists and worked in a wide range of institutional settings, from unified national hydro-meteorological services to regional water authorities, reflecting the institutional diversity of flood forecasting and management across Europe (Lumbroso, 2007). Informants had a range of experience working with EPS products. Some forecasters, particularly those from eastern Europe, had little experience of or access to EPS products. Most, however, received flood alerts from EFAS driven by its processing of the 51 member ECMWF weather forecasts (Thielen et al., 2009), though a few worked for organizations that had not signed the memorandum of understanding with EFAS to receive its alerts. In addition to EFAS, some forecasters, particularly those in western European states, also received other EPS products, including rainfall and other EPS forecasts from ECMWF or various national meteorological services, and were, in some cases, also working to develop their own in-house hydrological EPS capacity.

Interviews followed a semi-structured protocol, in which prompts were used to focus discussion on key topics of interest: i.e. informants' own forecasting systems and warning protocols; their customers and relationships with CPAs; their attitudes to uncertainty and error; and their understandings of and experience with EPS. In contrast to more structured survey methods, in which informants must respond to closed questions and conceptual framings devised, in advance, by the researcher, semi-structured interview protocols provide the flexibility for interviewees to use their own words and to steer the conversation to issues of interest to them (Hoggart et al., 2002). This is important when the aim is to access and understand the understandings of interview informants themselves. The interview findings were also shaped by the fact that the interviews were conducted by two social scientists, Nobert and Demeritt, rather than by recognized flood forecasting experts. Although the interviewers possess the ‘interactional’ expertise (Collins and Evans, 2007) to ask intelligent questions and probe further in response to interesting answers, their status as comparative novices allowed them to approach informants, ask for lengthy explanations, and avoid appearing to judge the responses received. This made it easier for informants to express contentious views about EPS and other members of the international flood forecasting community in ways that might not have happened if the interviewers had been perceived to be members of that expert community or representatives of particular forecasting centres, such as ECMWF or EFAS.

Apart from a few interviews conducted by Nobert in French (and then translated into English), discussions were conducted in English, which was for most interviewees a second language spoken with varying degrees of fluency, though in one or two cases a more fluent colleague helped to translate for another forecaster whose English was not very good. Discussions were recorded and then subsequently transcribed and coded for analysis, which benefitted from the ‘contributory’ scientific expertise of Cloke and Pappenberger necessary for identifying some of the tensions and technical contradictions implicit to respondents' reflections on operational flood forecasting and EPS. In keeping with the principles of good qualitative research (Hoggart et al., 2002) the paper quotes, as much as possible, directly from informants, who are described in broad, non-identifying terms so as to protect their identity. Given the political sensitivities involved in civil protection and operational flood risk management, this promise of confidentiality was important to ensure that interviewees felt safe enough to speak frankly about their experiences working with EPS and with other agencies.

3. Understanding ensembles and their operational value

Traditionally, operational flood forecasting has been based on deterministic modelling. Thus, the first major challenge to applying new EPS products to flood forecasting is familiarizing operational forecasters with these new technologies and developing an appreciation of their potential operational value and limitations. While some forecasters interviewed were well versed in the latest peer reviewed literature on EPS and its use in flood forecasting, most were much less knowledgeable, and there were some surprising misconceptions, even amongst those regularly receiving EFAS alerts and other EPS products. When asked about EPS, the scientific director of a hydrological authority in one eastern European country that has signed the Memorandum of Understanding to receive EFAS alerts replied, through a translator:

‘What does that mean? Issue of the model ensembles is it makes some average number of when you have few different models and this is the resulting model? Is it right? Some average?’ (Scientific director, hydrological authority, eastern Europe)

His expectation was that the interviewer would explain it to him. After some further, obviously fruitless effort to explain to him about ECMWF and the basic concept behind its 51 member rainfall ensemble (Molteni et al., 1996), the younger colleague who was translating for her older boss conceded:

‘[He] doesn't know how it works. We have ensemble… we use about 6 models here. They give different curves. Of course they try to use all 6. But for example the specialist who forecasts, they know that if you have some mess from west, the history of… the water tables and another situation if they are from the east direction.’ (Flood forecaster, eastern Europe)

What she meant by ‘ensemble’ was not a formally designed system in which model parameters or initial conditions are carefully perturbed to represent the uncertainty about them, but rather what is sometimes called a ‘poor man's ensemble’ in which different predictions, either from different models or from the same model run with different initial conditions, are compared. The results can be informative (Arribas et al., 2005). Indeed, ECMWF (2010) praises the poor man's ensemble modelling approach for ‘provid[ing] important additional information’, but it is not the same as an EPS, and it was not clear that these forecasters entirely understood the distinction.

Of the 24 flood forecasting centres visited, only three had their own fully operational hydrological EPS, though three others were in various stages of development. Most, however, had regular access to one or more EPS products provided to them from other centres, such as ensemble weather forecasts from ECMWF or one (or more) national meteorological services and EFAS flood alerts. In those centres without their own operational hydrological EPS, these EPS products were typically used as background to inform the preparation of their own flood forecasts based on their own local deterministic models.

Forecaster: ‘We receive EPS and so we compare our data.’

Interviewer: ‘With deterministic predictions?’

Forecaster: ‘Yeah, we use EPS…. we usually check it if we see the situation is dangerous or might be dangerous by our own judgment …’ (Flood forecaster, Hungary)

Although EPS is not designed for this purpose, it was very common for flood forecasters to try to use EPS products to confirm their own deterministic flood forecasts:

Interviewer: ‘Let's say you're using your model and then you look at what EPS model is predicting…’

Forecaster: ‘Yeah, we can compare and we can decide and we can strictly express that it's true or not… [whether] our statements are right or wrong…’ (Flood forecaster, Slovakia)

There are two potential problems with this application of EPS to flood forecasting. First, it is prone to a ‘confirmation bias’ whereby decision-makers tend to over-emphasize information that confirms some prior belief (Nichols, 1999). Confronted, for example, by outputs driven by the 51 member ECMWF EPS, the tendency is to search out those ensemble members that confirm forecasters' preconceptions and to discount those that do not (Demeritt et al., 2007). Second, looking to EPS to confirm a deterministic forecast in this way is more likely to frustrate than reassure. The hankering for confirmation—in the form of an ensemble tightly clustered around the value predicted by a deterministic forecast—will be greatest when it is hardest to satisfy. Operational forecasters are most likely to seek confirmation for their local deterministic forecasts in situations of high uncertainty, when the spread of the ensemble members is likely to be widest (Buizza et al., 2005). Of course, displaying the uncertainty associated with predictions about future system states is precisely what EPS is designed to do. However, this is not necessarily what operational flood forecasters were hoping for when they turned to EPS products. Their desire was to reduce uncertainty, rather than confirm it, and as this Swiss flood forecaster noted ruefully, ‘It doesn't make your life easier if they have EPS. It's a bit more work. A little bit more interpretation’.

Such misapprehensions about EPS should not, perhaps, be surprising, given its relatively recent introduction to operational flood forecasting. As one forecaster explained, in Europe most operational flood forecasting systems are deterministic, and so if forecasters see EPS products, it is only as a supplement to results from their own deterministic models:

‘Well, the EPS is rather, um, is rather rare, from what I've heard. Most countries really work with deterministic.’ (Flood forecaster, Germany)

There was a wide range of opinion among operational flood forecasters about the informational value contained in EPS products and the best way to extract and use it. With their medium-term (3–15 day) focus, EFAS flood alerts (Figure 1) were often welcomed as a sort of ‘pre-alert’ to spur greater vigilance about a potentially emerging flood threat:

Figure 1. EFAS flood alert. The latest version combines a cartographic overview showing the location of pixels exceeding various threshold alert levels with a facility for zooming in and extracting more detailed, tabular information about the number of ensemble members for any given pixel exceeding those alert levels. Courtesy of EFAS, Joint Research Centre, European Commission, Ispra, Italy


‘Yeah. We also get EFAS reports. For us, it's a useful pre-information, but normally 2 or 3 or 4 days before a flood, we normally know from the weather forecast that we should be aware of this…’ (Flood forecaster, Austria)

Flood forecasters from less well resourced countries in eastern Europe could not afford to be so dismissive of the added value of EFAS alerts. Particularly on the main tributaries of large transnational catchments it was widely recognized that hydrological EPS had the potential to extend the time horizon for skillful flood forecasting. Operationally, the provision of earlier flood warnings made it possible to take a more proactive, precautionary approach to flood incident management, as this forecaster explained:

‘Forecasts are more or less well doing for the Danube. But we are getting information, long term forecasts… The idea was to provide an instrument with a long term forecast to give a large anticipation, where is probability to occur a flood, to go more in deep in these areas … which could become dangerous, to look better at what is happening …. So this product is very welcome just to tell your country, “Pay attention, there could arise dangerous phenomena. Look at this area which could be affected.” So from this point of view, it's very good because it's preparing you.’ (Flood forecaster, Romania)

By contrast, flood forecasters working in mountainous areas subject to flash flooding saw much less operational potential in current EPS products:

‘They are testing this in our ensemble forecast. But normally these forecasts are normally only useful for bigger catchments. Our catchments are not so big. So we use ensemble forecasts only [a little].’ (Flood forecaster, Germany)

The medium-term focus of EFAS, as with other hydrological EPS, was not relevant to flash flooding, while limited area ensemble products such as COSMO-LEPS do not resolve convective rainfall well enough to predict it consistently (Golding, 2009). Although flood forecasters could not always explain why, sometimes conflating the difficulties of forecasting convective rainfall events with those of predicting catchment responses in space and time to rainfall inputs, the limited utility of current EPS products for dealing with flash flooding was widely recognized:

Forecaster 1: ‘The problem of lower Austria is these rivers are alpine type, very small scale and it's good information [EPS]… but not for detailed flood forecasting…’

Forecaster 2: ‘It's a rough idea for someone sitting in Brussels and in London. But not really useful for someone sitting in Vienna. This is just a problem of scale.’ (Flood forecasters, Austria)

Another issue on which flood forecasters held strongly divergent views was the value of the ensemble mean. Recognizing the difficulties of dealing with all 51 members of the ECMWF EPS, for example, some forecasters believed the ensemble mean provided a useful summary of the most important elements of an EPS forecast:

‘But generally, it's better to have these ensembles. Even if you don't use the probabilistic information, just use the ensemble mean. This is basically what we do for some customers. We give them ensemble mean because it's better than the deterministic one.’ (Meteorologist, Austria)

While recognizing the loss of information entailed by ignoring the full ensemble, this forecaster believed that the ensemble mean provided a better ‘best guess’ forecast than existing deterministic models. The other practical advantage of the ensemble mean is that it is simpler and easier to communicate to less sophisticated forecast users.

There were, however, other forecasters who argued passionately that the ensemble mean is not a meaningful summary indicator and that the entire ensemble must be considered:

‘This is still an important debate. So there are some people around the world trying to use the mean. If you use the mean, it's not too much value.’ (Flood forecaster, Romania)

For this and many other forecasters, the operational value of EPS products was not so much their promise of greater skill in predicting future rainfall or associated flooding, desirable as that is, but their capacity to provide an indication of the uncertainty of those predictions.

One measure of that uncertainty is the statistical dispersion of ensemble members both within and between forecasting time steps. As in the peer-reviewed literature itself (e.g. Golding, 2000; Montanari, 2005; Jaun et al., 2008; Cloke and Pappenberger, 2009), there was a range of opinion among operational forecasters about how much confidence could be placed in the number and spread of ensemble members as a measure of the total uncertainty about future system states. While persistence between EPS forecasts is widely regarded in the literature as an indicator of forecasting skill (Buizza, 2008; Bartholmes et al., 2009), this was not mentioned much by the operational flood forecasters we interviewed. Their silence on the issue could be interpreted as ignorance or, alternatively, as evidence that the meaning of persistence is so self-evident as not to bear mentioning. It is difficult to say. Whatever the reason, forecasters tended to focus on what the dispersion of ensemble members within a single forecasting time step said about forecast uncertainty.
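The two diagnostics at issue here — spread within a single forecast time step and persistence between successive forecast runs — can both be computed directly from the ensemble members. A minimal sketch, using standard deviation as a simple spread measure and invented discharge values for illustration:

```python
import statistics

def spread(members):
    """Dispersion of ensemble members at a single lead time
    (sample standard deviation used as a simple spread measure)."""
    return statistics.stdev(members)

def persistence_shift(run_today, run_yesterday):
    """Crude persistence check: shift between the ensemble means of
    two successive forecast runs valid for the same time. A small
    shift suggests the signal is persisting between runs."""
    return abs(statistics.mean(run_today) - statistics.mean(run_yesterday))

# Invented discharge forecasts (m3/s) for one valid time:
tight = [100, 102, 98, 101, 99]    # small spread: more confidence
wide  = [60, 140, 95, 180, 75]     # large spread: high uncertainty
print(spread(tight) < spread(wide))  # True
```

This is only a sketch of the intuition forecasters describe below; operational verification of spread-skill relationships is considerably more involved.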

Responses fell into two broad categories. On the one hand, there was a fairly bullish view of the informational content of the ensemble spread, typified by this response:

‘[EPS] gives us some impression about the uncertainty that is in the weather forecast. So if there's a big spread, there's a lot of uncertainty and we would not be able to give forecasts with big, big lead-times. And if there's a very small spread then we are a little bit more certain about the forecast and we might enlarge the, the lead-time of the forecast for this, for this occasion.’ (Flood forecaster, Netherlands)

Although this flood forecaster conceded that EPS gives only an ‘impression’ of the uncertainty of rainfall forecasts, this impression was still welcomed as operationally useful information that influenced the flood forecasts he would issue. A CPA in Sweden, who regularly received EPS weather and flood forecasts as part of a pilot project to test their operational value for emergency services (Nobert et al., 2010), took a similarly pragmatic view of the ensemble spread as robust enough to inform operational decisions:

‘And then if it's lot of forecast that's almost the same, then you have… but if it's a huge spread, there's going to be a lot of uncertainties.’ (CPA, Sweden)

To this way of thinking, the ensemble spread is a useful heuristic that summarizes, at a glance, the degree of forecast uncertainty. Others were keen to go even further and use the ensemble spread to quantify the probability of given events:

‘What I would like though is something that gives you the spaghetti plot which is like the raw output, but also then sort of says right, you’ve got a threshold in the middle of that spaghetti plume, this is the probability that this spaghetti plot is telling you that this output is… rather than having to sit there and count how many of the plots go above the line, it does some sort of post processing to give you a probability.’ (Flood forecaster, UK)

On the other hand, there was a more sceptical view about this equation of ensemble spread with the probability of given future states. A few hydrologists worried that insofar as most applications of EPS to flood forecasting were concerned with capturing the uncertainty about rainfall inputs, the danger was that equally large uncertainties about run-off and water routing were simply ignored by the measures of forecast uncertainty produced by hydrological EPS:

‘In hydrology, we are obviously conscious of our model uncertainty. What is more uncertain than hydrological models? I don't really know frankly, but it is obvious that hydrological models are as uncertain as meteorological models. This is because the representation of river basin is so simplified in comparison with reality.’ (Flood forecaster, France)

This, however, was a minority view. More common were concerns about the conversion of the ensemble spread to a quantitative probability distribution. Asked whether probabilistic forecasts would be more useful than deterministic ones for calibrating emergency responses, this German flood forecaster explained:

‘No, I don't believe because if you make a spaghetti plot and 5 of the 51 lines are higher than the threshold value, you can't say the probability is 10%. It may be it's a special case. It's a special event. May be it's higher or lower than 10%. I said if a forecast is 5 spaghetti lines above our threshold, it doesn't mean that the probability is 10%…it might be 5% or 50%. I don't know. I can't estimate it.’ (Flood forecaster, Germany)

Given the unresolved uncertainties about the uncertainty of EPS, this forecaster concluded, ‘the benefit [of issuing probabilistic forecasts] seems not very high’. His forecasting centre was concentrating on improving its own deterministic forecasting model, though its flood forecasters did still receive a variety of EPS weather products as well as the EFAS flood alerts.
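The objection in the quote above can be made concrete: counting the ‘spaghetti lines’ above a threshold yields only a raw member fraction, which equals the event probability only if the ensemble has been verified as reliable (well calibrated). A minimal sketch of that count, with invented discharge values:

```python
def exceedance_fraction(members, threshold):
    """Fraction of ensemble members exceeding a flood threshold.
    NOTE: this is the raw 'count the spaghetti lines' fraction the
    forecaster describes; without calibration it need not equal the
    true probability of exceedance."""
    above = sum(1 for m in members if m > threshold)
    return above / len(members)

# A 51-member ensemble with 5 members above threshold, as in the quote
# (discharges in m3/s are invented):
members = [90.0] * 46 + [120.0] * 5
frac = exceedance_fraction(members, threshold=100.0)
print(round(frac, 3))  # 0.098 (i.e. 5/51) -- not necessarily a 10% probability
```

As the forecaster notes, the true probability ‘might be 5% or 50%’; post-processing against observed outcomes is needed before the raw fraction can be read as a probability.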

Notwithstanding those differences of opinion about the meaning of the ensemble spread and its relationship to the total uncertainty associated with predictions about future system states, the basic desire to understand those uncertainties and have some way of communicating them to forecast users was an important driver for those European water authorities developing their own hydrological EPS. In Bavaria, for example, the push for probabilistic flood forecasts came from the Ministry of Environment in the wake of poor forecasting during the 2005 floods. Although forecasters had also been aware of the need to get a better grip on forecast uncertainty, the move to hydrological EPS came in response to external pressure for greater transparency:

‘The motivation? The motivation was that we had some inaccuracy of forecasts during the 2005 flood. So they said ok, we need some… we have to communicate that forecasting is always connected to uncertainty and this uncertainty must be communicated.’ (Flood forecaster, Germany)

Likewise in Austria, which is developing its own limited area ensemble weather forecasting system, ALADIN-LAEF (Wang et al., 2009), driven by dynamically downscaling the ECMWF EPS (Buizza et al., 2007), it is the ability of EPS to represent the uncertainty of rainfall forecasts that is regarded as of particular operational value for flood risk management. To support the new system, the national meteorological service provided training for provincial hydrologists responsible for operational flood forecasting and warning:

‘It was planned from the beginning to have these trainings to explain … what this and this uncertainty would mean and what you can expect and what you should not interpret out of this data. Because expectations [of operational flood forecasters] generally are of course too high. And then in small catchments, there are still floods you cannot predict and these things. So it's a matter of scale. Large scale is easier and small scale is… [throws hands up to signal futility]. This is of course understandable for hydrologists. But then also there's a big difference in the type of precipitation event. You can have one very convective event with local convective cells and hardly predictable. Then you can have the same amount of precipitation on an areal basis, but with much less uncertainty because it's part of the front that moves through. So what's necessary for the hydrologists to ….[pauses] I think it was the most difficult thing to understand that the quality of the [rainfall] forecasts varies very much from one case to another. …. Of course the hydrologists who had worked with us in the project, they learnt early on to work with uncertainties and ensembles. Actually the wish to do this came from the hydrologists that said we would like to have ensembles from you.’ (Meteorologist, Austria)

That desire for a better measure of forecast uncertainty is also at the heart of the ongoing work in Britain to implement an operational ensemble flood forecasting system (Sene et al., 2007) and in Sweden (Nobert et al., 2010), where SMHI has been providing operational ensemble river flow forecasts for more than 50 catchments by coupling the ECMWF EPS to its own HBV-96 rainfall-runoff model since 2004 (Olsson and Lindström, 2008).

While proponents of EPS often regard its meaning and operational value as self-evident, our research suggests that EPS products are understood and used in a variety of different ways by operational flood forecasters. As well as training programmes to improve the understanding of EPS among operational flood forecasters, further uptake of EPS in operational flood forecasting will also depend on continued research and technical development. In particular, clarifying the relationship of ensemble spread to forecast uncertainty and improving the ability to forecast flash flooding events will be important to winning over sceptics in the hydrological community about the operational value of this approach.

4. Communicating EPS

If one driver for the application of EPS to flood forecasting is the promise that it can improve the understanding of predictive uncertainty, then a second challenge involves whether, how, and to whom that uncertainty should best be communicated. One forecaster put the problem this way:

‘[We] need to show something about the probabilistic. Because we can't predict exact… but the problem is how to show this uncertainty.’ (Flood forecaster, Austria)

While there is a variety of different ways of visualizing probabilistic information and some very good research on how probabilistic weather forecasts and warnings have been understood by the public (e.g. Murphy et al., 1980; Baker, 1995; Gigerenzer et al., 2005; NRC, 2006; Broad et al., 2007; Handmer and Proudley, 2007; Morss et al., 2008), ensemble flood forecasting is new, and there are not yet any universally agreed upon practices for communicating ensemble flood forecasts (Lumbroso and von Christierson, 2009).

As a result, those institutions beginning to experiment with developing their own hydrological EPS are also having to experiment with different ways of communicating the results. The problem of communication is not trivial, as this forecaster explained:

Forecaster: ‘We're going to start using ensemble forecast. We did a lot of work to prepare our models for this ensemble within the PREVIEW project. Now we're able to use this ensemble forecast. But at the moment we're still…’

Interviewer: ‘Not operational yet.’

Forecaster: ‘We can do it operationally. But we do not because we're not sure how to handle all of this spread… It's a problem of communication.’ (Flood forecaster, Germany)

One way in which the evolving uncertainty about future river flows could be displayed is through so-called spaghetti diagrams, plotting values for every member of the forecast ensemble (see Figure 2).

Figure 2. An example of an ensemble ‘spaghetti’ hydrograph for a hindcasted flood event. The plot shows the discharge predicted for each ensemble forecast (solid lines), the observed discharge (dashed black line) and four flood discharge warning levels (horizontal dashed lines). Taken from Cloke and Pappenberger (2009: 614)

Forecasters were almost universal in their desire to see and digest as much of this kind of information as possible to inform the production of their own forecasts. Indeed, the most consistent complaint made by national level flood forecasters about the EFAS flood alerts they received from the European Commission's Joint Research Centre was that these alerts took the form of summary tables (see Figure 1) rather than a complete, spaghetti-style representation of the ensemble hydrographs:

Forecaster: ‘If they would give a hydrograph, forecast with hydrograph, that would be…’

Interviewer: ‘Fantastic.’

Forecaster: ‘Fantastic, yes. In addition to [the summary tables].’ (Flood forecaster, Serbia)

Despite this desire to see for themselves the full set of ensembles in order to make better forecasting decisions, flood forecasters were typically sceptical about the ability of CPAs to cope with the same volumes of information in making their decisions.

‘These are spaghetti plots. These are different realisations of weather forecasts. That's going to happen if we will have this water level and it's not going to happen when we have this. That's our environment of flood forecasting…. They use this information to give it further to the disaster managers. But the disaster managers don't want to look by themselves. … They want the expert sitting next to them explaining what's going to happen.’ (Flood forecaster, Germany)

This was a very common sentiment. Many forecasters believed that EPS products were too complicated for CPAs to understand in their raw form without a scientific expert to help them interpret the ensemble.

‘we never deliver a forecast as it is. Even if we run the EPS and have the spaghetti plot, we will never send to civil protection a spaghetti plot. We will make either an easier to understand picture or a text that we describe into the text how the situation is or it will be… Especially for these kinds of end-users that are not used with forecasts. … You can't just send a forecast picture like this showing the forecast is like this and like that when they have no idea. You look at a graph like this in the system, it's not that easy. Take a graph like this, just send it out? [They'll say], ‘what is this? What is this? What? What? What?’ So that's why it's good to post process a forecast and to make it understandable for the end users.' (Flood forecaster, Sweden)

Rejecting the spaghetti plot as too difficult for non-scientists to understand, the SMHI in Sweden has engaged with CPAs and other flood forecast users to devise a reduced form for presenting the ensemble hydrographs generated by its EPS (Nobert et al., 2010). In order to capture a sense of the range of the ensemble as well as its central and modal tendencies, SMHI displays the maximum and minimum values along with the quartiles and median values, the most recent measurements, and the observed climatology (Figure 3). Hydrological services in Bavaria and Austria are also developing similar forms of plume chart to visualize the evolving uncertainty of forecasted river flows. The information made available to CPAs and other professional partners takes the form of hydrographs in which one can see the expected discharge value and water level accompanied by a range of possibility in which the forecasted value can be found (Figures 4 and 5). However, in both places forecasters also expressed concerns that ‘this is too much information that the public can't use’ (Flood Forecaster, Austria).
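The reduction from a full spaghetti plot to a plume chart is a straightforward summary operation. As a minimal sketch (the member values, catchment and lead times are invented, not SMHI's actual product), each ensemble hydrograph is collapsed, lead time by lead time, into the minimum, quartiles, median and maximum that form the plume:

```python
import numpy as np

# Hypothetical ensemble hydrograph: 51 members x 9 daily lead times of
# forecast discharge (m^3/s), generated here as a random walk purely
# for illustration.
rng = np.random.default_rng(0)
lead_days = 9
members = 900.0 + np.cumsum(rng.normal(0.0, 40.0, size=(51, lead_days)), axis=1)

# Reduce the full ensemble to the summary curves of a plume chart:
# minimum, lower quartile, median, upper quartile and maximum at each
# lead time.
plume = {
    "min": members.min(axis=0),
    "q25": np.percentile(members, 25, axis=0),
    "median": np.percentile(members, 50, axis=0),
    "q75": np.percentile(members, 75, axis=0),
    "max": members.max(axis=0),
}

for name, curve in plume.items():
    print(name, np.round(curve, 1))
```

The design choice the forecasters describe is visible in the code: the five summary curves discard the identity of individual members, trading the full spread of the spaghetti plot for a form that non-expert recipients may find easier to read.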

Figure 3. Ensemble forecast of river discharge for the Krokfors Kvarn catchment from the WebHyPro presentation system of SMHI, as presented to Swedish CPAs. The forecast shows observed discharge and a 9 day ensemble forecast presented as minimum, maximum and quantiles of probable discharge together with flood warning thresholds. Courtesy of SMHI

Figure 4. Lower Austria flood forecast for the river Danube at the Korneuburg station, available on http://www.wasserstand-niederoesterreich.at, showing real-time discharge in blue, the most probable scenario in green, and the confidence interval in grey. Last accessed 24 January 2010

Figure 5. Bavarian flood forecast showing observed water level for the station located at Passau on the river Danube, available on http://www.nid.bayern.de. The dark green line corresponds to the mean forecast value and the light green to the range of values the forecast can possibly take. Last accessed 24 January 2010

Among other forecasting services, there was greater reluctance to release probabilistic flood forecasts or other quantitative information about forecast confidence. While the need to communicate some sense of forecast uncertainty was widely accepted, forecasters agonized about how best to do it:

Forecaster 1: ‘There are different ways of showing this uncertainties. You have the spaghetti face with all the members… you have the statistical…’

Forecaster 2: ‘The mean and…’

Interviewer: ‘Is it the plume plot?’

Forecaster 1: ‘That's right.’

Forecaster 2: ‘Something like this. And in the end we can have a look at our system and we can see. But we're not sure we really can publish these pictures to our customers and if they really can work with it or if they really can understand.’ (Flood forecasters, Germany)

There were widespread concerns that probabilistic flood forecasts would be misunderstood by their recipients. One forecaster argued that rather than raising the awareness of end users about the potential for forecast error the release of uncertainty information would, ironically, have completely the opposite effect:

‘But even if you have a wide range, you can't be sure that really the water level is outside the range. I think this is really a problem. People think, “I'm safe, my threshold isn't reached”. But the threshold can be reached, can be the higher water level…you can't be sure that the water level is in this range. Therefore the benefit [of EPS] is not very high.’ (Flood forecaster, Germany)

This forecaster feared EPS would lull end users into false confidence and encourage the over-optimization of response, because forecast recipients would fail to understand that EPS does not necessarily capture the full uncertainty. These concerns are reminiscent of what sociologist of science Donald MacKenzie (1990) has called the ‘certainty trough’ (Figure 6). MacKenzie suggests that his schematic diagram represents ‘the distribution of certainty about any established technology’ (p. 371). In the case of climate models, Shackley and Wynne (1995) argue that policymakers and other model users place undue confidence in them because of their failure to appreciate the tacit judgments and uncertainties involved in their construction. This flood forecaster is raising exactly the same concern about the potential for ensemble forecast recipients to place inappropriate levels of confidence in the formal representations of uncertainty EPS provides.

Figure 6. The ‘certainty trough’ in the relative confidence accorded to forecasting models, such as EPS, by model builders, policy users of model outputs, and the general public. After MacKenzie (1990) and Shackley and Wynne (1995)

These fears about the over-interpretation of probabilistic forecasts were much less common than the widespread belief that forecast users would not be able to make any sense of them at all:

‘People cannot deal with uncertainties, it is too complicated. The problem is that to live, to go for a walk, to know whether we go to the picnic or not, we could cope with it. However, when it is time to decide whether we evacuate or not, it is another story.’ (Flood forecaster, France)

Such disparaging comments about the ability of CPAs and the public at large to cope with ensembles and with the uncertainty of flood forecasts were a common refrain among forecasters.

Many forecasters believe that in an emergency, people will look to them to predict what is going to happen. In this context, they believe the additional information about forecast uncertainty conveyed by EPS will not clarify the situation for non-scientists but rather confuse and frustrate them:

‘But these people simply don't understand, they don't need this information. “I don't care what the probability is. Give me exact figure!!” [they say]. It really doesn't operate on uncertainties. I said, there is uncertainty of 10%. “What does it, what do you mean, 10% uncertainty? Give me the figure. I want exact forecast.” [Laughs]’ (Flood forecaster, Serbia)

Preliminary experience in Bavaria with disseminating uncertainty-bounded flood forecasts would seem to bear out some of these concerns about the ability of forecast recipients to understand probabilistic forecasts. One flood forecaster complained:

‘But still we have difficulties to explain why some forecasts can be out of the range. It's only 90% and 10%. That's what people don't understand. They say there's ranges of uncertainty, we can imagine, but all forecasts must be within this. That's the problem. On the other hand, you have people who say that too wide, the bandwidth is too wide, I need precise forecasts, I must know the water levels within 10 cm. It's impossible. The discussion as well, if you talk to the civil protection or the disaster managers, they don't want this. They want a precise statement of what you expect to happen within the next 12 hours.’ (Flood forecaster, Germany)

In the absence of much systematic research about how CPAs and the public at large respond to ensemble flood forecasts (Lumbroso and von Christierson, 2009), there is not yet the evidence either to confirm or refute these worries. Although the majority of the forecasters we interviewed expressed some doubt about the ability of non-scientists to digest EPS forecasts, the Swedish case suggests the potential for effective training to improve the understanding and use of ensemble forecasts by non-scientific forecast recipients (Nobert et al., 2010). There are clearly some important challenges to meet in terms of designing effective and understandable ways to communicate complicated EPS products to non-experts, but the persistent demands faced by forecasters for more precise predictions suggest that the obstacles to using ensembles are not simply communicative. They are institutional as well.

5. Institutional politics of ensemble flood forecasting

One of the promises of EPS is that it will make the uncertainty of flood forecasts more transparent to their recipients. In theory, this additional information should empower forecast recipients and enable them to make better decisions, but as several forecasters acknowledged, the shift from deterministic to probabilistic forecasting also entails shifting onto forecast recipients more of the liability for dealing with uncertainty:

‘You're putting the onus on the people that receive that probabilistic warning to make a decision what to do with it themselves.’ (Forecaster, UK)

The first order scientific uncertainties about whether or not a flood will occur comprise only part of the wider ‘decision’ uncertainties faced by those charged with flood risk management. They must also consider questions such as how the warnings they issue will subsequently be interpreted and what will happen if they are wrong. By making those first order scientific uncertainties more explicit, EPS can sometimes complicate the second order decision uncertainties they are supposed to clarify. Another flood forecaster put the dilemma even more starkly:

‘EPS also means dumping responsibility onto forecast users. By forcing forecasters to provide deterministic predictions, the accountability remains entirely on the shoulders of forecasters. If a forecaster provides a probabilistic forecast, they give the import for the decision to forecasts users. … [By contrast] asking for a deterministic prediction is also a way for the person in charge of taking a decision to avoid decisional problems and blame.’ (Flood forecaster, France)

Some forecasters welcomed the opportunity to shift more responsibility onto forecast users. The provision of probabilistic forecasts made it easier to head off complaints about poor forecasts:

‘So to be on the safe side and not to get angry calls all the time, [flood forecasters] want to have some upper and lower limit. Then if the forecast is within this, then you're ok. This is also for the meteorologists. It puts some pressure off you because if it's in the range you have predicted, then you are basically ok.’ (Meteorologist, Austria)

As this comment suggests, being ‘on the safe side’ can mean something rather different for the forecaster than for the forecast recipient. The forecaster is at least partly concerned with managing what Rothstein et al. (2006) call the ‘institutional’ risk of blame in the event of an inaccurate forecast. Error and blame are rather different risks from the substantive first-order one of flooding, and there can be tensions between their management. Historically, one of the reasons that European flood forecasting agencies have sometimes set quite high thresholds for issuing flood warnings is that their statutory focus has been on public safety.

‘[They are] primarily concerned with issuing short-notice flood warnings, you know 2 hours, with as a high level of certainty about that as they can manage… so normally, unless they are absolutely certain that there is going to be a flood, they are not going to issue a warning, even if there is a fair chance of flooding. And this is because their primary customer for flood warnings is the general public. So they think that's what they have to do…’ (Meteorologist, UK)

In this case concerns about the effects of false alarms on public confidence in and responsiveness to flood warnings reinforce an institutional tendency to avoid issuing early warnings (Demeritt et al., 2007). Like the fable about the little boy who falsely cried wolf (Roulstone and Smith, 2004), this bias against type 1 errors helps protect the public and forecasting agency against the costs of issuing false alarms, but it also increases the risk that warnings will not be issued early enough to allow precautionary action. While flood forecasters are institutionally biased towards one kind of error, as Doswell (2004, p. 1119) notes, in the epistemic culture of ‘weather forecasting, false negatives are seen as a less desirable outcome than false positives (“false alarms”), because they are associated with the unfavourable notion of an unforecast weather event, perhaps with casualties as a result’.
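These differing institutional tolerances for false alarms and misses can be made explicit with the simple cost-loss model commonly used to assess the value of probabilistic forecasts (e.g. Buizza, 2008). A minimal sketch, with hypothetical cost and loss figures rather than values from any agency:

```python
# Simple cost-loss illustration of why tolerance for false alarms
# differs between institutions. All figures are hypothetical.
cost_of_action = 10_000.0   # cost of a precautionary response (C)
loss_if_flood = 200_000.0   # loss if a flood strikes unprepared (L)

# In the standard cost-loss model, acting whenever the forecast
# probability p exceeds the ratio C/L minimizes expected expense.
threshold_p = cost_of_action / loss_if_flood

def expected_expense(p: float, act: bool) -> float:
    """Expected cost of acting (C) versus not acting (p * L)."""
    return cost_of_action if act else p * loss_if_flood

p = 0.10  # e.g. a raw ensemble frequency of 5 members in 51
print(f"act if p > {threshold_p:.2f}; "
      f"expense if acting = {expected_expense(p, True):,.0f}, "
      f"if not acting = {expected_expense(p, False):,.0f}")
```

The point of the sketch is that the ‘right’ warning threshold depends on whose costs and losses are counted: a user with cheap precautions and large potential losses should rationally act on quite low probabilities, whereas an agency weighing the reputational cost of crying wolf faces a different calculation entirely.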

From an institutional perspective, one of the big attractions of EPS over simple binary flood warnings or absolute predictions is that it is more difficult to judge when a prediction couched in probabilistic terms has failed. This has important reputational advantages for forecasting agencies, as another British forecaster explained:

‘Before now we said yes or no, you are going to flood or not. If you say yes, they're going to do something about it. If we say 40% chance, then it's up to them what they do with it. To me, that's where the Met Office have always coped better with things. In the Met Office, all their weather warnings always come out as probabilistic. So when it doesn't happen, they never have any complaints because they always say we only said it was a 60% chance and so it hasn't happened. In the Agency, our flood warning services have been built on yes, no and if we get it wrong, then that's got higher consequences than the Met Office getting it wrong in producing a probabilistic forecast. So I think for our own reputation as well, to go to probabilistic forecasting would be quite useful because it almost gives us, not an excuse exactly, but it gives us a reason, it quantifies our uncertainty and it means we won't necessarily get criticised as much because when you give a probability, there's always a chance that what you're saying won't happen as well.’ (Flood forecaster, UK)

Other forecasters were much less comfortable with the way that EPS shifted so much responsibility for dealing with risk onto forecast users. Beyond the concerns noted above about whether CPAs and other forecast recipients had the cognitive ability to make sense of EPS, there was also the view that they should not have to. Some forecasters regarded probabilistic forecasting as a derogation of their professional responsibility to predict what will happen:

Forecaster: ‘Yeah. I think I have to keep in mind that we are the hydrologists and we are the ones that are responsible for this forecast and we have to… other than forecasts, from saying it can spread like this and then they have no more idea and think what you want and do what you want with the forecast. I think this is not so good. The spread shouldn't be too wide.’

Interviewer: ‘Not 10%.’

Forecaster: ‘Not 10% for us because this is like putting the responsibility down to them and say we have no idea, make what you want of the spaghetti plots. This shouldn't happen I think.’ (Flood forecaster, Germany)

Very similar reservations about EPS were expressed by another forecaster when asked who should be responsible for dealing with uncertainties of their model predictions:

‘we have the responsibility because they ask us what will happen and we have to say this short value is reached or not. So it's the responsibility for us. Now I think we don't have the right tools to give this information in a way which they can handle.’ (Flood forecaster, Germany)

Partly his concern was about the difficulties of communicating the full range of uncertainties, given that extreme flows may well exceed those predicted by even the highest ensemble members, but as he went on, detailing the great lengths his forecasting centre took to deliver the most accurate forecast possible, it became clear that his concern was with the very idea of EPS.

‘For us, we need more reliability in ensemble forecasts, more reliability about… this forecast at the moment. Because we're operating in operational work, I'm not interested to say normally we have reliability of 20% if we have 20% of spaghetti plots are higher. Normally it says we have reliability of 20% [of] thresholds will be reached. But if it's a special event, maybe it's really not…. [maybe] it's very much higher and actually we need more information about this.’ (Flood forecaster, Germany)

For him, couching forecasts in probabilistic terms felt like an admission of professional failure. The challenge, he seemed to suggest, was to improve the capacity of deterministic models to get the right answer, rather than calculating the error to be expected from them.

6. Summary and conclusions

New EPS technologies have frequently been touted in the peer-reviewed literature as adding value to flood incident management by increasing the capability to issue warnings as well as by bringing greater skill than deterministic predictions to hydrological forecasting (Bálint et al., 2006; Roulin, 2006; Bartholmes et al., 2009). In this paper a number of challenges to realizing those promises in operational practice have been identified. First, the research has shown how EPS products are understood and used in a variety of different and sometimes contradictory ways by operational flood forecasters. While proponents of hydrological EPS tend to regard its meaning and operational benefits as self-evident, a variety of opinion about its skill, its operational limitations, especially for flash flooding, and its ability to capture and quantify the full range of forecast uncertainty has been documented.

If one driver for the application of EPS to flood forecasting is the promise that it can quantify the uncertainty associated with predicting future system states, then a second challenge involves designing effective and understandable ways to communicate the resulting EPS products to non-experts. There is an important tension here between the transparency promised by EPS and the belief that this uncertainty is too complicated to be understood in full by non-experts. Forecasting agencies are developing a variety of methods for visualizing novel hydrological EPS products. However, operational flood forecasters were often sceptical about the ability and appetite of forecast recipients to understand or use them.

One way to address these first two challenges would be through better training and closer contacts between operational flood forecasters, EPS system designers and users. This can go some way to ensuring the uncertainty surrounding EPS forecasts is represented in ways that can be communicated and understood by their intended consumers, but it is also argued that the challenges involved with using ensembles in flood forecasting are not simply cognitive or communicative. They are also institutional and political. The shift from deterministic to probabilistic forecasting also entails a shift in institutional liability for decisions taken in the face of uncertainty. In this context, responses to the introduction of new EPS products are as much responses to the altered institutional relationships heralded by those products as to the products themselves.

Acknowledgements

Research was made possible by a grant from the Economic and Social Research Council (RES-062-23-0913). We gratefully acknowledge SMHI, the JRC, the Bayerisches Landesamt für Umwelt and the Abteilung Hydrologie, Gruppe Wasser of the Amt der Niederösterreichischen Landesregierung for permission to reproduce their figures. The paper has also benefited from the constructive criticism offered by two anonymous referees and from comments made at seminar audiences at the 5th EFAS annual meeting, UK Met Office, and the Hazards and Risk Seminar Series at King's College London.

References

  • Arribas A, Robertson KB, Mylne KR. 2005. Test of a poor man's ensemble for short-range probability forecasting. Monthly Weather Review 133: 18251839.
  • Baker E. 1995. Public response to hurricane probability forecasts. The Professional Geographer 2: 137147.
  • Bálint G, Csík A, Bartha P, Gauzer B, Bonta I. 2006. Application for meteorological ensembles for Danube flood forecasting and warning. In Transboundary Floods: Reducing Risk through Flood Management, NATO Science Series IV: Earth and Environmental Sciences, MarsalekJ, StancalieG and BálintG (eds), Vol. 72. Springer: Dordrecht; 5767.
  • Bartholmes J, Thielen J, Ramos M, Gentilini S. 2009. The European flood alert system EFAS—Part 2: statistical skill assessment of probabilistic and deterministic operational forecasts. Hydrology and Earth System Sciences 2: 141153.
  • Broad K, Leiserowitz A, Weinkle J, Steketee M. 2007. Misinterpretations of the “cone of uncertainty” in Florida during the 2004 hurricane season. Bulletin of American Meteorological Society 88: 651667.
  • Buizza R. 2008. The value of probabilistic prediction. Atmospheric Science Letters 9: 3642.
  • Buizza R, Bidlot J-R, Wedi N, Fuentes M, Hamrud M, Holt G, Vitart F. 2007. The new ECMWF VAREPS (Variable Resolution Ensemble Prediction System). Quarterly Journal of the Royal Meteorological Society 133: 681695.
  • Buizza R, Houtekamer PL, Toth Z, Pellerin G, Wei M, Zhu Y. 2005. A comparison of the ECMWF, MSC and NCEP global ensemble prediction systems. Monthly Weather Review 133: 10761097.
  • Cloke HL, Pappenberger F. 2009. Ensemble flood forecasting: a review. Journal of Hydrology 375: 613626.
  • Cloke HL, Thielen J, Pappenberger F, Nobert S, Bálint G, Edlund C, Koistinen A, de Saint-Aubin C, Sprokkereef E, Viel C, Salamon P, Buizza R. 2009. EPS progress in the implementation of hydrological ensemble prediction systems (HEPS) in Europe for operational flood forecasting. ECMWF Newsletters 121: 2024, http://www.ecmwf.int/publications/newsletters/pdf/121.pdf.
  • Collins H, Evans R. 2007. Rethinking Expertise. University of Chicago Press: Chicago, IL.
  • Demeritt D, Cloke H, Pappenberger F, Thielen J, Bartholmes J, Ramos M-H. 2007. Ensemble predictions and perceptions of risk, uncertainty, and error in flood forecasting. Environmental Hazards 7: 115127.
  • Doswell CA III. 2004. Weather forecasting by humans—heuristics and decision making. Weather and Forecasting 19: 11151126.
  • ECMWF. 2010. The poor man's ensemble approach. http://www. ecmwf.int/products/forecasts/guide/The_poor_man_s_ensemble_ approach_1.html (Last accessed 20 March 2010).
  • Faulkner H, Parker D, Green C, Beven K. 2007. Developing a translational discourse to communicate uncertainty in flood risk between science and the practitioner. Ambio: A Journal of the Human Environment 36: 692703.
  • Gigerenzer G, Hertwig R, Van den Broek E, Fasolo B, Katsikopoulos KV. 2005. “A 30% Chance of rain tomorrow”: how does the public understand probabilistic weather forecasts? Risk Analysis 3: 623629.
  • Golding B. 2000. Quantitative precipitation forecasting in the UK. Journal of Hydrology 239: 286–305.
  • Golding B. 2009. Long lead time flood warnings: reality or fantasy? Meteorological Applications 16: 3–12.
  • Handmer J, Proudley B. 2007. Communicating uncertainty via probabilities: the case of weather forecasts. Environmental Hazards 7: 79–87.
  • Hoggart K, Lees LC, Davies AR. 2002. Researching Human Geography. Arnold: London.
  • Jaun S, Ahrens B, Walser A, Ewen T, Schär C. 2008. A probabilistic view on the August 2005 floods in the upper Rhine catchment. Natural Hazards and Earth System Sciences 8: 281–291.
  • Krzysztofowicz R. 2001. The case for probabilistic forecasting in hydrology. Journal of Hydrology 249: 2–9.
  • Lumbroso D. 2007. Review report of operational flood management methods and models. FLOODsite Project Report T17-0701. WL Delft Hydraulics: Delft, Netherlands. http://www.floodsite.net/html/partner_area/project_docs/Task17_report_M17_1review_v1_1.pdf. (Last accessed 21 January 2010).
  • Lumbroso D, von Christierson B. 2009. Communication and Dissemination of Probabilistic Flood Warnings—Literature Review of International Material. Science Project SC070060/SR3 for Environment Agency/Defra Flood and Coastal Erosion Risk Management Research and Development Programme. Environment Agency: Bristol.
  • MacKenzie D. 1990. Inventing Accuracy: An Historical Sociology of Nuclear Missile Guidance. MIT Press: Cambridge, MA.
  • Molteni F, Buizza R, Palmer TN, Petroliagis T. 1996. The ECMWF ensemble prediction system: methodology and validation. Quarterly Journal of the Royal Meteorological Society 122: 73–119.
  • Montanari A. 2005. Large sample behaviors of the generalized likelihood uncertainty estimation (GLUE) in assessing the uncertainty of rainfall–runoff simulations. Water Resources Research 41: W08406, DOI:10.1029/2004WR003826.
  • Morss R, Demuth J, Lazo J. 2008. Communicating uncertainty in weather forecasts: a survey of the US public. Weather and Forecasting 23: 974–991.
  • Morss RE, Wilhelmi OV, Downton MW, Gruntfest E. 2005. Flood risk, uncertainty, and scientific information for decision making: lessons from an interdisciplinary project. Bulletin of the American Meteorological Society 86: 1593–1601.
  • Murphy A, Lichtenstein S, Fischhoff B, Winkler RL. 1980. Misinterpretation of precipitation probability forecasts. Bulletin of the American Meteorological Society 61: 695–701.
  • Nicholls N. 1999. Cognitive illusions, heuristics, and climate prediction. Bulletin of the American Meteorological Society 80: 1385–1397.
  • Nobert S, Demeritt D, Cloke H. 2010. Informing operational flood management with ensemble predictions: lessons from Sweden. Journal of Flood Risk Management 3: 72–79.
  • NRC (National Research Council). 2006. Completing the Forecast: Characterizing and Communicating Uncertainty for Better Decisions Using Weather and Climate Forecasts. National Academy Press: Washington, DC.
  • Olsson J, Lindström G. 2008. Evaluation and calibration of operational hydrological ensemble forecasts in Sweden. Journal of Hydrology 350: 14–24.
  • Palmer TN. 2002. The economic value of ensemble forecasts as a tool for risk assessment: from days to decades. Quarterly Journal of the Royal Meteorological Society 128: 147–174.
  • Park Y-Y, Buizza R, Leutbecher M. 2008. TIGGE: preliminary results on comparing and combining ensembles. Quarterly Journal of the Royal Meteorological Society 134: 2029–2050.
  • Parker DJ, Priest SJ, Tapsell SM. 2009. Understanding and enhancing the public's behavioural response to flood warning information. Meteorological Applications 16: 103–114.
  • Pielke RA Jr. 1999. Who decides? Forecasts and responsibilities in the 1997 Red River flood. Applied Behavioral Science Review 7: 83–101.
  • Pitt M. 2008. Learning Lessons from the 2007 Floods: An Independent Review by Sir Michael Pitt. Cabinet Office: London.
  • Rayner S, Lach D, Ingram H. 2005. Weather forecasts are for wimps: why water resource managers do not use climate forecasts. Climatic Change 69: 197–227.
  • Richardson DS. 2000. Skill and relative economic value of the ECMWF ensemble prediction system. Quarterly Journal of the Royal Meteorological Society 126: 649–667.
  • Rothstein H, Huber M, Gaskell G. 2006. A theory of risk colonization: the spiralling regulatory logics of societal and institutional risk. Economy and Society 35: 91–112.
  • Roulin E. 2006. Skill and relative economic value of medium-range hydrological ensemble predictions. Hydrology and Earth System Sciences Discussions 3: 1369–1406.
  • Roulston MS, Smith LA. 2004. The boy who cried wolf revisited: the impact of false alarm intolerance on cost-loss scenarios. Weather and Forecasting 19: 391–397.
  • Sene K, Huband M, Chen Y, Darch G. 2007. Probabilistic flood forecasting scoping study. R&D Technical Report FD2901/TR. Defra: London.
  • Shackley S, Wynne B. 1995. Integrating knowledges for climate change: pyramids, nets and uncertainties. Global Environmental Change 5: 113–126.
  • Thielen J, Bartholmes J, Ramos M-H, de Roo A. 2009. The European flood alert system—Part 1: concept and development. Hydrology and Earth System Science 13: 125–140.
  • Thielen J, Schaake J, Hartman R, Buizza R. 2008. Aims, challenges and progress of the hydrological ensemble prediction experiment (HEPEX) following the third HEPEX workshop held in Stresa 27 to 29 June 2007. Atmospheric Science Letters 9: 29–35.
  • Wang Y, Bellus M, Wittmann C, Steinheimer M, Ivatek-Sahdan S, Kann A, Tian W, Ma X, Tascu S, Bazile E. 2009. The Central European limited area ensemble forecasting system: ALADIN-LAEF. Technical report, RC-LACE Project (Regional Cooperation for Limited Area Modeling in Central Europe). http://www.rclace.eu/?page=40. (Last accessed 21 January 2010).