Data assimilation has been widely tested for flood forecasting, although its use in operational systems is mostly limited to simple statistical error correction. This is partly due to the complexity of making more advanced, formal assumptions about the nature of the model and measurement errors. Recent advances in the characterization of rating curve uncertainty make it possible to estimate a flow measurement error that accounts for both aleatory and epistemic uncertainties more explicitly and rigorously than in current practice. The aim of this study is to understand how such a more rigorous definition of the flow measurement error affects real-time data assimilation and forecasting. This study therefore develops a comprehensive probabilistic framework that considers the uncertainty in model forcing data, model structure, and flow observations. Three common data assimilation techniques are evaluated: (1) autoregressive error correction, (2) the Ensemble Kalman Filter, and (3) the Regularized Particle Filter. They are applied to two locations in the flood-prone Oria catchment in the Basque Country, northern Spain. The results show that, although the rigorous error definition improves the match between the uncertain forecasted and uncertain true flows, the threshold exceedances used to issue flood warnings show low sensitivity to it. This suggests that a standard flow measurement error model, with its spread set to a fixed fraction of the flow, represents a reasonable trade-off between complexity and realism. Standard models are therefore recommended for operational flood forecasting at sites with well-defined stage-discharge curves based on a large range of flow observations.
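To make the two ingredients concrete, the sketch below combines one Ensemble Kalman Filter analysis step for a scalar flow state with the standard flow measurement error model whose spread is a fixed fraction of the observed flow. This is a minimal illustration under assumed values, not the study's actual configuration: the ensemble size, error fraction, and function name are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, obs, err_frac=0.1):
    """One stochastic EnKF analysis step for a scalar flow state.

    ensemble : 1-D array of forecast flows (m3/s), one per member
    obs      : observed flow (m3/s)
    err_frac : observation error standard deviation as a fixed
               fraction of the observed flow (standard error model)
    """
    sigma_o = err_frac * obs                 # flow-proportional obs error
    # Perturb the observation independently for each member
    obs_pert = obs + rng.normal(0.0, sigma_o, size=ensemble.size)
    var_f = np.var(ensemble, ddof=1)         # forecast ensemble variance
    gain = var_f / (var_f + sigma_o**2)      # scalar Kalman gain
    # Each member is nudged toward its perturbed observation
    return ensemble + gain * (obs_pert - ensemble)

# Hypothetical forecast ensemble around 120 m3/s, observation of 100 m3/s
forecast = rng.normal(120.0, 15.0, size=50)
analysis = enkf_update(forecast, obs=100.0)
```

After the update, the analysis mean lies between the forecast mean and the observation, and the ensemble spread shrinks, with the weight given to the observation controlled by `err_frac`.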
Plain Language Summary
Flood early warning systems are an efficient and affordable tool to mitigate flood risks without altering fluvial ecosystems. Their benefits rely, however, on accurate prediction of the timing and magnitude of flood peaks. As in any mathematical simulation of natural processes, flood forecasts must deal with several sources of uncertainty. To reduce them, a standard practice is to compare the outputs of the hydrological model with real-time observations and correct the deviations found. In operational systems, streamflow measurements are assumed to be either error-free or affected by a typical, though arbitrarily chosen, error. The impact of this choice has been little investigated. Recent advances in the characterization of rating curve uncertainty now allow a more objective definition of the streamflow measurement error, and accounting for it in flood forecasting might improve the quality of flood warnings. This research confirms that it can, but it also shows that, when the deterministic rating curve is based on a broad range of flow gaugings, using that curve while correcting the hydrological model can still lead to good forecasts. Hence, the loss of accuracy may not be significant enough to justify the increase in complexity and computational cost.
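The correction of model deviations described above is often done with a simple autoregressive scheme: the last known difference between observed and simulated flow is propagated into the forecast, decaying at each step. The sketch below illustrates this idea only; the function name, decay coefficient, and flow values are hypothetical, not taken from the study.

```python
def ar1_corrected_forecast(sim, obs_last, sim_last, rho=0.8):
    """Correct a simulated flow forecast with an AR(1) model of the
    last known simulation error (a common operational scheme).

    sim      : simulated flows for the coming time steps (m3/s)
    obs_last : last observed flow (m3/s)
    sim_last : simulated flow at the last observation time (m3/s)
    rho      : assumed lag-1 autocorrelation of the model error
    """
    err = obs_last - sim_last                # last known model error
    # Add the error forward in time, shrinking it by rho each step
    return [s + err * rho ** (k + 1) for k, s in enumerate(sim)]

# Hypothetical 6-step forecast; the model currently overpredicts by 8 m3/s
corrected = ar1_corrected_forecast(
    sim=[110.0, 115.0, 120.0, 118.0, 112.0, 105.0],
    obs_last=100.0, sim_last=108.0)
```

The correction is largest at the first forecast step and fades toward the raw simulation at longer lead times, reflecting that the last observed error becomes less informative as the horizon grows.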