It is common practice when assessing the skill of either deterministic or ensemble forecasts to treat the observations as free of uncertainty. Observation uncertainty can arise from several causes; the present paper discusses the uncertainty that derives from the mismatch between model-generated grid-point precipitation and locally measured precipitation values. There have been many attempts to include uncertainty in the verification process; in the present paper the uncertainty is derived from the observed precipitation distribution within grid boxes of assigned resolution. The Brier skill score (BSS) and the skill score based on the area under the relative operating characteristic curve, calculated with the verification method that includes observational uncertainty (O-OP), are compared to the analogous scores obtained from standard verification methods. The scores are calculated for two forecasting systems: the European Centre for Medium-Range Weather Forecasts Ensemble Prediction System and the Spanish Meteorological Agency Short-Range Ensemble Prediction System.
The results show that the resolution component of the BSS improves when the O-OP method is used, i.e. forecast probabilities are better distinguished from climatological probabilities and the system therefore has better skill. The reliability component, on the contrary, degrades considerably, and the degradation is worse for lower precipitation thresholds. The results also show that the more asymmetric the precipitation distribution within the grid box, the larger the degradation of the reliability component. The overall BSS improves except at low thresholds. These results encourage further research into observation uncertainty and how it can be accounted for effectively in the verification of weather parameters such as precipitation. Copyright © 2011 Royal Meteorological Society
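For readers unfamiliar with the reliability and resolution components discussed above, the following is a minimal illustrative sketch of the standard Murphy (1973) decomposition of the Brier score into reliability, resolution and uncertainty terms, from which the BSS relative to climatology follows. It is not the paper's O-OP method; the function name, the binning scheme and the number of bins are assumptions for illustration only.

```python
import numpy as np

def brier_decomposition(p_fcst, obs, n_bins=10):
    """Murphy (1973) decomposition: BS = reliability - resolution + uncertainty.

    p_fcst : forecast event probabilities in [0, 1]
    obs    : binary outcomes (1 if the event occurred, else 0)
    """
    p_fcst = np.asarray(p_fcst, dtype=float)
    obs = np.asarray(obs, dtype=float)
    n = len(obs)
    obar = obs.mean()  # climatological base rate of the event

    # Assign each forecast probability to one of n_bins equal-width bins.
    bins = np.clip((p_fcst * n_bins).astype(int), 0, n_bins - 1)

    rel = res = 0.0
    for k in range(n_bins):
        mask = bins == k
        nk = mask.sum()
        if nk == 0:
            continue
        pk = p_fcst[mask].mean()   # mean forecast probability in the bin
        ok = obs[mask].mean()      # observed relative frequency in the bin
        rel += nk * (pk - ok) ** 2
        res += nk * (ok - obar) ** 2
    rel /= n
    res /= n
    unc = obar * (1.0 - obar)
    return rel, res, unc

# The Brier skill score relative to climatology is then
#     BSS = (res - rel) / unc,
# so larger resolution improves the BSS while larger (worse)
# reliability degrades it, as discussed in the text.
```

A perfectly sharp and reliable forecast set (probabilities of 0 or 1 that always verify) gives zero reliability and resolution equal to the uncertainty, hence BSS = 1.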