The 30-year ‘baseline’ (1)
Article first published online: 4 JAN 2013
Copyright © 2013 Royal Meteorological Society
Volume 68, Issue 1, page 25, January 2013
How to Cite
Rowley, M. (2013), The 30-year ‘baseline’ (1). Weather, 68: 25. doi: 10.1002/wea.2057
I would like to widen the debate on this subject (aired in Letters in the May and October 2012 issues of Weather) – after first commenting that I personally find it useful to maintain sets of averages for both a fixed reference period (for climate change assessment) and a more recent period for ‘general’ application.
But why 30 years, and why a stepped-rolling 30 years? I have books on my shelf dating back to the 1940s that quote reference averages for 1901–1930. I assume that, when codification of international ‘normals’ was sponsored by the International Meteorological Organisation in the inter-war years, the aim was a period long enough to be representative, yet within the compass of the (then) standardised observing practice, anchored at what was technically the start of the twentieth century. I think we need to re-think this approach.
I suggest we need averages for two main reasons:
1. We need a long baseline dataset that is fixed, to tease out the drift in contemporary data, whatever the cause. We will all have our own ideas – but the important thing is that it must remain in place: changes should be applied only when new data for the defined period are received or errors in the original analysis are detected. It should be of sufficient length that it covers the commonly available datasets (e.g. England and Wales Precipitation (EWP) and Central England Temperature (CET)) in an easily manageable number of ‘chunks’: I would follow Hubert Lamb and use half-century blocks, but there is an argument, where the data are thought sufficiently robust, for using a 100-year set. Currently, I would like to see the CET and EWP values related to the twentieth-century mean (i.e. 100 years).
2. The needs of those providing summaries to the non-specialist community require something different. The stepped-rolling 30-year datasets obviously meet that need, and have the benefit, as the Editor pointed out, that they relate to the recent experience of the largest proportion of the population. But precisely because the period relates to the largest sub-set of the general population, the average will not relate to the entire experience of an older generation. For example, the recent summer (2012) was declared to have a provisional CET of 15.2 °C. This gives a difference from the 1981–2010 average of −0.7 degC, but with respect to the 1961–1990 period, encompassing the experience of roughly a fifth of the population, the difference was only −0.1 degC: hardly remarkable!
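The sensitivity of the anomaly to the choice of baseline can be sketched in a few lines of code. The baseline means used here (15.9 °C and 15.3 °C) are not official figures; they are simply back-derived from the differences quoted above (15.2 − 15.9 = −0.7; 15.2 − 15.3 = −0.1):

```python
# Anomaly of the provisional summer 2012 CET against two baselines.
# Baseline means are back-derived from the quoted anomalies, for illustration only.
observed = 15.2  # degC
baselines = {"1981-2010": 15.9, "1961-1990": 15.3}

for period, mean in baselines.items():
    print(f"vs {period}: {observed - mean:+.1f} degC")
```

The same observation thus reads as a notably cool summer against one baseline and an unremarkable one against the other.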
My own preference would be to relate contemporary readings of rainfall, temperature, etc. to the most recent years – I suggest a ‘rolling decade’. With modern analysis software, it is perfectly possible to keep these updated, so in a particular forecast an expected maximum temperature might be put into context thus: ‘…and afternoon temperatures will be no higher than 10 °C, that's 5 degC below the recent 10-year average’.
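A ‘rolling decade’ of this kind is trivial to maintain in software: keep only the last ten annual values and average them. A minimal sketch, using made-up annual mean temperatures rather than any real series:

```python
from collections import deque

def rolling_mean(values, window=10):
    """Mean over the most recent `window` values - a 'rolling decade' for annual data."""
    recent = deque(maxlen=window)  # automatically discards the oldest value
    means = []
    for v in values:
        recent.append(v)
        means.append(sum(recent) / len(recent))
    return means

# Illustrative (invented) annual mean temperatures, degC:
temps = [9.5, 9.8, 10.1, 9.9, 10.3, 10.0, 10.4, 10.2, 10.5, 10.6, 10.7]
print(round(rolling_mean(temps)[-1], 2))  # mean of the ten most recent years
```

Each new year's value simply displaces the oldest, so the published ‘recent 10-year average’ is always current.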
For summaries intended for general consumption, whatever averaging period is used, I would like to see much more use of the technique that Philip Eden employs, as for example in his September monthly summary on the WeatherOnline site: ‘The Central England Temperature (CET) was 13.1 °C: in the last 100 years, 25 Septembers were colder and 75 were warmer.’ Straightaway, we can picture where this month sits in the September catalogue.
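The counting behind such a statement is a simple rank within the historical record. A sketch, with a short invented list of September values standing in for the real 100-year CET record:

```python
def rank_in_record(value, record):
    """Eden-style context: how many past months were colder or warmer than `value`."""
    colder = sum(1 for v in record if v < value)
    warmer = sum(1 for v in record if v > value)
    return colder, warmer

# Illustrative (invented) September CET values, degC:
septembers = [12.4, 13.6, 12.9, 13.8, 13.0]
colder, warmer = rank_in_record(13.1, septembers)
print(f"CET 13.1 degC: {colder} Septembers colder, {warmer} warmer")  # 3 colder, 2 warmer
```

Run against the full historical series, this yields exactly the ‘25 colder, 75 warmer’ form of words quoted above.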