Geophysical Research Letters

Temporal variations in frost-free season in the United States: 1895–2000



[1] A newly available data set of daily temperature observations was used to study the temporal variability of the frost-free season, based on an inclusive 0°C threshold, for 1895–2000 in the conterminous United States. A national average time series of the length of the frost-free season is characterized by three distinct regimes. The period prior to 1930 was notable for decreasing frost-free season length from 1895 to a minimum around 1910, followed by a marked increase in length of about 1 week from 1910 to 1930. During 1930–1980, frost-free season length was near the period average with relatively little decadal-scale variability. Since 1980, frost-free season length has increased by about 1 week. The national average increase in frost-free season length from the beginning to the end of the 20th Century is about 2 weeks. Frost-free season length has increased much more in the western U.S. than in the eastern U.S.

1. Introduction

[2] Recent studies have documented an increase in the length of the frost-free season, defined as the number of days between the last frost in the spring and the first frost in the fall, in the United States. Cooter and LeDuc [1995] found such an increase in the northeastern U.S. over the period 1950–93. Cayan et al. [2001] indicated that the western spring streamflow “pulse,” which is usually initiated by a warm episode spanning the West, has been retreating toward winter by about 7–10 days over the past half century. They found a similar change in the time of blooming of lilac/honeysuckle plants. Nemani et al. [2001] showed that the number of frost days in the Napa/Sonoma region of California diminished from 28 in 1950 to about 8 by 1997, and that the length of the continuously frost-free season increased substantially (66 days), from 254 to 320 days per year, a change that has benefited the premium wine industry. In a more comprehensive study, Easterling [2002] found increases in the length of the frost-free season in all regions of the U.S. over the period 1948–99. Studies such as these, together with studies of other land areas, led the Intergovernmental Panel on Climate Change to conclude that “…high night-time minimum temperatures are lengthening the freeze and frost season in many mid- and high latitude regions” [Cubasch et al., 2001]. The IPCC also concluded that it is highly likely that such trends will continue in the future [Houghton et al., 2001].

[3] A common limitation of these and other national temperature-extremes studies is that they used data beginning in the middle portion of the 20th Century. The reason for this has been a lack of digitally available data. Although the National Weather Service Cooperative Observer Network (COOP) has been in operation since the late 19th Century, the routine digitizing of observations from the paper forms began with 1948 data and proceeded forward. A recent effort to digitize all pre-1948 COOP data has resulted in an enhanced set of daily temperature and precipitation data for the U.S. starting in the late 1800s [CDMP, 2001]. The availability of these pre-1948 daily data affords an opportunity to perform studies with unprecedented detail, extending back to the late 1800s, of trends in short-duration extreme events. Such studies may provide important insights into natural and anthropogenically-forced variability.

[4] These data have been quality-controlled by the authors of this paper, and here we present analyses of century-scale trends in the frost-free season. A related paper [Kunkel et al., 2003] described an analysis of trends in heavy precipitation events.

2. Analysis

[5] A set of station records with less than 10% missing temperature data for 1895–2000 was identified; this set consists of 794 stations, distributed as shown in Figure 1. New long-term stations that are now available as the result of the recent keying project are indicated by the symbol ‘X’ in red. Prior to this project, there were very few stations in the southeast, along the east and west coasts, and in the intermountain west. Although there are still areas with a low density of long-term stations, particularly in Wyoming, the Great Basin, Idaho, and California, all western states now have at least 3 such sites.

Figure 1.

Location of stations with less than 10% missing daily temperature data for 1895–2000. The symbol ‘o’ indicates that long-term data were available prior to CDMP while the symbol ‘x’ indicates newly available long-term stations.

[6] Frost-free season indicators were determined using the inclusive threshold of 0°C for daily minimum temperature; these indicators included the frost-free season length, the first fall occurrence of frost, and the last spring occurrence of frost. The following method for producing regional and national time series was intended to weight all areas equally, regardless of station density. To determine national values for the conterminous U.S., station values were arithmetically averaged for climate divisions. The climate division values were then averaged with area weighting to derive state values (a small number of climate divisions had no long-term stations, so a few state values are averages of only some of their climate divisions). Finally, state values were averaged with state area weighting to derive national values. Thus, areas of high or low station density are not unduly weighted or ignored, respectively. In this method, stations in low density areas make a greater contribution to the national average than stations in high density areas.
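The frost-free season indicators described above can be sketched in a few lines of code. This is a minimal sketch, not the authors' actual processing code; in particular, the split at day 182 (approximately July 1) to separate spring frosts from fall frosts is our assumption, since the paper does not state how the year is divided.

```python
import numpy as np

def frost_free_indicators(tmin):
    """Given one year of daily minimum temperatures (deg C), return
    (last spring frost, first fall frost, frost-free season length),
    with frost days defined by the inclusive 0 deg C threshold.
    Assumption: the year is split at day 182 (~July 1) to separate
    spring from fall frosts; the paper does not specify this detail."""
    tmin = np.asarray(tmin, dtype=float)
    doy = np.arange(1, tmin.size + 1)   # day of year, 1-based
    frost = tmin <= 0.0                  # inclusive 0 deg C threshold
    spring = doy[frost & (doy <= 182)]
    fall = doy[frost & (doy > 182)]
    last_spring = int(spring.max()) if spring.size else 0
    first_fall = int(fall.min()) if fall.size else tmin.size + 1
    return last_spring, first_fall, first_fall - last_spring - 1
```

The per-station values produced this way would then feed the hierarchical averaging described above (station → climate division → state → national, with area weighting at the division and state steps).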

[7] Quality control of this dataset involved a two-step procedure described in detail in Kunkel et al. [2004]; a brief description is given here. In the first step, a set of “outliers” was identified using two objective screening tests. In the second step, each outlier was manually assessed by a trained climatologist. An outlier was flagged as invalid if the climatologist judged that the observed value was outside a physically possible range or that the observed spatial pattern was physically implausible. Values judged to be invalid were excluded from the analysis. In the first screening test, a temperature observation was considered an outlier if its value was more than 5 standard deviations from the station's monthly mean. In the second screening test, the value was compared with values from nearby stations in the following manner. First, daily gridded fields of temperature were generated. Second, each data value was expressed as a deviation from the station's monthly mean. Third, for each data value, a grid estimate was generated as the bi-linear interpolation from the four nearest grid points. Fourth, each grid estimate was expressed as a deviation from the monthly mean of grid estimates. Fifth, the differences between data value deviations and grid estimate deviations were calculated. Sixth, the frequency distribution of differences was constructed and the 1% and 99% threshold limits identified. Seventh, a value was considered an outlier if the difference between the data value deviation and grid estimate deviation was approximately 3 times greater (less) than the 99% (1%) limit.
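The two screening tests can be sketched as follows. This is a simplified reconstruction under our own conventions (the gridding and bi-linear interpolation machinery of the second test is reduced to its final thresholding step, and the function names are ours, not from Kunkel et al. [2004]):

```python
import numpy as np

def flag_sigma_outliers(month_values, n_sigma=5.0):
    """First screening test: flag daily values more than n_sigma
    standard deviations from the station's monthly mean."""
    v = np.asarray(month_values, dtype=float)
    mu, sd = np.nanmean(v), np.nanstd(v)
    if sd == 0.0:
        return np.zeros(v.shape, dtype=bool)
    return np.abs(v - mu) > n_sigma * sd

def difference_thresholds(diffs, factor=3.0):
    """Second screening test (thresholding step only): from the
    distribution of (data value deviation - grid estimate deviation),
    take the 1% and 99% limits and scale them by ~3, per the text.
    Values outside the returned bounds would be sent to manual review."""
    lo, hi = np.nanpercentile(diffs, [1, 99])
    return factor * lo, factor * hi
```

Note that in the paper's procedure an automated flag is never final: each flagged value still goes to a trained climatologist for manual assessment.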

[8] Trends can be affected by temporal inhomogeneities in the data record, such as changes in instrumentation and exposure. During most of the period studied, temperature was measured by liquid-in-glass thermometers in the cotton region shelter (CRS). Beginning in the 1980s, there was a gradual change to an electronic system known as the Maximum-Minimum Temperature System (MMTS). However, Hubbard et al. [2001] found only small differences in the effects of the CRS and MMTS shelters on nighttime temperature. Another concern is the gradual growth of urban areas and the possible influence of the urban heat island on the results. However, as will be seen, the results are spatially coherent, including in rural areas.

3. Results

[9] A national average time series is presented first. This is useful for examining net changes in the U.S. but can mask important regional differences. Regional variations are then examined. Figure 2 shows time series of national average values of the date of the first fall frost, the date of the last spring frost, and the length of the frost-free season. There are three distinct regimes. During the early part of the period (prior to about 1930), the frost-free season was shorter than the 1895–2000 period average by about 5 days, with earlier fall frosts and later spring frosts. During this pre-1930 period, the frost-free season length decreased from 1895 to a minimum around 1910, followed by an increase in length of about 1 week from 1910 to 1930. During the period 1930–1980, the frost-free season length in individual years was near the long-term average and had remarkably little variability. After 1980, the frost-free season increased in length by 5–10 days. Earlier occurrences of the last spring frost made a greater contribution to this lengthening; for the period 1980–2000, the average date of the last spring frost was about 4 days earlier than the 1895–2000 average while the average date of the first fall frost was about 2 days later.

Figure 2.

Average U.S. values of the first fall occurrence of frost, the last spring occurrence of frost, and the length of the frost-free season. The values have been smoothed with a 10-year moving average filter and plotted as a deviation (days) from the period average. Positive values indicate dates later in the year (fall and spring occurrences) or a greater number of days (length).
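The smoothing applied in Figures 2 and 5 is a simple moving average of anomalies from the period mean. A minimal sketch, assuming a flat 10-year window applied only where the window fits entirely within the record:

```python
import numpy as np

def smoothed_anomaly(series, window=10):
    """Deviation from the period mean, smoothed with a moving-average
    filter (10-year window, as in Figures 2 and 5)."""
    anom = np.asarray(series, dtype=float) - np.mean(series)
    kernel = np.ones(window) / window
    return np.convolve(anom, kernel, mode="valid")
```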

[10] Although the station density is much improved over what was previously available, the density is rather low in portions of the western U.S. Is it possible that the large changes in frost-free season length are an artifact of sampling? This possibility was examined by analyzing more recent data using Monte Carlo sampling techniques. Specifically, a set of 3266 stations with less than 5% missing data for 1971–2000 was identified. This set has much higher station density in the western U.S. The sensitivity to station density was examined by randomly selecting a single station in each box of a 4° latitude by 5° longitude grid covering the entire U.S., a density approximately equal to that of the least dense areas of the intermountain west in Figure 1. The frost-free season length was then computed for this thinned-out station network. This procedure was repeated 5000 times and the distribution of values was examined. The 95% confidence limits for decadal average values of the frost-free season length were approximately ±2 days. This test is more stringent than necessary since only a small portion of the U.S. has such low station densities. Nevertheless, the changes in frost-free season lengths shown in Figure 2 are considerably larger than these limits. Thus, there is a very high likelihood that the large changes are real and not an artifact of sampling.
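The Monte Carlo density test above can be sketched as follows. The one-station-per-box thinning and the 5000-trial count come from the text; the data layout (one season-length value and one grid-box id per station) and the function name are our assumptions.

```python
import numpy as np

def thinned_network_means(station_vals, station_boxes, n_trials=5000, seed=0):
    """Monte Carlo sampling test: in each trial, randomly pick one
    station per 4 deg x 5 deg grid box and average the picks. The
    spread of the trial means measures sensitivity to station density."""
    rng = np.random.default_rng(seed)
    boxes = {}
    for i, b in enumerate(station_boxes):
        boxes.setdefault(b, []).append(i)   # group station indices by box
    members = [np.array(idx) for idx in boxes.values()]
    means = np.empty(n_trials)
    for t in range(n_trials):
        picks = [rng.choice(idx) for idx in members]
        means[t] = np.mean([station_vals[i] for i in picks])
    return means
```

The ±2-day 95% confidence limits quoted in the text would then correspond to the 2.5th and 97.5th percentiles of the returned `means` array.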

[11] Figures 3 and 4 show frost-free season length anomalies for 1895–1920 and 1980–2000, respectively, for the 4° latitude by 5° longitude grid used in the Monte Carlo analysis, computed as an arithmetic average of all stations within each grid box. Short frost-free seasons occurred in the early period over much of the central and western U.S. The long frost-free seasons of recent years are predominantly a feature of western areas, generally similar to the spatial pattern of trends found by Easterling [2002] for 1948–99. The lengthening of the frost-free season is seen at grid boxes containing large urban areas as well as at those that are mainly rural. This difference between east and west is further highlighted in Figure 5, which shows frost-free season time series for the U.S. west of 100°W and east of 100°W. Based on a least-squares linear fit, the trend in frost-free season length for the western U.S. is 19 days per century, but only about 3 days per century for the eastern U.S. (both trends are statistically significant at the 95% level of confidence).
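The quoted regional trends are ordinary least-squares slopes rescaled to days per century. A minimal sketch (the significance testing mentioned in the text is not shown):

```python
import numpy as np

def trend_days_per_century(years, lengths):
    """Least-squares linear trend in frost-free season length,
    expressed in days per century."""
    slope, _intercept = np.polyfit(years, lengths, 1)
    return slope * 100.0
```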

Figure 3.

Map of 1895–1920 gridbox-averaged anomalies from the long-term mean frost-free season length. Filled-in (open) circles indicate positive (negative) anomalies.

Figure 4.

Map of 1980–2000 gridbox-averaged anomalies from the long-term mean frost-free season length. Filled-in (open) circles indicate positive (negative) anomalies.

Figure 5.

Time series of frost-free season length for the western U.S. west of 100°W (heavy line) and for the eastern U.S. east of 100°W (light line). The values have been smoothed with a 10-year moving average filter and plotted as a deviation (days) from the period average.

[12] Mean temperature anomalies (compared to the 1895–2000 average) were calculated for the 1895–1920 and 1980–2000 periods and compared to frost-free season length anomalies for each station. A summary of the results (Figure 6) indicates that there is a direct relationship between the two, with longer (shorter) frost-free seasons associated with warmer (cooler) periods. White et al. [1999] found a direct relationship between frost-free seasons and mean annual temperature when comparing different locations. The results here indicate a similar relationship in time at a fixed location. Interestingly, the frost-free season anomaly for a given temperature anomaly bin is more positive in the 1980–2000 period than in the 1895–1920 period. This may reflect the observation that minimum temperatures are rising faster than maximum temperatures.

Figure 6.

Frost-free season anomalies (relative to 1895–2000) versus mean temperature anomalies for 1895–1920 and 1980–2000. Each temperature anomaly bin is an average of all stations with temperature anomalies in the bin range. The error bars indicate ±1 standard deviation. The bin label is the center of the range (e.g., 1.5 indicates a range of 1.26–1.75).

4. Discussion

[13] The increase in frost-free season length from the beginning to the end of the 20th Century by about 2 weeks is a substantial change. The timing of the two periods of major increases (1910–1930 and 1980–2000) is broadly consistent with the timing of periods of increases in U.S. mean annual temperature. Although first fall frosts have been later, the spring season has made a greater contribution to the increase in frost-free season length. The spatial pattern of change is also consistent with the pattern of U.S. mean annual temperature changes, that is, larger changes in the west than in the east. The recent large increases in frost-free season length in the mountainous western states (Figure 4) agree very well with the results of Nemani et al. [2001] and Cayan et al. [2001], who employed direct and indirect measures of temperature.

[14] A reduction in the diurnal temperature range is typical of air masses that carry more moisture [Hubbard et al., 2003] and allow less penetration of solar radiation to the earth's surface [Mahmood and Hubbard, 2002]. Thus, it is possible that the shorter growing seasons of the early 1900s were characterized by drier air masses and greater global solar radiation, while the longer growing seasons of recent years may be associated with comparatively moister air masses and reduced global solar radiation. At present we can only speculate.

[15] Further insights may be gained in the future as other data sets become available. For example, the CDMP is now undertaking the keying of selected 19th Century records of U.S. daily climate observations taken prior to the establishment of the COOP. Although observational practices were not as standardized and thus homogeneity of record will be an issue, it may be possible to add a few decades to the century-long findings presented here.


[16] This material is based upon work supported by the Office of Global Programs, National Oceanic and Atmospheric Administration (NOAA) under Award No. NA16GP1498. Additional support was provided under NOAA cooperative agreement NA17RJ1222. Partial support for David Easterling was provided by the Office of Biological and Environmental Research, U.S. Dept. of Energy (DOE). Any opinions, findings, and conclusions are those of the authors and do not necessarily reflect the views of NOAA or the Illinois State Water Survey.