Reconciling precipitation trends in Alaska: 1. Station-based analyses


  • Stephanie A. McAfee,

    Corresponding author
    1. The Wilderness Society, Anchorage, Alaska, USA
    2. Now at Scenarios Network for Alaska and Arctic Planning, University of Alaska Fairbanks, Anchorage, Alaska, USA
• Corresponding author: S. A. McAfee, Department of Geography, University of Nevada, Reno, Reno, NV, 89557, USA.

  • Galina Guentchev,

    1. Visiting scholar at University Corporation for Atmospheric Research Visiting Scientist Programs, Boulder, Colorado, USA
    2. Now at Climate Science and Applications Program, Research Applications Laboratory, National Center for Atmospheric Research, Boulder, Colorado, USA
  • Jon K. Eischeid

    1. Cooperative Institute for Research in Environmental Sciences, NOAA, Boulder, Colorado, USA

  • This article was corrected on 17 SEP 2014. See the end of the full text for details.


[1] Numerous studies have evaluated precipitation trends in Alaska and come to different conclusions. These studies differ in analysis period and methodology and do not address the issue of temporal homogeneity. To reconcile these conflicting results, we selected 29 stations with largely complete monthly records, screened them for homogeneity, and then evaluated trends over two analysis periods (1950–2010 and 1980–2010) using three methods: least absolute deviation regression, ordinary least squares regression (with and without transformation), and Mann-Kendall trend testing following removal of first-order autocorrelation. We found that differences in analysis period had a significant impact on trends and that the presence of inhomogeneities or step changes also posed a substantial challenge to detecting reliable long-term trends in precipitation over Alaska, particularly in the southern part of the state. Although some of these inhomogeneities occur in the mid-1970s and could be associated with well-documented changes in the Pacific Ocean and the Aleutian Low at that time, many of the inhomogeneities co-occur with changes in station location, instrumentation, or operation. These operationally induced changes make it difficult to accurately detect the impact of decadal to multidecadal climate variability on precipitation amounts and to assess historical precipitation trends in Alaska.

1 Introduction

[2] In the popular press, Alaska is “ground zero for climate change” [Reiss, 2010]. Editorial embellishment aside, a number of environmental changes have been observed over the last several decades. These include increases in boreal forest fire size and frequency [Kasischke and Turetsky, 2006], reductions in the productivity and greenness of the southern boreal forest [Beck et al., 2011; Verbyla, 2008], more intense insect activity [Berg et al., 2006], and changes in lake and wetland dynamics [Riordan et al., 2006; Klein et al., 2005].

[3] Many of these observations are consistent with perceived precipitation changes, as well as with warming. Recent studies, however, emphasize the challenges of evaluating secular climate trends in Alaska, which appears to be subject to significant decadal scale climate variability [López-de-Lacalle, 2011], and note that trend analyses can be sensitive to analytical period [Bone et al., 2010]. Although these specific studies focus on temperature, it is likely that these difficulties translate to evaluating precipitation trends, as well.

1.1 Challenges in Analyzing Precipitation Trends

[4] In addition to the challenges of identifying trends in Alaska, precipitation has a number of characteristics that complicate robust and compelling trend detection. Precipitation, particularly in arid areas, is often not normally distributed [Groisman and Easterling, 1994]. Serial correlation, outliers, and missing data can influence estimates of statistical significance for many trend tests. Changes in gauge location, instrumentation, or even observer may introduce inhomogeneities or step changes in the amount of precipitation measured at a station [Groisman and Easterling, 1994]. All of these difficulties likely contribute to the high level of disagreement in the literature about the direction and magnitude of precipitation change in Alaska.

[5] Traditional rain gauges do not accurately measure light or frozen precipitation in windy locations [Goodison et al., 1998; Yang et al., 2005]. Some authors assume that this measurement bias or undercatch is constant and, therefore, unlikely to influence trend analysis [L'Heureux et al., 2004; Stafford et al., 2000]. When Yang et al. [2005] tested this hypothesis, they found that correcting for undercatch can enhance the magnitude of trends. However, applying a constant correction could introduce spurious trends if the catch ratio is, in fact, not constant over time. For example, long-term variability in wind speed, such as that documented by Hinzman et al. [2005] at Barrow, could alter the degree of undercatch over time. Undercatch also differs for different gauge types [Yang et al., 2001], and changes in gauge styles are not uncommon. Thus, in our opinion, adjusting precipitation for wind and instrument-related undercatch may introduce as many uncertainties to trend analysis as leaving the record unaltered.

1.2 Methodology of Previous Studies

[6] A large number of studies have evaluated trends in precipitation across Alaska and the Arctic, often in the context of answering related questions, such as identifying the source of hydrological and ecological changes [Riordan et al., 2006; Hinzman et al., 2005; Zhang et al., 2009; Serreze et al., 2000; Stone et al., 2002; White et al., 2007]. There have also been several studies focused specifically on precipitation in Alaska [Curtis et al., 1998; L'Heureux et al., 2004; Stafford et al., 2000; Wendler and Shulski, 2009] and over larger portions of boreal and arctic North America [Diaz, 1986; Kattsov and Walsh, 2000; Groisman and Easterling, 1994]. Unfortunately, these studies, summarized in Table 1, do not paint a particularly compelling and consistent picture of precipitation trends in Alaska.

Table 1. Summary of Published Precipitation Trendsa
Location/Source | Time Period | Annual | Winter | Spring | Summer | Autumn | Citation
  1. aUnless indicated otherwise, winter is defined as December through February, spring as March through May, summer as June through August, and autumn as September through November. Regional divisions follow those in Stafford et al. [2000]. pos or neg alone indicates the presence of positive or negative trend where the statistical significance was not presented. neg/pos indicates that a gridded product displays both negative and positive trends in the region. All significant results are given in bold with 90 or 95 next to the word pos or neg indicating the significance level. -ns appended to pos or neg indicates a nonsignificant trend at the level indicated; ns with a number next to it indicates a nonsignificant trend at the level indicated and that the study did not provide information about the direction of trend. Blanks in the table indicate a lack of information, not a lack of trend or change; in only one case was the presented change equal to 0. Additional information is provided in footnotes to the table.
  2. bStart and end years are approximate, as values were read from a plot.
  3. cTrend in precipitable water from the TIROS Operational Vertical Sounder.
  4. dTrends are mapped and so are reported for the region. In cases where a region shows both positive and negative trends, that is indicated.
  5. eTrend was also significant over the period 1949–1988.
  6. fTrend in October to February water-equivalent precipitation.
  7. gMean change in precipitation at Bettles, Fairbanks, Big Delta, McGrath, and Northway.
  8. hIt is unclear if the trend for Bettles reported in this paper actually covers the period from 1949 to 1998, as most sources provide precipitation data for this station beginning in 1951 or 1952. Likewise, the Anchorage station at Ted Stevens International Airport was not established until April 1952.
  9. iMean change in precipitation at Kotzebue, Nome, Bethel, St. Paul Island, and Cold Bay.
  10. jMean change in precipitation at Talkeetna, Gulkana, Anchorage, Matanuska, Kenai, Seward, Kodiak, Cordova, Yakutat, Sitka, Juneau, Little Port Walter, Wrangell, and Annette.
55°N–85°N obs | 1900–1994 | pos |  |  |  |  | Kattsov and Walsh [2000]
55°N–85°N ECHAM4 | 1900–1994 | pos |  |  |  |  |
55°N–85°N GHCN | 1904–1994 | pos |  |  |  |  |
55°N–85°N [Eischeid, 1995] | 1900–1995b | pos | pos | pos | pos | pos | Serreze et al. [2000]
Alaska | 1950–1990 | pos |  |  |  |  | Groisman and Easterling [1994]
North Slope
TIROSc | 1979–2005 | posd | pos/negd | posd | posd | posd | White et al. [2007]
GPCP (2.5° × 2.5°) | 1983–2005 | pos90d |  |  |  |  | Zhang et al. [2009]
GPCC (0.5° × 0.5°) |  |  |  |  |  |  |
Barrow | 1941–2002 | neg90 |  |  |  |  | Riordan et al. [2006]
 | 1942–2002b | neg90 |  |  |  |  | Hinzman et al. [2005]
 | 1949–1996e | neg95 | neg95 | neg95 | neg-ns95 | neg-ns95 | Curtis et al. [1998]
 | 1949–1998 | neg95 | neg95 | neg-ns | neg-ns95 | neg-ns95 | Stafford et al. [2000]
 | 1950–1998 |  | neg95 |  | ns90 |  | L'Heureux et al. [2004]
 | 1956–2006 | neg |  |  |  |  | Alessa et al. [2011]
 | 1966–2000b |  | negf |  |  |  | Stone et al. [2002]
 | 1950–2009 | neg/pos |  |  |  |  | Kittel et al. [2011]
Barter Island | 1949–1988 | neg95 | neg95 | neg95 | neg-ns95 | neg-ns95 | Curtis et al. [1998]
 | 1950–1988 |  | neg95 |  | neg95 |  | L'Heureux et al. [2004]
TIROSc | 1979–2005 | posd | posd | posd | posd | neg/posd | White et al. [2007]
GPCP (2.5° × 2.5°) | 1983–2005 | neg90d |  |  |  |  | Zhang et al. [2009]
GPCC (0.5° × 0.5°) |  |  |  |  |  |  |
Interiorg | 1949–1998 | pos | pos | pos | pos | pos | Stafford et al. [2000]
Bettles | 1949–1998h | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1951–2002 | pos-ns90 |  |  |  |  | Riordan et al. [2006]
 | 1952–1998 |  | ns90 |  | ns90 |  | L'Heureux et al. [2004]
 | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
Fairbanks | 1900–2002b | pos-ns90 |  |  |  |  | Hinzman et al. [2005]
 | 1916–2006 | neg-ns95 | neg | neg | neg | 0 | Wendler and Shulski [2009]
 | 1946–2002 | neg-ns90 |  |  |  |  | Riordan et al. [2006]
 | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1950–1998 |  | ns90 |  | ns90 |  | L'Heureux et al. [2004]
 | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
Big Delta | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1956–2006 | neg |  |  |  |  | Alessa et al. [2011]
McGrath | 1936–2002 | neg-ns90 |  |  |  |  | Riordan et al. [2006]
 | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1950–1998 |  | pos90 |  | ns90 |  | L'Heureux et al. [2004]
 | 1956–2006 | neg |  |  |  |  | Alessa et al. [2011]
Northway | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1949–2002 | pos-ns90 |  |  |  |  | Riordan et al. [2006]
TIROSc | 1979–2005 | posd | pos/negd | posd | posd | negd | White et al. [2007]
GPCP (2.5° × 2.5°) | 1983–2005 | neg/pos |  |  |  |  | Zhang et al. [2009]
GPCC (0.5° × 0.5°) |  | pos90d |  |  |  |  |
Westerni | 1949–1998 | pos | pos | pos | neg | pos | Stafford et al. [2000]
Kotzebue | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
Nome | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1950–1998 |  | pos95 |  | ns90 |  | L'Heureux et al. [2004]
 | 1956–2006 | neg |  |  |  |  | Alessa et al. [2011]
Bethel | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
King Salmon | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
St. Paul Island | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1956–2006 | neg |  |  |  |  | Alessa et al. [2011]
Cold Bay | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
TIROSc | 1979–2005 | posd | pos/negd | posd | posd | negd | White et al. [2007]
GPCP (2.5° × 2.5°) | 1983–2005 | neg |  |  |  |  | Zhang et al. [2009]
GPCC (0.5° × 0.5°) |  | neg90 |  |  |  |  |
South/Southeastj | 1949–1998 | pos | pos | pos | neg | pos | Stafford et al. [2000]
Gulkana | 1941–2002 | pos-ns90 |  |  |  |  | Riordan et al. [2006]
 | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1956–2006 | neg |  |  |  |  | Alessa et al. [2011]
Talkeetna | 1940–2002 | neg-ns90 |  |  |  |  | Riordan et al. [2006]
 | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
Matanuska | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
Anchorage | 1949–1998h | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
Kenai | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
Seward | 1949–1998 | pos-ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
Homer | 1956–2006 | neg |  |  |  |  | Alessa et al. [2011]
Kodiak | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
Cordova | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1950–1998 |  | pos95 |  | ns90 |  | L'Heureux et al. [2004]
Yakutat | 1949–1998 | pos95 | pos95 | pos95 | pos95 | pos95 | Stafford et al. [2000]
 | 1950–1998 |  | pos95 |  | pos95 |  | L'Heureux et al. [2004]
 | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
Juneau | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1950–1998 |  | pos95 |  | ns90 |  | L'Heureux et al. [2004]
 | 1956–2006 | pos |  |  |  |  | Alessa et al. [2011]
Sitka | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
Little Port Walter | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
Wrangell | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
Annette | 1949–1998 | ns95 | ns95 | ns95 | ns95 | ns95 | Stafford et al. [2000]
 | 1956–2006 | neg |  |  |  |  | Alessa et al. [2011]

[7] Nine studies report the results of station-based trend analyses [Alessa et al., 2011; Kittel et al., 2011; Wendler and Shulski, 2009; Riordan et al., 2006; Hinzman et al., 2005; L'Heureux et al., 2004; Stone et al., 2002; Stafford et al., 2000; Curtis et al., 1998]. Most of these studies use linear regression for trend detection. Although a few studies did not specify the fit assumption [Alessa et al., 2011; Wendler and Shulski, 2009], we presume that they used least squares, as it is the most commonly used regression fit assumption and the default for many statistical analysis packages. Riordan et al. [2006] evaluated their time series for influential (i.e., particularly wet or dry) years near the ends of time series, and Curtis et al. [1998] evaluated the influence of outliers. L'Heureux et al. [2004] used transformation to accommodate deviations from normality and evaluated the influence of serial correlation on statistical significance. None of the other studies explicitly discussed measures used to meet the assumptions of least squares regression. Kittel et al. [2011] fit a polynomial to the time series of precipitation, in order to identify shifts in mean precipitation, rather than trends.

1.3 Results of Previous Studies

[8] Many of the published studies discussed in section 1.2 are based on trend analyses beginning around 1950 and ending around the turn of the century. On Alaska's North Slope (Barrow and Barter Island stations), most of the analyses indicated significant decreases in precipitation (Figure 1 and Table 1 and citations therein). In contrast, Kittel et al. [2011] noted that precipitation at Barrow decreased from 1950 to the mid-1970s and increased afterward. Although it is not clear whether the recent increase they detect is statistically significant, it is consistent with positive trends in North Slope precipitable water from 1979 to 2005 and 1979 to 2010 found by White et al. [2007] and Serreze et al. [2012] and in gridded precipitation between 1983 and 2005 [Zhang et al., 2009].

Figure 1.

Location map showing stations used in this study. Numbers indicating the locations of stations in the analysis can be found in the first column of Table 2. Gray shading denotes areas with elevations > 500 m.

Table 2. General Information for Stations Used in This Studya
No. | Coop ID | Station Name | Latitude | Longitude | Elevation (m) | First Available Year | Sufficient for Analysis | DJF | MAM | JJA | SON | Annual (climatological averages, cm)
  1. aThe numbers in the left-hand column correspond to the numbers on the location map in Figure 1. Also shown are the stations' Coop identification numbers, the first year for which data are available, the period with sufficiently complete data for analysis, and each station's climatological seasonal (DJF, MAM, JJA, SON) and annual precipitation (centimeters) from 1980 to 2010.
  2. bFirst-order station.
0 | 500280 | Anchorage Ted Stevens International Airportb | 61.19 | −150.00 | 40 | 1952 | 1954–2010 | 6.5 | 4.5 | 15.5 | 15.7 | 42.3
1 | 500352 | Annette Island Airfieldb | 55.04 | −131.57 | 33 | 1941 | 1950–2010 | 73.4 | 52.6 | 41.7 | 92.1 | 259.3
2 | 500546 | Barrow WSO Airportb | 71.29 | −156.76 | 9 | 1920 | 1950–2010 | 1.
3 | 500754 | Bethel Airportb | 60.79 | −161.83 | 31 | 1923 | 1950–2010 | 6.6 | 6.5 | 18.8 | 14.9 | 46.8
15 | 500761 | Bettles Airportb | 66.92 | −151.51 | 196 | 1951 | 1952–2010 | 6.
16 | 500770 | Big Delta Airportb | 63.99 | −145.72 | 386 | 1917 | 1950–2010 | 2.
4 | 502102 | Cold Bay Airportb | 55.22 | −162.73 | 24 | 1950 | 1951–2010 | 26.6 | 19.7 | 22.6 | 36.6 | 105.6
17 | 502107 | College Observatory | 64.86 | −147.84 | 189 | 1949 | 1950–2005 | 4.
18 | 502177 | Cordova Airport | 60.49 | −145.45 | 9 | 1923 | 1950–2010 | 57.9 | 39.2 | 52.7 | 81.5 | 231.1
5 | 502968 | Fairbanks Airportb | 64.82 | −147.87 | 132 | 1929 | 1950–2010 | 4.
21 | 503465 | Gulkana Airportb | 62.16 | −145.46 | 479 | 1940 | 1950–2010 | 4.5 | 2.9 | 12.8 | 8.3 | 28.5
22 | 503665 | Homer Airportb | 59.64 | −151.49 | 20 | 1939 | 1950–2010 | 19.
6 | 504100 | Juneau Airportb | 58.37 | −134.58 | 4 | 1943 | 1950–2010 | 38.9 | 25.7 | 34.7 | 59.0 | 157.8
7 | 504546 | Kenai Municipal Airport | 60.57 | −151.25 | 26 | 1882 | 1950–2010 | 8.2 | 5.4 | 14.3 | 18.6 | 46.5
8 | 504766 | King Salmon Airportb | 58.68 | −156.65 | 14 | 1942 | 1955–2010 | 7.6 | 7.5 | 17.9 | 17.5 | 50.5
28 | 504812 | Kitoi Bay | 58.19 | −152.37 | 18 | 1954 | 1980–2004 | 52.0 | 39.3 | 33.7 | 49.2 | 174.8
23 | 504988 | Kodiak Airport/USGC Baseb | 57.75 | −152.50 | 5 | 1941 | 1980–2010 | 59.6 | 43.1 | 38.7 | 56.8 | 198.4
9 | 505076 | Kotzebue Airportb | 66.89 | −162.60 | 3 | 1930 | 1950–2010 | 5.0 | 3.5 | 10.6 | 8.3 | 27.5
24 | 505733 | Matanuska Agricultural Experiment Station | 61.57 | −149.25 | 52 | 1949 | 1980–2010 | 6.9 | 3.9 | 15.4 | 12.0 | 38.1
10 | 505769 | McGrath Airportb | 62.96 | −155.61 | 101 | 1931 | 1950–2010 | 8.3 | 6.6 | 17.2 | 13.3 | 45.4
11 | 506496 | Nome Airportb | 64.51 | −165.44 | 4 | 1908 | 1950–2010 | 7.4 | 5.7 | 16.0 | 13.2 | 42.4
25 | 506586 | Northway Airport | 62.96 | −141.93 | 522 | 1942 | 1986–2010 | 1.7 | 3.8 | 16.3 | 5.0 | 26.8
26 | 507570 | Port Alsworth | 60.20 | −154.32 | 79 | 1960 | 1980–2007 | 6.5 | 3.7 | 10.0 | 12.8 | 33.2
12 | 508118 | St Paul Island Airportb | 57.16 | −170.22 | 11 | 1911 | 1950–2010 | 13.
13 | 508976 | Talkeetna Airportb | 62.32 | −150.10 | 107 | 1940 | 1950–2010 | 12.0 | 10.1 | 26.6 | 22.5 | 71.2
27 | 509014 | Tanana Airport | 65.17 | −152.10 | 71 | 1909 | 1980–2010 | 3.
19 | 509641 | University Agricultural Experiment Station | 64.86 | −147.86 | 145 | 1930 | 1950–2010 | 4.
14 | 509941 | Yakutat WSO Airportb | 59.51 | −139.63 | 9 | 1936 | 1950–2010 | 102.7 | 72.4 | 71.8 | 146.5 | 393.3

[9] In the interior of the state, study results were mixed, and most trends were not statistically significant (Table 1). Of the studies that specified statistical significance, only two significant trends were found. L'Heureux et al. [2004] reported an increase in winter precipitation between 1950 and 1998 at McGrath (p < 0.10), and Zhang et al. [2009] showed decreases over the interior region (p < 0.10) from 1983 to 2005 in gridded data from both the Global Precipitation Climatology Project (GPCP) and the Global Precipitation Climatology Centre (GPCC).

[10] Few significant trends were reported in western Alaska. L'Heureux et al. [2004] found increasing winter precipitation at Nome (p < 0.05, 1950–1998), conflicting with the negative trend in precipitation there reported by Alessa et al. [2011] from 1956 to 2006. The decrease in precipitation at Nome identified by Alessa et al. [2011] also contrasts with precipitation increases at Kotzebue, ~300 km to the northeast, and Bethel, King Salmon, and Cold Bay to the south.

[11] There is substantial disagreement among studies of station-based precipitation trends in south-central and southeastern Alaska. Riordan et al. [2006] found a nonsignificant increase in precipitation at Gulkana between 1940 and 2002; Alessa et al. [2011] identified a negative trend there between 1956 and 2006. At Talkeetna, Alessa et al. [2011] found increasing precipitation, while Riordan et al. [2006] identified a nonsignificant decrease. L'Heureux et al. [2004] and Alessa et al. [2011] both report increasing precipitation at Yakutat and Juneau, but Stafford et al. [2000], also evaluating trends from the middle to late twentieth century, found no significant trends at these stations.

1.4 Study Motivation

[12] Because climate change is expected to be pronounced at higher latitudes and the region as a whole may be uniquely sensitive to climate change, there is great interest in describing recent climate variability and its impacts in Alaska. Disagreements between published studies, which may arise from methodological differences, complicate such assessments. In order to resolve some of these differences, we performed a comprehensive reevaluation of precipitation trends in Alaska, examining the trends at long-term stations.

[13] Using station-based precipitation data, we statistically evaluated homogeneity and investigated potential causes of any detected inhomogeneities. Testing data for homogeneity is necessary prior to conducting a trend analysis because changes in station location, operation, and instrumentation can introduce unintentional changes in observed precipitation amounts [Peterson et al., 1998]. Precipitation time series were then subjected to three different trend tests (Mann-Kendall with trend-free prewhitening, least absolute deviation regression, and ordinary least squares regression with and without transformation) over two time periods: 1950–2010 and 1980–2010.

[14] Section 2 describes the data selection process, quality control, and gap-filling procedures. Section 3 outlines the analytical methods used to evaluate homogeneity and trends. In section 4, we present and discuss the results of homogeneity and trend analyses. We investigate station metadata for operational causes of dated inhomogeneities, compare our results to the published literature, evaluate differences between the results of the three statistical trend detection methods, and assess whether missing data and gap-filling procedures might have caused inhomogeneities or influenced trends.

2 Data

2.1 Station Data Selection

[15] We compiled an initial list of stations from the list of U.S. Historical Climatology Network candidate stations (D. P. Keiser, personal communication, 2010), the previously used Alaskan Historical Climatology stations [Karl et al., 1994] as shown by the Alaska Climate Research Center, and the list of first-order stations. We retrieved monthly data for these 47 stations from the National Climatic Data Center (NCDC) and from the Western Regional Climate Center (WRCC). We also received data for the Eagle station from the National Weather Service (NWS) Forecast Office in Fairbanks. NCDC served as the primary data source, and we used records from the WRCC and NWS to fill gaps in the records from NCDC. Of the 47 stations initially identified, 29 had substantially complete records from 1950 or 1980 to 2010 after the gap-filling process described in the next section. General information about these stations is listed in Table 2, and stations are mapped in Figure 1.

2.2 Gap Filling

[16] Monthly totals from the primary source, NCDC, were supplied only if the month had at least 25 days of data. Gaps in the data from NCDC were filled using information from the WRCC and NWS. Appendix A lists the months and years at each station where gaps in the records from NCDC were filled with data from the WRCC. We used WRCC monthly totals with as many as 26 missing days in a month; the number of missing days WRCC reported for those months is shown in Appendix A. In the majority of cases, 10 or fewer days were missing in any given month, but there were a few months where the monthly total was derived from only 4 or 5 days of data.

[17] Although many studies reject months with missing days [e.g., Stafford et al., 2000], there are costs as well as benefits to doing so. Excluding months with missing days guarantees that the data are complete but may lead to eliminating adequate data, particularly when monthly totals are summed over seasons or years. Because precipitation does not occur on every day in a month, it is not clear that missing days, even for a substantial fraction of the month, would necessarily lead to an incomplete monthly precipitation total. Not all of the trend tests used here are sensitive to outliers and missing data, so unless there is a systematic change in data completeness, using some months with missing days may not influence trend results. Nonetheless, we evaluated trend results to determine whether including months with missing days might have had an impact. Section 4.6.1 discusses the results of this evaluation.

[18] After filling gaps in the NCDC records with data from the WRCC and NWS, we evaluated data for completeness over two time periods spanning from 1950 or 1980 to 2010. Based on that evaluation, we identified 29 stations that had no more than one value missing from the time series for each calendar month. For example, Bettles was deemed usable even though it was missing 4 months of data (January and February 1998, July 1959, and August 1996), because the gaps fell in different months of the year (i.e., there were never two Februarys missing). Some stations with reasonably complete records were missing data near the ends of those time periods. We evaluated these stations over periods for which data were available. For example, there were gaps in the data for Anchorage International Airport in 1952 and early 1953, so we began the analyses in 1954. Following commonly used practice [e.g., Linacre, 1992, p. 56], we filled missing months of data with the appropriate period mean (1950–2010 or 1980–2010) for that month. Homer was the only station missing 2 years of data from a given month; March data were missing from 1973 and 1998. We filled the earlier year, 1973, with the 1950–1979 mean, and the later year, 1998, with the 1980–2010 mean. We never replaced more than three missing months in any given year with the monthly mean. For the 29 stations with reasonably complete records, we calculated annual and seasonal precipitation totals, using the four standard seasons: December to February (DJF), March to May (MAM), June to August (JJA), and September to November (SON).
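The completeness screen and period-mean replacement rules above can be sketched in a few lines of Python. This is an illustration under our reading of the procedure; the helper names (`fill_month_series`, `usable`) are ours, not the authors':

```python
def fill_month_series(values):
    """Replace missing entries (None) in one calendar month's year-by-year
    series with the mean of the non-missing values, i.e., the period mean
    for that month."""
    present = [v for v in values if v is not None]
    month_mean = sum(present) / len(present)
    return [month_mean if v is None else v for v in values]

def usable(monthly_series):
    """A station is usable when each calendar month's series is missing
    no more than one value (e.g., there are never two missing Februarys)."""
    return all(sum(v is None for v in series) <= 1
               for series in monthly_series.values())
```

For example, `fill_month_series([2.0, None, 4.0])` returns `[2.0, 3.0, 4.0]`, the middle year taking the mean of the two observed values.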

[19] To determine whether filling gaps in the station data influenced our results, we compared the timing of replacements (with data from WRCC and with the period mean) with the occurrence of dated inhomogeneities. We also compared the timing of replacements with potentially influential years in the least squares regression analysis. Homogeneity testing and influence assessment methods are discussed below in sections 3.1 and 3.2.3, respectively.

3 Analytical Methods

3.1 Homogeneity Testing

[20] Annual time series were subject to four absolute homogeneity tests: Alexandersson SNHT for a single breakpoint [Alexandersson, 1986], the Buishand range test [Buishand, 1982], the Pettit test [Pettit, 1979], and the Von Neumann ratio test [Von Neumann, 1941] as described in [Wijngaard et al., 2003]. Homogeneity tests were applied to the raw annual data, as well as to the cube roots of the time series in order to accommodate the normality assumption in three of the tests. Three of the homogeneity tests used (Buishand, Pettit, and SNHT) identify the date of detected inhomogeneities. While the Buishand range test and the Pettit rank test are sensitive to inhomogeneities in the middle of the period, the Alexandersson SNHT test is more responsive to breaks in the beginning and end of the time series. Following the recommendations of Wijngaard et al. [2003], stations were categorized as “useful” if no more than one test detected an inhomogeneity at p < 0.01, “doubtful” if any two tests did, and “suspect” if three or four tests detected inhomogeneities in the time series. If testing identified an inhomogeneity prior to 1980, we tested the 1980–2010 period to determine whether it would be appropriate to perform trend analysis on the recent decades. As a final check, we evaluated the homogeneity of all seasonal time series that identified a statistically significant trend.
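As an illustration of what these absolute tests do, here is a minimal Python sketch of the Pettitt test (ours, for exposition; the analysis itself followed the formulations described by Wijngaard et al. [2003]). It scans every candidate break year, compares the data before and after the split, and returns an approximate two-sided p-value:

```python
import math

def pettitt(x):
    """Pettitt's rank-based change-point test.

    Returns (tau, K, p): the number of points before the most likely
    break, the test statistic, and an approximate p-value.
    """
    n = len(x)
    sgn = lambda d: (d > 0) - (d < 0)
    # U_k compares values before and after a split following position k
    U = [sum(sgn(x[i] - x[j]) for i in range(k + 1) for j in range(k + 1, n))
         for k in range(n - 1)]
    K = max(abs(u) for u in U)
    tau = U.index(max(U, key=abs)) + 1
    p = min(1.0, 2.0 * math.exp(-6.0 * K * K / (n ** 3 + n ** 2)))
    return tau, K, p
```

A clean step change is flagged immediately: for a series of five small values followed by five large ones, the break is located after the fifth point.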

[21] Because inhomogeneities can arise from changes in station operation, as well as from shifts in climatic regimes, we investigated station histories in the Multi-Network Metadata System (MMS). “Doubtful” and “suspect” stations were not subject to Mann-Kendall trend testing for the 1950–2010 period, even if the inhomogeneity did not appear to have operational origins, as it is not statistically appropriate to apply trend tests across a detected break. The other trend tests were applied across detected inhomogeneities for comparison with existing studies, but their results should not be interpreted as robust trends. Stations were analyzed for the 1980–2010 period if dated inhomogeneities occurred prior to 1980 and the period after the break was deemed homogeneous by subsequent testing. Stations with inhomogeneities after 1980 were generally not analyzed by Mann-Kendall testing; however, we did test for trends at Kotzebue, because the inhomogeneity occurred so close to the beginning of the analysis period (in 1981).

3.2 Trend Evaluation

3.2.1 Mann-Kendall Trend Test and Sen's Slope Estimator

[22] The Mann-Kendall test is a nonparametric rank-based trend test [Gilbert, 1987] that is robust to nonnormality and is less influenced by outliers than ordinary least squares regression [Helsel and Hirsch, 2002]. The test identifies systematic increases or decreases in the rank of data points with time. Serial autocorrelation can impact trend detection [Yue et al., 2002], so time series are often prewhitened (i.e., the first-order autoregressive [AR1] process is removed) to avoid this issue. Yue et al. [2002] found, however, that removing a positive (negative) AR1 process can decrease (increase) the apparent trend. In this study we assume that an AR1 process adequately describes the annual and seasonal precipitation series. Based on this assumption and following the recommendation of Yue et al. [2002], we first detrended the data and then removed the AR1 process. To obtain the final time series, we recombined the trend with the prewhitened data and applied the Mann-Kendall test to that blended series. The magnitude of change associated with any trend detected by the Mann-Kendall test was estimated using Sen's slope estimator [Sen, 1968], which shares the Mann-Kendall test's robustness to missing values and outliers.
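The trend-free prewhitening sequence just described (Sen-slope detrend, remove the AR1 process, restore the trend, then test) can be sketched in Python. This is our illustrative implementation, assuming no ties in the data so the simple Mann-Kendall variance applies; production analyses would add tie corrections:

```python
import math
from statistics import mean, median

def sen_slope(x):
    """Median of all pairwise slopes [Sen, 1968]."""
    n = len(x)
    return median((x[j] - x[i]) / (j - i)
                  for i in range(n) for j in range(i + 1, n))

def mann_kendall(x):
    """Mann-Kendall Z statistic and two-sided p-value (no tie correction)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - (s > 0) + (s < 0)) / math.sqrt(var)   # continuity correction
    p = math.erfc(abs(z) / math.sqrt(2.0))
    return z, p

def tfpw_mk(x):
    """Trend-free prewhitening [Yue et al., 2002]: detrend with the Sen
    slope, remove the AR1 process, blend the trend back in, then test."""
    b = sen_slope(x)
    d = [v - b * t for t, v in enumerate(x)]                 # detrend
    m = mean(d)
    r1 = (sum((d[t] - m) * (d[t + 1] - m) for t in range(len(d) - 1))
          / sum((v - m) ** 2 for v in d))                    # lag-1 autocorr
    pw = [d[t] - r1 * d[t - 1] for t in range(1, len(d))]    # prewhiten
    blended = [v + b * (t + 1) for t, v in enumerate(pw)]    # add trend back
    return mann_kendall(blended)
```

Applied to a rising series with strongly negatively autocorrelated residuals, the procedure still recovers the underlying slope and a significant positive trend.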

3.2.2 Least Absolute Deviation Regression

[23] Regression based on least absolute deviations (LAD) minimizes the sum of the absolute differences between the data points and the fitted straight line. Although, like most tests, it assumes that the data are temporally independent, it does not require that the data be normally distributed and is not strongly influenced by the presence of outliers. LAD is particularly robust to skewed distributions, which are common in precipitation data [Barrodale and Roberts, 1973]. Data were not prewhitened before evaluation, as Khaliq et al. [2009] suggest that doing so prior to regression-based trend detection can degrade the results. We calculated the t-statistic (b/SE_b) of the LAD regressions and evaluated significance assuming a two-tailed t-distribution.
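Because an optimal LAD line can always be chosen to pass through at least two of the data points, an exact fit for short annual series (31 or 61 points here) can be found by brute-force search over point pairs. This sketch is our illustration of the LAD criterion, not the algorithm (e.g., the Barrodale-Roberts simplex) that production code would use:

```python
def lad_fit(t, y):
    """Exact least-absolute-deviations line for small samples: try the
    line through every pair of points and keep the one minimizing the
    sum of absolute residuals (SAD)."""
    best = None
    n = len(t)
    for i in range(n):
        for j in range(i + 1, n):
            if t[i] == t[j]:
                continue
            b = (y[j] - y[i]) / (t[j] - t[i])
            a = y[i] - b * t[i]
            sad = sum(abs(yk - (a + b * tk)) for tk, yk in zip(t, y))
            if best is None or sad < best[0]:
                best = (sad, a, b)
    return best[1], best[2]  # intercept, slope
```

Unlike least squares, an extreme final value barely moves the LAD line: fitting t = 0..4 against y = (1, 3, 5, 7, 100) recovers the line y = 1 + 2t through the four collinear points rather than chasing the outlier.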

3.2.3 Ordinary Least Squares Regression

[24] Ordinary least squares (OLS) regression is one of the most popular trend detection techniques and the one used most frequently to evaluate precipitation trends in Alaska, despite the fact that some researchers recommend against it [Khaliq et al., 2009]. Like LAD regression, OLS regression minimizes the differences between the observations and the best-fit straight line; OLS minimizes the squared differences, however, rather than the absolute differences [Wilks, 2006]. As a result, OLS is particularly sensitive to nonnormality and to outliers. As with the LAD technique, and consistent with other regression-based studies of precipitation trends in Alaska, no prewhitening was applied. We did evaluate first-order autocorrelation in all of the time series, and OLS regressions were subjected to post hoc analysis with the Durbin-Watson test for autocorrelation of residuals [Wilks, 2006].
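The post hoc residual check rests on the Durbin-Watson statistic d = Σ(e_t − e_{t−1})² / Σe_t², which is near 2 for uncorrelated residuals, approaches 0 under positive first-order autocorrelation, and approaches 4 under negative autocorrelation. A minimal sketch (ours, regressing against a time index t = 0, 1, …; the paper's analysis used R's implementations):

```python
def ols_fit(y):
    """OLS intercept and slope of y against t = 0..n-1."""
    n = len(y)
    tm = (n - 1) / 2.0
    ym = sum(y) / n
    stt = sum((t - tm) ** 2 for t in range(n))
    b = sum((t - tm) * (yi - ym) for t, yi in enumerate(y)) / stt
    return ym - b * tm, b

def durbin_watson(y):
    """Sum of squared successive residual differences over the residual
    sum of squares."""
    a, b = ols_fit(y)
    e = [yi - (a + b * t) for t, yi in enumerate(y)]
    return (sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
            / sum(v * v for v in e))
```

Alternating residuals push d toward 4, while a step change in the residuals (runs of like-signed errors) pulls d below 2.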

[25] Least squares regressions were performed with and without transformation, typically ln(x + 1), on time series that were not normally distributed. Normality was assessed visually, as well as with four normality tests (Anderson-Darling, Lilliefors, Shapiro-Wilk, and Shapiro-Francia) as implemented in the stats and nortest [Gross and Ligges, 2006] packages for R [R Development Core Team, 2011]. In most series, the ln(x + 1) transformation was effective; however, one time series required a (x + 0.5)^0.5 and one a (x + 0.5)^0.33 transformation. Time series that were normally distributed were not transformed; only results for OLS regression without transformation are shown in such cases.

[26] After performing the OLS regression, we screened for individual years that might influence the results using standard tests implemented in the R stats package. If any test indicated that a year might be influential, we evaluated whether it was associated with gap filling. We also assessed whether starting and ending years of the trend assessment might impact the results, as outliers at the ends of a time series can bias least squares regression results [see Riordan et al., 2006]. Finally, we visually assessed the residuals by inspecting their normality and looking at their relationship to the fitted values and the predictor.
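One standard influence measure of the kind reported by R's regression diagnostics is Cook's distance; a hand-rolled version for simple linear regression (hypothetical data, not a station record from this study) is:

```python
import numpy as np

def cooks_distance(x, y):
    """Cook's distance for each observation in a simple linear regression."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, p = x.size, 2
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    xc = x - x.mean()
    h = 1.0 / n + xc ** 2 / (xc ** 2).sum()          # leverage of each year
    s2 = (resid ** 2).sum() / (n - p)                # residual variance
    return resid ** 2 * h / (p * s2 * (1.0 - h) ** 2)

# Hypothetical 1980-2010 annual series with one aberrant final year.
rng = np.random.default_rng(5)
yrs = np.arange(1980, 2011)
pr = 300.0 + rng.normal(0.0, 20.0, yrs.size)
pr[-1] += 250.0                        # e.g., a badly gap-filled year

influential = int(np.argmax(cooks_distance(yrs, pr)))  # index of the flagged year
```

A year combining a large residual with high leverage (here the last one) dominates the Cook's distance vector, which is exactly the situation the post hoc screening is meant to catch.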

3.2.4 Comparison of Trend Detection Methods

[27] Although precipitation data may violate a number of the assumptions required for OLS regression, it is a very common method of trend detection, and using it allows us to compare our results with previous studies. Because LAD regression is less sensitive to the presence of outliers than least squares regression [Barrodale and Roberts, 1973], comparison between the two regression-based methods provides insight into the influence of outliers on trends. Mann-Kendall with trend-free prewhitening is the most robust trend evaluation method used here, producing reliable results even in the presence of outliers, missing data, and serial correlation.
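To make the comparison concrete, a simplified sketch of the Mann-Kendall test with Sen's slope and trend-free prewhitening is given below (tie corrections omitted, synthetic autocorrelated data rather than station records, and not the authors' implementation):

```python
import numpy as np
from math import erf, sqrt

def sens_slope(x):
    """Sen's slope estimator: median of all pairwise slopes."""
    n = len(x)
    return float(np.median([(x[j] - x[i]) / (j - i)
                            for i in range(n) for j in range(i + 1, n)]))

def mann_kendall_p(x):
    """Two-sided Mann-Kendall p-value (normal approximation, no tie correction)."""
    x = np.asarray(x, float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / sqrt(var)                 # continuity correction
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

def tfpw_mann_kendall(x):
    """MK after trend-free prewhitening: remove the Sen trend, strip lag-1
    autocorrelation from the residuals, restore the trend, then test."""
    x = np.asarray(x, float)
    t = np.arange(x.size, dtype=float)
    b = sens_slope(x)
    detr = x - b * t                                 # detrended series
    r = np.corrcoef(detr[:-1], detr[1:])[0, 1]       # lag-1 autocorrelation
    pw = detr[1:] - r * detr[:-1]                    # prewhitened residuals
    return mann_kendall_p(pw + b * t[1:])            # trend added back

# Synthetic AR(1) series (r ~ 0.4) with an imposed linear trend.
rng = np.random.default_rng(2)
n = 61
noise = np.zeros(n)
for k in range(1, n):
    noise[k] = 0.4 * noise[k - 1] + rng.normal()
series = 0.15 * np.arange(n) + noise

b_sen = sens_slope(series)          # close to the imposed 0.15 per step
p_mk = tfpw_mann_kendall(series)    # a real trend survives prewhitening
```

The prewhitening step keeps serial correlation from masquerading as (or masking) a trend, which is why the text treats this variant as the most robust of the three methods.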

3.3 Evaluation of Missing Data on Trend Results

[28] To determine whether and how the use of time series with missing data might impact trend detection, we selected data from four stations with distinct precipitation patterns that also had complete and largely homogenous records from 1 January 1970 to 30 December 2012 and evaluated how degradation of those records impacted trend results. Daily data for the Annette, Anchorage, Fairbanks, and Barrow stations were retrieved from the NCDC. We calculated seasonal and annual total precipitation from the complete records and evaluated trends by OLS regression and by the MK trend test (neither transformation nor trend-free prewhitening was applied for this exercise). We then randomly removed 10, 20, or 30% of the daily data, calculated new seasonal and annual totals, and reanalyzed the trends in the degraded records. This process was repeated 500 times, after which we evaluated the frequency of significant (p ≤ 0.05) tests.
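The experiment can be sketched with synthetic daily data (a hypothetical gamma-distributed record with an imposed trend, the OLS branch only, and a scaled-down replicate count; all distributional choices here are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
years = np.arange(1970, 2013)                   # 43 "station years"
# Hypothetical daily precipitation (mm/day): skewed daily amounts plus a
# slow increase in the daily mean across years.
daily = rng.gamma(0.4, 2.0, size=(years.size, 365)) \
        + 0.006 * (years - 1970)[:, None]

def ols_p(mask=None):
    """p-value of the OLS trend in annual totals; masked-out days are treated
    as missing (zero contribution to the annual total)."""
    d = daily if mask is None else np.where(mask, daily, 0.0)
    return linregress(years, d.sum(axis=1)).pvalue

p_full = ols_p()                                # trend in the complete record
hits = sum(ols_p(rng.random(daily.shape) >= 0.30) <= 0.05
           for _ in range(200))                 # drop 30% of days at random
detection_rate = hits / 200.0
```

With a strong underlying trend, nearly all degraded replicates remain significant; rerunning the same sketch with a weaker trend shows the detection rate collapsing, which mirrors the asymmetry reported in section 4.6.1.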

4 Results and Discussion

4.1 Homogeneity

[29] Homogeneity tests were initially applied to time series of mean annual precipitation over the full station record (1950–2010 if available and 1980–2010 for stations with shorter records). There were few differences between the results of homogeneity tests applied to raw data and those obtained after cube root transformation. In all but one case, the homogeneity classification of the series remained the same (Table 3). Cold Bay was the only station that changed classification for the 1950–2010 period on the basis of the transformation. Nineteen stations were classified as “useful” (i.e., no more than one test indicated an inhomogeneity) for their full periods of analysis (either 1950–2010 or 1980–2010). The majority of these stations cluster in the interior and south-central parts of the state (Figure 2). Two stations were classified as “doubtful” (Port Alsworth and Nome), with two tests indicating an inhomogeneity, and eight stations were categorized as “suspect” (Annette, Bethel, Cold Bay, Cordova, Juneau, Kotzebue, Seward, and Yakutat), as at least three tests detected a step change during the 1950–2010 period. Inhomogeneities were identified for many of the stations along the southern and western coasts (Table 3 and Figure 2). All of the stations in southeastern Alaska contained inhomogeneities, although the breaks did not coincide in time (Figure 2 and Table 3).
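For reference, the change-point logic behind statistics such as the Pettitt test reported in Table 3 can be sketched in a few lines; this minimal version omits the significance approximation and uses a synthetic series with a known step:

```python
import numpy as np

def pettitt(x):
    """Pettitt change-point statistic: returns (K, k) where
    K = max_k |U_k| and U_k = sum_{i<=k} sum_{j>k} sign(x_j - x_i);
    the step is located between index k and k + 1."""
    x = np.asarray(x, float)
    n = x.size
    u = np.array([np.sign(x[k + 1:, None] - x[:k + 1][None, :]).sum()
                  for k in range(n - 1)])
    k = int(np.argmax(np.abs(u)))
    return float(abs(u[k])), k

# Synthetic 61-year series with a +1.5 sd step after "year" 30.
rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 61)
x[30:] += 1.5
K, cp = pettitt(x)      # cp falls near the imposed break
```

The statistic peaks where splitting the series best separates low and high values, which is why the dated breaks in Table 3 cluster at documented station changes and at the mid-1970s circulation shift.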

Table 3. Results of Homogeneity Analysis Performed on Total Annual Precipitationa
Station | Data | Buishand | Pettitt | SNHT | Von Neumann | Classification 1950–2010/1980–2010
  1. aIn the second column, A and C differentiate results from analysis of annual data or of the cube root of annual data. Results are shown for all four tests. If the Buishand, Pettitt, or SNHT tests identify an inhomogeneity, the year of the inhomogeneity is shown in parentheses. If the Von Neumann test, which cannot identify the timing of inhomogeneities, finds a step change, the test result is underlined. The classification column identifies whether stations are useful for trend analysis for the 1950–2010 or 1980–2010 periods. NA indicates that a series was not sufficiently complete for homogeneity analysis prior to 1980.
Annette | A | 2.10 (1968) | 458 (1968) | 14.44 (1965) | 0.95 | Suspect/Useful
Annette | C | 2.10 (1968) | 458 (1968) | 13.66 (1965) | 1.01 |
Bethel | A | 2.04 (1988) | 492 (1988) | 13.98 (1952) | 1.39 | Suspect/Suspect
Bethel | C | 2.09 (1988) | 492 (1988) | 10.50 | 1.39 |
Big Delta | A | 1.31 | 234 | 4.27 | 1.47 | Useful/Useful
Cold Bay | A | 1.86 (1976) | 514 (1976) | 11.53 | 1.53 | Doubtful/Useful
Cold Bay | C | 1.97 (1976) | 514 (1976) | 12.99 (1976) | 1.47 | Suspect/Useful
College Observatory | A | 1.21 | 66 | 2.36 | 1.86 | Useful/Useful
Cordova | A | 2.02 (1994) | 428 (1994) | 11.92 (1994) | 0.88 | Suspect/Suspect
Cordova | C | 1.98 (1994) | 428 (1994) | 12.71 (1994) | 0.89 |
Juneau | A | 2.17 (1983) | 586 (1983) | 20.04 (1990) | 1.27 | Suspect/Suspect
Juneau | C | 2.16 (1983) | 586 (1983) | 19.15 (1990) | 1.28 |
King Salmon | A | 0.83 | 140 | 2.39 | 1.75 | Useful/Useful
Kitoi Bay | A | 0.96 | 64 | 4.24 | 1.91 | NA/Useful
Kotzebue | A | 2.01 (1981) | 548 (1981) | 13.08 (1981) | 1.66 | Suspect/Useful after 1982
Kotzebue | C | 2.07 (1981) | 548 (1981) | 14.01 (1981) | 1.6 |
Nome | A | 1.84 (1977) | 308 | 5.55 | 1.32 | Doubtful/Useful
Nome | C | 1.92 (1977) | 308 | 5.5 | 1.27 |
Port Alsworth | A | 2.11 (1987) | 96 | 6.49 | 0.89 | NA/Doubtful
Port Alsworth |  | 1.84 (2001) | 100 (1998, 1999) | 15.0 (2001) | 0.64 |
Port Alsworth | C | 2.15 (1987) | 96 | 6.81 | 0.77 |
St. Paul Island | A | 1.53 | 218 | 3.07 | 1.05 | Useful/Useful
Seward | A | 1.95 (1975) | 452 (1975) | 15.18 (1975) | 1.63 | Suspect/NA
Seward | C | 1.87 (1975) | 452 (1975) | 13.98 (1975) | 1.73 |
University AES | A | 0.82 | 162 | 1.13 | 1.92 | Useful/Useful
Yakutat | A | 2.06 (1975) | 426 (1973) | 9.96 | 0.94 | Suspect/Suspect
Yakutat |  | 1.74 (1992) | 184 (2000) | 8.38 | 1.08 |
Yakutat | C | 2.10 (1973) | 426 (1973) | 12.05 (1958) | 0.92 |
Figure 2.

Map of inhomogeneities. Large and small symbols indicate homogeneity over the 1950–2010 and 1980–2010 periods, respectively. White squares mark “useful” sites and light gray and dark gray circles “doubtful” and “suspect” stations, respectively.

[30] Stations that were identified as doubtful or suspect for the 1950–2010 period but contained breaks only prior to 1980 were further tested to evaluate homogeneity between 1980 and 2010. Annette, Cold Bay, Kotzebue, and Nome were homogeneous over this shorter period. The inhomogeneities at Bethel, Cordova, and Juneau all occurred after 1980, and so these stations were classified as “suspect” for the shorter period as well. We found additional inhomogeneities in the Yakutat record, in 1992 and 2000, rendering it “suspect” too.

4.2 Sources of Inhomogeneities

[31] There was little correspondence between dated inhomogeneities and months where either WRCC data or the long-term mean was substituted to fill a gap in the NCDC time series (Table 3 and Appendix A). At Seward, the break detected in 1975 coincided with the use of August data from the WRCC, which was missing 6 days. The monthly mean was substituted for October 1995 at Cordova, where multiple tests identified inhomogeneities in 1994. The detected inhomogeneity in 2000 at Yakutat follows the substitution of the entire year of 1999 with WRCC data (Appendix A).

[32] Although most detected inhomogeneities were not coincident with instances of gap filling, they often matched changes in station operation described in the MMS. The Juneau station elevation changed from 6 m to 3.6 m in 1982, just prior to the detected break in 1983. The elevation change was followed by a change in station identification in 1984. While a shift in elevation could induce a step change in precipitation, it is unlikely that a change in station identification could be associated with an inhomogeneity. In addition, this part of Alaska changed time zones in 1983 [Norris, 2001]. While the time zone shift could impact daily temperature readings, it should not influence monthly precipitation totals, and no other stations in the region display inhomogeneities in the early 1980s. The subsequent 1990 inhomogeneity at Juneau occurs between observer changes in 1989 and 1991 and so is likely related to station management. The 1975 inhomogeneity at Seward coincides with an approximately 12 m change in station elevation. As the Seward station is located in a narrow valley, this could explain the inhomogeneity. In early 1982, station records for Kotzebue report a 3 m shift in elevation, which could be reflected in the 1981 inhomogeneity. Although the Kenai station was considered “useful” with only one inhomogeneity (the von Neumann test does not supply a date, so it is unclear when that inhomogeneity occurred), the MMS records changes in station location or elevation in 1967, 1982, 1999, and 2001, and changes in gauge type in 1995 and 2001. Testing identified an inhomogeneity at Port Alsworth at some time between 1998 and 2001. This could be associated with a change in the station location in 2002.

[33] Other stations displayed inhomogeneities that co-occurred with changes in operation that might not be expected to influence data quality and could be coincidental. Inhomogeneities at Yakutat (1973) and Annette (1965/1968) coincided with a shift in station type from Weather Bureau Airport to Weather Service Office. This change is usually associated only with the reporting form (T. Fathauer, personal communication, 2012). Likewise, the 1994 inhomogeneity at Cordova co-occurs with the 1995 change in station identity.

[34] Online metadata for four stations (Cold Bay, Nome, Port Alsworth, and Bethel) in western Alaska contained no information about changes in station location, elevation, instrumentation, or other operational details that coincide with statistically detectable inhomogeneities. Likewise, no changes in station operation at Yakutat were recorded around 1992 or 2000. This does not mean that there were no operational changes around the times of detected breaks but simply that there is no evidence in the MMS.

[35] Inhomogeneities at Nome (1977) and Cold Bay (1976) do correspond in time with the change in phase of the Pacific Decadal Oscillation [Mantua et al., 1997]. A shift in Alaskan climate around 1976 has been documented [Hartmann and Wendler, 2005; Shulski and Wendler, 2007]. Inhomogeneities detected at Bethel and Port Alsworth in 1987 and 1988, respectively, could be associated with a secondary late-1980s shift in the North Pacific described by Hare and Mantua [2000].

[36] Stations along the southern and western coasts that are most likely to contain inhomogeneities also tend to display the highest serial autocorrelations (Tables 3 and 4 and Figure 2). Among stations whose annual time series displayed lag-1 autocorrelations greater than 0.3 between 1950 and 2010, only St. Paul Island contained no statistically detectable inhomogeneities. Variability in precipitation amounts in southern and western Alaska is associated with ocean-atmosphere variability in the North Pacific [Hartmann and Wendler, 2005]. The Pacific Decadal Oscillation displays a high degree of interannual autocorrelation (not shown), but the shift in phase around 1976 is also associated with a relatively abrupt change in temperature and precipitation amount that could appear as an inhomogeneity [Hartmann and Wendler, 2005]. In areas where decadal to multidecadal climate variability is expected, notable autocorrelation and inhomogeneities might be anticipated in climate records, and researchers should plan to use analytical methods appropriate for detecting change in those kinds of time series.

Table 4. Serial Correlation Characteristics of the Time Series Tested in This Studya
  1. aLag-1 correlation coefficients for the seasonal and annual time series are shown. The first five columns (DJF, MAM, JJA, SON, ANN) cover 1950–2010 and the last five cover 1980–2010. Blank cells indicate stations without a sufficiently complete 1950–2010 record; ellipses mark missing values.
Station | DJF | MAM | JJA | SON | ANN | DJF | MAM | JJA | SON | ANN
Big Delta | 0.53 | 0.17 | 0.22 | 0.33 | 0.27 | 0.14 | 0.18 | 0.05 | −0.04 | 0.09
Cold Bay | 0.31 | 0.18 | 0.20 | 0.17 | 0.42 | 0.21 | −0.01 | 0.22 | −0.10 | −0.09
College Observatory | 0.09 | −0.04 | 0.17 | 0.05 | 0.13 | 0.01 | −0.17 | −0.02 | 0.05 | −0.10
Eagle |  |  |  |  |  | 0.02 | −0.42 | 0.17 | −0.01 | 0.13
King Salmon | −0.01 | 0.06 | −0.08 | 0.04 | 0.15 | 0.31 | 0.35 | 0.02 | −0.02 | 0.32
Kitoi Bay |  |  |  |  |  | 0.03 | −0.04 | 0.11 | −0.13 | 0.00
Kodiak |  |  |  |  |  | −0.08 | 0.15 | 0.25 | −0.06 | −0.14
Matanuska |  |  |  |  |  | 0.32 | −0.31 | 0.08 | 0.01 | 0.01
Port Alsworth |  |  |  |  |  | 0.07 | 0.09 | 0.00 | 0.15 | 0.04
St. Paul Island | 0.35 | 0.11 | 0.11 | 0.26 | 0.32 | 0.25 | −0.02 | −0.05 | 0.31 | 0.11
Tanana |  |  |  |  |  | 0.46 | 0.16 | −0.05 | −0.19 | 0.04
University AES | 0.15 | −0.04 | … | … | … | … | −0.17 | −0.08 | 0.06 | −0.13

[37] It is also possible that operational introduction of an inhomogeneity (e.g., from moving a station) could induce a more substantial lag-1 autocorrelation than would otherwise be present. For example, if we take a 63-year-long time series with a mean of 0.02, a standard deviation of 0.84, and an initial lag-1 autocorrelation of 0.01 and introduce a consistent +0.75 unit offset beginning in year 33, the lag-1 autocorrelation increases to 0.30. Stations displaying significant serial autocorrelation or inhomogeneities outside of regions impacted by low-frequency climate variability should therefore be scrutinized for data quality problems.
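The quoted example is easy to reproduce by Monte Carlo; the sketch below averages the autocorrelation increase over many synthetic white-noise series with the stated mean, standard deviation, and step:

```python
import numpy as np

def lag1(x):
    """Sample lag-1 autocorrelation of a series."""
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

# Monte Carlo: 63-year white-noise series (mean 0.02, sd 0.84) with and
# without a consistent +0.75 unit offset beginning in year 33.
increase = []
for seed in range(300):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.02, 0.84, 63)
    y = x.copy()
    y[32:] += 0.75                  # mid-record step change
    increase.append(lag1(y) - lag1(x))
mean_increase = float(np.mean(increase))
```

On average, the step alone raises the apparent lag-1 autocorrelation by roughly 0.15 to 0.2 in this configuration, so a single operational break can mimic the persistence expected from low-frequency climate variability.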

4.3 The 1950–2010 Trends

[38] The results are presented by season, starting with winter, and then summarized for the year. Across these six decades, we saw a consistent decrease in winter precipitation at Barrow (Table 5), irrespective of method, in agreement with published studies covering similar time periods [Curtis et al., 1998; L'Heureux et al., 2004; Stafford et al., 2000]. However, subsequent evaluation of the Barrow winter time series identified inhomogeneities in 1968 and 1975 that render both the seasonal time series and the trend “suspect.” Because there are no other stations north of the Brooks Range (Figure 1) with as long or complete a record, it is difficult to verify the direction of precipitation trends in that region. Both Curtis et al. [1998] and L'Heureux et al. [2004] identified negative trends in precipitation between 1950 and 1988 at Barter Island, 500 km to the east-southeast on the Beaufort Sea coast. The Barter Island station, however, has not reported continuously since the late 1980s, so it cannot confirm recent trends at Barrow. At Bettles, just to the south of the Brooks Range, both transformed OLS and MK analyses identified an increase in winter precipitation (p ≤ 0.10), but the Brooks Range can act as a barrier to meridional moisture transport.

Table 5. Trend Analysis Results for 1950–2010a
  • aResults are shown as a percent of the total seasonal or annual average precipitation. LAD, OLS, and MK refer to least absolute deviation regression, ordinary least squares regression, and Mann-Kendall trend analysis with Sen's slope estimator. Bold numbers have p ≤ 0.05 and underlined numbers 0.05 < p ≤ 0.10. Gray font indicates inhomogeneity in the analysis period; results are presented solely for comparison with previous studies and should not be interpreted as robust trends. These time series are indicated as “suspect” or “doubtful” in the Homogeneity column if the time series of total annual precipitation contains an inhomogeneity. Only time series that deviate from normality were transformed, typically with an ln(x + 1) transformation. These are shown in parentheses under the untransformed OLS results. na indicates that trend analysis was not performed.
  • bStations that had time series beginning slightly after 1950 or ending slightly before 2010. Please refer to Table 2 for the actual years of analysis.
  • c(x + 0.5)1/3 transformation applied.
[39] Far to the south, on the Kenai Peninsula, the decrease in winter precipitation at Kenai (Mann-Kendall, p ≤ 0.10) contrasts with a marginally significant positive trend in winter precipitation at Homer, identified by both LAD and untransformed OLS regression. Although Homer was the only station along the Gulf of Alaska coast without a detectable inhomogeneity in total annual precipitation during the mid-1970s (Table 3 and Figure 2), its winter precipitation record could have been impacted by atmospheric circulation changes in the mid-1970s. Closer inspection of the winter precipitation record suggests that it is not appropriate to fit a linear trend to precipitation at Homer over this time period. First, there is a visible increase in winter precipitation around 1976, as shown by the plot of smoothed precipitation anomalies (Figure 3a). Second, a plot of OLS residuals against time is curvilinear, with peak values in the mid-1970s (Figure 3b), strongly suggesting that a linear trend does not describe the time series behavior well. Third, homogeneity testing classified the station's winter record as “doubtful” with an inhomogeneity in 1973. This inhomogeneity may well result from background climate variability, but it still hinders the application of linear trend tests.

Figure 3.

Plots of (a) winter precipitation anomalies smoothed with an 11-year moving average and (b) residuals from the linear regression at Homer.

[40] Previous studies indicated positive trends in winter precipitation at McGrath, Nome, Cordova, Yakutat, and Juneau from about 1950 to the end of the twentieth century (Table 1). We found inhomogeneities at Nome, Cordova, Yakutat, and Juneau, rendering those trends suspect. Although McGrath is free of inhomogeneities, we did not find a statistically significant trend in winter precipitation there, in contrast to the results of L'Heureux et al. [2004]. This disagreement may arise because there were several wet winters at McGrath during the 1990s, driving a highly significant positive trend through 2000 that did not continue through 2010 (Figure 4a).

Figure 4.

(a) Trends in winter (December–February) precipitation at McGrath beginning in every year from 1950 to 1979 and ending in every year from 1980 to 2010 and (b) trends in annual precipitation from 1950 to every year between 1980 and 2010 at 13 long-term stations. Red indicates negative trends, blue denotes positive trends, and the shading is symmetrical around zero. Asterisks mark trends significant at p ≤ 0.1, and dots mark trends significant at p ≤ 0.05.

[41] In the spring, Kenai and King Salmon experienced statistically significant decreases in precipitation (Table 5). Decreasing precipitation at Kenai is consistent with a decrease in water balance and increase in temperature noted by Klein et al. [2005] after 1968. The MMS documents a station move for Kenai in 1967, however, suggesting that this trend be evaluated cautiously. Comparison with nearby stations believed to be free of statistically significant inhomogeneities (Anchorage and King Salmon) does not highlight any notable inhomogeneities at Kenai (Figure 5). Additional testing of the seasonal precipitation time series also indicated that the series are homogeneous.

Figure 5.

Comparison of cumulative spring precipitation at Kenai with that at Anchorage and King Salmon, indicating a lack of inhomogeneity in spring precipitation at Kenai.

[42] There were no significant trends in either summer (June to August) or autumn (September to November). Only two stations displayed a trend in annual precipitation: Bettles and Kenai. Transformed OLS and MK both detected a marginally significant (p ≤ 0.10) increase in precipitation at Bettles. Alessa et al. [2011] and Riordan et al. [2006] found increasing annual precipitation at Bettles, although Riordan et al. [2006] indicated that the trend was not statistically significant at the 90% level, and Alessa et al. [2011] did not specify the significance (Table 1). Mann-Kendall, but no other test, indicated a significant (p ≤ 0.05) decrease in annual precipitation at Kenai. Stafford et al. [2000] did not find any significant trends at Kenai between 1949 and 1998 using OLS regression.

[43] Our findings indicated no significant decrease in annual precipitation at Barrow (LAD even indicates a small, insignificant increase), in contrast to published studies [Riordan et al., 2006; Hinzman et al., 2005; Alessa et al., 2011] (Table 1). This disagreement is likely attributable to two sources: analysis period and statistical method. Our analysis extends through 2010, and 2008–2010 were relatively wet years in Barrow (not shown). We tested this hypothesis by analyzing the trend via OLS regression without transformation from 1950 through each year from 1980 to 2010. The trend at Barrow was only significant (p ≤ 0.05) for analyses ending prior to 2001 and was marginally significant (p ≤ 0.1) for analysis periods ending in 2001 and 2002, suggesting that recent wet years have impacted the trend and confirming the well-known sensitivity of OLS-based trend detection to analytical period (Figure 4b). Second, all other studies use a least squares trend analysis. Of the three methods we applied, OLS estimated the largest negative trends at Barrow (Table 5).

4.4 The 1980–2010 Trends

[44] Since 1980, all methods indicated that winter precipitation has increased at Bettles and decreased at Tanana and Homer (Table 6). While all trend tests found statistically significant increases at Bettles, they disagreed on the level of significance, with LAD and OLS indicating p ≤ 0.10 and transformed OLS and MK p ≤ 0.05. On the other hand, all tests agreed that the negative trend at Homer was significant at p ≤ 0.05. All of the trend tests deemed the decrease in winter precipitation at Tanana both extreme and highly significant (p ≤ 0.05). Indeed, the winter trend at Tanana appears unrealistically large (>100% of the climatological average). However, this incongruity appears to be the result of a seasonally specific inhomogeneity that could render linear trend analysis inappropriate (Figure 6). No inhomogeneity was detected in the annual time series because precipitation during summer, the wettest season, did not change abruptly and masked the impact of the step change in winter precipitation. Homogeneity testing of winter precipitation classified the time series as “suspect” with a break in 1997. Thus, a linear trend is not appropriate for describing precipitation change at Tanana, and the trend results should not be considered robust. Finally, transformed OLS regression detected a marginally significant negative winter trend at Big Delta, in contrast to the other methods (Table 6). Homogeneity testing classified the winter time series as “suspect,” with an inhomogeneity in 1997, invalidating trend results at this station as well. At Kotzebue, all tests indicated a significant increase in winter precipitation (p ≤ 0.05). The Mann-Kendall test was performed beginning in 1982, after the 1981 inhomogeneity, but produced qualitatively the same results as the OLS and LAD tests, which included the inhomogeneity and might therefore be suspect.

Table 6. Trend Analysis Results for 1980–2010a
  • aResults are shown as a percent of the total seasonal or annual average precipitation. LAD, OLS, and MK refer to least absolute deviation regression, ordinary least squares regression, and Mann-Kendall trend analysis with Sen's slope estimator. Bold numbers have p ≤ 0.05 and underlined numbers 0.05 < p ≤ 0.10. Gray font indicates inhomogeneity in the analysis period; results are presented solely for comparison with previous studies and should not be interpreted as robust trends. These time series are indicated as “suspect” or “doubtful” in the Homogeneity column if the time series of total annual precipitation contains an inhomogeneity. Only time series that deviate from normality were transformed, typically with an ln(x + 1) transformation. These are shown in parentheses under the untransformed OLS results. na indicates that trend analysis was not performed.
  • bStations that had time series beginning slightly after 1980 or ending slightly before 2010. Please refer to Table 2 for the actual years of analysis.
  • cMann-Kendall trend testing applied from 1982 to 2010.
  • d(x + 0.5)1/2 transformation applied.
Figure 6.

Time series of winter (DJF), summer (JJA), and annual (ANN) precipitation at Tanana.

[45] Spring precipitation increased at Barrow and Cold Bay but decreased at Homer (Table 6). However, there is some discordance in these results. All tests agreed that the positive trends at Barrow and Cold Bay were significant (p ≤ 0.05 and p ≤ 0.10, respectively). Two tests (LAD and OLS) suggested that the negative spring trend at Homer was significant (p ≤ 0.05), with transformed OLS providing a weaker significance assessment (p ≤ 0.10) and MK deeming the decrease nonsignificant. Remotely sensed precipitable water increased during the spring across the entire region between 1979 and 2005 [White et al., 2007], apparently in contrast to the diversity in trends at individual stations. However, a more recent analysis by Serreze et al. [2012] found little if any statistically significant increase in precipitable water below 500 hPa during the spring.

[46] Summer and autumn displayed few clearly significant trends. In most cases, only one or two tests, usually LAD and/or OLS, found significant results (decreasing in Talkeetna during summer, Table 6). All three tests indicated a marginally significant (p ≤ 0.10) increase in summer precipitation at Annette. In contrast to the increasing summer precipitation at Kodiak, Kitoi Bay, about 50 km to the north on the opposite side of Marmot Bay, displayed a nonsignificant negative trend in summer precipitation over a similar time period (1980–2004). All methods agreed that autumn precipitation decreased at the University Agricultural Experiment Station near Fairbanks (Table 6). The Mann-Kendall test also indicated a significant decrease in precipitation at Tanana (p ≤ 0.05) and Big Delta (p ≤ 0.10), although subsequent analysis of the autumn time series at Big Delta identified an inhomogeneity in 1997. Other stations in the Fairbanks area showed nonsignificant decreases in precipitation, consistent with decreasing precipitable water found by White et al. [2007]. A more recent analysis by Serreze et al. [2012] did not identify any significant changes in precipitable water for interior Alaska in the MERRA, CFSR, and ERA-1 reanalyses. Finally, only the MK test indicated a somewhat significant increase of autumn precipitation at Barrow (p ≤ 0.10).

[47] Over the full year, we found one significant trend in the northern and western parts of the state (Barrow +30.4%, p ≤ 0.10 by MK) and scattered negative trends in the interior and southern regions. Negative trends at stations in the Interior (Big Delta, Talkeetna, and Tanana) indicated by all trend tests are consistent with regionally decreasing precipitation in gridded products analyzed by Zhang et al. [2009], even though precipitable water has increased across the Arctic [White et al., 2007; Serreze et al., 2012]. In southern Alaska, Homer experienced decreasing precipitation. Other stations in the region displayed both positive and negative nonsignificant trends. Zhang et al. [2009] found decreasing precipitation in both gridded datasets they analyzed, but the trend was only consistently significant in GPCC.

4.5 Influence Analysis of OLS Results

[48] There was some correspondence between the timing of influential years identified by post hoc analysis of the OLS regression and gap filling. Over both analysis periods, 1990 and 1994 were highlighted as influential years in the autumn OLS regression at Big Delta, and 1994 was also identified as influential in the summer analysis. Neither of these seasons displayed statistically significant trends. NCDC's record for the Big Delta station had substantial data gaps in the early 1990s. We filled these gaps with data from the WRCC, which flagged them as missing considerable amounts of data (Appendix A). A similar situation, wherein post hoc analysis identified years with missing data as influential to the OLS regression, occurred at the University Agricultural Experiment Station in 1982 and 2002. For this station, only the autumn results for the 1980–2010 period were found to be significant. At College Observatory, 15 days of data were missing from December 2005, and 2005 was flagged as influencing the results of the winter trend analysis, which was characterized as nonsignificant. Thus, we should, perhaps, be cautious in interpreting the OLS trends at these three stations.

[49] At Kenai, 1950 was an influential year in the annual trend, but the trend results were indicated as not statistically significant. Eight days of data were missing in September; however, 1950 did not appear to influence the autumn trend at Kenai, suggesting that the missing data did not strongly influence the annual trend. No data were available for December 1977 at St. Paul Island, so that month was replaced with the 1950–2010 mean. While 1977 is flagged in the annual trend analysis, the year 1978 (whose winter includes December 1977) does not appear to influence the winter trend. No significant seasonal or annual trend was found at this station for either period. At these two stations, we suspect that gap filling had minimal, if any, impact on the trend analyses.

4.6 Assessment of Trend Results

4.6.1 Impact of Missing Data on Trend Detection

[50] The influence of missing data on trend results was tested by artificially degrading temporally complete series and applying trend tests over the 1970–2012 period. We found that removal of up to 30% of the daily data did not impact the sign of the mean trend in seasonal or annual total precipitation but that it reduced the likelihood of identifying a significant trend that did exist in the complete record and slightly increased the likelihood of detecting a statistically significant trend when one did not exist in the complete record. The changes were more dramatic for OLS regression than for MK trend detection. There was also a much greater impact on marginally significant trends than on highly significant ones (p ≤ 0.01).

[51] At Barrow, degradation of the time series reduced the ability of OLS to detect significant trends in the winter (p = 0.04), spring (p = 0.01), and annual (p = 0.01) time series, with 55.4, 98.0, and 99.2% of the 500 replicate analyses displaying statistically significant trends when 10% of the data were removed, and 36.2, 79.8, and 83.4% of the replicates indicating a statistically significant trend when 30% of the data were missing. The impact on the MK test (which only identified spring and annual trends, p < 0.01) was less notable. Even with 30% of the data missing, MK detected statistically significant trends in 87.8 and 98% of the degraded spring and annual time series.

[52] In Anchorage, where no statistically significant trends were detected by OLS in the annual or seasonal time series, significant trends were identified in up to 5.8% of the degraded time series (30% data reduction in the annual time series). From 1970 to 2010, MK found one marginally significant trend at Anchorage (p = 0.1), which was deemed significant in 14.6% of the time series calculated from only 70% of the original data. Generally similar results were found for Fairbanks and Annette.

[53] This analysis confirms that, provided there is no systematic change in the frequency of missing precipitation data, even fairly substantial amounts of missing data are unlikely to render trend identification unreliable when the tests are highly significant. Loss of data appears to have a greater impact on the ability to detect trends of marginal significance and on the efficiency of OLS regression-based trend detection. There are some circumstances (e.g., Big Delta and University AES) where clusters of missing data might be more likely to impact some trend tests. Results for these stations should be evaluated cautiously.

4.6.2 Assessment of Simultaneous Significance

[54] Across both analysis periods (1950–2010 and 1980–2010), all stations, seasons, and applicable tests, 62 of 610 trend tests on homogeneous series were significant (p ≤ 0.10). At α = 0.10, about 10% of tests will be statistically significant by chance, similar to the proportion of significant trends found in the study. We resampled each of the time series (all homogeneous stations and seasons over both analytical periods) 500 times and tested the shuffled time series for trend by all three methods. We found that 9.1% of the shuffled time series displayed a significant trend in at least one test, as expected. Only 1.1% of the shuffled time series displayed statistically significant trends detectable by all three tests. In the original data, 12 time series, or about 2% of all tests, displayed statistically significant trends in LAD, OLS (or OLStx), and MK. While it is difficult, if not impossible, to determine whether any particular result occurs by chance, trends detected by all three tests may be more robust.
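The resampling logic can be sketched for a single series and a single test (OLS here, with white noise standing in for a homogeneous station record; the series and replicate count are illustrative):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
years = np.arange(1950, 2011)
series = rng.normal(500.0, 60.0, years.size)    # trend-free stand-in record

n_rep = 1000
hits = sum(linregress(years, rng.permutation(series)).pvalue <= 0.10
           for _ in range(n_rep))               # shuffling destroys temporal order
false_positive_rate = hits / n_rep
```

Roughly 10% of the shuffled series show a "significant" trend purely by chance, which is the baseline against which the 9.1% single-test and 1.1% all-three-tests rates above are judged.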

    [55] In many cases, the location and timing of the trends deemed significant by all tests also support their significance. Between 1950 and 2010, Kenai and King Salmon, both in south-central Alaska, displayed significant decreases in spring precipitation, consistent with nonsignificant decreases at nearby Anchorage and Homer and with observations of ecological drying on the Kenai Peninsula [Klein et al., 2005]. Between 1980 and 2010, positive trends in spring precipitation at Barrow and winter precipitation at Bettles were deemed significant by all tests used. Although the northernmost stations, Barrow, Bettles, and Kotzebue, have distinct climates, they all show increasing winter or spring precipitation. In the interior, Tanana, Talkeetna, and Big Delta all show decreases in annual precipitation that are significant in all three tests. Many, though not all, other stations in the interior also display decreasing annual precipitation (Table 6), including University AES, where reductions in autumn precipitation are significant in all tests. The three remaining instances in which time series display significant trends in all three tests occur at Homer (winter, spring, and annual).

    4.6.3 Influence of Transformation on Ordinary Least Squares Regression

    [56] Among the stations with homogeneous time series, transformation to accommodate nonnormality had little influence on the results of ordinary least squares regression. In the majority of cases where transformation was applied, the results were of the same sign and equivalent statistical significance. When the significance class did differ, there was no systematic shift: transformation lowered the p-value about as frequently as it raised it. Nor does transformation appear to have changed the magnitude of trends consistently.
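As a concrete illustration of this comparison, the sketch below computes OLS trend p-values on raw and square-root-transformed totals. The square root is a common variance-stabilizing choice for nonnegative precipitation data and stands in here for the study's actual transformation, which is described in its methods section; the function name is ours.

```python
import numpy as np
from scipy import stats

def ols_pvalues_raw_vs_sqrt(y):
    """OLS trend p-values for a nonnegative series, raw and sqrt-transformed."""
    y = np.asarray(y, dtype=float)
    t = np.arange(y.size)
    p_raw = stats.linregress(t, y).pvalue
    p_sqrt = stats.linregress(t, np.sqrt(y)).pvalue
    return p_raw, p_sqrt
```

For a series with a clear underlying trend, the two p-values typically fall in the same significance class, consistent with the result reported above.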

    4.6.4 Comparison of MK, LAD, and OLS Regression

    [57] All of the trend tests employed here produced generally similar results, although there were some differences at individual stations. There were six instances in which only one test identified a significant trend at a given station in a given season, and four in which only one test indicated a nonsignificant trend even though the other tests were statistically significant. In cases where one test disagreed with the others, Mann-Kendall analysis was the least likely to corroborate results from the other trend tests.

    [58] The MK test is substantially different from LAD or OLS regression, performing an analysis of the ranks that is not particularly sensitive to either outliers or missing data. In addition, prior to MK trend testing, the time series were detrended, prewhitened, and then recombined with the trend (trend-free prewhitening) in order to avoid the impact of serial correlation on trend detection; no such treatment was applied to data subject to OLS or LAD regression. However, trend-free prewhitening may not fully explain the differences. In one of the instances where MK was the only test not to indicate a significant trend (Kodiak, JJA 1980–2010), the AR1 value was relatively high (r = 0.36), and Durbin-Watson testing found evidence of autocorrelation in the residuals of OLS regression (p = 0.02), indicating that the OLS and LAD trend results might have been affected by the serial correlation characteristics of the data. Autocorrelation did not appear to be a problem in the other three instances (Talkeetna, JJA 1980–2010; Homer, MAM 1980–2010; and Homer, DJF 1950–2010). First-order autocorrelations for each station are shown in Table 4.
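The trend-free prewhitening sequence (remove a preliminary trend, remove lag-1 autocorrelation, blend the trend back in) can be sketched as follows. This is a generic implementation of the idea, not the study's exact code.

```python
import numpy as np

def trend_free_prewhiten(y):
    """Trend-free prewhitening of a series before Mann-Kendall testing.

    1. Estimate and remove a preliminary (Theil-Sen) trend.
    2. Remove lag-1 autocorrelation from the residuals.
    3. Blend the trend back in before trend testing.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    t = np.arange(n)
    # Theil-Sen slope: median of all pairwise slopes
    slopes = [(y[j] - y[i]) / (j - i) for i in range(n) for j in range(i + 1, n)]
    b = np.median(slopes)
    resid = y - b * t
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    white = resid[1:] - r1 * resid[:-1]            # prewhitened residuals
    return white + b * t[1:]                       # blend the trend back in
```

Applied to a trending AR(1) series, the output retains the trend while its residual lag-1 autocorrelation drops toward zero, which is what keeps serial correlation from inflating MK significance.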

    [59] All trends deemed significant by at least one test had the same direction in all tests, except at Big Delta. There, transformed OLS identified a marginally significant negative trend; LAD and OLS indicated nonsignificant negative trends, while MK indicated a trend of zero. Results from Big Delta should be interpreted with caution, as about a third of the daily data from that station are missing between 1991 and 1993. At any given station and season, the largest trend could be produced by any of the tests applied, but overall, OLS regression often estimated the largest trend magnitudes. Mann-Kendall found smaller trends in time series that, although deemed homogeneous in total annual precipitation, appeared to contain a seasonally specific step change; these include the winter precipitation time series at Tanana and the 1950–2010 winter precipitation time series at Homer. Among nonsignificant tests, LAD was much more likely than OLS or MK to disagree with the other two tests in the sign of the trend. Comparison of these trend detection methods indicates that discrepancies between MK and regression-based trend testing may help identify the presence of step changes or the impact of high serial correlation on trend results. Although there were few differences between the methods used in this study, we recommend the Mann-Kendall test for its robustness to outliers and its potentially reduced sensitivity to inhomogeneities. Many statistical analysis packages now support nonparametric trend analysis, making it as simple to implement as OLS regression.
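For comparison across methods, the sketch below computes the trend magnitude by OLS, by the Theil-Sen estimator conventionally paired with Mann-Kendall, and by LAD (L1) regression. The minimizer-based LAD fit and the function name are illustrative stand-ins, not the study's solver.

```python
import numpy as np
from scipy import optimize, stats

def three_slopes(y):
    """Trend per time step estimated by OLS, Theil-Sen, and LAD regression."""
    y = np.asarray(y, dtype=float)
    t = np.arange(y.size)
    ols = stats.linregress(t, y).slope
    sen = stats.theilslopes(y, t)[0]  # median of pairwise slopes
    # LAD: minimize the sum of absolute residuals over intercept and slope
    lad = optimize.minimize(
        lambda p: np.abs(y - (p[0] + p[1] * t)).sum(),
        x0=[y.mean(), ols], method="Nelder-Mead").x[1]
    return ols, sen, lad
```

On a series with a single large outlier, the OLS slope is pulled toward the outlier while Theil-Sen and LAD stay near the underlying trend, consistent with the robustness argument made above.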

    4.7 Implications for Climate and Climate Change Assessment

    [60] The presence of inhomogeneities in data from a number of stations in Alaska and the sensitivity of regional precipitation trends to analytical period pose serious challenges for statewide climate and climate change assessments. Despite these difficulties, we can still provide useful climate information to the scientific community and to stakeholders. We suggest three broad strategies.

    [61] First, the widest possible net should be cast for stations in order to provide redundancy. This may be the only way to avoid attributing regional significance to what may be local trends. Ongoing efforts to document and collect data from multiple sources will be crucial here (e.g., Arctic LCC Hydrologic Database Team). Screening station metadata for obvious problems and obtaining expert advice and assessment from local NWS Forecast Offices will help identify high-quality stations. Results from this study suggest that it may be possible to detect particularly robust trends in seasonal and annual precipitation at stations missing substantial amounts of daily data, up to about 20%, provided the missing data are not tightly clustered within a few years. Trend tests that are robust to outliers appear to be less affected. In Alaska, where operating conditions can make collecting serially uninterrupted data challenging, the ability to relax data completeness criteria should expand the number of usable stations.

    [62] Some sources [e.g., Groisman and Easterling, 1994] suggest that compiling station data into a gridded product may reduce the occurrence of inhomogeneities. This may be a reasonable assumption where the station network is dense, but it is unclear whether interpolation can resolve inhomogeneities when very few stations contribute. In a companion manuscript, we evaluate homogeneity and trend characteristics in three gridded precipitation datasets widely used for climatological and ecological research in Alaska [Guentchev et al., in prep].

    [63] Some of the inhomogeneities detected in this study appear to reflect actual climate variability. In places like Alaska, where decadal to multidecadal climate variability is apparent, leading to a strong dependence of trends on the analytical period, and where serial autocorrelation can be substantial, linear trend analysis of annual time series may not be the most appropriate method for detecting changes. More advanced time series techniques that evaluate changes at various frequencies may be useful, as might analyses that incorporate information about prevailing atmospheric and oceanic conditions [e.g., Cassano et al., 2011]. If linear trend analysis is used, we recommend evaluating the robustness of trends to the choice of analytical period, as in Figure 4 of the current study, to ensure that decadal-scale variability is not interpreted as a long-term trend.

    [64] Finally, the results of precipitation change analyses may not be particularly useful in isolation. Other sources of information can help assess the reliability of the results. Reanalyses, which combine observations with physically based atmospheric models, and regional climate modeling exercises can be used to evaluate whether observed changes in precipitation are consistent with large-scale or local forcings or with trends in simulated precipitation [e.g., L'Heureux et al., 2004; Mölders and Olson, 2004]. Related data, such as streamflow records and observations of vegetation or wildlife change, can be used to determine whether observed climate variability is reflected in regional physical or biological systems. A mismatch between observed climate trends and either the physical forcing of climate or on-the-ground hydrological and ecological conditions should prompt further evaluation of the detected trends.

    5 Conclusions

    [65] Consistent with previously published studies, we found few spatially coherent trends in precipitation in Alaska. The most notable trends include long-term (1950–2010) decreases in spring precipitation at Kenai and King Salmon in south-central Alaska. Over the last three decades (1980–2010), there is some indication of decreasing precipitation at some stations in the state's interior, with reductions in summer, autumn, and/or annual precipitation at Big Delta, Talkeetna, Tanana, and the University Agricultural Experiment Station. The northernmost stations (Barrow and Bettles) have experienced increasing precipitation in the spring and winter, respectively, while Homer saw decreasing precipitation—particularly in the winter—over those same three decades.

    [66] Differences between the trend detection methods used here were minimal. This suggests that although some trend detection tests may not always be appropriate, given nonnormality, missing values, outliers, and autocorrelation in the time series, they often produce comparable results. Most disagreement between studies is therefore unlikely to arise from methodological differences or data source (e.g., NCDC versus WRCC). Rather, discrepancies stem primarily from the choice of analytical period and from the analysis of inhomogeneous time series.

    [67] We found that inhomogeneity poses a significant challenge to trend detection in Alaska's precipitation record, particularly for stations along the southern and western coasts. Among coastal stations, initial homogeneity testing found few with long homogeneous records. Further inspection found that even stations free of statistically detectable inhomogeneities in their annual time series might contain inhomogeneities in seasonal time series. Most of these inhomogeneities coincided with operational changes, making it difficult to discern the impact of either multidecadal climate variability or anthropogenic climate change on precipitation in Alaska. Even where inhomogeneities do not coincide with operational changes, they may render classical trend analysis inappropriate.

    [68] We recommend three strategies for using Alaska precipitation records in climate analysis and assessment. First, we suggest using a broad range of stations, not just airport stations, including those managed by industry and by Native, state, and federal land management agencies, and possibly those of citizen scientists, while seeking expert local knowledge about those stations. We also advocate imposing less stringent data completeness criteria than are usually applied in order to provide redundancy in the data set. For example, the only station on the North Slope that met our criteria was Barrow Post Rogers Airport; we have no way of knowing whether any trends or inhomogeneities detected there reflect regional changes in climate or peculiarities of that particular location or station. Second, techniques other than classical linear trend analysis may be more appropriate. These range from explicitly evaluating the impact of analytical period, to more advanced time series methods that distinguish variability at multiple frequencies, to synoptic analyses like those of Cassano et al. [2011], which allowed the authors to attribute variability in the temperature record to multiple sources of climate variability. Third, changes should be evaluated in the context of regional and local climate forcing to ensure that trends are consistent with the appropriate climatological forcings and/or with variability in ecological or hydrological systems that might reasonably be expected to respond to changes in precipitation.

    Appendix A

    [69] In this case, it is unlikely that combining data from multiple sources (National Climatic Data Center, Western Regional Climate Center, and the National Weather Service) would influence our results. Although the data are compiled and distributed by different agencies, they are derived from the same measurements. Any differences between them would be the result of agency-specific quality control standards, different interpretations of handwritten observer forms, or random data processing errors. Replacing missing data with the period mean, as we did when no data were available, is common practice [e.g., Linacre, 1992, p. 56] and is also unlikely to impact trend detection in most circumstances. As discussed in Section 4.6.1, trend detection can be robust to the inclusion of time series with missing days of data, under certain circumstances. We employed all of these techniques to provide trend analysis for the largest possible number of stations over the longest possible time periods. Because some readers may be interested in the frequency with which we used these techniques and their correspondence with inhomogeneities (see Section 4.2) or with influential years in the trend analysis (see Section 4.5), we provide detailed documentation of gap filling and data completeness in Table A1.
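Period-mean gap filling of the kind described here can be sketched as follows. Filling each missing month with that calendar month's long-term mean is our reading of the procedure, and the (years × 12) array layout and function name are assumptions for illustration.

```python
import numpy as np

def fill_missing_months(values):
    """Replace missing monthly totals (NaN) with the mean of the same
    calendar month over the rest of the record.

    `values` is a (years, 12) array of monthly precipitation totals.
    """
    filled = np.asarray(values, dtype=float).copy()
    month_means = np.nanmean(filled, axis=0)  # climatology per calendar month
    rows, cols = np.where(np.isnan(filled))
    filled[rows, cols] = month_means[cols]
    return filled
```

Because the fill value sits at the climatological mean, it pulls any fitted trend slightly toward zero, which is one reason heavily gap-filled years are flagged as potentially impacting trends in Table A1.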

    Table A1. Information on Gaps in Data From the National Climatic Data Center (NCDC) Filled With Information From the Western Regional Climate Center (WRCC) or With the Appropriate Mean and an Evaluation of Their Potential Impact on Trends Estimated by Ordinary Least Squares (OLS) Regression

    Station Name | Months Replaced From WRCC (# of Days Missing) | Period Mean Used | 1950–2010 Trends Potentially Impacted by Replacement | 1980–2010 Trends Potentially Impacted by Replacement
    Barrow | 1989: Mar (11); 2007: Jun (0) | — | — | 1989: ANN
    Bethel | — | 1983: Jul | — | —
    Bettles | — | 1959: Jul; 1996: Aug; 1998: Jan–Feb | — | —
    Big Delta (a) | 1990: Feb–Dec (0,8,9); 1991: Jan–Dec; 1992: Jan–Dec; 1993: Jan–Jun; 1998: Feb (24) | 1994: Jun; 1998: Jan | 1990: SON; 1994: SON | 1990: SON; 1994: JJA, SON
    Cold Bay | 1954: Sep (8), Nov–Dec (7–8) | 1950: Jan–Feb; 1953: Nov–Dec | — | —
    College Observatory | 1993: Jan (4); 2003: Dec (12); 2004: Dec (15) | 1964: Sep; 1983: Aug | 2005: DJF (Dec 2004) | 2005: DJF (Dec 2004)
    Cordova | 1967: Apr (11); 1985: Nov (8) | 1991: Jan; 1995: Oct | — | —
    Eagle | 1980: Feb (0); 1982: Jun (1); 1984: Apr (0); 1985: Aug–Sep (1,1); 1986: May (0); 1987: Apr (0); 1989: May–Jun (0,0); 2006: Sep (0); 2009: May (0) | — | — | —
    Gulkana | 1995: Apr–May (10,9) | 1995: Feb–Mar; 1998: Sep | — | —
    Homer | 1973: Apr (23); 1998: Apr (20) | 1973: Mar; 1997: Dec; 1998: Jan–Mar | — | —
    Juneau | 1985: Sep–Nov (0,0,0); 1986: Apr–May (0,0); 1995: Mar (0) | — | — | —
    Kenai | 1950: Sep (8) | — | 1950: ANN | —
    King Salmon | — | — | — | —
    Kitoi Bay | 1998: Oct (26+) | 1995: May, Sep; 1998: Feb | — | —
    Kotzebue | — | 2007: Jun | — | —
    Matanuska Ag Experiment Station | — | 2004: Nov; 2008: Mar; 2009: Jun | — | —
    McGrath | 2008: May (11), Oct (18) | 1976: May | — | —
    Nome | 1988: Sep (9) | — | — | —
    Port Alsworth | — | 1991: Aug; 1993: Dec; 2008: Dec; 2009: Aug, Nov–Dec | — | —
    Seward | 1975: Aug (6); 1994: Feb–Mar (8,8) | 1979: Jul; 1991: Jan | — | —
    St. Paul Island | — | 1977: Dec | 1977: ANN | —
    Talkeetna | — | 1955: Oct | — | —
    Tanana | 1995: Apr–May (10,8) | — | — | —
    University Ag. Experimental Station | 1973: Aug (7); 1977: Dec (9); 1982: Oct (8); 2002: Feb–Jun, Oct, Dec; 2003: Jan (22), Mar (20), May (26+), Jun (21), Sep (20), Dec; 2004: Jan–Mar (22,23,20) | 1969: Jul; 1973: Dec; 1974: Jan, Feb; 2003: Apr, Aug | 1982: SON, ANN; 2002: MAM (untransformed only) | 1982: SON, ANN; 2002: MAM (untransformed only)
    Yakutat | 1989: Aug (7); 1999: Jan–Dec (0) | — | — | —

    (a) Number of days missing from Big Delta in 1991 (8, 8, 10, 9, 10, 10, 8, 9, 9, 8, 9, 9), 1992 (8, 9, 9, 2, 10, 7, 8, 10, 8, 9, 8, 9), and January–June 1993 (11, 8, 8, 8, 10, 8).


    [70] We would like to thank Gary Hufford, Rick Thoman, Edward Plumb, Carl Dierking, and Ted Fathaeur for providing their insight on the operation of Alaska's weather stations and for providing data from the Cooperative station at Eagle. We would also like to thank John Walsh and an anonymous reviewer; their comments on the manuscript improved it greatly. Funding for this project was provided by the Arctic Landscape Conservation Cooperative and The Wilderness Society.


    Erratum

    1. In Table 1, under the Annette location cited by Alessa et al. [2011], the “pos” trend in the “Annual” column should be “neg”. Under the Barrow location cited by Stafford et al. [2000], the trend under “Spring” should be “neg-ns” instead of “neg95”.