Keywords:

  • California;
  • Noah;
  • WRF;
  • regional climate;
  • snow

Abstract

[1] The depth and timing of snowpack in the Sierra Nevada Mountains are of fundamental importance to California water resource availability, and recent studies indicate a shift toward earlier snowmelt consistent with projected impacts of anthropogenic climate change. In order for future studies to assess snowpack variability on seasonal to centennial time scales, physically based models of snowpack evolution at high spatial resolution must be improved. Here we evaluate modeled snowpack accuracy for the central Sierra Nevada in the Weather Research and Forecasting regional climate model coupled to the Noah land surface model. A simulation with nested domains at 27, 9, and 3 km grid spacings is presented for November 2001 to July 2002. Model outputs are compared with daily snowpack observations at 41 locations, air temperature at 31 locations, and precipitation at 10 locations. Comparison of snowpack at different resolutions suggests that 27 km simulations substantially underestimate snowpack, while 9 and 3 km simulations are closer to observations. Regional snowpack accumulation is accurately simulated at these high resolutions, but model snowmelt occurs an average of 22–25 days early. Some error can be traced to differences in elevation and observation scale between point-based measurements and model grid cells, but these factors cannot explain the persistent bias toward early snowmelt. A high correlation between snowmelt and error in modeled surface air temperature is found, with melt coinciding systematically with excessively cold air temperatures. One possible source of bias is an imbalance in turbulent heat fluxes, erroneously warming the snowpack while cooling the surface atmosphere.

1. Introduction

[2] Water resource availability in much of the Northern Hemisphere is strongly dependent on wintertime storage and springtime melt of mountain snowpacks. Seasonal snowpacks and glaciers provide water resources to more than a sixth of the global population [Barnett et al., 2005], and global modeling studies suggest that warming temperatures and changing atmospheric circulation patterns associated with anthropogenic climate change will likely result in altered snowpack timing and magnitude in many areas [Arnell, 1999; Vicuna et al., 2007]. Among the regions most dependent on mountain snowpack is the American West, especially the state of California, where snowmelt releases stored water resources during seasons with little precipitation [Mote et al., 2005].

[3] Recent studies of historical snowpack in California suggest that a shift toward earlier snowmelt and peak streamflow is already apparent [Kapnick and Hall, 2009, 2010; Hidalgo et al., 2009; Barnett et al., 2008; Weare and Du, 2008; Maurer et al., 2007; Howat and Tulaczyk, 2005a, 2005b; Barnett et al., 2004; Cayan et al., 2001; Dettinger and Cayan, 1995]. Most projections of future change in mountain snowpack in the American West have been based on global climate model (GCM) output, but GCMs alone often fail to correctly simulate mountain snowpacks because the highly complex orography associated with mountain belts cannot be adequately captured with coarse GCM grid spacings [Cayan et al., 2008]. As a result, most studies downscale GCM output via dynamical methods such as a regional climate model (RCM) [Salathé et al., 2010; Qian et al., 2009], through distributed hydrologic models such as the Variable Infiltration Capacity (VIC) model [VanRheenen et al., 2004; Vicuna et al., 2007; Maurer, 2007; Maurer et al., 2007; Hamlet et al., 2005; Barnett et al., 2008], or using other statistical techniques [Miller et al., 2003; Dettinger et al., 2004; Maurer and Duffy, 2005; Stewart et al., 2004]. Results of these recent downscaling studies suggest that as temperatures in the western United States warm, snowmelt is likely to occur earlier in the spring, more precipitation will fall as rain instead of snow, and overall snowpack will decline in thickness and duration. Beyond this general qualitative agreement, however, there is substantial quantitative uncertainty in future projections of the spatial and temporal extent of snowpack loss. Future snowpack projections may be improved through better statistical downscaling methods or increased accuracy of physically based models, but each approach has challenges. The accuracy of statistical downscaling approaches depends on the assumed stationarity of statistical relationships developed using existing data sets, an assumption whose validity under a changing climate is uncertain [Milly et al., 2008]. Meanwhile, greater accuracy in physically based modeling efforts depends on the more accurate simulation of snowpack physics in current and future climates.

[4] Among the most promising avenues for improving physically based simulation of current and future snowpacks is through regional climate models such as the Weather Research and Forecasting (WRF) model developed by the National Center for Atmospheric Research and a number of partner agencies [Skamarock et al., 2008]. WRF has been successfully coupled to several land surface models that incorporate snowpack physics, including the Noah model used here [Chen and Dudhia, 2001]. Evaluations of snowpack in earlier versions of WRF using a variety of land surface schemes and in other regional climate models suggest generally low skill in tracking the accumulation and ablation of snowpack water content over a snow season [e.g., Wang et al., 2010; Caldwell et al., 2009; Duffy et al., 2006]. Recent studies evaluating snowpack in Noah suggest that changes to albedo parameterizations and other model components may result in substantial increases in model skill [Livneh et al., 2010; Wang and Zeng, 2010; Barlage et al., 2010]. Some of the suggested changes presented in these studies have been incorporated into the version of WRF-Noah used here (version 3.1.1).

[5] Despite recent advances in WRF-Noah, detailed evaluation of snowpack simulations has been limited. Some studies have largely focused on qualitative comparisons of snowpack extent and snow water equivalent (SWE) across large areas [Qian et al., 2009], while others have conducted more extensive statistical testing at a handful of specific stations [Wang et al., 2010; Livneh et al., 2010; Feng et al., 2008]. Among recent studies, only Barlage et al. [2010] and Ikeda et al. [2010] have conducted a quantitative comparison between WRF-Noah snowpack and observations across a large number of observation sites in Colorado. These studies utilize WRF-Noah version 3.0, and their principal recommendation is to add an improved time-varying albedo term. This recommendation has been followed in WRF-Noah version 3.1.1, which is used here. In part, validation of snowpack in WRF-Noah has been limited because evaluation of gridded model output against point-based snowpack measurements presents inherent difficulties. Mountain snowpacks exhibit substantial spatial heterogeneity on scales even smaller than that of high-resolution model grid cells (e.g., 3 km) [Liston, 2004; Anderton et al., 2004], so comparison of observed and modeled snowpack may result in apparent error simply due to this scaling mismatch, even where model simulations of conditions averaged at the grid scale are correct. However, all comparisons of gridded data with point observations rest on the assumption that this type of error is randomly distributed [Barlage et al., 2010; Ikeda et al., 2010]. Absent any sources of persistent bias, comparison of observed and modeled snowpack at a sufficiently large number of points should yield a distribution with a mean error close to zero if the model snowpack physics are correct and sufficiently detailed. An additional source of potential error derives from the effects of vegetation on snowpack. Versions of WRF-Noah from version 3.1 onward include consideration of vegetation impacts on surface albedo in snow-covered areas but do not account for snowpack interception or effects of the canopy on longwave radiation, both of which can be important in forested environments [Andreadis et al., 2009; Pomeroy et al., 2008, 2009]. However, because snowpack monitoring stations are generally sited in clearings rather than under forest canopy, it is likely that these processes do not substantially affect either the observations or model output used here.

[6] In this study, we simulate snowpack and climate using the WRF-Noah model over the central portion of the Sierra Nevada Mountains for the period November 2001 through July 2002. This time period was selected because April snowpack was approximately average (96% of the 1950–2008 normal) across the 60 monthly snow stations used by Kapnick and Hall [2010]. We compare model output against daily SWE observations from 41 stations. We hypothesize that by comparing observed and modeled snowpack at such a large number of geographically proximate locations we will be able to determine whether errors are due to a systematic model bias or are simply the result of different scales of observation. While errors at any individual station may result from the mismatched comparison of point-based to gridded data sets, bias in the distribution of errors at many stations can likely be attributed to problems with the model. Specifically, we address: (1) the impact of model grid spacing on SWE simulation, (2) the accuracy of WRF-Noah SWE simulation, and (3) sources of error in WRF-Noah SWE output.

2. Data and Methods

2.1. The WRF-Noah Modeling System

[7] The WRF model has been developed by the National Center for Atmospheric Research (NCAR) and a number of partner agencies [Skamarock et al., 2008]. WRF has been used extensively for both operational meteorological forecasts and regional studies of climate and meteorology. Here, we use WRF version 3.1.1 with the North American Regional Reanalysis (NARR) product [Mesinger et al., 2006] as lateral boundary conditions over the period November 2001 to July 2002 with output every 3 h. Simulations are conducted in three one-way nested domains at 27, 9, and 3 km horizontal resolution (see Figure 1). These domains are referred to in the text as D27, D9, and D3, respectively. The simulations are run for 9 months without reinitialization by updating the large-scale forcing along the lateral boundaries at 3 h intervals. NARR provides the initial conditions (at time step 0) and the regularly updated lateral boundary conditions for D27. The boundary conditions for D9 and D3 are obtained from the successively coarser model domain. As such, simulations of all three domains are run concurrently, with larger domains processed before smaller domains at each individual time step. There is no feedback of conditions from high-resolution domains back to coarser domains. The outermost domain (D27) covers the entire western coast of the U.S. and a large portion of the adjacent Northeast Pacific to capture moisture flow into the region. Elevation at the resolution of this domain is shown as contours in Figure 1. The gross features of the regional topography are captured at this resolution, but the finer definition of individual mountain ridges is not resolved. The middle domain (D9) covers the entire California Sierra Nevada. The innermost domain (D3) covers the central portion of the Sierra Nevada, including Yosemite National Park. Figure 2, showing the region covered in D3, demonstrates the substantially more detailed topography captured at 3 km compared to the D27 topography shown in Figure 1. This is unsurprising, since the entire D3 domain (3,888 grid cells at 3 km) is covered by just 48 D27 grid cells.
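
As a quick arithmetic check on the nesting geometry described above, the short Python sketch below (an illustration only, not part of the WRF configuration) confirms that the 3,888 D3 grid cells and the 48 D27 grid cells cover the same area and that the 27, 9, and 3 km spacings correspond to a 3:1 refinement at each nesting step.

```python
# Consistency check on the nested-domain geometry described in section 2.1.
# Cell counts are taken from the text; this is illustrative, not model code.
dx = {"D27": 27.0, "D9": 9.0, "D3": 3.0}     # horizontal grid spacing (km)
cells_covering_d3 = {"D3": 3888, "D27": 48}  # grid cells spanning the D3 region

area_from_d3 = cells_covering_d3["D3"] * dx["D3"] ** 2      # 3888 * 9   = 34,992 km^2
area_from_d27 = cells_covering_d3["D27"] * dx["D27"] ** 2   # 48   * 729 = 34,992 km^2
assert area_from_d3 == area_from_d27

# One-way nesting refines by a factor of 3 at each step (27 km -> 9 km -> 3 km).
assert dx["D27"] / dx["D9"] == dx["D9"] / dx["D3"] == 3.0
```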

Figure 1. WRF simulation domains at 27, 9, and 3 km (from the outer to inner grids). Elevation is given in meters at the resolution of the outermost domain.

Figure 2. Map of the study area covered by the 3, 9, and 27 km model domains. The grid shows the spacing of the 3 km domain. Labeled dots indicate the locations of observation stations, with the representative stations used in Figures 5, 9, and 10 shown in black.

[8] We use the following physics options in WRF: the Thompson et al. [2008] cloud microphysics scheme, the rapid radiative transfer model longwave scheme [Mlawer et al., 1997], the Dudhia shortwave scheme [Dudhia, 1989], the Yonsei University planetary boundary layer scheme [Hong et al., 2006], and the modified Kain-Fritsch convection parameterization for the two outer domains [Kain, 2004; Kain and Fritsch, 1990, 1993].

[9] We utilized the Noah land surface model [Chen and Dudhia, 2001; Ek et al., 2003], which has been fully coupled to the WRF atmosphere such that land surface conditions can feed back into the atmosphere. The Noah model uses a simplistic canopy model and a multilayer soil model, in which the topsoil layer is used to simulate surface soil conditions, snowpack, and the vegetation canopy. As such, Noah simulates snowpack as a single layer and does not separately consider canopy snowpack. All liquid water within the snowpack is immediately routed into the soil. However, Noah does simulate snowpack accumulation, snowpack ablation via sublimation and melting, and heat exchange between the snowpack, soil, and atmosphere [Koren et al., 1999]. Noah was selected over other land surface models coupled to WRF because of its simple physics and extensive use in the recent literature [e.g., Livneh et al., 2010; Barlage et al., 2010; Wang et al., 2010].

2.2. Observational Data

[10] Daily observations of SWE were acquired at 41 locations within the innermost (D3) WRF domain from the California Department of Water Resources Data Exchange Center (CDEC) at http://cdec.water.ca.gov/. These stations are mapped in Figure 2, and their locations and elevations are summarized in Table 1. Twelve stations are operated by the U.S. Bureau of Reclamation, 19 by the California Department of Water Resources, 9 by the Natural Resources Conservation Service (NRCS), and one by the U.S. Army Corps of Engineers. Those operated by the NRCS are a part of the official network of SNOTEL stations that provide long-term snowpack measurements across the American West. At all stations, SWE observations were collected using pressure-sensing snow pillows. Observations were scanned for obvious data entry or sensor failure errors, and in three cases single data points were removed and replaced with the mean of the two adjacent days. At 10 of the 41 SWE observation stations, total daily precipitation values are available for at least 95% of days during the study period (Table 1). Additional stations with intermittent precipitation data were not used here because of concerns that missing data could bias results. Finally, at 31 of the 41 SWE observation stations, hourly measures of surface air temperature are also available (Table 1). Temperature observations were averaged to provide mean daily temperature values. Station elevations are also provided in the CDEC archive and are included in Table 1.
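
The quality-control and averaging steps described above are simple enough to sketch with pandas. In the sketch below, the file names, column labels, and the flagged date are hypothetical placeholders rather than CDEC conventions; it only illustrates replacing an isolated bad value with the mean of the two adjacent days and averaging hourly temperatures to daily means.

```python
import pandas as pd

# Hypothetical inputs; file names and column labels are placeholders, not CDEC formats.
swe = pd.read_csv("station_swe_daily.csv", index_col="date",
                  parse_dates=True)["swe_mm"]
temp_hourly = pd.read_csv("station_temp_hourly.csv", index_col="datetime",
                          parse_dates=True)["temp_c"]

# Replace an isolated, manually flagged bad SWE value with the mean of the
# two adjacent days (the treatment applied to the three flagged points here).
bad_days = pd.to_datetime(["2002-01-15"])  # example date only; actual dates not listed
for day in bad_days:
    before, after = day - pd.Timedelta(days=1), day + pd.Timedelta(days=1)
    swe.loc[day] = 0.5 * (swe.loc[before] + swe.loc[after])

# Average hourly air temperature to mean daily values for comparison with the model.
temp_daily = temp_hourly.resample("D").mean()
```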

Table 1. Summary Statistics for Observed and Modeled Snowpack^a
Columns (left to right): station code; full name; latitude; longitude; Elevobs (m); precipitation data available; temperature data available; SCDObs^b, SCDD27, SCDD9, SCDD3 (days); AccumObs^c, AccumD27, AccumD9, AccumD3 (mm); RD3^d, RAccum, RMelt.

  a Bold lines denote the eight representative stations used in Figures 5, 9, and 10.
  b SCD refers to the snowpack centroid date.
  c Accum refers to total snowpack accumulation.
  d R refers to correlation coefficients between observed and modeled snowpack, accumulation, and melt.

SPTSpratt Creek38.667−119.8181875696056722182492713880.970.730.79
BLSBlack Springs38.375−120.1921982YY1265160768594686717640.360.77−0.14
KIBLower Kibbie Ridge38.032−119.8772043Y1085054797823504907880.690.740.08
GRVGraveyard Meadow37.465−119.292104YY1069680745588276886700.660.72−0.02
PSRPoison Ridge37.403119.522104YY1104851588323933956300.340.800.01
HNTHuntington Lake37.228−119.2212134YY1108857726106724615620.530.84−0.08
GINGin Flat37.767−119.7732149Y1214974827423797257920.550.70−0.03
CHMChilkoot Meadow37.41119.492180YY1194882939303937479060.750.850.31
BLDBloods Creek38.45120.0332195YY1219711110587883912349970.880.780.42
LVMLeavitt Meadows38.305−119.5522195959283882844103234560.970.740.58
TMRTamarack Summit37.165−119.22302YY1088985647096916704970.430.78−0.14
PDSParadise Meadow38.047−119.672332Y12410199120125790996014920.990.780.89
SLMStanislaus Meadow38.505−119.9372363Y129971161141536839130112700.920.800.67
PSNPoison Flat38.501−119.631240910775106914982707454850.860.800.33
TNYTenaya Lake37.838−119.4482485Y11710110611189490388911930.970.850.80
STROstrander Lake37.637−119.552500Y12010174908969035017400.660.76−0.03
MNTMonitor Pass38.67−119.61525461087552584092701521760.380.81−0.19
GNLGianelli Meadow38.205−119.8922561YY1281011051151086923116912970.900.820.44
HRSHorse Meadow38.158−119.6622561Y1281011211171230909130912070.940.770.71
TUMTuolumne Meadows37.873119.352622Y106901141095093728329200.970.880.63
EBBEbbets Pass38.561−119.80826521247510810610902709669450.860.850.53
HHMHighland Meadow38.49−119.8052652Y129751161201400270114113900.950.800.78
SPSSonora Pass Bridge38.318−119.6012668124921051136534107548710.950.730.82
WWCW. Woodchuck Mead.37.03−118.9182774Y1178391958766398038260.830.880.40
KSPKaiser Point37.30−119.102805YY11188831007466725558440.950.880.69
LBDLobdell Lake38.44−119.3772805110537278396731572360.620.77−0.03
SLISlide Canyon38.092119.432805Y129941241211148455131412050.970.890.81
DDMDeadman Creek38.332−119.6532820Y1289210512079041075410500.930.690.84
MHPMammoth Pass37.61119.0332835YY1278910911497533081010550.920.870.66
VRGVirginia Lakes Ridge38.077−119.23428351209480954144552735420.730.830.08
AGPAgnew Pass37.728−119.1432881Y11189911007573304586790.930.780.54
LVTLeavitt Lake38.282−119.62129271399210511315494107549050.680.870.06
SLKSouth Lake37.176−118.5622927Y1118897903983223893130.850.780.60
UBCUpper Burnt Corral37.183−118.9372957Y1208998999766916937200.790.870.17
BGPBig Pine Creek37.128−118.4752988Y106316284379191512710.810.840.10
DANDana Meadows37.897−119.2572988118901031167013725847900.970.930.59
RCKRock Creek Lakes37.455−118.7433049Y1083410497315275374470.940.700.79
VLCVolcanic Knob37.388118.9033064Y12485110967643067955560.620.890.07
SWMSawmill37.162118.5623110Y10988971023893223894520.960.820.64
GEMGem Pass37.78−119.173277Y12689911088133304587030.800.740.24
BSHBishop Pass37.1−118.5573415Y130881101116143225396680.850.660.42
Mean   2607  1178191977774736787730.800.800.38

2.3. Analytical Methods

[11] To assess the effect of spatial resolution on modeled snowpack, we calculate the mean daily SWE for all three domains at all elevations over 1500 m within the boundaries of D3 (Figure 1). To ensure that identical areas are incorporated, D27 and D9 SWE grids are interpolated to the D3 grid using a nearest neighbor approach. Mean daily SWE is also computed for all model domains in four elevation classes (1500–2000 m, 2000–2500 m, 2500–3000 m, and 3000+ m). D3 elevations are used to determine the elevation class for each grid cell in all model domains to ensure consistency. Comparison among elevation classes allows us to determine whether the influence of spatial resolution on modeled snowpack varies with terrain height.
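
A minimal sketch of the regridding and elevation-class averaging described above, assuming SWE, elevation, and grid-cell center coordinates are available as 2-D NumPy arrays for each domain. The nearest-neighbor lookup uses SciPy's cKDTree on latitude/longitude pairs (a flat-coordinate approximation that is adequate at these scales); it is not a WRF utility.

```python
import numpy as np
from scipy.spatial import cKDTree

def regrid_nearest(src_lat, src_lon, src_field, dst_lat, dst_lon):
    """Map a coarser-domain field (e.g., D27 SWE) onto the D3 grid by nearest neighbor."""
    tree = cKDTree(np.column_stack([src_lat.ravel(), src_lon.ravel()]))
    _, idx = tree.query(np.column_stack([dst_lat.ravel(), dst_lon.ravel()]), k=1)
    return src_field.ravel()[idx].reshape(dst_lat.shape)

def elevation_class_mean(swe_on_d3, d3_elev, lo, hi):
    """Mean SWE over D3 grid cells whose D3 elevation falls in [lo, hi) meters."""
    mask = (d3_elev >= lo) & (d3_elev < hi)
    return swe_on_d3[mask].mean()

# Example usage with hypothetical arrays:
# swe27_on_d3 = regrid_nearest(lat27, lon27, swe27, lat3, lon3)
# mean_2000_2500 = elevation_class_mean(swe27_on_d3, elev3, 2000.0, 2500.0)
```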

[12] To compare observations and modeled outputs, it is first necessary to develop a method for comparison of point-based and gridded data sets. In this case, we simply compare each observation station to the model grid cell in which it falls. An inverse distance weighting approach was also tested, in which values for the four nearest model grid cells were weighted according to the distance between the observation point and the grid cell center. This approach provided no discernible advantage over our simpler approach and was abandoned. At individual observation stations, model SWE error may increase with increasing model grid spacing as grid cell parameters such as elevation become less and less representative of observation locations.
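
The sketch below illustrates both station-matching strategies discussed in this paragraph: taking the single model grid cell nearest each station (the approach used here) and the inverse distance weighting of the four nearest cells that was tested and abandoned. Array names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

# grid_lat, grid_lon: 2-D arrays of model grid-cell center coordinates
# grid_swe:           2-D array of model SWE on the same grid
# stn_lat, stn_lon:   1-D arrays of station coordinates

def station_values_nearest(grid_lat, grid_lon, grid_swe, stn_lat, stn_lon):
    """SWE from the single grid cell nearest each station (approach used here)."""
    tree = cKDTree(np.column_stack([grid_lat.ravel(), grid_lon.ravel()]))
    _, idx = tree.query(np.column_stack([stn_lat, stn_lon]), k=1)
    return grid_swe.ravel()[idx]

def station_values_idw(grid_lat, grid_lon, grid_swe, stn_lat, stn_lon, k=4):
    """Inverse distance weighting of the k nearest cells (tested, then abandoned)."""
    tree = cKDTree(np.column_stack([grid_lat.ravel(), grid_lon.ravel()]))
    dist, idx = tree.query(np.column_stack([stn_lat, stn_lon]), k=k)
    weights = 1.0 / np.maximum(dist, 1e-12)          # avoid division by zero
    weights /= weights.sum(axis=1, keepdims=True)
    return (grid_swe.ravel()[idx] * weights).sum(axis=1)
```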

[13] At each station, we compare modeled and observed snowpack using six distinct metrics (a brief computational sketch of metrics 1–5 follows the list below):

[14] 1. To gauge model skill in determining snowpack timing, we calculate the SWE centroid date (SCD) of accumulated SWE through the entire snowpack season in the observations and in each model domain. The SCD is defined as the date corresponding to the center of mass of the annual snowpack and is measured in days from 1 November 2001, the beginning of our simulation period. Observational analysis has determined that SCD anomalies are roughly proportional to anomalies in the peak date of SWE (see Kapnick and Hall [2010] for more details). SCD is also similar in principle to center timing, a commonly used metric of streamflow seasonality [Maurer, 2007; Maurer et al., 2007; Stewart et al., 2005, 2004].

[15] 2. To directly compare skill in simulating snowpack magnitude, we determine total observed and modeled SWE accumulation for each model domain by summing daily SWE increases throughout the snow season. These values may exhibit a slight low bias because melt occurring on the same day as accumulation would reduce apparent accumulation. However, this effect is expected to be small since melt and accumulation events are generally independent of one another.

[16] 3. We calculate the Pearson's correlation coefficient (r) for the paired time series of observed and D3 SWE. Correlations are calculated for each station over the time period between the first day on which either observed or modeled SWE is greater than 1.0 mm and the last day on which SWE is greater than 1.0 mm. This metric provides an indication of model skill, but some caution is warranted because both observed and modeled time series exhibit a high degree of serial autocorrelation and because r is insensitive to differences in time series magnitude.

[17] 4. We compare observed and WRF-derived precipitation in order to determine whether any biases in modeled snowpack are related to an overabundance or underabundance of precipitation. Daily precipitation totals are available at 10 of the stations used here, and we use a paired Student's t test to determine whether WRF-derived annual precipitation totals differ significantly from observations at these locations. In addition, we separately compare D3, D9, and D27 precipitation at low elevations (<2000 m) and high elevations (>2000 m) to determine whether model resolution has any effect on total precipitation via greater topographic detail at higher resolutions. If so, then we expect to observe similar levels of precipitation at low altitudes in all model domains and substantially greater precipitation at high altitudes at finer grid spacings due to the effects of orography. If not, then either lower precipitation in D27 at high altitudes will be offset by greater precipitation at low altitudes, or all domains will generate similar amounts of precipitation at all altitudes.

[18] 5. We compare time series of snowpack accumulation (i.e., snowfall) and ablation (i.e., melt and sublimation) from WRF-Noah and observations. Despite the fact that they are both sensitive to temperature, the physics associated with accumulation and ablation processes are largely distinct. In WRF version 3.1.1, precipitation, the principal form of accumulation, is simulated by WRF directly while ablation processes are handled in Noah. As a result, we divide modeled and observed time series into periods of accumulation and ablation in order to assess model skill in tracking each process separately. Accumulation days are defined as those where measured SWE is greater than the previous day, while ablation days are those where accumulated SWE has declined relative to the previous day. Days on which there is both accumulation and ablation are not captured, but these are likely to be a small subset of accumulation and ablation days. We calculate Pearson's correlations between observed and simulated accumulation and ablation time series over the highest-resolution domain (D3) to determine model skill in these variables.

[19] 6. Because Noah predicts snowpack evolution using an energy-mass balance scheme [Ek et al., 2003; Feng et al., 2008; Wang et al., 2010], it is important to evaluate the model's ability to correctly track the observed energy balance. Unfortunately, a full suite of daily energy balance observations is unavailable at most snow pillow observation sites in the Sierra Nevada, with the exception of the Mammoth Mountain Energy Balance Monitoring Site (http://www.snow.ucsb.edu/cues/description.html). Still, the extent and timing of differences between observed and modeled surface air temperature may provide indicators of strengths and weaknesses of energy balance calculations in WRF-Noah. As we show in section 3.2, a comparison of the time series of modeled and observed surface air temperature allows us to conduct a preliminary assessment of errors in the snowpack energy balance in WRF-Noah. Because this metric differs substantially from the first five, we will consider it separately in section 3.2.
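
The computational sketch referenced in paragraph [13] follows. It assumes observed and modeled SWE are aligned 1-D daily arrays beginning 1 November 2001 and that length-10 arrays of annual precipitation totals are available for the complete-record stations; the day-index convention and the handling of the accumulation/ablation split are our reading of the definitions above, not the authors' code.

```python
import numpy as np
from scipy import stats

def swe_centroid_date(swe):
    """Metric 1: SWE-weighted mean day index (SCD), counted from 1 November 2001."""
    days = np.arange(1, len(swe) + 1)
    return np.sum(days * swe) / np.sum(swe)

def total_accumulation(swe):
    """Metric 2: total seasonal accumulation as the sum of day-to-day SWE increases."""
    dswe = np.diff(swe)
    return dswe[dswe > 0].sum()

def snow_season_correlation(obs, mod, threshold=1.0):
    """Metric 3: Pearson r over the window where either series exceeds the threshold."""
    present = (obs > threshold) | (mod > threshold)
    first = np.argmax(present)
    last = len(present) - 1 - np.argmax(present[::-1])
    return stats.pearsonr(obs[first:last + 1], mod[first:last + 1])[0]

def accumulation_ablation_correlations(obs, mod):
    """Metric 5: correlate daily accumulation (positive dSWE) and ablation separately."""
    d_obs, d_mod = np.diff(obs), np.diff(mod)
    r_accum = stats.pearsonr(np.where(d_obs > 0, d_obs, 0.0),
                             np.where(d_mod > 0, d_mod, 0.0))[0]
    r_melt = stats.pearsonr(np.where(d_obs < 0, -d_obs, 0.0),
                            np.where(d_mod < 0, -d_mod, 0.0))[0]
    return r_accum, r_melt

# Metric 4: paired t test on annual precipitation totals at the 10 complete-record
# stations (obs_totals and mod_totals would be length-10 arrays):
# t_stat, p_value = stats.ttest_rel(mod_totals, obs_totals)
```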

3. Results

3.1. Comparing Simulated and Observed Snowpack Quantity and Timing

[20] A comparison of simulated area-averaged daily mean SWE values over the course of the 2001–2002 snow season at 27, 9, and 3 km within the boundaries of D3 (Figure 3a) indicates a substantial difference between D27 and the two higher-resolution grids but little difference between D9 and D3. Figure 3b shows time series of SWE averaged across 41 observation stations listed in Table 1 as well as simulated SWE in the corresponding model grid cells at 3, 9, and 27 km resolutions. Figure 3 should be interpreted with caution, since observed and modeled snowpack may represent variability at different spatial scales, as noted in section 2.3, and are not directly comparable. Still, the comparison suggests that all model domains generally underestimate overall SWE, with D3 being the most realistic. During the early accumulation portion of the season, simulated SWE in the 3 km and 9 km solutions tracks the observations well. At about day 50, melt events begin to occur in the simulated time series that are either weaker or nonexistent in the observations. As melting becomes more common later in the season, simulated and observed time series diverge further, with the 3 km and 9 km solutions continuing to deliver better performance than the 27 km solution.

Figure 3. (a) Mean WRF simulated snowpack for all areas above 1500 m at 3, 9, and 27 km grid spacings. (b) Mean snowpack across 41 snowpack observation stations shown in Table 1 and the corresponding model grid cells at 3, 9, and 27 km grid spacings. Differences between 9 and 3 km are comparatively small, while snowpack is substantially lower in the 27 km simulation. Snowpack at all model resolutions is lower than observations, especially during the latter half of the snow season.

[21] Examination of SWE values within 500 m elevation bands (Figure 4) provides additional information about the performance of the three simulations as a function of elevation. Values of D27 SWE are substantially smaller than D3 and D9 SWE at high elevations but are comparable to, or even somewhat greater than, D9 and D3 SWE at lower elevations. This pattern likely arises because, in some areas adjacent to high-elevation regions, D27 elevations are higher than D3 or D9 elevations owing to the greater smoothing of topography at coarser spatial resolution. By contrast, D9 and D3 SWE are relatively similar at all elevations, with the greatest differences occurring at elevations greater than 3000 m, where D3 SWE is about 10% greater than D9 SWE.

Figure 4. (a–d) Mean WRF simulated snowpack for four 500 m elevation windows at 3, 9, and 27 km grid spacings. Differences between 9 and 3 km are small for all windows, while snowpack is substantially smaller in the 27 km simulation only at high elevations. Areas for each elevation band are calculated on the basis of 3 km domain elevation values.

[22] A suite of statistics quantifying model skill in simulating snowpack evolution for the 2001–2002 snow season is provided in Table 1, and the complete snowpack time series for the D27, D9, and D3 are shown in Figure 5 for eight representative stations (mapped in Figure 2). While the fidelity of simulations exhibits substantial variability from station to station, at most stations D3 and D9 SWE are closer to observations than is D27. Additionally, at most stations (TUM excepted) simulated snowmelt generally occurs earlier in the season than does observed snowmelt. Mean observed SCD for all stations occurs on day 117 (24 February), while mean modeled SCD occurs on days 81, 91, and 97 for D27, D9, and D3, respectively (20 and 30 January and 6 February). However, at some individual stations observed and modeled SCD match closely, especially for D9 and D3, while at other stations modeled and observed SCD differ by more than 50 days. Simulation of SCD generally improves with increasing model resolution, but even the 3 km solution exhibits an approximately 21 day bias toward early SCD.

Figure 5. Daily SWE measurements for three WRF model domains and observations at eight representative stations (see Table 1 for summary statistics for all stations). In general, 3 and 9 km domains better match observations than does the 27 km domain. At most stations, all domains exhibit a bias toward early snowmelt relative to observations.

[23] Precipitation is among the most plausible sources of bias in WRF-Noah snowpack. A substantial underestimate of total precipitation could potentially explain observed negative snowpack biases. Figure 6a shows the differences between observed and modeled total precipitation during the study period at 10 observation stations. The mean value of modeled minus observed precipitation is 123 mm (11.9%) for D3, 28 mm (2.7%) for D9, and −169 mm (−16.4%) for D27. None of these differences is statistically significant at p = 0.05, which may be due to a small sample size and substantial spread in the data. However, the D3 and D27 values are significantly different from each other at p = 0.05. Mean correlations between daily observed and D3 (r = 0.79), D9 (r = 0.79), and D27 (r = 0.77) precipitation at all ten observation stations suggest that WRF successfully reproduces observed precipitation timing, which is likely due to the accuracy of the NARR boundary conditions and WRF physics parameterizations. Figures 6b and 6c show mean WRF precipitation for each model domain in portions of the D3 region below 2000 m (Figure 6b) and above 2000 m (Figure 6c). At low elevations, model resolution has almost no impact on precipitation, while at high elevations D27 precipitation is substantially lower than D9 and, especially, D3 precipitation. This contrast suggests that the greater topographic detail available in mountainous landscapes in D3 and D9 results in greater orographic precipitation, while at low elevations precipitation is largely independent of model resolution.

Figure 6. (a) Total WRF precipitation minus observed precipitation at 10 stations with complete precipitation records. (b) Mean cumulative precipitation in portions of the 3 km WRF domain below 2000 m. (c) Mean cumulative precipitation in portions of the 3 km WRF domain above 2000 m.

[24] Comparison of observed and modeled total snow accumulation also suggests a substantial resolution dependence of model snowpack accuracy (Table 1). Mean observed SWE accumulation across the 41 test sites is 777 mm, and D3 SWE in the corresponding model grid cells matches this very closely at 773 mm. By contrast, D9 (678 mm) and especially D27 (473 mm) each generally underestimate total accumulation. At the highest resolution, WRF simulates accumulation timing with a high degree of fidelity as well. The mean correlation coefficient between observed and D3 accumulation is 0.80, with a range of 0.66 to 0.93 (Table 1). In most cases, simulations at lower resolutions also reproduce accumulation timing accurately, though somewhat less so than at D3 resolution. The mean correlations for D9 and D27 are 0.76 and 0.73, with ranges of 0.57 to 0.90 and 0.13 to 0.92, respectively.

[25] In contrast to snow accumulation, even D3 simulates snowmelt quite poorly at many observation stations. The mean correlation coefficient between observed and D3 ablation events is 0.38, with a range of −0.19 to 0.89 (Table 1). D9 and D27 simulations are even poorer, with mean values of 0.23 and 0.05, and ranges of −0.26 to 0.86 and −0.29 to 0.79, respectively. This wide disparity between the fidelity of modeled ablation and accumulation suggests that the primary source of error in WRF simulations of Sierra Nevada snowpack is simulation of ablation rather than accumulation. Correlations between observed and D3 total SWE range from 0.34 to 0.99, with a mean of 0.80 (Table 1), suggesting that despite major problems with simulation of ablation, high-resolution WRF-Noah simulations match observed variations in total SWE at many stations. Correlation coefficients for total SWE at individual stations are themselves highly correlated with the difference between observed and modeled snowpack SCD (r > 0.93 for all domains). Thus the correlation coefficient is essentially an additional measure of how well the simulation reproduces snowpack timing. Consequently, we do not include D9 and D27 snowpack correlations here and include D3 correlations principally to demonstrate the similarity to SCD differences.

[26] One potential source of error in WRF simulation of ablation is the effect of smoothing model elevations at low resolutions. If WRF elevations in mountain environments are, on average, substantially lower than observed elevations, it is likely that snowmelt will occur too early. Figure 7 shows scatterplots of model errors in SCD against model errors in elevation. In each domain, calculation of Pearson's correlation coefficients reveals a statistically significant (p = 0.05) and approximately linear relationship between SCD error and elevation error. Regression slopes are nearly identical in all domains. In D27 there is a clear systematic bias in model elevation (mean model elevation is 206 m lower than observed elevation). Given the slope of the elevation error/SCD error relationship, this bias can account for approximately 11 days of the 36 day early bias in SCD in the 27 km solution. In contrast, D9 and D3 have much smaller mean elevation biases of 11 and −44 m, respectively. Thus, the elevation bias can account for very little of the SCD bias in these simulations. For all three domains, the y intercepts of the linear regression equations in Figure 7 are strikingly similar and indicate that, even with no elevation error, WRF will underestimate SCD by, on average, between 22 and 25 days. There is therefore a systematic bias in the WRF-Noah framework that would not disappear even if topography were resolved perfectly.
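
A sketch of the regression shown in Figure 7, using scipy.stats.linregress with hypothetical per-station arrays; the intercept is the quantity interpreted above as the residual SCD bias remaining when the elevation error is zero.

```python
from scipy import stats

# elev_error: model grid-cell elevation minus station elevation (m), one value per station
# scd_error:  modeled SCD minus observed SCD (days), one value per station
# (hypothetical arrays; in the study these come from Table 1 and station metadata)

def scd_elevation_regression(elev_error, scd_error):
    """Linear fit of SCD error on elevation error, as in Figure 7."""
    fit = stats.linregress(elev_error, scd_error)
    # fit.slope:     days of SCD error per meter of elevation error
    # fit.intercept: SCD bias (days) expected with zero elevation error, i.e., the
    #                systematic early-melt bias discussed in the text
    # fit.pvalue:    significance of the linear relationship
    return fit.slope, fit.intercept, fit.rvalue, fit.pvalue
```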

Figure 7. Scatterplots comparing the error in simulated SWE centroid date (SCD) and the error in simulated elevations relative to station observations. Each point represents one observation station (Table 1). Scatterplots are for three different model grid spacings: (top) 27 km, (middle) 9 km, and (bottom) 3 km. In all domains, there is a substantial relationship between elevation error and snowpack timing error. Also, each domain exhibits a significant bias toward early SCD.

[27] We perform a similar analysis for error in SWE accumulation to understand the impact of topographic fidelity on simulated snowpack. Figure 8 shows scatterplots of model errors in SWE accumulation (expressed as a percent of observed accumulation) against model elevation errors. In all domains, there is a strong relationship (r > 0.56) between accumulation errors and elevation errors, though the strength of the relationship decreases with increasing model resolution. D27 and D9 show systematic underestimates of SWE accumulation of −17.8% and −9.2%, respectively, while D3 shows virtually no systematic bias (−0.6%) in accumulation. This suggests that improved model resolution substantially improves simulation of accumulation up to 3 km. Further resolution increases are unlikely to be necessary to simulate regional SWE accumulation accurately, though they may improve simulated SWE accumulation at individual sites as elevation errors and simulation-scale discrepancies are reduced further.

Figure 8. Scatterplots comparing the error in simulated SWE accumulation and the error in simulated elevations relative to station observations. Each point represents one observation station (Table 1). Scatterplots are for three different model grid spacings: (top) 27 km, (middle) 9 km, and (bottom) 3 km. In all domains, there is a substantial relationship between SWE accumulation and elevation errors. However, the low bias in simulated accumulation is diminished as model resolution increases.

3.2. Evaluation of the Snowpack Energy Balance in WRF-Noah

[28] Because Noah computes snowpack ablation on the basis of an energy-mass balance scheme, a likely source of the systematic bias toward early snowmelt is in its simulation of the energy balance during melt events. Snowmelt in the Sierra Nevada is most responsive to increases in downwelling solar radiation associated with lengthening days during the spring season [Marks and Dozier, 1992]. This process is partially mediated by decreases in snowpack albedo associated with aging and partial melting of snowpack surfaces [Flanner and Zender, 2006]. Versions of WRF-Noah prior to 3.1 contained very simplistic snowpack albedo schemes, and analysis by Livneh et al. [2010] and Wang et al. [2010] suggested that improved simulation of snowpack albedo was necessary to improve model snowmelt. A new time-varying albedo scheme developed by Livneh et al. [2010] is included in WRF-Noah version 3.1, and subsequent analysis including this modification shows substantial improvements in snowpack simulation [Barlage et al., 2010]. In addition, however, recent evaluations of WRF-Noah snowpack suggest that modification of other elements of the Noah snowpack energy balance scheme can also lead to substantial improvements in snowpack simulation, including surface-atmosphere energy exchange, effects of vegetation on surface roughness, and simulation of wind speed [Livneh et al., 2010; Wang et al., 2010; Barlage et al., 2010]. Many weaknesses in WRF-Noah associated with these proposed changes are related to the accuracy of turbulent heat fluxes between the snowpack and the atmosphere.

[29] The most widely available observational metric providing information related to snowpack energy balance is surface air temperature, which we use here to infer linkages between the energy balance and snowmelt in WRF-Noah. Evaluation of modeled albedo using point-scale measurements is far more problematic, and we do not attempt to do so here. A comparison of D3 and observed daily mean 2 m air temperatures shows that day-to-day and seasonal variations in WRF temperature closely match observations in most cases (Figure 9). The mean correlation between observed and D3 mean daily 2 m temperatures at all 31 stations with temperature data is r = 0.88 (individual station correlations not shown). However, D3 temperatures exhibit a negative bias relative to observations of between −1.1 and −5.6 K at 28 of 31 stations. Two stations showed substantial positive biases (2.9, 4.5 K), while one station showed a negative bias of less than −1 K (Table 2). The overall mean bias is −2.6 K. However, this bias is not evenly distributed throughout the snowpack season. A comparison of temperature error in D3 and modeled snowmelt timing shows a strong correspondence between melt events and strongly negative temperature errors (Figure 10). Indeed, the mean temperature error during modeled ablation events across all stations (−3.2 K) is nearly twice that for nonmelt periods (−1.7 K), and temperature error and modeled daily ablation are highly correlated, on average, at r = −0.71, with systematically high anticorrelations between these two quantities at nearly all stations (Table 2).
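
The melt/temperature-error diagnostics reported in this paragraph can be computed with a short SciPy sketch, shown below for a single station; array names are illustrative, and the numbers in the comments are the station-mean values quoted above.

```python
from scipy import stats

def melt_temperature_diagnostics(temp_error, model_ablation):
    """Compare temperature error on melt versus nonmelt days and correlate the two series.

    temp_error:     daily modeled minus observed 2 m air temperature (K)
    model_ablation: daily modeled snowpack ablation (mm), positive on melt days
    """
    melt_days = model_ablation > 0.0
    mean_error_melt = temp_error[melt_days].mean()       # station mean reported: -3.2 K
    mean_error_nonmelt = temp_error[~melt_days].mean()   # station mean reported: -1.7 K
    r_melt_error, _ = stats.pearsonr(model_ablation, temp_error)  # station mean: about -0.71
    return mean_error_melt, mean_error_nonmelt, r_melt_error
```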

Figure 9. Observed mean daily surface air temperature (°C) at eight representative stations shown in black and 3 km WRF mean daily surface air temperature shown in red. Time series are highly correlated, but WRF temperatures are sometimes substantially lower than observations.

Figure 10. Errors in modeled surface air temperature relative to observations are shown in black, while modeled snowmelt is shown in red for eight representative observation locations. As discussed in section 2.3, snowmelt is inferred from day-to-day changes in total snowpack and is not directly measured. At all stations, there is a statistically significant relationship between simulated snowmelt and temperature error, suggesting a relationship between the timing of model snowmelt and the surface energy balance in WRF.

Table 2. Observed and 3 km Modeled Elevations and Mean Errors in Model Temperature
Columns (left to right): station code; Elevobs (m); Elev Diff^a for D3, D9, and D27 (m); mean D3 temperature error (K); Rmte^b.

  a Elev Diff refers to the difference in elevation between the observation station and the corresponding model grid cell.
  b Rmte is the Pearson's correlation between modeled snowmelt and temperature error.

BLS1982−121−197−481−1.860.75
KIB2043−184−446−573−1.520.90
GRV2104−113−86331−1.940.88
PSR2104−202−506−578−2.500.68
HNT213441−3193−1.910.80
GIN2149−138−148−604−1.950.93
CHM218025−87−654−1.310.66
BLD2195710624−3.500.67
TMR2302−14728135−2.490.72
PDS2332343−7370−4.520.62
SLM236311100−143−1.480.80
TNY248523833526−2.510.67
STR2500−154−26111−0.380.88
GNL2561−80−264−225−3.300.69
HRS2561254252−159−2.830.57
TUM2622253418172−2.470.66
HHM2652−41−84−288−2.780.64
WWC2774−136−273−433−2.370.60
KSP2805−40−346−477−4.410.59
SLI2805290192−3−4.210.58
DDM28202841−186−4.750.69
MHP2835−76−42−142−4.110.59
AGP288122−43−187−5.550.60
SLK2927378520−284.530.96
UBC2957−78−86−521−1.140.70
BGP29882839−6582.910.75
RCK3049161236−671−4.370.87
VLC3064−26110−357−2.800.65
SWM3110381337−211−5.140.40
GEM3277−3−439−584−3.910.72
BSH34159999−516−4.340.65
Mean261249−21−249−2.550.71

[30] The fact that errors in modeled surface air temperature coincide so closely with melt events indicates a possible physical link between the two variables. Rather than erroneously high temperatures leading to excessive melt, simulated air temperatures are too low during melt events. Zhang et al. [2009] found a similar cold bias in an earlier version of WRF during springtime, especially for maximum daily temperatures. In addition, Duffy et al. [2006] observed both cold biases and anomalously early snowmelt in at least two other regional climate models incorporating earlier versions of the Noah snowpack scheme. The source of this persistent low bias is unlikely to be pervasively low albedo values, since increased absorption of incoming solar radiation warms both the snowpack and the adjacent atmosphere. On the other hand, this finding is consistent with an erroneously positive net energy flux from the surface atmosphere into the snowpack, which would cool the atmosphere and warm the snowpack. This finding also reaffirms recent findings [Wang et al., 2010; Livneh et al., 2010; Barlage et al., 2010] that components of the energy balance other than albedo likely play a role in observed errors in WRF-Noah snowpack ablation. However, determination of the precise source of this error in the WRF-Noah snowpack energy balance scheme requires additional energy balance data not available at the observation sites used here.

4. Discussion and Conclusions

4.1. Comparing Simulated and Observed Snowpack Quantity and Timing

[31] There are three principal conclusions to be drawn from this analysis. The first is that model resolution substantially affects the quantity and timing of SWE accumulation and ablation in WRF simulation of mountain snowpacks. When run at a 27 km spacing, WRF appears to substantially underestimate snowpack persistence and magnitude at most stations (Figure 5). By contrast, model grid spacings of 9 km and 3 km come much closer to accurately simulating both accumulation and ablation processes (Table 1). Examination of Figures 6, 7, and 8 suggests that much of this discrepancy is related to the limited ability of WRF to capture the effects of elevation and orography on precipitation and ablation at 27 km in high-altitude environments (i.e., spatially diverse high-elevation mountain ridges are reduced to lower-elevation plateaus). This is apparent in Figure 4, where at relatively low elevations SWE at 27 km is comparable to that at higher resolutions, while above approximately 2500 m 27 km SWE estimates are dramatically lower. It is also possible that elevation differences among the domains result in differences in precipitation phase, which could also play a role in comparison of snowpack among the three domains. The importance of accurate topography in driving SWE simulations is also clear in Figures 7 and 8, which show clear relationships between model skill and elevation accuracy regardless of the grid spacing selected. However, the fact that observed station elevations are substantially higher than model elevations at 27 km (206 m), while at 9 and 3 km spacings mean elevation errors are much lower (44 and −12 m, respectively), supports the idea that a grid spacing sufficiently fine to accurately reflect topographic variations is critical to robust simulation of snowpack. For the Sierra Nevada, it appears that the optimal grid spacing is less than 27 km, which supports previous conclusions by Jin and Miller [2007]. A comparison of SWE in 9 km and 3 km WRF simulations with station data does show generally higher correspondence at 3 km (Table 1 and Figure 3b), especially in simulation of overall SWE accumulation. Moreover, the presence of a −9.2% systematic bias in D9 SWE accumulation (relative to −0.6% for D3) shown in Figure 8 suggests that snowpack may be better simulated in D3 than in D9. However, the close correspondence between 9 km and 3 km area-averaged SWE in Figure 3a (as opposed to SWE averaged over the observation locations in Figure 3b) suggests that this difference may be largely a result of improved correspondence between model grid cells and observations at higher resolutions as opposed to simulation of overall snowpack. This finding supports previous work by Ikeda et al. [2010], who found little difference in accumulation accuracy between 6 km and 2 km WRF-Noah simulations over the Colorado Plateau. Even though mean elevation errors in both D9 and D3 are close to 0 m, D3 likely better simulates the spatial extent of precipitation shadows and other orographic features. Overall, for climate-focused studies that do not require great geographic precision, a 9 km grid spacing may be sufficient in mountainous environments such as the Sierra Nevada. A 3 km grid appears to provide further incremental improvements in simulated snowpack but only at the cost of substantially increased computational resources. A 9 km grid spacing is close to those used in some recent RCM studies of mountain climates [Suklitsch et al., 2010; Caldwell et al., 2009] but somewhat finer than those used in other studies [Qian et al., 2009; Weare and Du, 2008].

[32] The second principal conclusion of this work is that WRF-Noah simulates snowpack accumulation without major systematic bias, provided model resolution is sufficiently high, but fails to realistically capture melt processes. A comparison of observed and modeled precipitation at 10 stations yields no statistically significant differences and nonsignificant high biases at 3 km and 9 km resolutions, suggesting that erroneous precipitation is likely not the source of observed snowpack biases in WRF. The deficiency of WRF in simulating ablation processes has been observed by other recent studies [e.g., Livneh et al., 2010; Wang et al., 2010; Barlage et al., 2010], but in these studies accumulation and melt events have not been explicitly separated as they are here. The dichotomy between simulation of accumulation and ablation is first evident in Figure 3b, where D3 and D9 SWE closely match observations during the accumulation season but diverge during the melt season. At individual stations, timing and magnitude of accumulation events match closely between WRF and observations (mean r = 0.80), while most modeled ablation events occur earlier than observed melt (Table 1 and Figure 5). At some individual stations shown in Figure 5 this early melt is readily apparent (e.g., GIN and PSR), while at others there is a much closer match between observed and modeled melt timing (e.g., SLI and SWM). Some portion of the variation among stations is likely caused by differences between the grid cell and the point-scale observation in the spatial scale sampled. For example, past research suggests that a point-based snowpack observation (on the order of 1 m2) does not accurately reflect spatial variability at larger scales such as the 9 to 729 km2 model grid cells used here [Anderton et al., 2004]. However, this source of error likely has a mean near zero when averaged over a large number of observation locations [Ikeda et al., 2010; Barlage et al., 2010] and is thus more important from a validation perspective than for regional hydroclimate studies. Of greater concern are the systematic biases toward early snowmelt apparent when comparing observations and model output. At low resolutions (e.g., D27), a portion of this melt bias (approximately 11 days observed bias in SCD) is almost certainly related to differences in elevation between the model grid cell and observation location, as is apparent in Figure 7. Only a few hundred meters of difference in elevation can result in major differences in the timing of snowmelt. However, even with discrepancies in elevation taken into account, the model exhibits a systematic bias of 22–25 days toward earlier snowmelt. This bias can most easily be explained by problems arising from snowpack physics in WRF-Noah.

4.2. Evaluation of the Snowpack Energy Balance in WRF-Noah

[33] The third principal conclusion of this work is that an imbalance in heat fluxes is a likely contributor to the observed early snowmelt bias in WRF-Noah. At first glance, the cold bias in WRF-Noah surface air temperatures during melt events seems incompatible with the erroneously early simulated snowmelt. When viewed from an energy balance perspective, though, these seemingly contradictory observations suggest an erroneously positive heat flux from the atmosphere into the snowpack as one possible source of early melt bias. This hypothesis requires further testing, however, as currently available energy balance data at observation locations used here is insufficient to fully evaluate modeled fluxes. We recommend this aspect of the WRF-Noah system as a strong candidate for future study.

[34] In the absence of validation data, examination of how snowmelt is simulated in Noah can provide guidance for future efforts to improve WRF-Noah. The primary energy balance equation for snowmelt in Noah is

  QMelt = Qdown + Qp + Qfr − Qup − Qsoil − Qlh − Qsh    (1)

All fluxes are in W/m2. QMelt is the snowmelt heat flux, Qdown is downwelling radiation, Qp is the heat flux between new precipitation and the snow surface, Qfr is the heat flux associated with freezing rain, Qup is upwelling longwave radiation, Qsoil is the soil heat flux, Qlh is the latent heat flux (limited to sublimation over snowpack), and Qsh is the sensible heat flux between the atmosphere and snowpack [Koren et al., 1999]. Several components of the snowmelt equation cannot adequately explain the observed low-temperature biases during snowmelt events. Qp and Qfr are associated with precipitation events and are thus unlikely to affect melt except in unusual cases (e.g., large rain-on-snow events). Problems with Qdown and Qsoil are unlikely to result in the negative air temperature biases observed specifically during melt events. This leaves Qup, Qsh, and Qlh as potential sources of error.

[35] The upwelling longwave flux is calculated in Noah using the Stefan-Boltzmann equation:

  Qup = ɛσT^4    (2)

where ɛ is the snowpack emissivity, σ is the Stefan-Boltzmann constant, and T is the snowpack temperature in K. One potentially large source of error is the emissivity value selected. The range of snowpack emissivity values in the published literature is between approximately 0.94 and 0.99, with lower values occurring only in very limited cases such as bare ice [Hori et al., 2006]. While the maximum snowpack emissivity value in WRF-Noah is 0.95, at the low end of this range, model snowpack proved nearly identical when the maximum emissivity was increased to 1.0. As such, selection of the maximum snowpack emissivity is unlikely to be the source of observed error in surface air temperature and snowmelt timing.
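
As a worked example of equation (2), the short calculation below gives the change in upwelling longwave flux at a melting snow surface (T = 273.15 K) when the emissivity ceiling is raised from 0.95 to 1.0. The roughly 16 W m−2 difference follows directly from the Stefan-Boltzmann relation; the near-identical snowpack response to this change is the model result described above.

```python
# Worked example for equation (2): sensitivity of the upwelling longwave flux
# to the snowpack emissivity ceiling at a melting snow surface (T = 273.15 K).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def upwelling_longwave(emissivity, temperature_k):
    """Qup = emissivity * sigma * T**4, as in equation (2)."""
    return emissivity * SIGMA * temperature_k ** 4

q_up_095 = upwelling_longwave(0.95, 273.15)  # ~299.9 W m^-2
q_up_100 = upwelling_longwave(1.00, 273.15)  # ~315.7 W m^-2
print(q_up_100 - q_up_095)                   # ~15.8 W m^-2
```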

[36] The other possible sources of error are the sensible and latent heat flux terms. In Noah, sensible heat is calculated as

  Qsh = RCh(Ts − θ)    (3)

where RCh is a surface heat exchange coefficient multiplied by the wind speed, air density, and the specific heat of air; Ts is the surface skin temperature; and θ is the air potential temperature. During major melt events, the air temperature is warmer than the surface skin temperature, leading to heating of the surface, or negative sensible heat flux from the surface to the atmosphere. The temperature component of equation (3), (Ts − θ), is unlikely to be the source of anomalously high snowmelt because while Ts is fixed near 0°C during melt events, simulated θ is too low. This scenario results in less heating of the snowpack by the atmosphere than if there were no temperature bias, thus decreasing melt. The other term in equation (3), RCh, is highly dependent on wind speed, and to produce both the excess snowmelt and the negative bias in modeled air temperature, WRF wind speeds would likely have to be substantially too high. Although accurate in situ wind speed data are not available at the snowpack observation stations used here, past studies of wind speed in WRF in the western United States do suggest a bias toward anomalously high wind speeds [e.g., Cheng and Steenburgh, 2005]. Over snow-covered areas, latent heat fluxes in Noah are limited to sublimation and frost, and there are no terms in the energy balance equations to account for evaporation of liquid water from the snowpack [Koren et al., 1999]. As melting snowpacks often contain substantial amounts of liquid water [Jordan, 1983], the inclusion of a term to explicitly account for evaporative heat flux from snow could also improve simulation of snowmelt in Noah.
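
The sign argument for equation (3) can be made concrete with the sketch below, which uses purely illustrative numbers: with the snow surface pinned near 0°C during melt, a cold bias in the simulated air potential temperature makes (Ts − θ) less negative and therefore reduces, rather than increases, sensible heating of the snowpack, pointing instead toward the wind-speed-dependent exchange term RCh.

```python
# Illustration of the sign of the sensible heat term in equation (3):
#   Qsh = RCh * (Ts - theta), defined positive from the surface to the atmosphere.
# All numbers below are illustrative only.

def sensible_heat_flux(r_ch, t_skin_k, theta_k):
    """Sensible heat flux (W m^-2) from the surface to the atmosphere, equation (3)."""
    return r_ch * (t_skin_k - theta_k)

r_ch = 10.0          # bulk exchange term RCh (W m^-2 K^-1); illustrative magnitude
t_skin = 273.15      # snow surface pinned near 0 deg C during melt
theta_obs = 278.15   # air potential temperature with no model bias
theta_cold = 275.15  # the same day with a 3 K cold bias

q_unbiased = sensible_heat_flux(r_ch, t_skin, theta_obs)  # -50 W m^-2: atmosphere heats snow
q_biased = sensible_heat_flux(r_ch, t_skin, theta_cold)   # -20 W m^-2: less heating of snow
```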

[37] Ultimately, the source of the observed bias toward early snowmelt remains uncertain. Attempts to improve simulation of snowpack in earlier versions of WRF-Noah by adjusting the turbulent heat flux equations have met with some success, however. For example, Wang et al. [2010] show substantially increased snowpack when the effect of under-canopy resistance on wind speed is included. In addition, RCh is estimated in Noah using an iterative approach, and with a small number of iterations the model can at times fail to converge. Adding a larger number of iterations (thirty, increased from five) results in improved snowpack estimates [Wang et al., 2010]. In future studies, comparison of WRF-Noah and observations for a full suite of snowpack mass and energy terms should be conducted to determine the precise source of errors in both snowmelt timing and surface air temperature and to assess the best means of improving snowpack simulation. Regardless, this study has resolved some of the uncertainty in past studies [e.g., Duffy et al., 2006] regarding the nature of persistently early snowmelt combined with low biases in temperature. Duffy et al. [2006] speculate that either insufficiently fine model resolution or positive temperature errors on days with high precipitation totals could be the cause of observed snowpack biases. We have shown that neither of these is a likely source.

[38] Overall, comparison of our analysis with other recent simulations of snowpack in earlier versions of WRF and/or Noah [e.g., Livneh et al., 2010; Barlage et al., 2010; Wang et al., 2010] suggests that version 3.1.1 of the WRF-Noah modeling system represents a substantial improvement over earlier versions in simulating mountain snowpack. Skill in simulation of accumulation can largely be attributed to accurate simulation of precipitation processes in WRF. Examination of the recent literature suggests that increased fidelity in simulation of ablation processes, on the other hand, is likely due to improvements in handling of snowpack albedo in Noah [Livneh et al., 2010; Barlage et al., 2010]. Some apparent error will always be present when comparing model output with point-based snowpack observations because of a mismatch in observation scale, but recent studies suggest that this type of error is randomly distributed with a mean near zero when comparing model results against stations that are not directly influenced by a vegetation canopy [Ikeda et al., 2010; Barlage et al., 2010]. Our results show that at 3 km and 9 km resolutions there is little systematic bias in snowpack accumulation, but that systematic bias in snowpack ablation is still present in the WRF-Noah configuration used here. With additional improvements to the simulation of snowpack energy balance in Noah, it may be possible to substantially reduce systematic bias in WRF-Noah simulation of mountain snowpack. The result would be a more robust modeling system for simulation of changing snowpack and climate in mountainous regions such as the Sierra Nevada.

Acknowledgments

[39] Funding for this research was provided by National Science Foundation grant AGS-0735056 and in part by the NSF through TeraGrid resources provided by the Pittsburgh Supercomputing Center. This research also used resources of the National Energy Research Scientific Computing Center under contract m995, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-05CH11231. Sarah Kapnick is also supported by a NASA Earth and Space Science Graduate Fellowship (07-Earth07F-0232) and a 2010 Switzer Environmental Fellowship. Finally, we thank Michael Durand and two anonymous reviewers for their useful comments.

