An assessment of corrections for eddy covariance measured turbulent fluxes over snow in mountain environments



[1] Snow-covered complex terrain is an extremely important runoff-generating landscape in high-altitude and high-latitude environments, yet it is often considered nonviable for eddy covariance measurements of turbulent fluxes. Turbulent flux data are useful for evaluating the coupled snow cover mass and energy balance that controls snow ablation and melt. In particular, detailed, multiseason analyses of eddy covariance data postprocessing requirements and resulting data quality for hydrological analyses in open and sheltered mountain sites have not been conducted. Such analyses are needed because these landscapes typify those that generate snowmelt runoff in the mountain west of North America. Eddy covariance measurements taken from exposed hilltop and sheltered subcanopy snow research sites during three snow seasons underwent rigorous postprocessing and data quality assessments. Procedures included data filtering, air density corrections, sensor heating corrections, coordinate rotation, and exclusion of nonstationary data. Data quality analysis showed that 77% of the sensible heat flux data and 95% of the latent heat flux data were of high quality. Data quality and the improvements gained from postprocessing showed little interannual variability over the three seasons. A comparison of summary data based on a 30-min averaging period to postprocessed high-resolution flux data found that the postprocessed sensible heat fluxes were up to 14% less than the summary fluxes for the snow season. The results indicate that unattended eddy covariance measurement at these sites is viable, but the full suite of corrections and postprocessing is advisable to obtain flux observations of sufficient reliability for snow hydrology calculations.

1. Introduction

[2] In the western United States and Canada, snow in the mountains provides the majority of the annual water flow. Water is overallocated in many parts of this region; therefore, our understanding of how much water is stored in the seasonal snow cover, the energetics of ablation, and the timing and magnitude of melt is increasingly important. Snow cover energetics can be partitioned into radiative, turbulent (sensible and latent), conductive, and advective energy fluxes. The annual contribution of turbulent fluxes to the evolution of seasonal snow covers is second only to radiative fluxes [Marks and Dozier, 1992]. During rain-on-snow or high-wind events, turbulent fluxes can dominate the snow cover energy balance [Marks et al., 2001b], producing rapid ablation of seasonal snow covers that can generate large floods [Marks et al., 1998] and reduce the amount of water available during subsequent dry months [Grundstein and Leathers, 1999].

[3] Eddy covariance (EC) is the most direct method of measuring turbulent fluxes [Baldocchi et al., 1988], but it is seldom employed operationally in mountain snow studies because of the rigorous siting and data analysis requirements. Despite these challenges, these data can be a valuable component of mountain snow research programs when thoroughly corrected and assessed to ensure accuracy. An important application of these data is to quantify snowmelt mechanisms and to validate and improve physically based snow models. Improved snow models will ultimately provide a stronger foundation for water resources science and management by accurately quantifying the response of the seasonal snow cover to land cover and climate changes.

[4] The use of EC systems in the hydrological community is on the rise because of increasingly rugged and reliable sensors and data loggers. These systems, however, are much more complex than standard hydrometeorological sensors and should be used with caution. Challenges associated with using EC systems can be separated into two categories: instrument siting and data processing. Instrument siting must optimize fetch and footprint while satisfying the assumptions of homogeneity and stationarity. For hydrological applications, instruments must often be sited near existing snow mass balance measurement sites where these assumptions may not be well met. Postprocessing procedures include determination of an appropriate averaging period, high- and low-frequency corrections, air density and sensor heating corrections, and coordinate rotation [Lee et al., 2004; Burba et al., 2008]. Development of EC data postprocessing procedures has focused on warm season conditions, with particular emphasis on carbon flux measurements [Baldocchi et al., 1988; Moncrieff et al., 1997]. Standardization of correction methods has long been a goal of the flux community, primarily because of an interest in making intercomparisons between sites that comprise large flux networks (e.g., AmeriFlux, CarboEurope, and Fluxnet Canada) [Massman and Lee, 2002; Lee et al., 2004; Loescher et al., 2006; Papale et al., 2006]. As measurements over complex sites began to expand, so did interest in verifying the viability of EC technology and the reliability of EC-measured fluxes over nonhomogeneous sites [Wilson et al., 2002; Baldocchi, 2003; Turnipseed et al., 2003; Göckede et al., 2004]. This work culminated in several quality assessment schemes [Foken and Wichura, 1996; Foken et al., 2004; Göckede et al., 2004; Rebmann et al., 2005] for assessing and comparing data quality under differing site and climate conditions. Despite the large volume of research on EC methodologies focused on warm season conditions and ideal measurement sites, few studies have focused on data quality at sites in complex terrain and over seasonal snow covers.

[5] Application of EC in complex terrain is challenging; however, the technology has been employed over snow in terrain of varying complexity [Harding and Pomeroy, 1996; Nakai et al., 1999; Arck and Scherer, 2002; Pomeroy et al., 2003a; Turnipseed et al., 2003; Lee and Mahrt, 2004; Mahrt and Vickers, 2005b; Molotch et al., 2007; Marks et al., 2008]. Despite these examples, the method is relatively uncommon compared to studies focused on other components of the snow cover energy balance [Bales et al., 2006]. Early studies over snow were typically conducted at relatively homogeneous sites such as frozen lakes and glaciers [Hicks and Martin, 1972; Andreas et al., 1979]. Findings from these studies illustrated that direct measurements of turbulent fluxes were preferable to other methods (e.g., bulk transfer and mean profile) [Hicks and Martin, 1972] that require assumptions of surface roughness or equal turbulent transfer coefficients [Arck and Scherer, 2002]. One of the first studies to use EC measurements in complex terrain over seasonal snow analyzed fluxes in level, snow-covered forests and lakes [Harding and Pomeroy, 1996]. Other research focused on snow and vegetation interactions also employed EC measurements with success [Pomeroy et al., 2003b; Turnipseed et al., 2003; Lee and Mahrt, 2004; Mahrt and Vickers, 2005a, 2005b], as did several studies of sublimation from seasonal snow covers and directly from vegetation [Nakai et al., 1999; Parviainen and Pomeroy, 2000; Molotch et al., 2007; Marks et al., 2008]. The technique has been used in fewer studies over snow under a forest canopy [Launiainen et al., 2005; Marks et al., 2008], where vegetation strongly influences snow deposition and ablation patterns and moderates meteorological conditions. Studies of understory fluxes over a snow-free forest floor found the EC method capable of accurately quantifying fluxes after thorough validation [Baldocchi et al., 2000].

[6] Most studies that employed EC systems over snow in complex terrain focused on short periods within a snow season at one site. However, analyses of data from multiple years and different site characteristics are important for comparing the magnitude, corrections, and quality of measured fluxes between years and land covers. An assessment of the influence of corrections and data quality is essential to ensure the reliability and accuracy of EC-measured flux data over snow at complex sites. This will serve to advance our understanding of snow processes and our ability to fully evaluate physically based snow models.

[7] Experiments using EC are expensive and time consuming because the systems generate large amounts of data; require more power than standard micrometeorological stations; and require complex data collection, storage, retrieval, and postprocessing routines. One way to minimize the cost of maintaining an EC system is to store only corrected covariance values calculated with a fixed averaging period within the data logger rather than storing the high-frequency (i.e., 10–20 Hz) time series data. This reduces the data volume by more than 5 orders of magnitude and extends the potential time for unattended operation but reduces options for assessing data quality and postprocessing the data after collection. Assessments of data quality and the effect of postprocessing procedures on EC-derived flux values are needed to determine the potential improvements gained by storing high-resolution data.

[8] The primary objectives of this research are (1) to examine the effects of the data processing routines used to correct raw data and (2) to demonstrate how high-frequency data can be used to assess data quality for different research applications and to further improve measurement accuracy. These objectives are accomplished through analysis of turbulent flux data over three snow seasons at two long-term mountain snow observation sites. Both interannual variability and site differences are assessed to address the following specific objectives: (1) determine the influence that data filtering and correction have on turbulent fluxes, (2) assess flux data quality on the basis of post hoc analysis of high-frequency data, and (3) assess the EC data quality in comparison to other studies. This paper is also intended to serve as a basic guide for processing eddy covariance data for snow science applications and as a reference for researchers requiring a deeper understanding of EC theory and applications. These outcomes will help researchers employing these technologies to understand the data correction and assessment procedures necessary to ensure the accuracy and reliability of EC-measured fluxes over snow. Effective application of EC over snow will improve our understanding of, and ability to accurately simulate, snow cover energetics in mountainous environments.

2. Methods

[9] This section summarizes EC theory and methodology, describes the field sites and installations used for this study, and explains the raw data correction procedures and data quality assessment methods on the basis of high temporal resolution time series data.

2.1. Eddy Covariance Theory

[10] EC uses high-frequency (10–20 Hz) measurements of wind vector and scalar quantities as the basis for flux computations. The use of EC to derive fluxes depends on several assumptions, including stationarity and homogeneity. Stationarity is defined as a condition under which the statistical properties of the flow do not change with time, and homogeneity is defined as a condition under which they do not vary in space [Kaimal and Finnigan, 1994]. These assumptions are often violated in complex terrain, reducing the reliability of the measured fluxes. Tests to determine whether the assumptions are satisfied and to determine the quality of measured fluxes are normally run on raw data, often after data correction. Relative to more homogeneous sites, the challenges of ensuring measurement accuracy in complex terrain make correction of EC-measured fluxes more critical.

[11] EC generates fluxes of an entity of interest from statistical analysis of high-frequency measured data. For example, the entity of interest can be heat and water [Arck and Scherer, 2002; Pomeroy et al., 2003a], carbon dioxide [Law et al., 2002; Baldocchi, 2003; Paw U et al., 2004], or other trace gases [Pressley et al., 2004, 2005]. As an eddy moves through a suite of sensors, high-frequency (typically 10–20 Hz) wind vector and scalar data are collected and processed to determine turbulent transport of the entity of interest. The calculation of the turbulence properties contained within an eddy is simplified through Reynolds averaging, which states that at a given point in time, an entity (s) consists of a mean value ($\bar{s}$) and a fluctuating component (s′):

$$s = \bar{s} + s' \qquad (1)$$

The mean value is taken over an appropriate averaging period, which is influenced by site characteristics and is determined through data analysis. Averaging periods between 10 and 60 min have been adopted by the atmospheric community [Lee et al., 2004]. The characteristics of an eddy are defined as the vertical velocity (w), air density (ρ), and volumetric content of an entity the eddy possesses (s). Eddy characteristics can be used to calculate a flux of a given entity (S) using Reynolds averaging, as in the following equation [Oke, 1978]:

$$S = \overline{\rho w s} \qquad (2)$$

Expansion of equation (2) yields

$$S = \overline{(\bar{\rho} + \rho')(\bar{w} + w')(\bar{s} + s')} = \bar{\rho}\bar{w}\bar{s} + \bar{\rho}\bar{w}\overline{s'} + \bar{\rho}\bar{s}\overline{w'} + \bar{w}\bar{s}\overline{\rho'} + \bar{\rho}\overline{w's'} + \bar{w}\overline{\rho's'} + \bar{s}\overline{\rho'w'} + \overline{\rho'w's'} \qquad (3)$$

The reduction of equation (3) begins with the removal of all terms containing the mean of a single primed quantity because, by definition, the mean of a fluctuation about the average is zero. Assuming constant density removes all terms containing density fluctuations and yields

$$S = \bar{\rho}\left(\bar{w}\bar{s} + \overline{w's'}\right) \qquad (4)$$

The assumption of no preferential vertical motion ($\bar{w} = 0$) further reduces equation (4) to the final form of the generalized equation:

$$S = \bar{\rho}\,\overline{w's'} \qquad (5)$$

The focus of this paper is on the measurement of sensible (H) and latent (LvE) heat fluxes, which take on a similar form to equation (5) as follows:

$$H = -\rho C_p\,\overline{w'\theta'} \qquad (6)$$
$$L_vE = -\rho L_v\,\overline{w'q'} \qquad (7)$$

where H is the sensible heat flux (W m−2), ρ is the air density (kg m−3), Cp is the specific heat of air (1005 J kg−1 K−1), $\overline{w'\theta'}$ is the time average of the instantaneous covariance of the vertical wind velocity w (m s−1) and potential temperature θ (K), LvE is the latent heat flux (W m−2), Lv is the latent heat of vaporization or of sublimation of water (J kg−1) determined as a function of air temperature, and $\overline{w'q'}$ is the time average of the instantaneous covariance of w and the specific humidity q (kg kg−1). The surface temperature of the snow is the most accurate way to determine the vapor phase at the exchange surface and hence which latent heat value to use; however, air temperature can be used as a proxy, as was done in this study. The negative signs in equations (6) and (7) denote that the fluxes are in reference to the snow cover rather than the atmosphere. Sensible heat flux is therefore positive toward the snow surface and negative toward the atmosphere, and latent heat flux is positive during deposition or condensation and negative during sublimation or evaporation.
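To make the computation concrete, the following minimal Python sketch (assuming numpy arrays over one averaging period; the function and constant names are illustrative, not from the original study) evaluates equations (6) and (7), using air temperature as the proxy for choosing the latent heat value as described above:

```python
import numpy as np

CP = 1005.0  # specific heat of air (J kg-1 K-1)

def latent_heat(t_air_c):
    """Latent heat (J kg-1): sublimation below freezing, vaporization above,
    using air temperature as a proxy for the surface vapor phase. Standard
    approximate values, not taken from this paper."""
    return 2.834e6 if t_air_c <= 0.0 else 2.501e6 - 2361.0 * t_air_c

def turbulent_fluxes(w, theta, q, t_air_c, rho):
    """Sensible (H) and latent (LvE) heat fluxes (W m-2), equations (6)-(7).
    w: vertical wind (m s-1); theta: potential temperature (K);
    q: specific humidity (kg kg-1), all sampled at 10 Hz over one averaging
    period; rho: mean air density (kg m-3)."""
    w_p = w - w.mean()                                  # w', equation (1)
    cov_wtheta = np.mean(w_p * (theta - theta.mean()))  # mean of w'theta'
    cov_wq = np.mean(w_p * (q - q.mean()))              # mean of w'q'
    H = -rho * CP * cov_wtheta      # positive toward the snow surface
    LvE = -rho * latent_heat(t_air_c) * cov_wq
    return H, LvE
```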

[12] The footprint or source area of a generated turbulent flux defines the contributing area of the EC measurement. If the contributing area is homogeneous, the underlying surface contributes equally to the flux, and the location of the sensors in relation to the surface is not an issue [Schmid, 2002]. However, at nonhomogeneous locations, such as the sites discussed in this paper, the influence of the surface depends on its proximity to the EC sensors. The footprint size and location then define the portion of the underlying surface that contributes to the EC measurements used to compute the turbulent flux. Footprint models can be used to determine the source area of a given EC sensor configuration. Footprint models are based on either Lagrangian trajectory simulations or analytical solutions of the advection-diffusion equation in an Eulerian reference frame [Schmid, 1997]. The latter is much simpler and less computationally intensive and hence more commonly used. Advection-diffusion flux footprint models are valid only for conditions where flow is not disturbed by topography; therefore, these models should be used with caution at sites characterized by complex topography and vegetation canopies [Schmid, 1997].

2.2. Site Description

[13] Data for this analysis were collected at two long-term snow mass balance measurement sites located within the Reynolds Creek Experimental Watershed (RCEW). RCEW is a research watershed located in the Owyhee Mountains approximately 80 km southwest of Boise, Idaho, United States (Figure 1). RCEW is typical of mountain ranges in the intermountain United States, which are characterized by moderate relief and seasonal snow cover approximately 4–6 months of the year. Elevations range from 1101 m above mean sea level (msl) at the outlet to 2241 m above msl. Precipitation ranges from 236 mm at the lower elevations in the northern part of the watershed to 1123 mm at the higher elevations in the southwestern portion of the watershed. The study area, Reynolds Mountain East (RME), is a small (0.38 km2) catchment in the southwestern corner of RCEW, ranging in elevation from 2028 to 2137 m above msl [Slaughter et al., 2001] (Figure 1). Average annual wind-corrected precipitation is 997 mm [Hanson, 2001], 60–90% of which falls as snow. The catchment is characterized by small patches of vegetation, with 34% of the catchment area in fir and aspen [Marks and Winstral, 2001].

Figure 1.

Reynolds Creek Experimental Watershed with inset of Reynolds Mountain East: exposed site (2094 m; –116.75813990, 43.06352896) and sheltered site (2049 m; –116.75360135, 43.06551839). The latitude and longitude are in North American Datum 27.

[14] The two EC measurement sites used for this study are located within RME. The sites are representative of the two major landscape units in the catchment and have been the focus of intensive, long-term snow observations and modeling to characterize the snow hydrology of RME. The exposed site is at a wind-exposed location (Figure 2a) and is dominated by mixed sagebrush. The sheltered site is below an aspen canopy in a small grove in a wind-protected location (Figure 2b). Neither site was ideal for flux measurements based on homogeneity, levelness, or size, but they are typical of long-term snow hydrology study sites in the mountainous west of North America, where flux measurements are needed to evaluate snow energetics calculations. The 45-year average annual wind-shielded precipitation is 557 mm at the exposed site and 832 mm at the sheltered site. The 35-year average peak snow water equivalent (SWE) is 560 mm at the sheltered site [Marks et al., 2001a].

Figure 2.

EC towers at (a) the exposed site and (b) the sheltered site.

[15] The EC sensors were located at 5 m above the ground surface at the exposed site and 4.5 m above the ground surface at the sheltered site. The footprint source area model [Schmid, 1997] was used to obtain an estimate of the source area. The results showed that the maximum source area locations were 125 and 108 m from the EC towers at the exposed and sheltered sites, respectively. The maximum source areas are characterized by topographic and vegetation structure that is similar to the respective EC tower sites. The relatively low placement of the sensors was intentional to capture the interaction between the snow surface and the atmosphere. Studies using similarly low sensor installations have been successful [Launiainen et al., 2005; Marks et al., 2008].

[16] In order to compute EC fluxes, high-frequency data must be collected with sensors characterized by fast response times and high precision. The sensors at the two EC sites include an open path infrared gas analyzer (IRGA) (LiCor 7500, LiCor Biosciences), a three-dimensional sonic anemometer (CSAT-3, Campbell Scientific), and a data logger (CR5000, Campbell Scientific) programmed to collect fast response data at 10 Hz. The IRGA is tilted slightly to shed moisture that may accumulate on the sensor window. The sensors used for this study were calibrated in the field and in the laboratory as directed by the manufacturer.

[17] To assess interannual variability, data from the 2004, 2005, and 2006 snow seasons at the exposed and sheltered sites were compared. Only data from snow cover initiation to melt out (mid-November through late April or mid-May) were analyzed for this investigation. Generally, the snow cover is established earlier and completely ablates later at the sheltered site. Available data during these periods are not serially complete, and although methods are available for gap-filling data [Baldocchi, 2003], no large gaps were filled in the resulting data sets. This was done in order to retain the focus on correction and quality assessment of measured fluxes rather than the quality of gap-filling procedures.

2.3. Data Correction

[18] Correction of 10 Hz data (hereinafter referred to as time series data) included statistical analysis for data filtering using Quality Control Software version 3.0 [Vickers and Mahrt, 1997], sonic temperature correction [Schotanus et al., 1983], air density correction [Webb et al., 1980], coordinate rotation [Kaimal and Finnigan, 1994], and determination of the appropriate averaging period [Vickers and Mahrt, 2003]. Summary data based on 30-min averaging periods were also stored directly in the data loggers. Steps used to filter and correct raw 10 Hz and summary flux data are shown graphically in Figure 3 and are discussed below, with section headings corresponding to the boxes in Figure 3.

Figure 3.

Schematic representation of the data correction procedure applied to both 30-min summary and high-frequency time series data.

2.3.1. Data Filtering

[19] Minor gap filling and filtering of raw 10 Hz data are necessary to fill or remove occasional spurious data points and hence increase overall data quality prior to subsequent correction procedures. If less than 1% of an hour of data were missing, these values were replaced using a linear interpolation based on surrounding points. If more than 1% of an hour of data were missing, these data values were not used in the statistical calculations. The Quality Control Software [Vickers and Mahrt, 1997] included tests for data spikes, identification of values outside of absolute limits, skewness, kurtosis, discontinuities, and absolute variance to identify periods of questionable data. While data values that were outside of reasonable bounds for each test could be physically plausible under certain conditions, for consistency, they were not used for this study. Removal of a small portion of the data because of quality concerns results in a more robust calculation of fluxes uncontaminated by potentially spurious raw data. Summary data were filtered for extreme values.
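As an illustration of this step, the sketch below (hypothetical helper names; this is not the Quality Control Software itself) applies the 1% gap-filling rule and a simple absolute-limits test of the kind described above:

```python
import numpy as np

def fill_short_gaps(x, max_missing_frac=0.01):
    """Linearly interpolate missing 10 Hz samples when less than 1% of the
    hour is missing; otherwise return None so the hour is excluded."""
    missing = np.isnan(x)
    if missing.mean() >= max_missing_frac:
        return None
    filled = x.copy()
    idx = np.arange(x.size)
    filled[missing] = np.interp(idx[missing], idx[~missing], x[~missing])
    return filled

def outside_absolute_limits(x, lo, hi):
    """Boolean mask for values outside plausible physical bounds, one of
    several screening tests (spikes, skewness, kurtosis, discontinuities)."""
    return (x < lo) | (x > hi)
```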

2.3.2. Wind Direction Filtering

[20] Wind direction filtering is necessary to remove turbulence contamination from the tower, sensor boom, and sensors when wind originates behind the sensor tower. The predominant wind direction is approximately 215°, so the sensor boom was oriented toward 215°. Data collected with wind originating from the quadrant opposite the orientation of the boom (i.e., between 350° and 80°) were therefore removed from the data set.
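A minimal sketch of this filter (illustrative name; the sector bounds are those given above) might look like:

```python
def in_contaminated_sector(direction_deg, lo=350.0, hi=80.0):
    """True where wind originates behind the tower, between 350 and 80
    degrees (wrapping through north); such records are discarded."""
    d = direction_deg % 360.0
    return (d >= lo) | (d <= hi)
```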

2.3.3. Air Density Correction

[21] When sensible heat flux occurs, the density of air, and hence the concentration of water vapor in air, changes simultaneously, potentially producing errors in EC-measured latent heat fluxes. To address this potential source of error, the covariance of water vapor was corrected for air density effects using the widely accepted Webb, Pearman, and Leuning (WPL) correction [Webb et al., 1980]:

$$E = \left(1 + \mu\frac{\bar{\rho}_v}{\bar{\rho}_a}\right)\left(E_0 + \frac{\bar{\rho}_v}{T_a}\,\overline{w'T'}\right) \qquad (8)$$

where E is the WPL-corrected H2O flux (kg m−2 s−1), μ is the ratio of the molar mass of dry air to that of water (μ = 1.6077), $\bar{\rho}_a$ is the mean dry air density (kg m−3), $\bar{\rho}_v$ is the mean water vapor density (kg m−3), E0 is the uncorrected H2O flux (kg m−2 s−1), Ta is the ambient air temperature (K), and $\overline{w'T'}$ is the time average of the instantaneous covariance of the vertical wind velocity and air temperature (K m s−1). The original presentation of the WPL correction used water vapor density instead of specific humidity; hence, water vapor density is used here for consistency with the published literature. Liebethal and Foken [2003] found that the WPL correction had more influence on the CO2 flux (20–30% of the flux) but was also significant in the computation of latent heat flux (2–3% of the flux). A revision to the WPL theory was put forth by Leuning [2004] for nonsteady conditions, and an alternative form of WPL was recently published [Liu, 2005] but was found to be incorrect [Leuning, 2007]. The standard WPL correction was applied to the data for this study.
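A direct transcription of equation (8) into Python (illustrative function name; inputs as defined above) could read:

```python
def wpl_corrected_flux(e0, rho_v, rho_a, t_a, cov_wt, mu=1.6077):
    """WPL-corrected water vapor flux E (kg m-2 s-1), equation (8).
    e0: uncorrected flux (kg m-2 s-1); rho_v, rho_a: mean water vapor and
    dry air densities (kg m-3); t_a: ambient air temperature (K);
    cov_wt: covariance of vertical wind and air temperature (K m s-1)."""
    return (1.0 + mu * rho_v / rho_a) * (e0 + (rho_v / t_a) * cov_wt)
```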

2.3.4. Sensor Heating

[22] Infrared gas analyzers may warm relative to surrounding cooler air, and the resulting temperature variation can lead to an artificially high sensible heat flux between the sensor and the atmosphere [Grelle and Burba, 2007; Burba et al., 2008]. These temperature variations can affect the measurement of latent heat fluxes, whereas sensible heat fluxes that are measured directly by the sonic anemometer are unaffected. The correction has more influence on CO2 fluxes compared to the water vapor fluxes because of the small ratio of density variations relative to the mean for CO2 compared to the high ratio of density variations relative to the mean for water vapor [Webb et al., 1980]. The correction for the water vapor flux for all ambient conditions is expected to be less than 3 W m−2 and for most typical field conditions less than 1 W m−2 (G. G. Burba, personal communication, 2008).

[23] For the sensor heating correction, Grelle and Burba [2007] recommend either measuring temperature within the path between the base and head of the open path IRGA with specially constructed fine-wire thermometers or coupling theoretical corrections with directly measured instrument surface temperatures. Modeled instrument surface temperatures were the only surface temperatures available for this study, as fine-wire thermometers and measured instrument temperatures were unavailable. Vertical orientation of the gas analyzer is also recommended when modeled instrument surface temperatures are used. In this study, however, sensors were deployed with a slight tilt to reduce moisture accumulation on the window surface of the gas analyzer. Although the mounting position differed slightly and direct temperature measurements were unavailable, the heating correction for the latent heat flux was calculated following the method described by Burba et al. [2008] to assess the potential effect on computed fluxes.

2.3.5. Vertical Wind Velocity

[24] Eddy covariance theory is based on the premise of no preferential vertical motion. Corrections to vertical wind velocity measurements are made to remove vertical motion that may result from sensor tilt, sloping topography, or flow distortion [Vickers and Mahrt, 2006]. Two rotation schemes were used on the time series data: double rotation [Kaimal and Finnigan, 1994] and tilt correction [Mahrt et al., 2000]. The double rotation sets $\bar{v} = 0$ through rotation about the z axis and $\bar{w} = 0$ through rotation about the y axis. Kaimal and Finnigan [1994] also describe a third rotation, but more recent research warns against using it [Finnigan, 2004]. Finnigan [2004] suggests aligning the z axis of the anemometer as close as possible to perpendicular to the underlying surface and applying the first two rotations. The tilt correction method [Mahrt et al., 2000] bins the wind data on the basis of wind direction to determine a tilt angle for each direction bin. The tilt angle for each wind direction was applied to the data regardless of wind speed in order to remove mean vertical motion from the data. The planar fit approach [Wilczak et al., 2001], a special case of the tilt correction, is another available rotation method but was not tested for this study. The double rotation and tilt correction were applied and evaluated to determine which would be used for the final data correction.
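For reference, a compact sketch of the double rotation (assuming numpy arrays of the three wind components over one averaging period; names are illustrative):

```python
import numpy as np

def double_rotation(u, v, w):
    """Double rotation [Kaimal and Finnigan, 1994]: rotate about the z axis
    so the mean crosswind is zero, then about the new y axis so the mean
    vertical velocity is zero."""
    alpha = np.arctan2(v.mean(), u.mean())       # yaw angle
    u1 = u * np.cos(alpha) + v * np.sin(alpha)
    v1 = -u * np.sin(alpha) + v * np.cos(alpha)  # mean(v1) = 0
    beta = np.arctan2(w.mean(), u1.mean())       # pitch angle
    u2 = u1 * np.cos(beta) + w * np.sin(beta)
    w2 = -u1 * np.sin(beta) + w * np.cos(beta)   # mean(w2) = 0
    return u2, v1, w2
```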

2.4. Data Quality Assessments

[25] For installations where high-frequency data can be stored, an assessment of data quality after data collection can be undertaken. Recent advances in data loggers that use removable flash memory, coupled with decreases in cost per unit of memory, enable quality assessments and postprocessing of data from remote sites where previously only longer-term summary data could be stored. Such assessments can be used to improve the accuracy of measurements by adjusting data collection rates and averaging periods and to inform the appropriate uses of data on the basis of quality metrics. A schematic diagram of data quality and postprocessing procedures for high-frequency flux measurements is shown in Figure 4.

Figure 4.

Schematic representation of the data quality assessment framework applied to high-frequency time series data.

[26] Covariances were generated from high-frequency measurements collected at 10 Hz using the 2ndGen software (D. Vickers, personal communication, 2004). The covariances of interest for this research were $\overline{w'\theta'}$ and $\overline{w'q'}$, which were used to compute sensible and latent heat fluxes, respectively. The value $\overline{w'\theta'}$ was calculated from the buoyancy flux $\overline{w'T_v'}$ and $\overline{w'q'}$ as follows:

$$\overline{w'\theta'} = \overline{w'T_v'} - 0.51\,\bar{T}\,\overline{w'q'} \qquad (9)$$

where $\overline{w'T_v'}$ is the time average of the instantaneous covariance of w and the virtual temperature Tv (K) and $\bar{T}$ is the mean air temperature (K). Fluxes of sensible and latent heat were calculated using equations (6) and (7) as described in section 2.1. 2ndGen generates covariances for three different averaging periods: 100 s, 5 min, and 10 min. Although not practical to implement operationally, the optimal averaging period may vary for different conditions. Time scales are loosely related to stability: a longer subrecord time scale is associated with neutral or unstable conditions, and a shorter time scale is associated with more stable conditions [Vickers and Mahrt, 1997]. The covariances for each averaging period (100 s, 5 min, and 10 min) were used to calculate sensible and latent heat fluxes, which were then averaged over a flux averaging period of 1 h to reduce random sampling errors.
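The two steps described here, converting the sonic buoyancy flux via equation (9) and averaging subrecord covariances up to 1 h, can be sketched as follows (illustrative names; this is not the 2ndGen code):

```python
import numpy as np

def w_theta_from_buoyancy(cov_wtv, cov_wq, t_mean):
    """Heat flux covariance w'theta' from the buoyancy flux w'Tv' and the
    moisture covariance w'q', equation (9); t_mean is mean air temp (K)."""
    return cov_wtv - 0.51 * t_mean * cov_wq

def hourly_mean_covariance(w, s, subrecord_len):
    """Covariance of w and s over consecutive subrecords (e.g., 6000 samples
    for 10 min at 10 Hz), averaged over the hour to reduce sampling error."""
    covs = []
    for i in range(w.size // subrecord_len):
        sl = slice(i * subrecord_len, (i + 1) * subrecord_len)
        wp = w[sl] - w[sl].mean()
        sp = s[sl] - s[sl].mean()
        covs.append(np.mean(wp * sp))
    return float(np.mean(covs))
```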

2.4.1. Data Collection Rate

[27] One concern, even with high-frequency measurements, is that some very high frequency flux may not be adequately sampled. Multiresolution decomposition (MRD) or Haar decomposition is a method that can be used to assess whether the data collection rate is appropriate. This method orthogonally decomposes geophysical variables by averaging time series using different temporal averaging periods [Howell and Mahrt, 1997]. MRD, unlike Fourier cospectra, uses a simple nonoverlapping moving average and thus satisfies Reynolds averaging. Analysis of scale dependence via MRD determines if the measurement frequency captures the scale of the fluxes. In addition, if the MRD at the smallest time scales is small compared to the peak, high-frequency flux may be considered negligible. If high-frequency flux is apparent, researchers can increase data collection rates (although 20 Hz is typically the practical maximum) or qualify data to indicate the measurement accuracy. MRD was used in this study to examine time scale dependence of the EC-generated fluxes and to verify the data collection rate.
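A minimal MRD sketch for two jointly sampled series follows (record length must be a power of 2; an illustrative implementation of the Haar decomposition described by Howell and Mahrt [1997], not the authors' code). Plotting D[m] against the window length 2**m / 10 s reproduces cospectra in the style of Figures 5 and 6:

```python
import numpy as np

def mrd_cospectrum(w, s):
    """Multiresolution cospectrum D[m] for averaging windows of 2**m samples.
    Uses nonoverlapping segment means, so the sum over all scales equals the
    full-record covariance, satisfying Reynolds averaging."""
    n = w.size
    M = int(np.log2(n))
    assert n == 2 ** M, "record length must be a power of 2"
    w = w - w.mean()
    s = s - s.mean()
    D = np.zeros(M + 1)
    for m in range(M, 0, -1):
        seg = 2 ** m                              # window length at scale m
        wm = w.reshape(-1, seg).mean(axis=1)      # segment means
        sm = s.reshape(-1, seg).mean(axis=1)
        D[m] = np.mean(wm * sm)                   # cospectral contribution
        w = w - np.repeat(wm, seg)                # remove means, go finer
        s = s - np.repeat(sm, seg)
    D[0] = np.mean(w * s)                         # finest (single-sample) scale
    return D
```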

2.4.2. Averaging Period Determination

[28] Determination of the appropriate averaging period is important to ensure that the period is long enough to achieve statistical stationarity but short enough to exclude fluxes associated with mesoscale motion. Berger et al. [2001] found that a longer averaging period is necessary as measurement height increases because larger eddies make significant contributions to the total flux. A longer averaging period, however, increases the probability of including nonstationary data and mesoscale motion. The atmospheric community has adopted averaging periods that range from 10 to 60 min [Lee et al., 2004].

[29] The assumption of stationarity (statistics that do not vary in time) is one of the critical assumptions of EC theory. Problems with inhomogeneity (statistics that vary in space) can manifest themselves as violations of stationarity [Foken and Wichura, 1996]. Inhomogeneity is often difficult to avoid in complex terrain but was minimized in this study by orienting the sensors into the dominant wind direction and maximizing fetch. The test for stationarity is accomplished by dividing time series data based on a longer averaging period into smaller intervals [Foken and Wichura, 1996]. The percent difference between the covariance from the longer averaging period and the mean of the covariances from the shorter intervals is then calculated; a percent difference of less than 30% is considered an indication of acceptable stationarity [Foken and Wichura, 1996], as illustrated in the sketch below.
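A minimal sketch of this test (illustrative names; a 30-min record split into six 5-min subrecords, following Foken and Wichura [1996]):

```python
import numpy as np

def stationarity_deviation(w, s, n_sub=6):
    """Percent difference between the full-record covariance and the mean of
    n_sub subrecord covariances; values below 30% indicate stationarity.
    Note that very small full-record covariances inflate this metric, which
    is why small covariances were filtered in section 3.3.2."""
    def cov(a, b):
        return np.mean((a - a.mean()) * (b - b.mean()))
    full = cov(w, s)
    subs = [cov(a, b) for a, b in
            zip(np.array_split(w, n_sub), np.array_split(s, n_sub))]
    return 100.0 * abs((np.mean(subs) - full) / full)
```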

[30] The cospectral gap can also be used to determine the appropriate averaging period. The MRD can be used to identify the cospectral gap, defined as the time scale that separates the turbulent scales of the cospectra from mesoscale motion. Evidence of the cospectral gap is more readily discernible for stable conditions. Under these conditions, the effect of mesoscale motion is more important because the inclusion of this motion can drastically change the magnitude and potentially the sign of the calculated fluxes relative to unstable conditions [Vickers and Mahrt, 2006]. Furthermore, mesoscale motion is often poorly sampled and potentially degrades similarity relationships used to calculate fluxes via Monin-Obukhov similarity theory. Because EC flux data may be used to evaluate snow cover turbulent flux simulations on the basis of similarity theory, the inclusion of mesoscale motion is inappropriate, and hence, this correction is important to ensure data quality.

[31] Because conditions over snow are typically stable during melt and the cospectral gap is more readily discernible under stable conditions, the cospectral gap in the heat flux MRD for stable conditions was used to determine the appropriate averaging period. One limitation is that the result may not apply under unstable conditions; however, the inclusion of mesoscale motion is less of a concern then because the turbulent flux is typically larger. Furthermore, given that the melt phase is of the greatest interest for snow research and that conditions are generally stable during melt, this was not considered a problem.

2.4.3. Turbulence Quality

[32] To determine the overall quality of the EC-measured fluxes, an assessment of turbulence quality is required to determine how well the assumptions of EC theory are satisfied. Data quality was further assessed using integral turbulence characteristics and correlation coefficients, which are both based on similarity theory [Foken and Wichura, 1996; Foken et al., 2004]. These metrics indicate the degree of turbulence development and are compared against either a constant value or a function of stability (Table 1) [Foken et al., 2004]. Stability is determined by z/L (where z is the measurement height and L is the Monin-Obukhov scaling length), with z/L < 0 indicating unstable conditions, z/L = 0 indicating neutral conditions, and z/L > 0 indicating stable conditions. Integral turbulence characteristics, also referred to as flux variance similarity, are defined as the ratio of the standard deviation of a parameter to a flux-based scaling variable. For example, the integral turbulence characteristic for vertical velocity is defined as σw/u*, where σw is the standard deviation of the vertical wind velocity (m s−1) and u* is the friction velocity (m s−1).

Table 1. Typical Values of the Integral Turbulence Characteristic σw/u* and the Correlation Coefficient ruw Used for Comparison to Measured Values(a)

Reference                    Integral Turbulence Characteristic   Correlation Coefficient
Foken et al. [2004]          1.3 (0 > z/L > −0.032)               NA
Foken et al. [2004]          2.0(z/L)^(1/8) (z/L < −0.032)        NA
Kaimal et al. [1990]         NA                                   −0.3
Kaimal and Finnigan [1994]   NA                                   −0.35
Arya [2001]                  NA                                   −0.15

(a) NA means not applicable.

[33] The correlation coefficient for momentum is defined as follows:

$$r_{uw} = \frac{\overline{u'w'}}{\sigma_u \sigma_w} \qquad (10)$$

where ruw is the correlation coefficient of the horizontal (u) and vertical (w) wind velocity, $\overline{u'w'}$ is the covariance between the horizontal and vertical wind velocity (m2 s−2), and σu is the standard deviation of the horizontal wind velocity (m s−1).

[34] Typical values for both integral turbulence characteristics and correlation coefficients used for comparison to measured values are provided in Table 1. The values shown in Table 1 were derived primarily from analysis of data collected during a meteorological experiment in Kansas [Izumi, 1971]. Well-developed turbulence can be assumed when the test parameter ITCσ, defined as the percent difference between the typical value of the integral turbulence characteristic and the measured value, is less than 30%. Under very weak turbulence, methods based on surface layer similarity may not be valid [Foken et al., 2004]. When sensors are located in close proximity to the ground, as in this study, the integral turbulence characteristics may not yield values similar to those previously published [Johansson et al., 2001; Högström et al., 2002], which were based on data collected at approximately 22 m above the ground [Businger et al., 1971]. Analysis of integral turbulence characteristics and correlation coefficients permits comparison to other studies; however, values outside of previously established ranges are not by themselves cause to remove data points.
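As an illustration, the two turbulence quality metrics can be computed as follows (a simplified sketch with illustrative names: u* is approximated from the along-wind stress after rotation, and only the Table 1 stability classes are modeled):

```python
import numpy as np

def cov(a, b):
    return np.mean((a - a.mean()) * (b - b.mean()))

def itc_sigma_deviation(u, w, z_over_L):
    """Percent deviation of measured sigma_w/u* from the Table 1 value;
    ITC_sigma < 30% indicates well-developed turbulence [Foken et al., 2004].
    u* is simplified here to sqrt(|u'w'|), assuming v'w' is negligible
    after rotation into the mean wind."""
    u_star = abs(cov(u, w)) ** 0.5
    measured = np.std(w) / u_star
    if -0.032 < z_over_L < 0:
        model = 1.3
    elif z_over_L <= -0.032:
        model = 2.0 * abs(z_over_L) ** 0.125
    else:
        return None   # Table 1 gives no model value for stable conditions
    return 100.0 * abs((model - measured) / model)

def r_uw(u, w):
    """Correlation coefficient for momentum, equation (10)."""
    return cov(u, w) / (np.std(u) * np.std(w))
```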

2.4.4. Data Quality Assessment Scheme

[35] Overall data quality assessment is important for research employing EC because it provides guidance for the appropriate use of the data. The tests for stationarity and well-developed turbulence were combined in a data assessment scheme adopted from Rebmann et al. [2005] (Table 2; a minimal flag-assignment sketch follows the table). Data quality for the momentum, sensible heat, and latent heat fluxes was determined by a stationarity test for each flux and by comparison of σw/u* to typical values [Foken et al., 2004]. This ranking system classifies the data as high, moderate, or low quality. High-quality data are considered usable for fundamental research, moderate-quality data are usable for long-term observation studies, and low-quality data should be removed or gap filled [Mauder and Foken, 2004].

Table 2. Quality Flags and Potential Use of Data Defined by a Combination of the Test for Stationarity and Well-Developed Turbulence Using Integral Turbulence Characteristics(a)

Stationarity   Integral Turbulence Characteristics   Quality Flag   Potential Use
0–30           0–30                                  1              high-quality data, use in fundamental research possible
0–30           >30–75                                2              high-quality data, use in fundamental research possible
>30–75         >30–75                                3              moderate-quality data, use in long-term observation programs
>30–75         >75–250                               4              moderate-quality data, use in long-term observation programs
>75            >250                                  5              low-quality data, removed

(a) Deviation is given in percent.
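The flag assignment in Table 2 reduces to a straightforward mapping from the two percent deviations; a minimal sketch (illustrative name) is:

```python
def quality_flag(stationarity_dev, itc_dev):
    """Table 2 quality flag from the stationarity and integral turbulence
    percent deviations; returns None for pairings outside the five listed
    categories (the 'no category' data in Table 5)."""
    if stationarity_dev <= 30 and itc_dev <= 30:
        return 1    # high quality
    if stationarity_dev <= 30 and 30 < itc_dev <= 75:
        return 2    # high quality
    if 30 < stationarity_dev <= 75 and 30 < itc_dev <= 75:
        return 3    # moderate quality
    if 30 < stationarity_dev <= 75 and 75 < itc_dev <= 250:
        return 4    # moderate quality
    if stationarity_dev > 75 and itc_dev > 250:
        return 5    # low quality, removed
    return None
```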

3. Results and Discussion

3.1. Data Summary

[36] The mean wind speed for the three selected snow seasons averaged 4.9 m s−1 at the exposed site and 1.2 m s−1 at the sheltered site; the mean at the exposed site was thus roughly 4 times that at the sheltered site. Mean snow season air temperatures at both sites were generally similar during the years studied, with 3-year snow season means of −1.9 and −1.6°C at the exposed and sheltered sites, respectively. A wider range of temperatures was measured at the sheltered site than at the exposed site, with mean diurnal air temperature ranges of 5.4 and 7.0°C at the exposed and sheltered sites, respectively.

[37] Overall, the exposed site yielded measured sensible and latent heat fluxes that were 5 and 2 times the magnitude of those at the sheltered site, respectively. Mean snow season sensible heat flux was approximately 20 and 4 W m−2 and mean latent heat flux was approximately −10 and −5 W m−2 at the exposed and sheltered sites, respectively. At the exposed site, maximum sensible heat flux measurements were approximately 120 W m−2, while more typical midday values were 40–70 W m−2. Latent heat fluxes at both sites exhibited greater seasonality relative to sensible heat fluxes. In the early part of the snow season, typical midday latent heat flux values at the exposed site were less than −30 W m−2, whereas they were nearly twice that magnitude later in the season.

3.2. Data Correction

3.2.1. Data Quality Filtering

[38] Violations of the statistical tests in the Quality Control Software [Vickers and Mahrt, 1997] occurred in 0.9 and 0.3% of the wind data at the exposed and sheltered sites, respectively. The larger percentage of violations in the wind data at the exposed site relative to the sheltered site was due mainly to violations of absolute limits and large discontinuities. Violations of the statistical tests occurred in 0.9 and 0.5% of the temperature data and 3.0 and 4.5% of the specific humidity data for the exposed and sheltered sites, respectively. The larger percentage of violations in the temperature data at the exposed site was due mainly to large data spikes. Violations in the specific humidity data were likely due to snow or moisture deposited on the measurement surface of the IRGA used to measure specific humidity. Moisture on the sensor was more readily removed at the exposed site because of exposure to solar radiation and wind and hence produced fewer violations than at the sheltered site. Violations of data quality tests were consistent across the three snow seasons and exhibited little interannual variability.

3.2.2. Wind Direction Filtering

[39] Approximately 5 and 3% of the data collected at the exposed and sheltered sites, respectively, were removed because of wind direction filtering. In comparison, missing data due to sensor failure or battery problems was 31.2% in 2004, 21.5% in 2005, and 14.2% in 2006.

3.2.3. Air Density Correction

[40] The WPL correction for air density effects reduced the mean latent heat flux over the three snow seasons by 4.9% (range of 3.5–6.2%) at the exposed site and by 2.7% (range of 1.0–4.3%) at the sheltered site (Table 3). At the exposed site, the monthly difference between uncorrected and WPL-corrected latent heat flux ranged from less than 0.5 to 1.0 W m−2 for all months analyzed. At the sheltered site, the monthly difference was less than 0.5 W m−2 during February, March, and April and ranged from less than 0.5 to 2.0 W m−2 in December and January.

Table 3. Summary of Corrections to Sensible and Latent Heat Flux at the Exposed and Sheltered Sites(a)

Correction                           Exposed Site (%)   Sheltered Site (%)
WPL correction (latent heat flux)    4.9                2.7
Sensor heating (latent heat flux)    <0.2               <0.2
Rotation (sensible heat flux)        1                  3
Rotation (latent heat flux)          2                  1

(a) Values are the difference between corrected and uncorrected fluxes.

[41] The slightly larger uncorrected latent heat flux values would have accumulated to a relatively small overestimation of sublimation by 0.3–2.1 mm per season (∼0.01 mm d−1). Considering the seasonal sublimation as a percentage of the total precipitation, the WPL correction had little influence. At the exposed site, during the dry 2005 snow season, precipitation was 284 mm, and the WPL-corrected fluxes result in seasonal sublimation of 17.3% of precipitation compared to 18% with uncorrected data. Although these differences were not large, the WPL correction is a physically meaningful and widely accepted correction step, so this correction was used in the final calculation of the fluxes.

3.2.4. Sensor Heating

[42] The correction due to sensor heating [Burba et al., 2008] resulted in an addition of less than 1 W m−2 (<0.2%) to the latent heat flux at both sites and during the three snow seasons studied (Table 3). The sensor heating correction was on the same order of magnitude as the WPL correction for moisture. This correction was not applied in the final calculation of fluxes because it is intended for vertically oriented sensors, is not widely used by the research community at the present time, and has a very small effect on the latent energy fluxes.

3.2.5. Vertical Wind Velocity

[43] Prior to rotation, the average vertical wind velocities were very similar at both sites for the years analyzed. The average vertical wind velocity was 0.06, 0.03, and 0.04 m s−1 for the winter seasons of 2004, 2005, and 2006, respectively. The double rotation [Kaimal and Finnigan, 1994] yielded average vertical wind velocities of 0 at both sites for the years analyzed. The tilt correction [Vickers and Mahrt, 2003] yielded average vertical wind velocities with larger variances and a range from 0.02 to −0.06 m s−1 at the exposed site and from −0.04 to −0.08 m s−1 at the sheltered site. Tilt-corrected data are often filtered on the basis of a threshold horizontal wind velocity prior to application [Mahrt et al., 2000]; however, this was not done because of the low horizontal wind velocities at the sheltered site and an interest in applying consistent corrections at both sites. This may explain why the tilt-corrected average vertical wind velocities were of the same order of magnitude as the uncorrected vertical wind velocities. Double rotation was used in the final calculation of the corrected fluxes.

[44] Coordinate rotation slightly modified the magnitude of snow season fluxes (Table 3). Double rotation at the exposed site increased the magnitude of the sensible and latent heat fluxes by 1 and 2%, respectively. At the sheltered site, the sensible and latent heat fluxes were reduced after rotation by 3 and 1%, respectively. The influence of rotation at monthly time scales was greater, particularly at the sheltered site. For example, at the sheltered site, double rotation reduced February sensible heat fluxes by approximately 12% but increased March fluxes by 2%. At the exposed site, by contrast, the same rotation adjustment left February sensible heat fluxes unchanged and increased March fluxes by 2%.

[45] The sheltered site exhibited a wider range of response (both increase and reduction in fluxes) to rotation than the exposed site because of the topographic and canopy complexity of the location. Despite this larger response to corrections, the low wind speeds at this site produce small snow season fluxes; hence, rotation had less influence on the magnitude of the calculated fluxes relative to the exposed site.

3.3. Data Quality

3.3.1. Data Collection Rate

[46] MRD was used to determine the time scale at which most of the flux originated and to determine whether high-frequency flux was being lost because of the chosen collection rate of 10 Hz. Figure 5 shows the sensible and latent heat flux cospectra. The cospectral peak occurred at time scales much longer than the smallest resolved time scale (0.1 s at 10 Hz), and cospectral values at the smallest time scale were small relative to the peak, so high-frequency flux loss is most likely negligible. Most of the flux comes from motions associated with the time scale of the peak values; therefore, a data collection rate of 10 Hz at both sites was appropriate.

Figure 5.

(a) Cospectra of vertical wind velocity and temperature and (b) cospectra of vertical wind velocity and specific humidity generated from multiresolution decomposition for all conditions during the snow season of 2006 at the exposed site.

3.3.2. Stationarity and Averaging Period Determination

[47] Stationarity values based on 5-, 10-, and 30-min averaging periods for momentum, heat, and moisture were determined for 2004, 2005, and 2006. Turbulent fluxes over snow are relatively small, and division by similarly small values frequently yields large percent differences; therefore, additional filtering to exclude extremely small covariances was applied. Excluding these very small covariance values modified the seasonal averages by approximately 1 W m−2 but provided a better assessment of overall data quality by including only values that account for the majority of the total flux. The percent of stationary values for both the unfiltered and filtered fluxes is given in Table 4 to show the effect of filtering extremely small covariances. There was little interannual variability in the stationarity values; hence, averages of the three snow seasons are presented in Table 4. Removal of small covariances influenced the data from the sheltered site more than the exposed site because of a greater proportion of small fluxes at the sheltered location. This filtering similarly had a greater influence on the latent heat fluxes than on the sensible heat fluxes because the latent heat fluxes were smaller. Considerably more unfiltered data were stationary at the exposed site than at the sheltered site; however, there was little difference between the sites for the filtered data. The sheltered site is within an aspen stand characterized by lower wind speeds and short time-scale gusts, which likely explains these differences. The filtered data indicate that the majority of momentum, heat, and moisture values were stationary, with over 90% stationary values for moisture.

Table 4. Average Percent of Stationary Snow Season Measurements From 2004, 2005, and 2006 for Momentum, Heat, and Moisture at the Exposed and Sheltered Sites for 5-, 10-, and 30-min Averaging Periods(a)

Exposed Site
Averaging Period   Momentum (%)   Heat (%)   Moisture (%)
5 min              39 (55)        50 (75)    42 (94)
10 min             43 (58)        53 (77)    44 (94)
30 min             15 (41)        70 (90)    58 (96)

Sheltered Site
Averaging Period   Momentum (%)   Heat (%)   Moisture (%)
5 min              30 (60)        29 (77)    26 (96)
10 min             35 (63)        32 (77)    28 (96)
30 min             21 (63)        37 (80)    34 (94)

(a) Values are the percent of measurements with a stationarity percent difference between 0 and 30%. Values in parentheses include an additional filter excluding small covariances.

[48] Use of 10-min averaging periods for momentum, heat, and moisture either increased (by up to 3%) or had no effect on the percent of stationary values relative to 5-min averaging periods as shown by the filtered data values in Table 4. Use of 30-min compared to 10-min averaging periods increased the stationary values for heat at the exposed site by 13% but had a very small effect for heat at the sheltered site (+3%) and for moisture at both sites (±2%). The difference in averaging periods for filtered momentum flux values was negligible at the sheltered site but caused the percent stationary values for the exposed site to decline by 17%. Similar large declines between averaging periods are also evident for the unfiltered values. This reduction in stationarity for momentum may signal the potential degradation of similarity relationships at a 30-min averaging period through the inclusion of mesoscale motion.

[49] A change in sign of the heat flux MRD identifies the time scale of the cospectral gap, which separates small-scale turbulence from mesoscale motion, and is one tool available to support the determination of an appropriate averaging period. The MRD for the exposed site suggested a slight change in sign between 6.6 and 13.3 min (shown by the solid circle in Figure 6a), while at the sheltered site (Figure 6b) the shift from negative to positive was inconclusive but somewhat suggestive of a gap around 13.3 min. The MRD for the sheltered site showed little to no variability with time scale compared to the exposed site because of lower wind speeds and hence smaller calculated fluxes.

Figure 6.

Sensible heat flux multiresolution decomposition (MRD) from (a) the exposed site and (b) the sheltered site from the snow season of 2006 for stable conditions. The solid circles suggest the approximate time scale of the cospectral gap.

[50] The appropriate averaging period was determined from the tests for stationarity with supporting evidence from the analysis of the heat flux MRD. The percentage of momentum, heat, and moisture values that were stationary increased slightly from a 5-min to a 10-min averaging period (Table 4). Although stationarity for heat at the exposed site improved with a 30-min averaging period, there was a clear degradation in stationarity for momentum. The MRD analysis likewise suggested an averaging period of roughly 10 min for both sites; on the basis of the combined analyses, a 10-min averaging period was selected for both sites. The finding of a relatively short averaging period may also reflect the relatively low sensor placement and predominantly stable conditions.

3.3.3. Turbulence Quality

[51] For the three snow seasons analyzed, the average integral turbulence characteristics for momentum were 1.4 and 1.6 for neutral conditions (defined as 0 > z/L > −0.032) and 2.0 and 2.6 for unstable conditions (defined as z/L < −0.032) at the exposed and sheltered sites, respectively. Panofsky and Dutton [1984] reported integral turbulence characteristics for momentum of 1.24 for mountain terrain and between 1.1 and 1.4 for level terrain under neutral conditions. Though slightly larger than those reported by Panofsky and Dutton [1984], the measured values were similar. There was little difference in measured integral turbulence characteristics between the three snow seasons studied. Well-developed turbulence, as indicated by ITCσ < 30%, occurred only approximately 35% of the time, with most of the data outside of the well-developed turbulence range. This finding was not unexpected given the nature of the surface (i.e., typically colder than the air) and the predominantly stable surface conditions.

[52] The mean correlation coefficient for momentum ruw during winter conditions was −0.17 at the exposed site and −0.16 at the sheltered site. These values were consistently smaller in magnitude than typical values of −0.3 [Kaimal et al., 1990], −0.32 [Hicks, 1981], and −0.35 [Kaimal and Finnigan, 1994] but closer to the value of −0.15 [Arya, 2001] observed in other studies. Roughly 40% of the measured values were within the range between −0.15 and −0.35, which was consistent with the integral turbulence characteristics assessment. Though smaller than most published values, these values were within the expected range, and well-developed turbulence can be assumed for less than half of the data points from both sites. Horizontal standard deviations tend to increase in larger proportion than vertical wind velocities closer to the ground because of the blocking effect on the flow [Johansson et al., 2001; Högström et al., 2002]. This could account for the smaller measured values of ruw relative to the published values.

3.3.4. Data Quality

[53] The combination of tests for stationarity and well-developed turbulence allowed for a quality assessment of the postprocessed fluxes (Table 5). This assessment reflects the temporal character of the turbulence rather than the mean values discussed in the previous sections. The data quality assessment for stationarity (excluding small covariance values) and well-developed turbulence indicated high data quality (quality flags 1 and 2) for 58, 76, and 92% of the data at the exposed site and 60, 74, and 94% of the data at the sheltered site for momentum, heat, and moisture, respectively. The flagging for high quality was dominated by the test for stationarity, as the majority of the integral turbulence data showed ITCσ values less than 75%. Quality flags 1 and 2 are deemed of sufficient quality for fundamental research [Rebmann et al., 2005]. Approximately 1–18% of the data were categorized as moderate quality, and less than 1% were categorized as low quality. Because of the categorization pairings, 5–24% of the data did not fall into one of the five specified categories. Little difference in data quality was evident between sites once small covariances were removed from the stationarity analysis. These results are encouraging: the latent heat flux data, which are of greatest interest for snow research, were typically of high quality, and the sensible heat flux data were also dominated by high-quality values.

Table 5. Percent of the Snow Season Data Categorized by Quality Flags as a Combination of Tests for Stationarity and Well-Developed Turbulence Using Integral Turbulence Characteristics for the Exposed and Sheltered Sites for Momentum, Heat, and Moisture

               Exposed Site                               Sheltered Site
Flag           Momentum (%)   Heat (%)   Moisture (%)     Momentum (%)   Heat (%)   Moisture (%)
No category    24             11         5                24             15         5

3.3.5. Data Quality Comparison

[54] These findings compare well to results from Rebmann et al. [2005], who evaluated 18 CARBOEUROFLUX sites for data quality using the same rating system presented in Table 2. High-quality data comprised 86% of the values on average for all 18 sites studied. The range of high-quality momentum data for the 18 individual sites was 51–95% of the measurements. High-quality values for momentum flux from the two sites evaluated in this paper ranged from 58 to 60%. There was little interannual variability in the data quality of the momentum flux regardless of site.

[55] Rebmann et al. [2005] assessed the quality of calculated sensible and latent heat fluxes from a range of forested systems solely on the basis of stationarity and did not include integral turbulence characteristics. On average, for the 18 sites studied, stationary values occurred in 83 and 68% of the sensible and latent heat flux data, respectively. The stationarity values reported for individual sites ranged from 60 to 90% of the data for sensible heat flux and from 40 to 96% of the data for latent heat flux. For the two sites analyzed here, stationary values using a 10-min averaging period occurred in 77% of the calculated sensible heat flux data at both sites and in 94 and 96% of the latent heat flux data at the exposed and sheltered site, respectively (Table 4). Overall, the data quality values from both sites were similar to those reported by Rebmann et al. [2005] and are encouraging for use of EC-measured fluxes for fundamental snow hydrology research.
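The stationarity test underlying these statistics (after Foken and Wichura [1996]) compares the covariance over the full averaging period with the mean of covariances over shorter subrecords; a minimal sketch, in which the number of subrecords is an assumption:

import numpy as np

def stationarity_deviation(w, x, n_sub=6):
    """Relative deviation (%) between the whole-record covariance of w
    and scalar x and the mean covariance of n_sub equal subrecords.
    Records are commonly flagged stationary below 30%. Very small
    whole-record covariances make the ratio unstable, which is why they
    were excluded before assessment in this study."""
    def cov(a, b):
        return np.mean((a - a.mean()) * (b - b.mean()))
    whole = cov(w, x)
    subs = [cov(ws, xs) for ws, xs in
            zip(np.array_split(w, n_sub), np.array_split(x, n_sub))]
    return 100.0 * abs((np.mean(subs) - whole) / whole)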

3.4. Comparison of Summary Versus Postprocessed Data

[56] A comparison of corrected 30-min summary and corrected postprocessed data using 10-min averaging periods determined from the data quality assessment is provided to assess the effect of using different averaging periods. Use of summary data would be typical for installations where high-resolution temporal data could not be stored, whereas postprocessed data are for cases where high-resolution data could be analyzed and processed after field collection. Linear regression between the summary and postprocessed EC-generated sensible heat flux indicates that the summary sensible heat fluxes were 7–9% larger than the postprocessed sensible heat fluxes for the three snow seasons analyzed, excluding points that were removed because of stationarity violations (Figure 7). When nonstationary data points were included in the analysis, the summary sensible heat fluxes were 9–14% larger than the postprocessed sensible heat fluxes. These results indicate the sequential improvement in measurement accuracy that was gained by data correction and stationarity filtering procedures on the time series data. There was little difference in the linear regressions of the summary and postprocessed fluxes during the three snow seasons studied.
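The proportional difference between the two treatments can be quantified with a simple least-squares slope; the sketch below forces the fit through the origin, a simplifying assumption on our part (the regressions in this study may include an intercept).

import numpy as np

# summary_H and post_H hold time-aligned 30-min summary and 10-min
# postprocessed sensible heat fluxes (W m-2); synthetic here.
rng = np.random.default_rng(0)
post_H = rng.normal(0.0, 40.0, 500)
summary_H = 1.08 * post_H + rng.normal(0.0, 5.0, 500)

# Slope through the origin gives the mean proportional difference.
slope = np.sum(summary_H * post_H) / np.sum(post_H ** 2)
pct_larger = 100.0 * (slope - 1.0)   # ~8% for this synthetic example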

Figure 7. Summary and postprocessed fluxes for the snow seasons of 2005 and 2006 for (a) sensible heat flux at the exposed site, (b) latent heat flux at the exposed site, (c) sensible heat flux at the sheltered site, and (d) latent heat flux at the sheltered site, with 1:1 line.

[57] The 30-min averaging period upon which the summary fluxes were based includes the effect of mesoscale motion and likely causes the summary fluxes to be larger than the postprocessed fluxes. This is evident in Figures 7a and 7c for larger events, where the summary sensible heat flux plots above the 1:1 line when positive and below it when negative, indicating smaller magnitudes for the postprocessed fluxes. Though a 30-min averaging period may be desirable and is often employed, the analysis presented here yielded a shorter appropriate averaging period and correspondingly slightly smaller fluxes.
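The role of the averaging period can be illustrated directly: a slow, mesoscale-like oscillation contributes to a covariance computed over a full 30-min record but is largely removed when covariances are computed over 10-min blocks and then averaged. A synthetic sketch:

import numpy as np

def block_covariance(w, T, n_blocks):
    """Mean of covariances computed over n_blocks equal sub-blocks."""
    return np.mean([np.mean((wb - wb.mean()) * (tb - tb.mean()))
                    for wb, tb in zip(np.array_split(w, n_blocks),
                                      np.array_split(T, n_blocks))])

# 30 min of 20 Hz data with an added 20-min oscillation standing in
# for mesoscale motion, present in both w and T.
rng = np.random.default_rng(0)
t = np.arange(36000) / 20.0
drift = 0.2 * np.sin(2.0 * np.pi * t / 1200.0)
w = rng.normal(0.0, 0.3, 36000) + drift
T = rng.normal(0.0, 0.5, 36000) + 2.0 * drift

cov_30min = block_covariance(w, T, 1)   # retains the slow oscillation
cov_10min = block_covariance(w, T, 3)   # oscillation largely removed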

[58] Overall, differences between summary and postprocessed latent heat fluxes were minimal (Figures 7b and 7d), particularly at the exposed site. At both sites, however, there were occasional cases of large differences between the summary and postprocessed values. Even so, the magnitude of the difference between the summary and postprocessed data corresponded to less than 1 mm of sublimation, well within the error of the measurement. Differences in both interannual and monthly sublimation values were similarly minimal. As noted above, latent heat flux is typically of greater interest than sensible heat flux for snow research; therefore, the results of this analysis are encouraging, given that monthly and seasonal latent heat fluxes appear to be relatively insensitive to the averaging period. The occasional large differences between latent heat flux values from the different averaging periods do suggest, however, that researchers should carefully assess the appropriate averaging period for investigations focusing on short-duration events.
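For context on the less-than-1-mm figure, a latent heat flux converts to a water-equivalent sublimation depth by dividing by the latent heat of sublimation; a minimal sketch using standard constant values:

L_S = 2.834e6      # latent heat of sublimation, J kg-1
RHO_W = 1000.0     # density of water, kg m-3

def sublimation_mm(latent_heat_flux, seconds):
    """Water-equivalent depth (mm) for a mean latent heat flux
    (W m-2) sustained over the given number of seconds."""
    mass_flux = latent_heat_flux / L_S            # kg m-2 s-1
    return mass_flux * seconds / RHO_W * 1000.0   # m of water -> mm

# A 5 W m-2 difference sustained for a full day:
print(sublimation_mm(5.0, 86400))                 # ~0.15 mm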

4. Summary and Conclusions

[59] Eddy covariance systems allow for the most direct measurement of sensible and latent heat flux but require raw data filtering, correction, quality assessment, and possibly postprocessing to ensure measurement accuracy. This work presented EC-measured sensible and latent heat fluxes at two complex mountain locations for three snow seasons and detailed the steps taken to correct the measured fluxes and to assess data quality. Corrections for air density effects, sensor heating, and axis rotation had a relatively small influence on the magnitude of the measured fluxes at the two study sites during the snow seasons investigated. These corrections could have a larger influence over shorter periods, at different sites, and under different conditions; they should therefore be implemented to ensure the highest data quality for a given site.

[60] Analysis of stationarity and the cospectral gap suggested an appropriate averaging period of 10 min at both study sites, compared to the typical 30-min averaging period upon which summary fluxes are often generated. These analyses suggest that mesoscale motion is included in measurements generated using 30-min averaging periods. Flux data computed with 30-min, as opposed to 10-min, averaging periods indicated seasonal sensible heat flux differences of up to 14% but minimal differences in seasonal latent heat fluxes. These results indicate that 30-min summary data may be acceptable for many hydrological investigations (e.g., comparisons of relative site differences); however, high-resolution data should be stored and postprocessed to maximize data quality for energy balance computations. This is especially important when working with snow models that incorporate similarity theory for the calculation of turbulent fluxes, since similarity theory does not include mesoscale motion. These procedures are also particularly important for investigations that include conditions where sublimation or condensation represents large mass transfers between the snow cover and the atmosphere.

[61] Meteorological conditions varied between the snow seasons studied; however, little interannual variability was evident in the tests for data quality at the two sites. This suggests that data quality at the study sites is inherent to site conditions and varies little with interannual meteorological conditions. After the removal of extremely small covariances, data quality analysis indicated that approximately 77% of the sensible heat flux and 95% of the measured latent heat flux values were of high quality (Table 4). This indicates that much of the data could be used for fundamental research, such as snow mass and energy balance studies. This is notable given that latent heat fluxes are of particular interest to snow hydrologists for quantifying snow sublimation and improving physically based snow models.


Acknowledgments

[62] The research and analysis presented in this paper were funded by the USDA Agricultural Research Service with support from the NOAA GEWEX Americas Prediction Project (GAPP) (project GC03-404). The Idaho EPSCoR and University of Idaho Graduate Fellowship Programs provided additional support to the lead author. Special thanks go to Dean Vickers and to four anonymous reviewers, whose comments greatly improved the quality of this manuscript. Mention of trade names or commercial products does not constitute endorsement or recommendation.