
In-situ CO2 monitoring network evaluation and design: A criterion based on atmospheric CO2 variability



[1] Estimates of surface fluxes of carbon dioxide (CO2) can be derived from atmospheric CO2 concentration measurements through the solution of an inverse problem, but the sparseness of the existing CO2 monitoring network is often cited as a main limiting factor in constraining fluxes. Existing methods for assessing or designing monitoring networks either rely primarily on expert knowledge, or are sensitive to the large number of modeling choices and assumptions inherent to the solution of inverse problems. This study proposes a monitoring network evaluation and design approach based on the quantification of the spatial variability in modeled atmospheric CO2. The approach is used to evaluate the 2004–2008 North American network expansion and to create two hypothetical further expansions. The less stringent expansion guarantees a monitoring tower within one correlation length (CL) of each location (the 1 CL network), requiring an additional eight towers relative to 2008. The more stringent network places a tower within one half of a CL of each location (the ½ CL network) and requires 35 towers beyond the 1 CL network. The two proposed networks are evaluated against the 2008 network, which temporarily included the largest number of continuous monitoring sites in North America thanks to the Mid-Continent Intensive project. Evaluation using a synthetic data inversion shows a marked improvement in the ability to constrain both continental- and biome-scale fluxes, especially in areas that are currently under-sampled. The proposed approach is flexible, computationally inexpensive, and provides a quantitative design tool that can be used in concert with existing tools to inform atmospheric monitoring needs.

1 Introduction

[2] Knowledge of regional carbon dioxide (CO2) sources and sinks is necessary for understanding the drivers and feedbacks controlling carbon exchange, and for evaluating carbon management strategies [Weiss and Prinn, 2011]. The current atmospheric CO2 monitoring network, however, has been cited as not being sufficient for constraining fluxes at this scale [Marquis and Tans, 2008; Scholes et al., 2009; Manning, 2011; Weiss and Prinn, 2011]. Early atmospheric CO2 observation locations were sited away from areas with strong sources or sinks, because the goal was primarily to monitor global atmospheric trends [Keeling et al., 1976a, 1976b]. As the need to understand CO2 fluxes at finer spatial and temporal scales increased, however, so did the need to increase the spatial and temporal density of atmospheric CO2 measurements [GLOBALVIEW-CO2, 2010]. Although the expansion of the measurement network has improved our understanding of source and sink activity, atmospheric inversions and data assimilation studies still cite a lack of CO2 concentration data as a primary hindrance to the improvement of flux estimates [Gurney et al., 2002; Baker et al., 2006; Gourdji et al., 2008; Mueller et al., 2008; Schuh et al., 2009; Peters et al., 2010]. Furthermore, the number of additional monitoring locations needed to constrain fluxes on subcontinental scales is unclear, and the optimal locations for these additional monitoring sites are difficult to assess. The well-planned expansion of the current measurement network is critical to furthering our understanding of the carbon cycle system and to evaluating the success of any future efforts at emissions reductions.

[3] Previous methods proposed to assess the existing monitoring network and to determine the number and optimal locations of additional CO2 monitoring sites have relied heavily on either expert opinion [e.g., Tans et al., 1996], which is critical but potentially subjective and qualitative, or on inverse-modeling or data assimilation observational system simulation experiments (OSSEs) [Rayner et al., 1996; Gloor et al., 2000; Patra and Maksyutov, 2002; Rayner, 2004; Gurney et al., 2008; Kaminski et al., 2012], which are computationally expensive and sensitive to specific model assumptions. A common OSSE approach relies on augmenting the existing network within inversion schemes to examine the effectiveness of alternate network configurations and expansions [Rayner et al., 1996; Gloor et al., 2000; Patra and Maksyutov, 2002; Rayner, 2004; Gurney et al., 2008; Kaminski and Rayner, 2008; Kaminski et al., 2012]. This approach is effective at examining the impact of specific additions to a network, but less so at selecting new locations.

[4] Rayner et al. [1996] first proposed simulated annealing [Kirkpatrick et al., 1983] as a solution to the CO2 network design optimization problem, while Patra and Maksyutov [2002] later suggested that an incremental optimization approach may perform better in terms of both reducing flux uncertainty per new station and improving the computational efficiency of network design. Both of these methods, however, rely on solving the inverse problem of flux estimation thousands of times, and each inverse problem requires a large number of runs of the atmospheric transport model. While this is feasible for inversions that estimate fluxes for large regions and at coarse time scales (e.g., Gurney et al. [2002]), the computational limitations become prohibitive when constraining fluxes on finer spatiotemporal scales [e.g., Göckede et al., 2010; Gourdji et al., 2012; Schuh et al., 2010], which is needed to understand variability at process- and policy-relevant scales, as well as to limit the impact of spatial and temporal aggregation errors [e.g., Kaminski et al., 2001; Gourdji et al., 2010].

[5] Beyond the computational limitations, the implementation of OSSE-based approaches within an inverse-modeling or data assimilation framework also intrinsically ties the network design to the specific choices made in the setup of the estimation problem, including, among others, the resolution at which fluxes are estimated, the choice of a priori flux information, the assessment of the a priori error statistics, and the choice of a specific atmospheric transport model. These choices have been shown to strongly affect flux estimates and their uncertainties [e.g., Kaminski et al., 2001; Engelen et al., 2002; Gurney et al., 2002; Baker et al., 2006; Gourdji et al., 2012] and therefore by extension would be expected to strongly impact the network design. Ideally, however, the assessment of an existing network, or design recommendation for its expansion, should reflect the network's information content irrespective of the particular choices that accompany specific flux estimation approaches.

[6] We present a computationally and conceptually simpler network assessment and design approach that is not based on a particular inverse-modeling or data assimilation framework. This method is applicable as a quantitative exploratory tool to benefit both OSSE and expert-opinion-based approaches. The proposed method is based on the simple criterion that a network must be able to resolve the atmospheric CO2 variability or “signal” if it is to inform any subsequent numerical analysis. We posit that if the measurement network can capture a sufficient degree of the atmospheric signal, then subsequent inverse-modeling or data assimilation approaches could accurately quantify the underlying flux field. In this approach, the scales on which the CO2 signal is correlated, i.e., the scales that must be captured by the network, are quantified by performing a local variogram analysis on CO2 concentrations generated from carbon flux and atmospheric transport models. The local variogram analysis can be used to detail the degree of spatial variability in the underlying field. Network assessment and design is then based on adequately sampling a region based on the local heterogeneity, with sampling density increasing in more heterogeneous regions and decreasing in more homogeneous regions.

[7] The main advantage of this approach is that it relies less strongly on specific modeling assumptions tied to a flux estimation system, e.g., the spatiotemporal resolution at which flux estimation would ultimately occur, the choice of an inverse-modeling framework, the statistics of a priori flux errors, etc. While the approach does still rely on modeled CO2 concentrations, and therefore on the underlying flux and atmospheric transport assumptions, the sensitivity to these choices is easy to assess by repeating the analysis using alternate representations of carbon flux and atmospheric transport. The computational cost of doing so is minimal and makes it possible to assess the sensitivity of the network coverage and design to alternate possible “true” flux distributions and atmospheric transport representations. In addition, the computational cost of the approach is far lower than that of network design based on the repeated solution of inverse problems, even for a single set of fluxes and a single atmospheric transport model, because the approach requires only a single run of the transport model and does not require the solution of the flux estimation inverse problem. Conversely, however, the proposed approach is not directly tied to the sensitivity footprints of observations and does not directly use the traditional metric of flux uncertainty as a criterion; the method is therefore also evaluated against these more direct metrics of network performance.

[8] We apply the proposed approach to the analysis of the expansion of the North American (NA) CO2 monitoring network from 2004 to 2008, and to the siting of additional monitoring locations. The year 2008 was chosen as an endpoint because of the relatively large coverage available in that year thanks to the expansion of the core monitoring network [e.g., Lauvaux et al., 2012a; Miles et al., 2012] augmented further by the presence of temporary monitoring sites in the Midwest as part of the Mid-Continent Intensive (MCI) experiment [Ogle et al., 2006], which temporarily yielded the largest network available to date. The sensitivity to the choice of flux and transport model is evaluated by comparing the main analysis using the PCTM/GEOS-4/CASA-GFED model, to an analysis based on an alternate set of fluxes and an alternate representation of atmospheric transport. The tower placement algorithm is further evaluated by removing the temporary towers of the MCI region and using the algorithm to propose replacement towers. The usefulness of the approach as a screening tool for identifying potential additional monitoring locations is evaluated through a comparison to more traditional network assessment metrics, including the calculation of the sensitivity footprints of network locations and the impact of network expansions on constraining flux estimates within an inverse-modeling framework.

2 Data and Methods

[9] This section is organized as follows. Section 2.1 details the models used to produce the simulated CO2 concentrations. Sections 2.2–2.4 outline the geostatistical methods used to analyze the CO2 concentrations, the correlation length (CL) criterion, and the approach used for network design.

2.1 Modeled CO2 Concentrations

[10] Because sampling the actual atmospheric CO2 variability everywhere is not possible, modeled CO2 concentrations are used as a representation of true atmospheric variability. The main analysis was performed on CO2 concentrations simulated using the PCTM/GEOS-4 transport model [Kawa et al., 2004], with prescribed surface fluxes from fossil fuels from Andres et al. [1996], oceans from Takahashi et al. [1999], biomass burning from Duncan [2003], and the biosphere from the CASA model [Randerson et al., 1997] for the year 2006. This model is herein referred to as PCTM-CASA. The model has a horizontal resolution of 1.25° longitude by 1° latitude with 28 vertical levels and a temporal resolution of 1 h. The lowest vertical level of the model, representative of roughly the lowest 80 m of the atmospheric column, was used to represent CO2 concentrations with synoptic variability comparable to measurements taken at a measurement tower. For this study, the focus is on the NA land domain of 10°N to 72°N latitude and 50°W to 170°W longitude, excluding the Caribbean. The CO2 fields from the year 2003 were analyzed in addition to those from 2006, to test the sensitivity of the conclusions to inter-annual model differences.
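The extraction of the analyzed fields can be sketched as below, assuming the PCTM-CASA output has been written to a netCDF file. The file name, variable names ("co2", "land_mask"), dimension names, and coordinate conventions are hypothetical placeholders for illustration, not the actual model output format.

```python
# Sketch only: extract a lowest-level 2200 UTC CO2 snapshot over the NA land domain.
# File name, variable names, and coordinate conventions are assumed, not prescribed.
import xarray as xr

ds = xr.open_dataset("pctm_casa_2006.nc")        # hourly CO2 at 1.25° x 1°, 28 levels

# Lowest model level (~lowest 80 m of the column), one midafternoon snapshot.
co2 = ds["co2"].isel(lev=0).sel(time="2006-07-15T22:00")

# North American study domain: 10°N-72°N, 170°W-50°W, land grid cells only
# (assumes longitudes in [-180, 180], ascending latitudes, and a 0/1 land_mask).
co2_na = co2.sel(lat=slice(10, 72), lon=slice(-170, -50))
co2_na = co2_na.where(ds["land_mask"].sel(lat=slice(10, 72), lon=slice(-170, -50)) == 1)
```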

[11] A second set of modeled CO2 concentrations is used to test the sensitivity of the network design to the specific modeled CO2 concentrations. This model uses a different set of surface fluxes, with fossil fuels from EDGAR 3.0 [Olivier and Berdowski, 2001], ocean fluxes from Takahashi et al. [2002], wildfire emissions from the Global Fire Emissions Database version 2 [van der Werf et al., 2006], and terrestrial fluxes from ORCHIDEE [Krinner et al., 2005], as well as a different transport model, namely the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System [IFS documentation CY37r2, 2011]. This model is herein referred to as ECMWF-ORCHIDEE. The model output is on a 1° longitude by 1° latitude grid with 60 vertical levels and a temporal resolution of 3 h [Engelen et al., 2009]. A pressure-weighted average of the three lowest model layers was used; these layers approximately correspond to the lowest 80 m of the atmosphere, making the average comparable to the lowest layer in PCTM-CASA. The modeled fields were analyzed throughout the year 2003 in a manner identical to that used for PCTM-CASA.
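The pressure weighting used to combine the three lowest ECMWF-ORCHIDEE layers can be sketched as follows; the array names and layer ordering (index 0 is the lowest layer) are assumptions made for illustration.

```python
# Sketch of a pressure-weighted average over the lowest model layers.
# co2: CO2 mole fraction per layer [level, lat, lon]; dp: pressure thickness of
# each layer in Pa [level, lat, lon]; level index 0 is assumed to be the lowest layer.
import numpy as np

def pressure_weighted_surface_co2(co2, dp, n_layers=3):
    """Weight each of the lowest n_layers by its pressure thickness and average."""
    c, w = co2[:n_layers], dp[:n_layers]
    return (c * w).sum(axis=0) / w.sum(axis=0)
```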

2.2 Local Variogram Analysis

[12] The purpose of the local variogram analysis is to assess the spatial variability of the modeled CO2 fields over NA. The analysis is similar to that of Alkhaled et al. [2008] and Hammerling et al. [2012], who found that the degree of spatial variability in column-averaged CO2 concentrations is not constant in space but varies from location to location. Following these earlier studies, the spatial variability was determined for each model grid cell by comparing concentrations within the surrounding 2000 km window to each other and to the concentrations outside of the window. In contrast to these previous studies, however, the surface layer of modeled CO2 concentrations was examined here, corresponding to the typical height of a measurement tower.

[13] The variogram analysis consists of several steps, the details of which are presented in Alkhaled et al. [2008] and Hammerling et al. [2012]. The first step is to construct a raw variogram in which latitudinally detrended squared CO2 concentration differences are plotted as a function of their separation distance. Next, an exponential variogram is fitted to these data, following Alkhaled et al. [2008], to represent the spatial correlation structure of the CO2 concentrations. The fitting was done using a nonlinear least squares method to determine two fitting parameters, the variance σ² and the correlation length L. The two parameters define the exponential covariance model:

C(h) = σ² exp(−3h/L)    (1)

where h and C(h) represent separation distance and covariance, respectively.

[14] The relevance of the CL parameter as an indicator of the information content of a measurement is based on the covariance function formulation in equation (1). As separation distance increases, the covariance between any two CO2 concentrations asymptotically decays towards zero. At a separation distance of h = L, the spatial correlation has dropped to less than 5%, and CO2 concentrations separated by this distance can be said to be nearly independent, and therefore uninformative of one another. The separation distance h = L is thus referred to as the practical range or simply the CL. The CL can therefore be used to inform the density of a network by providing information on the scales of variability of the concentration field. The main goal of network design within the framework proposed here is to capture the scales of heterogeneity, and the analysis therefore focuses on L rather than the variance of the field σ².
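The local variogram fit can be sketched as below for a single target grid cell. The implementation details (pair construction, use of semivariance, initial parameter guesses) are illustrative assumptions rather than the exact procedure of Alkhaled et al. [2008]; the fitted model corresponds to the covariance form in equation (1), with L the practical range (CL).

```python
# Sketch of a local variogram fit for one grid cell (assumed implementation).
# points: (n, 2) lon/lat of grid cells within the local (~2000 km) window;
# values: latitudinally detrended CO2 concentrations (ppm) at those cells.
import numpy as np
from scipy.optimize import curve_fit

def haversine_km(lon1, lat1, lon2, lat2, r_earth=6371.0):
    """Great-circle separation distance in km."""
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * r_earth * np.arcsin(np.sqrt(a))

def exponential_variogram(h, sigma2, L):
    """Semivariance implied by C(h) = sigma2 * exp(-3h/L), with L the practical range (CL)."""
    return sigma2 * (1.0 - np.exp(-3.0 * h / L))

def fit_local_cl(points, values):
    # Raw variogram: squared concentration differences versus separation distance.
    i, j = np.triu_indices(len(values), k=1)
    h = haversine_km(points[i, 0], points[i, 1], points[j, 0], points[j, 1])
    gamma = 0.5 * (values[i] - values[j]) ** 2
    # Nonlinear least squares for the variance sigma2 and correlation length L (km).
    (sigma2, L), _ = curve_fit(exponential_variogram, h, gamma,
                               p0=[gamma.mean(), 1000.0], maxfev=10000)
    return sigma2, L
```

Repeating such a fit for every grid cell and every analyzed time, and retaining the minimum fitted L per cell, would yield a minimum CL map analogous to Figure 1.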

[15] To determine the highest degree of spatial variability in CO2 across a year, the local variogram analysis was performed on concentrations from multiple days in January, April, July, and September. Concentrations at 2200 UTC were examined because they represent well-mixed midafternoon conditions for much of the continent (5 pm EST; 2 pm PST). These times of the day are also consistent with recommended measurement times used in atmospheric inversions [Haszpra, 1999; Geels et al., 2007]. The local variogram analysis was carried out for each grid cell at the model grid resolution. This allowed for an investigation of seasonal changes in CO2 spatial variability and the determination of the shortest CLs throughout the year for each grid cell, corresponding to the most variable times of the year at each location. The within-month variation in observed CLs was greater for the transition months (April and September), but the shortest CLs were consistently those observed in July throughout the domain. This is consistent with the expected high CO2 variability due to strong biospheric activity during the summer. The minimum CL observed for each location (Figure 1) is used as the basis for all subsequent CL-based network design analysis.

Figure 1.

Minimum July correlation lengths (CLs), obtained from local variogram analysis of PCTM-CASA CO2 concentrations. Circles represent scales of spatial variability.

2.3 CL Criterion

[16] The minimum CL (section 2.2) is used to define a set of criteria for network design, the logic being that a network able to capture a signal that varies on the smallest scales would also be able to capture all other signals. As the CLs in this study were determined from a 1° by 1.25° model, the scales of variability represented in the CLs are those observable at this resolution, which is comparable to the resolution of atmospheric transport models currently in use for global and continental inversion studies. Higher-resolution models may be necessary for network design focusing on regional studies. The minimum CL can provide a useful proxy for assessing the information content of a CO2 monitoring network, because it can be used to identify areas where CO2 concentrations have high (low) spatial variability and thus need to be sampled more (less) densely in space. A fractional CL scale is therefore proposed as a metric to gauge network performance by coupling knowledge of the spatial variability with knowledge of the network configuration. The fractional CL scale (Figure 2) is defined at each location as the distance to the nearest tower normalized by the minimum CL observed for that grid cell (Figure 1). The coverage criterion is based on this fractional CL for each grid cell: if a tower is within some fraction of a CL of a given location (e.g., ¾, ½, or ¼), that location is said to be covered under the corresponding fractional CL criterion.

Figure 2.

Expansion of the existing tower network (black stars) from 2004 to 2008. Colors represent the degree of coverage and are based on each grid cell's distance to the nearest tower (hᵢ) divided by the grid cell's correlation length (CL).
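The fractional CL scale shown in Figure 2 can be computed with a short routine like the one below, a sketch under assumed array layouts; haversine_km is the helper defined in the variogram sketch above.

```python
# Sketch of the fractional CL coverage metric: distance to the nearest tower
# divided by the local minimum CL. Array names and shapes are illustrative.
import numpy as np

def fractional_cl(cell_lonlat, min_cl_km, tower_lonlat):
    """cell_lonlat: (m, 2) land grid cells; min_cl_km: (m,); tower_lonlat: (k, 2)."""
    frac = np.empty(len(cell_lonlat))
    for idx, (lon, lat) in enumerate(cell_lonlat):
        d = haversine_km(lon, lat, tower_lonlat[:, 0], tower_lonlat[:, 1])
        frac[idx] = d.min() / min_cl_km[idx]   # h_i / CL_i
    return frac

def coverage_fraction(frac_cl, threshold):
    """Share of grid cells covered under a fractional CL criterion (e.g., 1, 0.5, 0.25)."""
    return float(np.mean(frac_cl <= threshold))
```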

2.4 Network Design

[17] The 2008 network was augmented to create hypothetical networks using a simple objective: a network should be able to capture the signal it is being used to measure, namely the CO2 concentration field. The CL criterion (section 2.3) was used to define how well a network captures that signal. To determine the number of measurement locations that would be needed to capture the (modeled) atmospheric CO2 concentration field, a minimum requirement based on the covariance between monitoring locations and possible estimation locations was developed. A minimum coverage requirement of 1 CL ensures a correlation of at least approximately 5% between the concentrations at every estimation location and those at one or more measurement towers. An analogy is drawn from signal processing to define a more stringent ½ CL sampling requirement, analogous to the requirement of two samples per cycle (i.e., the Nyquist criterion) used to avoid aliasing [Franklin et al., 2006]. Finally, a ¼ CL requirement was found to be representative of regional networks, based on coverage of the MCI region (section 3).

[18] A simple algorithm was implemented to ensure that the hypothetical networks satisfied the full coverage criteria, i.e., ensuring that each grid cell is within a predefined fraction of a CL (e.g., 1 CL, ½ CL) from the nearest tower, with tower placement being restricted to land. The algorithm sequentially steps through all possible tower locations (all NA land grid cells) adding towers that provide the most benefit to the network. The following steps detail the sequence:

  1. [19] Conduct an exhaustive search to find the grid cell (i.e., location) where an additional tower would maximize the increase in the covered area.

  2. [20] Place a tower at the selected location.

  3. [21] If any portion of the domain remains uncovered under the selected CL criterion, return to Step 1.

[22] This selection algorithm builds the network incrementally, and towers placed earlier in the sequence therefore provide more incremental coverage than towers placed later. The performance of the simple algorithm is verified through a final culling step, which removes each tower in random order and uses the search algorithm to verify that the replacement tower matches the location of the removed tower. This final step ensures that the network design contains no redundant towers. While future work could include a more rigorous approach to the optimization scheme, network design differences due to differences in CO2 fields between models (e.g., PCTM-CASA vs. ECMWF-ORCHIDEE) are expected to be larger than the possible gains from implementing alternate optimization schemes for tower placement. Although a rich literature exists on efficient methods for solving combinatorial optimization problems [e.g., Dorigo et al., 1999], the nonstationarity of the field further complicates the implementation of traditional approaches [e.g., Fuentes et al., 2007]. More sophisticated approaches do exist that target nonstationary fields [e.g., Cortes et al., 2004], but their applicability to the present problem would need to be explored in future work. Nevertheless, the simple approach implemented here offers an efficient solution that guarantees that the network has full coverage under the CL criterion.
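A sketch of the greedy placement procedure is given below. It is an assumed implementation for illustration only: candidate sites are taken to be all NA land grid cells, haversine_km and the coverage definition follow the earlier sketches, and neither the final culling step nor any computational optimization is included.

```python
# Sketch of the greedy CL-criterion network design (assumed implementation).
# candidates: (n, 2) lon/lat of all NA land grid cells (also the possible tower sites);
# min_cl_km: (n,) minimum CL per cell; existing: list of (lon, lat) tower locations;
# criterion: fraction of a CL (1.0 for the 1 CL network, 0.5 for the ½ CL network).
import numpy as np

def design_network(candidates, min_cl_km, existing, criterion=1.0):
    towers = list(existing)

    def coverage(tower_list):
        # A cell is covered if at least one tower lies within criterion * CL of it.
        cov = np.zeros(len(candidates), dtype=bool)
        for lon, lat in tower_list:
            d = haversine_km(lon, lat, candidates[:, 0], candidates[:, 1])
            cov |= d <= criterion * min_cl_km
        return cov

    cov = coverage(towers)
    while not cov.all():
        # Exhaustive search for the site that newly covers the most uncovered cells.
        best_gain, best_site = -1, None
        for lon, lat in candidates:
            d = haversine_km(lon, lat, candidates[:, 0], candidates[:, 1])
            gain = int(np.count_nonzero(~cov & (d <= criterion * min_cl_km)))
            if gain > best_gain:
                best_gain, best_site = gain, (float(lon), float(lat))
        towers.append(best_site)
        cov = coverage(towers)
    return towers
```

Because every grid cell is itself a candidate site, each iteration can always cover at least one additional cell, so the loop is guaranteed to terminate with full coverage under the chosen criterion.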

3 Results

3.1 CL Map

[23] Using 2006 PCTM-CASA output, July is found to be the month with the highest observed CO2 variability for all locations within North America, and the point-wise lowest CL observed within July is presented in Figure 1. The features of the minimum CL map result from the variability caused by the various components of the CO2 surface fluxes (section 2.1) and atmospheric transport, and they correspond well with the physical understanding of a region dominated by an active Northern Hemisphere (NH) summer growing season. The influence of fossil fuel emissions on variability may be apparent over the eastern coast of the U.S., but it is difficult to differentiate from the stronger biospheric signals. CLs are shorter in the eastern U.S., the Midwest, and western Canada. The high variability can be related to the large and variable uptake during the NH summer coupled with atmospheric transport across these regions. A sharp gradient from short to long CLs appears in Quebec and Labrador and can be attributed to the change from temperate broadleaf and mixed forests to boreal forest and the mostly snow-covered taiga further north. The gradient may also be due in part to transport pathways that travel north along the eastern coast and out into the Atlantic, largely bypassing Quebec. The longer CLs in northeastern Canada, Quebec, and Newfoundland and Labrador can be attributed to less variable CO2 concentrations resulting from more homogeneous CO2 fluxes in these regions. A similar gradient appears between the longer CLs of the desert and xeric shrubland areas of northern Mexico and the southwestern U.S. and the shorter CLs of the highly productive and heterogeneous temperate forests of the northwestern U.S. and western Canada. Overall, the characteristics of the minimum CL map are consistent with the processes that drive the variability in the underlying CO2 fluxes and atmospheric transport.

[24] The sensitivity to the underlying modeled CO2 concentrations was assessed by repeating the analysis using modeled CO2 data for 2003 from both the PCTM-CASA and ECMWF-ORCHIDEE models. This analysis allowed for an examination of both model inter-annual variability (2003 vs. 2006 PCTM-CASA) and variability between different models (2003 PCTM-CASA vs. 2003 ECMWF-ORCHIDEE). The sensitivity to inter-annual variability within a model output was minor, and the minimum CL map for 2003 PCTM-CASA was found to exhibit similar spatial patterns to that of 2006 PCTM-CASA. The overall lengths, while slightly longer, were quite comparable. The sensitivity to between-model differences was more pronounced, with the minimum CLs for the 2003 ECMWF-ORCHIDEE modeled CO2 being considerably shorter overall, but displaying similar spatial patterns to those from PCTM-CASA. The lower minimum CLs can be attributed to the higher flux variability known to exist in the ORCHIDEE biospheric model [D. Huntzinger, personal communication]. This ability to easily perform such sensitivity analyses is one benefit of the proposed approach, as it offers flexibility in testing the robustness of the network analysis.

3.2 2004 to 2008 Network Expansion

[25] The CL information was used to assess the impact of the expansion of the atmospheric monitoring network, which grew from nine towers with continuous observations in 2004 to 39 towers in 2008. The expanding network and the corresponding expansion of coverage were evaluated using the fractional CL scale presented in Figure 2. The green areas are those with a tower located within ¼ CL and therefore represent the regions with the best network coverage. The green regions made up 6% of the NA land area in 2004 and increased to 21% in 2008. For the MCI region (Figure 5), the ¼ CL coverage increased from 32% in 2006 to 92% in 2007, demonstrating the effectiveness of the temporary network set up in this region. The increase in coverage depends not only on the number of new tower locations, but also on the CLs near the towers. For example, in the southern and eastern portions of the United States, the well-constrained areas are smaller and tend to be limited to areas immediately surrounding towers, which can be attributed to the shorter CLs (i.e., higher atmospheric CO2 spatial variability) in these regions.

[26] The coverage of the ½ CL region (green and yellow) increased from 22% to 51% of the domain and captured nearly the entire continental U.S. with the 2008 network. The 1 CL region (all colors) increased from 56% to 76% of the continent in 2008; however, nearly a quarter of the continent remained outside of the network's coverage, including CO2 sink regions in northern Canada, Alaska, and subtropical Mexico. Overall, the 2008 network provided extensive coverage over the continent. While the regions outside of the 2008 network's coverage are typically less active regions, including the tundra in northern Canada and arid areas of Mexico, placing towers in these regions does help to constrain the continental CO2 flux budget (section 3.5). Additionally, the northern Canadian tundra regions are of significant interest with changing climate [e.g., Schaefer et al., 2011], and therefore baseline monitoring for these regions is of primary importance.

[27] The incremental additional coverage provided per tower was also examined to assess the per-tower coverage expansion. The overall trend shows a decrease in incremental coverage per tower from 2004 to 2008 in the 1 CL coverage regime. However, the incremental coverage increase per tower in the ¼ CL coverage regime remained relatively constant over the same period. This suggests that during the 2004–2008 expansion, additional towers offered similar per-tower increases in coverage for regional analysis (¼ CL), while the per-tower increase in the continental coverage regime (1 CL) diminished. This is understandable, as the recent expansions of the network have been targeted towards investigations of regional CO2 activity [e.g., Göckede et al., 2010; Gourdji et al., 2012; Lauvaux et al., 2012a, 2012b]. Additionally, several new towers have been established since 2008, and 50 new measurement sites are planned for the U.S. as a part of Earth Networks' Greenhouse Gas Network (http://earthnetworks.com/OurNetworks/GreenhouseGasNetwork.aspx).

[28] While many external factors, whether scientific or logistical, are involved in determining the placement of additional towers, regions with no existing towers clearly offer the highest information gain per tower added when focusing on continental budgets. Therefore, a hypothetical scenario is explored in the following section in which tower placement is optimized to provide full coverage over NA.

3.3 Augmented Network Design

[29] To investigate a CL-criterion-based expansion of the monitoring network, two continental-scale hypothetical networks were created. The 1 CL and ½ CL networks were designed with the goal of expanding coverage over the entire NA land domain. The 1 CL network was designed using the existing 2008 network as a base and adding towers using the algorithm described in section 2.4. The 1 CL network represents an initial modest expansion of the network. The ½ CL network was then created using the 1 CL network as its base and expanding until the ½ CL criterion was fulfilled over the entire domain. The ½ CL network simulates a substantial expansion of the network following the initial expansion. The tower locations of the three networks are illustrated in Figure 3 and listed in the Supporting Material. While the fractional CL criteria are used to conceptually define the network scenarios in this study, the method could similarly be used to allocate a prespecified number of additional towers instead.

Figure 3.

The (a) 1 CL and (b) ½ CL network expansions, which added 8 and a further 35 towers, respectively. As per the CL criterion requirement, the entire continent is covered to within 1 and ½ CLs for the 1 CL and ½ CL networks, respectively.

[30] Under the 1 CL criterion network expansion, all of North America can be observed with an additional eight towers, bringing the NA total to 47 (Figure 3a). The towers are placed in Mexico (Baja California, Chiapas, and Oaxaca), the U.S. (Florida, Alaska), and Canada (British Columbia, Yukon, Nunavut). The locations of the eight added towers correspond to regions with known deficiencies in the current network [e.g., Gourdji et al., 2012]. On average, each additional tower would expand the 1 CL coverage region by 3% of the NA continental area, making this the largest per-tower expansion of network coverage when compared to the 2004–2008 expansions. The expansion in coverage at the ½ CL level is 1.6% per tower, comparable to the actual network expansions in 2005 and 2006. One limitation of the simple algorithm implemented in this study is that two towers may be placed in very close proximity to one another in an attempt to provide 100% coverage of the domain. This occurs, for example, with two nearby towers placed in Mexico for the 1 CL network, where the removal of either tower would leave only a small fraction of the continent unobserved. We note that further analysis would be needed to define the exact locations of additional towers; nevertheless, the removal of either of the closely placed towers is found (section 3.5) to degrade the synthetic data inversion estimates, especially the regional (Tropical and Subtropical) estimates. Overall, the total number of recommended towers under the 1 CL network is consistent with previous assessments, namely 40–50 towers within North America [Tans et al., 1996].

[31] To increase the coverage to the ½ CL level, an additional 35 towers are needed beyond the eight tower expansion of the 1 CL network, increasing the total number of towers over the continent to 82 (Figure 3b). The majority of the towers added are located in the northern latitudes in Canada (20) and Alaska (9). Towers in Mexico (6) and the southern U.S. (5) fill out the southern portion of the continent. Overall, the additional towers are again placed in regions with known data gaps. The additions increase the total number of towers in Canada to 27, which is comparable to the total number of towers in the continental U.S. (38). Each additional tower provides an average increase of 1% of land area covered at the ½ CL level. Again, the hypothetical locations are not expected to represent precise tower placements, as the identification of suitable locations for towers involves many practical considerations (e.g., accessibility due to terrain, remoteness of the location, etc.). Nevertheless, the proposed network offers insight into desirable locations and the appropriate density of an expanded ground-based NA network.

[32] The proposed network expansions are also sensitive to the model used to represent atmospheric CO2, with the 1 CL network based on the ECMWF-ORCHIDEE model requiring 17 new towers, rather than the eight obtained using PCTM-CASA. The larger number of towers follows from the higher variability in the ECMWF-ORCHIDEE modeled CO2 field discussed in section 3.1. The locations of the new towers (northeastern Canada, Alaska, Mexico, and the southeastern U.S.) are consistent between models, however. While the specific model used to create the network does affect the number of hypothetical towers placed, such a comparison is relatively simple and computationally efficient for the CL criterion method. The flexibility to interchange models is a desirable trait when considering both the spread in modeled biospheric CO2 predictions [e.g., Huntzinger et al., 2012] and possible transport model differences [e.g., Gurney et al., 2002]. Sensitivity to inter-annual variability, on the other hand, was again found to be minor, with the hypothetical 1 CL network created from the 2003 PCTM-CASA model also requiring eight towers, placed in very similar locations to those presented in Figure 3a. The similarity of the network designs between model years supports the robustness of the analysis and method to inter-annual variability.

[33] One further network design investigation was carried out to examine the CL criterion required for a more refined regional network. This was performed over the MCI region by removing the five temporary towers added as part of the MCI effort and then using the proposed algorithm to place towers until the region was covered back to the same ¼ CL coverage (92%) as in 2008. This provided an opportunity to investigate the appropriate CL criterion for intensive regional studies and to evaluate the performance of the network design algorithm. Four towers were placed in the region to replace the MCI ring using the PCTM-CASA-based CLs, while seven were placed using the ECMWF-ORCHIDEE-based CLs. These results help to verify the ability of the algorithm to place towers in a manner consistent with the expert knowledge used to place towers in the MCI region and also suggest that networks designed using a ¼ CL criterion could achieve results comparable to studies performed using the 2008 MCI tower arrangement [Lauvaux et al., 2012a].

[34] Finally, one possible limitation of the CL criterion method is its tendency to place towers along the coast, which follows from the shorter CLs found in these areas due to higher spatial variability. Coastal towers are influenced by ocean air and are often not especially informative of land fluxes in inversion studies. On the other hand, coastal towers provide needed constraints on boundary conditions for regional inversions, shown to be an important factor in constraining regional CO2 estimates [Göckede et al., 2010; Gourdji et al., 2012]. Overall, the addition of new monitoring towers always involves many external decision factors that merit further investigation, and we therefore are not advocating the tools proposed here as a standalone method. For example, preexisting telecommunication towers are desirable candidates, because they minimize costs. In this context, the CL criterion could also be used to aid in the evaluation of the potential coverage provided by a set of preexisting candidate towers.

3.4 Comparison to Coverage as Implied by Sensitivity Footprints

[35] In this section, we compare the network coverage implied by the CL criterion against source-receptor sensitivity footprints calculated for the 2004 nine-tower network. The purpose of comparing the fractional CL scale to sensitivity footprints obtained from a transport model is twofold: (1) to better understand the information provided by the CL analysis and (2) to compare the CL criterion to a familiar metric used in inverse modeling, namely the sensitivity of observations to underlying fluxes. Inverse-modeling methods use transport models to determine the influence of surface fluxes on observations at monitoring locations, essentially tracing the pathways of the air masses that encounter measurement towers. The footprint map (Figure 4) represents the average sensitivity of the 2004 atmospheric measurement network to surface CO2 fluxes, as evaluated by Gourdji et al. [2010] using the Stochastic Time-Inverted Lagrangian Transport (STILT) model [Lin et al., 2003] driven by meteorological fields from the Weather Research and Forecasting (WRF) model [Skamarock et al., 2005], herein STILT. The high-sensitivity regions in Figure 4, shown in green, represent regions where surface fluxes have the most influence on the tower measurements. Conversely, fluxes from low-sensitivity regions, shown in white, do not influence the measurements, and the network captures little or no information about the surface fluxes in those regions.

Figure 4.

Sensitivity of the 2004 nine tower measurement network to surface fluxes. The four sensitivity cutoffs were created to match the corresponding areas of the fractional CL map (Figure 2 top panel). The sensitivity regions represent areas where the sensitivity thresholds were met for at least 85% of the year.

[36] To aid in the visualization, four sensitivity cutoffs, in ppm/(µmol m−2 s−1), were selected to create equal-area regions between the fractional CL scale and the sensitivity cutoffs. Thus, the areas in green in Figure 4 and in the top row of Figure 2 are equal, with the green area in Figure 2 bounded by ¼ CL and the green area in Figure 4 bounded by a sensitivity value of 0.434 ppm/(µmol m−2 s−1). By creating equal areas for each color across the figures, the coverage implied by the two approaches can be compared and contrasted, while keeping in mind that the transport model used to create the sensitivity map (STILT) differs from that used to develop the CLs (PCTM). In addition, the CL-based maps in Figure 2 also depend on the variability of the fluxes in PCTM-CASA and thus not only on the atmospheric transport.
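The equal-area matching used to select the sensitivity cutoffs can be sketched as below. This is an assumed procedure: it treats grid cells as equal in area and uses ¼, ½, ¾, and 1 CL as the four coverage classes, whereas a full implementation would weight cells by their actual area.

```python
# Sketch of equal-area matching between fractional-CL coverage classes and
# sensitivity cutoffs (assumed procedure; grid cells treated as equal area).
import numpy as np

def matched_sensitivity_cutoffs(frac_cl, sensitivity, cl_thresholds=(0.25, 0.5, 0.75, 1.0)):
    """frac_cl and sensitivity are 1-D arrays over the same set of grid cells."""
    cutoffs = []
    for t in cl_thresholds:
        covered_share = np.mean(frac_cl <= t)          # area covered at this CL fraction
        # Sensitivity value exceeded over exactly that share of the domain.
        cutoffs.append(float(np.quantile(sensitivity, 1.0 - covered_share)))
    return cutoffs
```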

[37] Comparing Figure 4 with Figure 2, differences can be seen that reflect the nature of the two approaches. The transport model sensitivities in Figure 4 directly represent the directionality of atmospheric transport, and thus mainly adhere to the direction of the prevailing winds. The CL-based coverage, on the other hand, is based on the spatial variability of atmospheric CO2 concentrations, which is influenced by both flux variability and transport. The highest sensitivity/coverage areas (green) are in good agreement, showing that the areas identified as being best covered agree between the two assessment approaches. However, differences arise in the lowest sensitivity/coverage areas: the northwestern region of North America, the southwestern U.S. and northern Mexico, and northern Quebec, Newfoundland and Labrador. The fractional CL map is more conservative in its estimated coverage of northwestern North America, whereas the sensitivity map shows the influence of incoming air from this region, moving in the southeastern direction across the continent. The CL map is less conservative over the northern Quebec, Newfoundland and Labrador regions and the southwestern U.S./Mexico region, where the lower spatial variability implies broader network coverage and reflects the somewhat less active surface fluxes in these regions. By incorporating the influence of the underlying fluxes, the CL-based coverage requires fewer measurements in areas with low flux variability. The STILT sensitivity analysis, on the other hand, cannot discern the representativeness of the measurements in terms of the underlying fluxes, but uses the directionality of the wind fields to define what the network explicitly “sees.”

[38] The comparison between the sensitivity footprints and the CL criterion regions offers insight into the possible benefits and shortcomings of using the CL criterion as a network design tool. By including the influence of the variability in the underlying fluxes and atmospheric transport, the CL criterion implies that less monitoring is needed in areas with more uniform fluxes, such as Quebec, Newfoundland and Labrador, as well as portions of the south-central U.S. and northern Mexico. However, because the approach does not consider wind direction, more towers may be placed in regions upwind of the existing towers in northwestern Canada. While neither approach is a direct measure of network coverage, the comparison does provide an examination of the information provided by the CL criterion approach relative to the sensitivity footprints.

[39] Although, in principle, one could use a sensitivity footprint analysis directly as a criterion for network design by selecting a minimal sensitivity to be achieved for all locations within the domain, such an approach would reflect only the role of atmospheric transport and would not factor in the variability of the underlying flux fields. Additionally, the computational expense of running the transport model repeatedly, as is required to obtain the sensitivity maps even for one tower and one month of observations, is substantial. Running the model for all possible tower locations and measurement times would be nearly infeasible. The computational burden would be compounded if multiple flux and/or atmospheric transport models were to be considered.

3.5 Synthetic Data Inversion

[40] A synthetic data atmospheric inversion was carried out using the 2008, 1 CL, and ½ CL networks. The purpose of the synthetic data study was to evaluate the CL-criterion-designed networks in an inversion setup and to provide an assessment of their performance in terms of CO2 flux estimates. Synthetic CO2 observations were generated using a set of “true” CO2 fluxes. These synthetic data were then used in an inversion to recover the CO2 flux field. The synthetic data inversion methodology is based on the geostatistical inversion method (GIM) [Michalak et al., 2004] as applied by Gourdji et al. [2010]. The GIM methodology was chosen because it minimizes the influence of prior flux information on the inverse estimates and thus helps to isolate the effects of the additional measurements [e.g., Mueller et al., 2008]. Specifically, because GIM does not require a prior flux estimate to define the underlying flux pattern, the influence of additional measurements on resolving spatial patterns can be more clearly identified. GIM does require an a priori estimate of the spatiotemporal covariance of the fluxes, but this can be derived from the atmospheric data themselves. The fluxes used as the underlying “truth” included the CASA biospheric model from Randerson et al. [1997] with GFED version 2 fire emissions from van der Werf et al. [2006] for 3–30 July 2004. To create the synthetic concentration data, these fluxes were transported using the STILT transport model [Lin et al., 2003; Skamarock et al., 2005]. Note that the transport model used to create the modeled CO2 concentrations differs from the transport model used for designing the monitoring network, thus offering further independence between the information gathered by the CL analysis and the inversion setup.

[41] The overall setup of the synthetic data inversion was designed to focus on the impacts of the added measurement towers, minimize the variation in set-up choices, and recreate realistic data choices [Gourdji et al., 2010]. To this aim, the same transport model used to create the synthetic data, STILT, was used in the inversion to find the sensitivity of the atmospheric measurements to surface fluxes. The a priori flux covariance parameters were held constant for the three inversions. Two model-data mismatch variance cases were used: one idealized case where the variance of the model-data mismatch was set to 0.01 ppm2 for all measurement locations, and a second case where realistic model-data mismatch parameters were used. For this second case, the model-data mismatch variances for existing towers were estimated using real data as described in Gourdji et al. [2012] (Table S1). For the new proposed towers, the median model-data mismatch variance among existing towers within the same biomes (Figure 5) was used (Table S2). White noise with these variances was added to the synthetic observations.

Figure 5.

Seven biomes used to evaluate the synthetic data inversion results at regional scales. The black stars represent the existing 2008 network (39), circled ×'s the 1 CL additions (8), and the solid cyan circles the ½ CL additions (35). The dotted outline represents the MCI region.
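The generation of the synthetic observations for the realistic model-data mismatch case can be sketched as follows. Here H, s_true, and mdm_variance are hypothetical names standing in for the STILT-based sensitivity (Jacobian) matrix, the "true" CASA-GFED fluxes, and the per-observation mismatch variances (Tables S1 and S2); they are not quantities defined in the text.

```python
# Sketch of synthetic data generation: transport the "true" fluxes to concentration
# space and add white noise with the prescribed model-data mismatch variances.
import numpy as np

def synthetic_observations(H, s_true, mdm_variance, seed=0):
    """z = H @ s_true + eps, with eps ~ N(0, diag(mdm_variance))."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, np.sqrt(mdm_variance))
    return H @ s_true + eps
```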

[42] The a priori flux covariance parameters, which describe the correlations between the estimated fluxes from the inversion in both time and space, were estimated using Restricted Maximum Likelihood [Kitanidis, 1995; Michalak et al., 2004] implemented as detailed by Gourdji et al. [2010]. The parameters were optimized using the synthetic atmospheric data from the ½ CL network and were applied to all three cases. Data choices for the hypothetical towers follow the established approach of using only well-mixed midafternoon data for short towers (<100 m), including all the new proposed towers, and all 24 h of data for the tall towers (>300 m) in the existing network. The fluxes were estimated on a 1° by 1° spatial grid with a 3 hourly temporal resolution. For more information regarding the details of the setup choices, see Gourdji et al. [2010, 2012].

[43] The a posteriori inversion-recovered fluxes were aggregated temporally to the entire month of July and spatially to the NA domain and to seven biomes modified from Olson et al. [2001] (Figure 5), to analyze the performance at the continental and regional scales. At the continental scale, improvements resulting from the augmented networks are evident, as seen by the reduction in the actual absolute error of the recovered flux estimates (Figure 6), as well as the estimated reduction in the a posteriori uncertainties (Table 1). The reported a posteriori uncertainty reductions were determined by comparing the posterior uncertainty variances between the inversions using the existing and expanded networks. Results were consistent between the two model-data mismatch cases, and the discussion that follows primarily focuses on the inversion with minimal model-data mismatch.
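The uncertainty reductions reported in Table 1 follow the simple variance comparison sketched here (an assumed form of the metric, consistent with the description above).

```python
# Sketch of the percent reduction in a posteriori uncertainty (variance) relative
# to the 2008 network, as reported in Table 1.
def percent_uncertainty_reduction(var_expanded, var_2008):
    """A posteriori variances for the same aggregated region and averaging period."""
    return 100.0 * (1.0 - var_expanded / var_2008)
```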

Figure 6.

Synthetic data inversion results averaged for the month of July and aggregated to seven biomes and North America. Three network configurations are shown with increasing network size as well as the true flux. The closed symbols represent the minimal model-data mismatch case while open symbols indicate the realistic model-data mismatch case. The error bars represent the 95% uncertainty range.

Table 1. Percent Reduction in Biome-Scale A Posteriori Uncertainty, Expressed as a Variance, Relative to the A Posteriori Uncertainty Associated With the 2008 Network^a

Biome                                              1 CL (%)    ½ CL (%)
Tropical and Subtropical                           69 (57)     80 (79)
Temperate Broadleaf and Mixed Forests              1 (1)       28 (28)
Temperate Coniferous Forests                       18 (15)     71 (72)
Boreal Forest & Taiga                              28 (24)     65 (62)
Tundra                                             44 (40)     82 (81)
Temperate Grasslands, Savannas, and Shrublands     4 (3)       47 (49)
Desert and Xeric Shrublands                        19 (11)     61 (64)
North America                                      47 (39)     79 (77)

^a Values outside parentheses represent the minimal model-data mismatch case; values inside parentheses represent the realistic model-data mismatch case.

[44] The total “true” CASA sink for NA in July is 671 TgC. Using the 2008 network, the sink estimate for the NA flux is 727 TgC, an overestimate of the sink by 8%. The 1 CL network, with an additional eight new towers, improves the estimate of the sink to 685 TgC, bringing the best estimate to within 2% of the truth. The ½ CL network, which adds a further 35 towers, yields an estimate of the NA sink of 674 TgC, within 0.5% of the true flux. Additionally, the 1 CL and ½ CL networks reduce the a posteriori uncertainty on the continental estimates by 47% and 79%, respectively, relative to the a posteriori uncertainty associated with the 2008 network (Table 1). Overall, at the continental scale, the 1 CL and ½ CL networks provide relatively strong constraints for a monthly carbon budget over NA.

[45] At the biome scale, the greatest overall improvements are found in the poorly constrained biomes, namely the Tropical and Subtropical region, the Tundra, and the Desert and Xeric Shrublands. The a posteriori uncertainties relative to the 2008 network show significant reductions: 69% in the Tropics and 44% in the Tundra for the 1 CL network, and over 80% in both biomes for the ½ CL network (Table 1). With the ½ CL network expansion, the poorly constrained regions begin to approach the same a posteriori uncertainty ranges as the well-constrained biomes. The Tundra region shows the largest reduction in absolute error. Using the 2008 network, the sink in the Tundra region is estimated to be 92 TgC, an overestimate of 67% relative to the true CASA sink of 55 TgC. This error is reduced to within 34% (1 CL) and 18% (½ CL) of the truth with the two proposed networks. Only modest improvements are found in the biomes that were considered well constrained with the 2008 network, e.g., the Temperate Broadleaf and Mixed Forests, Boreal Forest and Taiga, and Temperate Grasslands, Savannas, and Shrublands. This is expected, as the new towers mostly inform the under-constrained biome regions. We also addressed the concern over the close placement of towers (section 3.3) by examining the effect on the flux estimates of removing the tower in Chiapas, Mexico, one of the closely placed towers in the Tropical and Subtropical region. The removal of the tower increases both the a posteriori uncertainty (by 17%) and the absolute error (by 8%) of the regional Tropical and Subtropical estimate, but has only minor effects on the continental uncertainty. Thus, towers in under-constrained regions do provide useful information at the biome scale even when placed in close proximity.

[46] While many of the biome estimates converge to the true monthly flux with the ½ CL network, the estimate for the Temperate Coniferous Forests biome does not. Using the ½ CL network, the estimate for the Temperate Coniferous Forests sink is 73 TgC, an underestimate of approximately 11% relative to the true CASA sink. This error is likely a consequence of defining and aggregating noncontiguous subregions within the domain. Whereas the addition of towers in previously unconstrained regions improves the recovery of large-scale spatial patterns, small-scale features may still be difficult to recover without a much denser tower network (~¼ CL, as seen in the MCI) or the addition of auxiliary environmental variables to the inversion [e.g., Gourdji et al., 2012]. Because the GIM inversion used here does not include auxiliary variables, it tends to smooth sharp features and may smear fluxes across discrete boundaries. For example, the overestimation of the sink in the Tundra is balanced by an underestimation of the sink in the neighboring Temperate Coniferous Forests region. This is therefore primarily a limitation of the specific implementation of GIM used here, rather than of the proposed network. Nevertheless, over large domains, the overestimates and underestimates balance out to bring the best estimate for the entire domain closer to the truth.

[47] The 1 CL and ½ CL network scenarios offer examples as to what improvements could be gained from both a modest and more comprehensive network expansion based on a CL-derived coverage criterion. The synthetic data inversions also point to a possible overestimation of the NA summer sink when using the 2008 network. If the goal of the current CO2 monitoring network is to constrain continental-sized regions, a coverage regime of approximately 1 CL may be sufficient, if atmospheric transport model errors can be reduced. If, however, the goal is to accurately resolve flux estimates at regional biome-sized scales, the measurement network would likely need to provide a coverage condition approaching or possibly exceeding ½ CL.

4 Conclusions

[48] A primary purpose of an atmospheric CO2 monitoring network is to produce accurate flux estimates at relevant spatial and temporal scales. With this ultimate goal in mind, we propose a network design approach based on the premise that the ability to produce accurate flux estimates is related to the network's ability to capture the heterogeneity of the atmospheric CO2 field. The CL approach does not directly incorporate an inverse-modeling framework in the design process, but instead integrates information previously underutilized for the CO2 network design problem, namely the spatial scales of variability of atmospheric CO2 concentrations. This method represents a computationally efficient approach that offers flexibility beyond that afforded by existing network design tools. The CL approach also benefits from independence from many of the assumptions (e.g., flux estimation resolution, a priori error statistics, statistical and numerical solution approach, etc.) inherent to inverse-modeling studies. While the CLs are dependent on the underlying flux and atmospheric transport models used to generate the surrogate concentration field, the approach makes it possible to assess the sensitivity to different models with minimal computational cost relative to existing approaches.

[49] The CL approach is proposed as a tool for informing the evaluation and design of atmospheric CO2 monitoring networks. To this aim, a fractional CL coverage criterion was developed using the minimum CLs observed at each location throughout the year and used to evaluate the 2004–2008 NA network expansion as an example application. The method appears robust to inter-annual model differences, yet, as with other techniques, it is sensitive to differences between models. The coverage implied by the CL criterion was also compared with the 2004 network sensitivity footprints to elucidate the similarities and differences between the two metrics. While the coverage based on sensitivity footprints is derived solely from atmospheric transport and the coverage based on CLs reflects the variability of both the underlying fluxes and atmospheric transport, the majority of the coverage regions identified by the two metrics coincide.

[50] The 2008 network was used as a baseline for proposing expanded networks that provide improved coverage, especially in poorly sampled regions such as Canada, Mexico, and portions of the U.S. (Alaska, Florida). A simple algorithm was proposed to augment the network to provide full coverage over NA under a chosen fractional CL criterion. The network augmentations called for eight additional towers for a minimal 1 CL network and 35 further towers for a stricter ½ CL network. The placement of towers was consistent with areas shown to lack data for CO2 flux estimation [e.g., Gourdji et al., 2012]. The total number of towers was also similar to previously proposed goals for NA coverage [Tans et al., 1996].

[51] The augmented networks were further evaluated through a synthetic data inversion study and were shown to substantially improve flux estimates and reduce a posteriori uncertainties relative to the 2008 network. The results from the one-month inversions showed reductions in the absolute error of the continental flux estimate from 8% to 2% and 0.5% for the 1 CL and ½ CL networks, respectively, and reductions in the continental-scale monthly a posteriori uncertainty of 47% and 79% relative to that of the 2008 network. At regional scales, flux estimates for under-constrained biomes also showed large improvements in most regions. The ½ CL expansion reduced the monthly biome-scale a posteriori uncertainties in the poorly constrained regions by upwards of 82% relative to the 2008 network. Results from the synthetic data inversions support the potential of the CL criterion to inform network design studies.

[52] Overall, the CL approach shows promise as a tool to inform network evaluation and design independently of a specific inverse-modeling or data-assimilation framework, and can be used in concert with other expert information in the survey of candidate monitoring locations and design iterations. Additionally, the design of a monitoring network is ultimately dependent on the specific goal or question; thus, the actual fractional CL criteria could change for differing applications, as shown by the observed ¼ CL coverage of the MCI region in 2008. Overall, the CL-defined network analysis and design method presented here offers an exploratory tool that explicitly incorporates a quantification of spatial variability into the solution of the CO2 monitoring network evaluation and design problem.


Acknowledgments

[53] This manuscript is based upon work supported by the National Aeronautics and Space Administration under grant NNX12A890G. Partial support for Yoichi Shiga was also provided by the University of Michigan Rackham Merit Fellowship. The ECMWF-ORCHIDEE model simulation was done as part of the MACC project, which was funded by the European Commission under the Seventh Research Framework Program, contract number 218793. We would like to acknowledge Anna Agusti-Panareda for her support with the ECMWF-ORCHIDEE model simulations; Abhishek Chatterjee, Dorit Hammerling, Kim Mueller, Sharon Gourdji, Deborah Huntzinger, and Vineet Yadav for their invaluable expertise, patience, and assistance; Arlyn Andrews for her insights and expertise regarding the in-situ CO2 measurement network; Thomas Nehrkorn, John Henderson, and Janusz Eluszkiewicz for completing the WRF simulations; and three anonymous reviewers for valuable input on the manuscript.