Estimation of plume distribution for carbon sequestration using parameter estimation with limited monitoring data

Authors


Abstract

[1] This study develops and evaluates an integrated methodology, combining parameter zonation, efficient global optimization, and multiple aquifer realizations, that gives a useful forecast of the pressure distribution and the CO2 plume migration through a heterogeneous storage formation over time, using only a few monitoring locations and a feasibly short amount of computing time, while dealing with data error. Geological characteristics of the example application are similar to a CO2 sequestration pilot test conducted in a fluvial-deltaic geological setting. In the example, the CO2 injection problem is simulated with the TOUGH2 code, a numerical simulator for multiphase (gas and aqueous), multicomponent flow and transport. The inverse problem is difficult because the GCS parameter estimation problem has multiple local minima and the simulation is computationally expensive. The efficient surrogate response surface global optimization algorithm “Stochastic RBF” (stochastic radial basis function) is used to calibrate the model parameters. Results are averaged over three saline aquifer realizations. In the third numerical experiment, using only pressure data, the CO2 plume could be determined with an average correlation to the actual plume of up to R2 = 0.916 (for the current plume at t = 1.5 years) and an average R2 = 0.80 (for the forecasted plume at t = 7.5 years, estimated using only the first 1.5 years of monitoring data). Adding gas saturation data improved the 6 year forecast somewhat (increasing the average to R2 = 0.85) but was not significantly helpful in estimating the current plume. Both our inverse methodology and our findings are broadly applicable to GCS in heterogeneous sedimentary formations.

1. Introduction and Literature Review

1.1. Introduction

[2] Geological carbon sequestration (GCS) has been proposed as a means to reduce greenhouse gas emissions. The Department of Energy's (DOE) National Energy Technology Laboratory [2007] estimated that North America could store 900 years' worth of carbon dioxide at current North American emission levels. The most widely available sites are saline aquifers. Supercritical CO2 is injected at a depth of at least 800 m underneath a “caprock” of very low permeability. In the aquifer, CO2 density is lower than that of the surrounding brine at typical subsurface pressure and temperature conditions. There is a small risk that, as injection proceeds, the CO2 plume may reach a fracture in the caprock or abandoned wells with faulty seals, in which case the CO2 can rise, potentially contaminating fresh water aquifers or leaking into the atmosphere. Industrial-scale carbon sequestration will generate a plume that will extend over a horizontal area covering many square kilometers, making it very difficult and costly to monitor for leaks.

[3] Figure 1 shows a schematic diagram of the sort of CO2 plumes that may develop in the storage formation for different permeability conditions. The great variety demonstrates that it is crucial to have a good understanding of the permeability distribution in order to predict how and where the CO2 plume moves, in order to focus attention on weak areas (like abandoned wells or known fault lines) that the CO2 plume reaches. One means of estimating plume movement is through monitoring wells, but since they are expensive to construct, plume estimation needs to be based on data from relatively few sampling points.

Figure 1.

Schematic diagrams of CO2 plumes (light grey) for various permeability distributions: (a) homogeneous low permeability; (b) homogeneous high permeability; (c) homogeneous high permeability in a sloping aquifer; (d) heterogeneous permeability with continuous shale layers; and (e) heterogeneous permeability with discontinuous shale lenses. The black bar indicates the injection interval in the lower half of the formation.

[4] The goal of this paper is to present an effective and computationally efficient method for obtaining estimates of CO2 and pressure fields from very sparse monitoring data and to demonstrate its effectiveness on a realistic GCS case for which the amount of data available would be spatially very limited. The major steps are as follows: (a) use available geological data to the extent possible to identify permeability zones and ranges of parameter values to describe likely permeability ranges within each zone, (b) formulate one or more models of the site that use a process-based simulation model with a relatively low number of parameters for the zones, (c) estimate the values of the parameters using the time series of pressure data (and possibly CO2 gas saturation) over a short period from only a few monitoring locations using an efficient global optimization method, (d) use the calibrated model to then estimate the current and future pressure field and location of CO2 plume, and (e) utilize this information to determine the region where risk assessment should be focused and consider if there is any evidence that the system is not working properly.

[5] The approach of using a predictive model with relatively few parameters is necessary because the very limited amount of available data in a GCS situation cannot statistically support the estimation of a large number of spatially distributed parameters. In this paper the model is the TOUGH2 simulation code with the ECO2N module, which requires about 2 h/simulation. Therefore, an optimization algorithm for calibration cannot require a large number of simulations. The computational efficiency is achieved in this paper with the use of a surrogate response surface global optimization algorithm, which reduces the number of computationally expensive simulations required to find parameters at the global minimum of a sum of squared errors (maximum likelihood) function. The sparsity of the monitoring data is addressed by using preexisting geological data and by adopting a zonation strategy that is reasonable given the paucity of spatially distributed monitoring data. The resulting calibrated model is then used to estimate the current and future CO2 plumes for risk assessment. Pressure data are much easier to obtain than CO2 gas saturation data, so we examine the differences in the accuracy of estimation with and without CO2 gas saturation data. Note that by CO2 gas saturation we mean the amount of free-phase CO2 per unit volume, not dissolved CO2; strictly speaking, the CO2 will be supercritical, but for brevity it is referred to as gas.

[6] There are multiple issues that make this a very difficult problem: (a) the lack of data requires a model with relatively few parameters; (b) calibration of the GCS model is a problem with multiple local minima [Espinet, 2012], which then requires a global optimization method; and (c) the computational cost of the simulation model forces us to look for a method that requires few simulations, which rules out popular heuristic global optimization methods like genetic algorithms and so requires a more efficient global optimization method like one of the surrogate surface optimization (SSO) methods used here.

[7] To our knowledge, there is no previous application in the peer-reviewed journal literature of this inverse methodology (including global optimization) to a computationally expensive (e.g., at least 1 h/simulation), multimodal GCS monitoring and estimation problem. The global optimization method we use is mathematically proven to converge asymptotically to the global minimum [Regis and Shoemaker, 2007] and has been extended to solve problems with up to 150 decision variables [Regis and Shoemaker, 2013].

1.2. Literature Review

[8] Weir et al. [1996] and Pruess and Garcia [2002] have carried out numerical simulations of CO2 injection in homogeneous formations, and Doughty and Pruess [2004], Juanes et al. [2006], and Flett et al. [2007] and others numerically simulated three-dimensional heterogeneous formations for CO2 sequestration. The impact of heterogeneity and anisotropy on CO2 plume development, trapping mechanisms, and storage capacity have been examined in Doughty et al. [2001], Doughty [2010], and Green and Ennis-King [2010]. The focus of these papers is on physical processes occurring during CO2 storage and the impact of various characteristics of the geological formation on CO2 sequestration capabilities. These detailed numerical models are known as process-based forward models. However, in each case, the geology is an input and is always treated as known.

[9] In real cases however, the geology (e.g., the spatial distribution and permeabilities of various facies) is not known precisely. This is especially true for saline formations that have not previously had any economic value and so have not been thoroughly characterized. This situation makes inverse methods, in which monitoring data are used to estimate hydrogeological properties, appealing.

[10] There is a long history of parameter estimation in petroleum engineering, and the literature has been expanding at an increasing rate over the last 20 years. We acknowledge the work of Oliver and Chen [2011] for part of this literature search. An overview of algorithms and applications can be found in Makhlouf et al. [1993] and a more recent overview in the book by Oliver et al. [2008].

[11] Since it is often the case that the available data do not support estimating large numbers of parameters, we need a way to reduce the number of parameters. Approaches include using a linear combination of the original parameters or a zonation coupled with a sensitivity-based approach (as chosen in this study), similar to the adaptive multiscale estimation approach [Grimstad et al., 2001]. Zonation is the oldest way to parameterize models, and the first studies can be found in Jacquard and Pain [1965] and Shah et al. [1978]. Numerous studies can be found, each with a different mathematical approach: Rodrigues [2006], Aanonsen and Eydinov [2006], Cominelli et al. [2007], Zandvliet et al. [2008], Jafarpour and McLaughlin [2009a], and Bhark et al. [2011]. A third way to parameterize is based on prior knowledge. Approaches include the use of pilot points [Wen et al., 2006], spline interpolation [Lee et al., 1986], wavelets [Sahni and Horne, 2005], or sparsity information [Jafarpour et al., 2010]. More complex methods and applications can be found in Liu and Oliver [2005], Zhao et al. [2008], and Agbalaka and Oliver [2008].

[12] Once the parameters to be estimated are defined, optimization algorithms can find the parameter set that gives the best fit. It is important to note that the optimization problem of minimizing the sum of squared errors of a nonlinear model almost certainly has multiple local minima (i.e., is “multimodal”). For example, the sum of squared errors of a quadratic model (the simplest example) is a fourth-order polynomial, which can have two minima in one dimension and as many as 2^N in N dimensions. In particular, in our many optimization runs of the carbon sequestration model for different formations and numbers of parameters, we found many local minima, which is corroborated by Finsterle [2005].
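The one-dimensional claim above can be checked directly. The snippet below (an illustration only; the model and observation are hypothetical) fits the one-parameter model z(p) = p^2 to a single observation d = 4, so the sum of squared errors f(p) = (d − p^2)^2 is a fourth-order polynomial with two separate minima:

```python
import numpy as np

# Sum-of-squared-errors for the simplest nonlinear model z(p) = p**2
# fit to a single observation d = 4.  f(p) = (d - p**2)**2 is a
# fourth-order polynomial with two distinct minima, at p = -2 and p = +2,
# separated by a local maximum at p = 0.
def sse(p, d=4.0):
    return (d - p**2) ** 2

grid = np.linspace(-3.0, 3.0, 601)
vals = sse(grid)

# Locate interior grid points that are strict local minima of f.
minima = [grid[i] for i in range(1, len(grid) - 1)
          if vals[i] < vals[i - 1] and vals[i] < vals[i + 1]]
print(minima)  # two interior minima, near -2 and +2
```

A local search started left of zero converges to the p = −2 basin and one started right of zero to p = +2, which is exactly why a single gradient-based run is unreliable for such objectives.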

[13] In petroleum reservoir engineering, different approaches for optimization in parameter estimation have been suggested. The oldest way to calibrate a model is manually, but manual calibration can take a great deal of time and does not necessarily result in better prediction ability. A fairly recent example is Agarwal et al. [2000], in which an entire year was needed to successfully match 25 years of production data. Doughty et al. [2008] and Daley et al. [2011] manually calibrated numerical models to well logs, pressure measurements, seismic data, and fluid samples, using trial and error to best match field measurements. Manual calibration only allows a small number of parameters to be estimated. Automatic calibration has become more widely used and can be categorized in several ways, depending on the type of optimization algorithm used.

[14] Heuristic optimization methods like simulated annealing, genetic algorithms or fuzzy programming have been used on multiphase flow problems [Kobayashi et al., 2008; Serhat and Demiral, 1998; Romero and Carter, 2001; Schulze-Riegert and Ghedan, 2007; He et al., 2008]; the difficulty with heuristics like simulated annealing and genetic algorithms is that they require a large number of simulations of the multiphase simulation model (typically thousands) to get a good answer for a problem with even a few (e.g., 10) real-valued parameters as was indicated earlier.

[15] Our GCS model takes 2 h/simulation, so running it thousands of times is undesirable and would be even less feasible for more complex multiphase models. For example, Kobayashi et al. [2008] report a computing time of 292 days for a real-world model (7 CPU hours to run the model and 1000 iterations for the optimization).

[16] One method for history matching is the gradual deformation technique, which uses a linear combination of several reservoir geological properties and optimizes the coefficient of this combination in order to improve the history match. Several studies have used it: Hu et al. [2001], Caers [2003], and Caers and Hoffman [2006] (probability perturbation method is used in conjunction with the gradual deformation method to generate realizations that improve data match).

[17] Gradient optimization approaches require that derivatives be calculated. Examples include the adjoint system approach used by Li et al. [2003] and Eydinov et al. [2008]. Streamline-based methods compute approximate sensitivities [Kulkarni and Datta-Gupta, 2000; Datta-Gupta et al., 2001] and have more recently been applied by Stenerud et al. [2007] and Oyerinde et al. [2009]. The main methods for gradient-based optimization are the Levenberg-Marquardt formulation and the conjugate gradient or quasi-Newton approaches. For the Levenberg-Marquardt formulation, studies and algorithm development include Zhang et al. [2003], Tonkin and Doherty [2005], Finsterle and Kowalsky [2011], and Finsterle and Zhang [2011]. For the conjugate gradient or quasi-Newton approach, Cheng et al. [2003] and Oyerinde et al. [2009] are examples. Derivative-free local optimization methods also exist, such as optimization by radial basis function interpolation in trust regions (ORBIT) [Wild et al., 2008], which have the advantage of not requiring derivatives while retaining the same theoretical convergence properties as derivative-based local optimizers under the same assumptions (as proven in Wild and Shoemaker [2011]). Gradient-based algorithms are usually local optimization algorithms unless they have a restart option. Some other studies using linear or nonlinear local optimization algorithms are by Hazra and Schulz [2002], Bieker et al. [2006], Jansen [2011], Senger et al. [2003], and Suwartadi et al. [2010]. All local optimization methods will stop at the first local minimum found, which depends on the initial starting value of the search. Hence, there is a good chance of not finding the best solution unless the initial starting value is close to the best solution, which is unlikely for our problem.

[18] The ensemble Kalman filter can deal with a large number of variables and offers uncertainty analysis features. A downside is that it is not a global optimization method, which means that it will not always find the best parameter estimate if there are local minima. References to the ensemble Kalman filter can be found in Zafari and Reynolds [2005] and Aanonsen et al. [2009]. Reservoir history matching studies using the ensemble Kalman filter can be found in Skjervheim et al. [2005], Arroyo-Negrete et al. [2006], Chen and Oliver [2010], and Liu and Oliver [2005]. However, one of the downsides of the ensemble Kalman filter is that it is “not well suited for variables with multimodal distributions unless transformations are possible” [Oliver and Chen, 2011]. This statement is corroborated by Jafarpour and McLaughlin [2009a] when they write, “If the ensemble replicates are derived from training images that do not describe the channel geometry properly, the Kalman filter has difficulty identifying the correct permeability field,” which means that the ensemble Kalman filter could have trouble finding the correct answer unless the initial starting values in the search are close to the (unknown) correct answer.

[19] The last category of optimization methods is derivative-free global optimization methods, i.e., global optimization methods using surrogate response surfaces (e.g., stochastic radial basis function (RBF) from Regis and Shoemaker [2007] and efficient global optimization (EGO) from Jones et al. [1998]). These methods, in addition to being global optimization methods (and therefore well suited for reservoir history matching), are also advantageous from a computational standpoint because they are built to reuse information from previous model runs and therefore usually require fewer simulations than traditional methods. In this study, we therefore focus on the stochastic RBF method cited earlier. It is worthwhile to cite Horowitz et al. [2011], who used EGO to optimize up to 24 parameters. However, the simulation run times and number of simulations required are not mentioned, making it difficult to use this study as a benchmark. In addition, the results from the local and global optimization are the same, implying that the application is a special case that does not indicate performance on the more general case with multiple local minima. Espinet and Shoemaker [2013] compare the efficacy of five different optimization algorithms for calibration of a multiphase GCS problem for three different reservoir configurations. They find that only their simplest example, which is homogeneous, has a single local minimum. The more realistic heterogeneous examples are both multimodal. The surrogate global optimizers, including stochastic RBF, worked best on the multimodal problems.
This earlier paper addresses the narrower issue of computational efficiency of alternate algorithms for calibration of model parameters on a deterministic problem, whereas the present paper differs by considering the entire process of designing and evaluating a modeling and monitoring system (including well location and types of data) in terms of its ability to estimate and forecast plumes, given limited data, data error, and uncertainty in heterogeneous-aquifer permeability patterns (represented by multiple realizations).

[20] Another approach to dealing with optimization of computationally expensive functions is to use a standard optimization method operating on a “lower fidelity model” designed to mimic the original computationally expensive model. The lower fidelity model is simplified in some way (e.g., by using a coarser grid for the partial differential equations) so that it is fast to compute. In some cases response surfaces are used to create a surrogate (faster, lower fidelity) forward model. However, lower fidelity models can lead to inaccuracies in the simulation predictions. For example, Vasco and Datta-Gupta [1997] propose a method for integrating field production history into reservoir characterization and use a faster (lower fidelity) three-dimensional streamline simulator to carry out the inversion. However, given the large size of current geological models, this approach requires too many simulations. Tran et al. [1999] propose using a (lower fidelity) coarse-scale inversion to reduce the number of parameters with the sequential self-calibration method [Gomez-Hernandez et al., 1997; Hosseini et al., 2011] and then applying downscaling to capture small-scale heterogeneities. However, the computational costs as well as the amount of data necessary to carry out the inversion are not discussed.

[21] Our approach with surrogate surface optimization (SSO) is different from the “lower fidelity model” because it is designed to save computational effort while computing the full fidelity forward model. SSO reduces computation by reducing the number of simulations required to reach an optimal value, which the SSO algorithm does by building an iterative approximation of the objective function value (e.g., the goodness of fit for a set of parameters) using the results from each prior simulation. In the surrogate surface optimization approach the approximation is changed in each iteration of the optimization. However, it is possible to combine our method with a lower fidelity model to further reduce computational effort.
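The iterative surrogate loop described above can be sketched in a few lines. This is a conceptual illustration only, not the Stochastic RBF algorithm of Regis and Shoemaker [2007]: expensive_objective is a cheap stand-in for a 2 h TOUGH2 run, the candidate-point rule is simplified, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_objective(p):
    # Stand-in for the expensive forward model; any multimodal function works.
    return float(np.sin(3 * p[0]) + (p[0] - 0.6) ** 2 + (p[1] - 0.3) ** 2)

def rbf_fit(X, y):
    # Cubic RBF interpolant s(x) = sum_j w_j * ||x - x_j||^3 through all
    # previously simulated points; small ridge term for numerical stability.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1) ** 3
    w = np.linalg.solve(D + 1e-10 * np.eye(len(X)), y)
    return lambda x: float((np.linalg.norm(x - X, axis=-1) ** 3) @ w)

lo, hi = np.zeros(2), np.ones(2)
X = rng.uniform(lo, hi, size=(6, 2))              # initial design points
y = np.array([expensive_objective(p) for p in X])

for _ in range(20):                               # each iteration = 1 simulation
    s = rbf_fit(X, y)                             # rebuild surrogate surface
    best = X[np.argmin(y)]
    # Candidate points: random perturbations around the current best point;
    # the surrogate (cheap) picks which single candidate to simulate next.
    cand = np.clip(best + 0.2 * rng.standard_normal((100, 2)), lo, hi)
    p_next = cand[np.argmin([s(c) for c in cand])]
    X = np.vstack([X, p_next])
    y = np.append(y, expensive_objective(p_next))

print(X[np.argmin(y)], y.min())
```

The key property is that only one expensive simulation is run per iteration; all exploration happens on the cheap interpolated surface, which is refit after every new simulation, matching the description of SSO above.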

[22] This paper focuses on estimating CO2 plume evolution based on limited monitoring data with use of a process-based model, which is the same numerical code that would be used for forward simulation. This involves using numerical simulations with optimization to solve the inverse problem, i.e., to characterize the unknown geology. Then the calibrated model is simulated forward in time with appropriate parameters to predict the movement of the CO2 plume. We focus on optimization methods that are computationally efficient for process-based simulation models. Model calibrations for geological carbon sequestration models have been carried out by Bickle et al. [2007], but the model was analytical (i.e., not numerical simulations) and specific to the studied formation. Our optimization algorithm makes the estimation process fully automatic and more efficient in terms of both human time and computer time. The automatic calibration process can be repeated many times for updating as new monitoring data become available.

[23] In section 2, we pose the forward problem formulation, which includes describing the numerical simulator, the geological model, and its implementation in a numerical model. In section 3, we explain the inverse methodology to tackle the problem, which includes the choice of observation data and formulation of the objective function, the choice of parameters and optimization algorithm, and finally, the setup of the calibration problem and method to measure the goodness of the results. In section 4, we present our results. Section 5 provides further discussion of the results and how they may be applied to other sites.

2. Formulation of Forward Problem

[24] This section presents the simulator, the three-dimensional geological model, and the numerical model for the forward problem.

2.1. Numerical Model

[25] The numerical model is 100 m thick and consists of 10 layers, with 875 grid blocks/layer, arranged in a 35 × 25 rectangular grid. Figures 2-4 show the central 1 km × 1 km portion of the grid, within which the CO2 plume is expected to remain throughout the simulation. Lateral grid-block dimensions are 50 m × 50 m in this central portion, except in the vicinity of the injection well, where grid spacing is finer to resolve the higher pressure gradients. In the y direction, the grid has closed lateral boundaries. In the x direction, the grid coarsens and extends about 7 km to constant-pressure boundaries, to accommodate the pressure response, which extends much farther than the CO2 plume itself. The top and bottom boundaries of the model are closed, to represent perfectly sealing cap and bedrocks.

[26] Usually sandstone aquifers are bounded above and below by shale, which is simulated by a no-flow boundary in our case. In reality, shale confining layers may have nonnegligible permeability, which will enable pressure release, but on the relatively short time scales of the present problem (7.5 years), such effects are very small. Lateral boundaries are imposed far enough away from the injection location (0.5 km for closed boundaries and 3.5 km for constant-pressure boundaries) such that they have almost no effect on the CO2 distribution and produce only a constant shift on pressure measurements. Thus, they have no significant effect on the calibration. In fact, we have done sensitivity studies varying the outer boundary conditions, and the essential behavior of the CO2 plume is unchanged whether the model is a rectangular channel or is open on all sides. Initial conditions for the simulations are a single-phase liquid brine with a hydrostatic pressure gradient and constant temperature and salinity.

[27] All model properties and initial conditions are given in Tables 1a and 1b. The TOUGH2 simulation assumes that CO2 is injected at a constant rate of 100,000 tons/yr for 7.5 years through a single injection well that injects only in the lower half of the formation. TOUGH2 allows us to check the total amount of CO2 present in the formation. We find that the amount of CO2 present at t = 7.5 years is constant for all simulations within 0.007%, guaranteeing that no numerical mass conservation error has developed during computation.

Table 1a. Geological Model Properties

Simplified Conceptualization | Depositional Setting      | Facies       | Porosity | Horizontal Permeability | Vertical Permeability | Residual Liquid Saturation
Nonshaly layer               | Barrier bar               | Barrier core | 0.32     | 700 mD                  | 700 mD                | 0.15
                             |                           | Washover     | 0.29     | 200 mD                  | 50 mD                 | 0.25
                             |                           | Shale        | 0.10     | 0.001 mD                | 0.0001 mD             | 0.30
Nonshaly layer               | Distributary channel      | Channel-2    | 0.30     | 400 mD                  | 100 mD                | 0.20
                             |                           | Splay-2      | 0.30     | 250 mD                  | 100 mD                | 0.25
                             |                           | Shale        | 0.10     | 0.001 mD                | 0.0001 mD             | 0.30
Shaly layer                  | Interdistributary bayfill | Channel-1    | 0.30     | 400 mD                  | 100 mD                | 0.20
                             |                           | Splay-1      | 0.28     | 150 mD                  | 30 mD                 | 0.30
                             |                           | Shale        | 0.10     | 0.001 mD                | 0.0001 mD             | 0.30
Table 1b. Initial Conditions^a

Dimensions                                  | 1 km × 1 km for the fine grid and 100 m thickness
Initial pressure                            | Hydrostatic, average: 150 bars
Initial salinity                            | 0.03 mass fraction of salt in brine
Temperature                                 | 60°C isothermal
Injection rate of CO2                       | 100,000 tons/yr or 3.16 kg/s
Other parameters                            |
van Genuchten (m)                           | 0.457
Residual gas saturation                     | 0.01^a
Shale capillary pressure strength (bars)    | 200
Nonshale capillary pressure strength (bars) | 0.20

^a Recall that only the CO2 injection period is simulated; a higher value would be expected for the postinjection period.
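The mass-balance check mentioned above amounts to comparing the simulator's end-of-injection CO2 inventory with the injected total (rate × time). A minimal sketch, using the rate and duration from Table 1b; the simulated inventory below is a placeholder, since the real value would be read from TOUGH2 output:

```python
# Injected total from Table 1b: 3.16 kg/s for 7.5 years.
rate_kg_s = 3.16
seconds = 7.5 * 365.25 * 24 * 3600
injected_kg = rate_kg_s * seconds            # ~7.5e8 kg

# Placeholder for the CO2 inventory reported by the simulator at t = 7.5 yr;
# here set to a value 3e-5 (0.003%) away from the injected total.
simulated_kg = injected_kg * (1 + 3e-5)

# The paper's observed agreement is within 0.007%, i.e., 7e-5 relative.
rel_error = abs(simulated_kg - injected_kg) / injected_kg
assert rel_error < 7e-5
print(rel_error)
```

Any simulation whose relative error exceeded this tolerance would indicate a numerical mass conservation problem in the run.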

3. Inverse Methodology

[28] In this section, we formulate the objective function, present the innovative optimization algorithm used to minimize it, and explain the generation of the observation data and the choice of locations where they are obtained.

3.1. Objective Function

[29] Generally, calibration of parameters can be formulated as a box-constrained minimization problem as follows:

  min_p f(p)    (1)

subject to

  pmin ≤ p ≤ pmax    (2)

where p = (p1, …, pM) denotes a vector of model parameters, and pmax and pmin are vectors of the upper and lower bounds for the parameters, which usually come from the physically feasible range of the parameters, prior information, and expert experience. In equation (1), f(p) is the error function in calibration procedures as described in equation (3). We note that f(p) is expected to be multimodal (have many local minima), which requires the use of a global optimization method.

[30] Our objective function is a classical weighted sum-of-squares formulation, where we take the sum of the squares of the difference between the observed data and the simulated data at different times and locations. This approach is valid because the measurement error is normally distributed and independent as explained later. This formulation might not be adequate for measurements with a skewed error term. Hence, we can write

  f(p) = Σ_{i=1}^{n} [(ẑi − zi(p)) / σi]²    (3)

where p is the vector of input parameter values that we are trying to calibrate, for example, p = (p1, …, pM), where pj is the permeability of facies j (each facies is represented by a color in Figures 2-4); ẑi is one of n actual data points at a discrete point i in space and time; and zi(p) is the corresponding simulator output variable, generated with input parameter p. The weighting coefficient σi is the standard deviation of the measurement error distribution associated with the observation ẑi. We choose σi to be equal to 7000 Pa for pressure measurements and 0.01 for gas saturation measurements, as explained in section 3.3.
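The weighted sum-of-squares of equation (3) can be written as a short helper. This is a sketch with illustrative observation values, not part of TOUGH2 or Stochastic RBF; the σi weights put pressure residuals (Pa) and gas-saturation residuals (dimensionless) on a common scale:

```python
import numpy as np

# Equation (3): f(p) = sum_i ((z_obs_i - z_sim_i(p)) / sigma_i)^2
def objective(z_obs, z_sim, sigma):
    z_obs, z_sim, sigma = map(np.asarray, (z_obs, z_sim, sigma))
    return float(np.sum(((z_obs - z_sim) / sigma) ** 2))

# Two pressure observations (sigma = 7000 Pa) and one gas-saturation
# observation (sigma = 0.01); values are illustrative only.
z_obs = [15.2e6, 15.9e6, 0.35]
z_sim = [15.2e6 + 3500.0, 15.9e6 - 7000.0, 0.33]
sigma = [7000.0, 7000.0, 0.01]
print(objective(z_obs, z_sim, sigma))  # ≈ 0.25 + 1.0 + 4.0 = 5.25
```

Dividing by σi before squaring means a 7000 Pa pressure miss and a 0.01 saturation miss each contribute one unit to f(p), which is why mixed data types can be summed in one objective.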

[31] For several reasons, including the nonlinearity of this CO2 model and the limited data, there is no guarantee that there is a single set of parameters that gives the best solution to equation (3). Hence, the optimization problem not only has multiple local minima, but there could also be multiple points or regions in the parameter space where the global minimum is achieved, as shown by Espinet [2012]. Our goal is to find a point in parameter space that is close enough to a global minimum point to give a good prediction of the CO2 plume. The results in section 4 indicate that we have achieved that goal.

3.2. Model Parameterization

[32] Our goal is to obtain good estimates of the current and future plume locations, and it is desirable to do so by estimating relatively few parameters. Appendix F describes a sensitivity analysis that determined that the most important parameters to be calibrated are the facies permeabilities.

[33] Moreover, the sensitivity analysis confirmed our intuition that the variability among the nonshale facies is insignificant compared to the permeability difference between shale and nonshale facies. We assume that the depths and thickness of shaly layers have been identified through well logs, but the lateral continuity of the shale lenses within these layers is unknown. As illustrated in Figures 1d and 1e, the CO2 plume movement will differ drastically between cases with continuous shale lenses and cases with discontinuous shale lenses separated by channels consisting of high-permeability sand. The key question then becomes, can the calibration provide any information on the continuity of the shale lenses? That is why we do not directly calibrate the permeabilities of the models in Figures 2-4. Instead we group some of the facies using zonation (shale, channel, and sand) and estimate the permeability of the zone. These zones are detailed in the following paragraphs.

[34] We divide our work into three cases, summarized in Table 2: (a) Case 1 (Figure 5) has only two parameters: shale permeability and nonshale (called sand for convenience) permeability; (b) Case 2 (Figure 6) has three parameters: the shale permeability, the channel permeability (rocks that may form permeable pathways through the shaly layer), and the permeability of the rest, called “sand”; and (c) Case 3 (Figure 7) has seven parameters: each shaly layer, i.e., each layer with more than 50% shale in surface area (layers 1, 3, and 6), has its own shale and channel permeability (yielding six parameters), plus one parameter for the permeability of the rest (called sand for convenience). Figures 5-7 illustrate realization A and show one color for each of the M permeability zones, where M = 2, 3, or 7. All other parameters are fixed; the porosity and capillary pressure strength are determined by the permeability value as described in Appendix F.

Table 2. Description of Cases 1–3, as Illustrated in Figures 5-7, Respectively^a

Case   | Number of Parameters | Description of Parameters
Case 1 | 2                    | p1 = shale permeability; p2 = sand permeability
Case 2 | 3                    | p1 = shale permeability; p2 = sand permeability; p3 = channel permeability
Case 3 | 7                    | p1 = shale permeability at layer 1; p2 = shale permeability at layer 3; p3 = shale permeability at layer 6; p4 = channel permeability at layer 1; p5 = channel permeability at layer 3; p6 = channel permeability at layer 6; p7 = sand permeability

^a All permeabilities are averages over the class of facies and, unless otherwise stated, the class includes all layers.

[35] Case 1 does not really address the issue of shale lens continuity because by only allowing two unknown parameters, the shale lens continuity is fixed (i.e., the channels will always have the same permeability as the sand). This case may be considered a preamble to our real study. Its successful calibration is necessary but not sufficient to demonstrate the value of our method. Case 2 addresses our key question: by allowing the permeability of the channels to vary independently of the shale and sand, can the model represent continuous shale lenses (when channel permeability is as low as shale permeability) or discontinuous shale lenses (when channel permeability is comparable to sand permeability)? Thus, this case tests whether the observation data are sensitive to the continuity of the shale lenses. Case 3 is a refined version of Case 2, in which the continuity of the shale lenses in each of the three shaly layers is estimated individually.

[36] Note that for Cases 2 and 3, the location of each channel in the shaly layer is assumed to be known and that channels are made up of different materials in the models in Figures 2-4. The channel locations vary among the realizations (see Figures 2-4), but for each calibration the correct locations are set, and the calibration determines the channel permeability, i.e., whether each channel is a conduit or a barrier to flow. In the field, seismic imaging may provide information on channel location, but in all likelihood it will not be of high enough resolution. Asking the calibration to identify channel location in addition to permeability is a significantly harder problem that is an important next step but beyond the scope of the present paper. Some possible approaches are discussed in the “future work” part of section 6.

3.3. Observation Data

[37] Since no data are available for this hypothetical site, synthetic observations of the pressure were made by running TOUGH2 with a given set of permeabilities ptrue for the models in Figures 2-4. This mimics an idealized case in which the data would be obtained from field measurements. Because the data are synthetic, we can modify our objective function in (3) to

f(p) = Σ_{i=1}^{N} [z_i(p) − (z_i(ptrue) + ε_i)]^2        (4)

where ε_i is the measurement error term, ptrue is the value of the “true” parameter vector, and p refers to the estimated value of the parameter set. Note that since we use a zonation strategy, ptrue and p are not the same size. For example, realization A in Figure 2 has seven facies and hence seven horizontal permeabilities as part of its input (so the dimension of ptrue is 7), while Case 1 for realization A, which is a zonation-based parameterization of realization A, only has an input p of dimension 2. Similarly, Cases 2 and 3 for realization A have inputs p of dimensions 3 and 7, respectively. Hence, a model run with ptrue always uses the models in Figures 2-4, while a model run with the parameter set p corresponds to Case 1, 2, or 3 in Figures 5, 6, and 7, respectively, which describe the zonation that defines p for each case.
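Evaluating the misfit of equation (4) can be sketched as follows, assuming a forward-simulator wrapper and a zonation map are available as callables (the function names are placeholders, not the actual TOUGH2 interface):

```python
import numpy as np

def objective(p, zone_map, simulate, z_observed):
    """Sum-of-squared-residuals misfit of equation (4).

    p          : estimated zone permeabilities (dimension 2, 3, or 7)
    zone_map   : expands p to the full permeability field of the model
    simulate   : forward model returning outputs at the monitoring points
    z_observed : synthetic observations z_i(p_true) + measurement error
    """
    residuals = simulate(zone_map(p)) - np.asarray(z_observed)
    return float(np.sum(residuals ** 2))
```

In the study, `simulate` would launch a TOUGH2 run and extract the pressures (and, optionally, gas saturations) at the monitoring locations.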

Figure 2.

Storage formation realization A, each color corresponds to a different facies.

Figure 3.

Storage formation realization B, each color corresponds to a different facies.

Figure 4.

Storage formation realization C, each color corresponds to a different facies.

Figure 5.

Case 1, two parameters: the shale and the sand permeabilities. Only shown for realization A.

Figure 6.

Case 2, three parameters: the shale, the channel, and the sand permeabilities. Only shown for realization A.

Figure 7.

Case 3, seven parameters, 1100 data points (pressure and gas saturation): the shale and channel permeabilities at layers 1, 3, and 6 and the sand permeability. Only shown for realization A.

[38] Once the synthetic data are generated with ptrue, normal zero-mean random noise with standard deviation 7000 Pa (0.07 bars) was added to the synthetic pressure data to mimic measurement error. This represents an error between ±16,282 Pa (0.2 bars) 99.6% of the time. The observations take place at the bottom of the injection well and at most three other depths in one or two observation wells. The number of observation wells is limited to two because in practice, they are very expensive to drill and maintain. During the calibration period of 1.5 years, a time series of 100 samples is drawn from the observation locations. More samples are drawn during the transient phase and fewer after 7 months, i.e., when the pressure increases more slowly (see Figure F1). This sampling frequency allows a good definition of the pressure response (meaning that the pressure response curve looks smooth). In practice, pressure samples could be drawn much more frequently, but numerous trials showed that higher-frequency sampling did not yield better results.
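The error model above can be sketched as follows (a simple illustration; the seeded generator is for reproducibility and is not part of the study):

```python
import numpy as np

def add_pressure_noise(p_synthetic, std_pa=7000.0, seed=0):
    """Mimic measurement error: add zero-mean Gaussian noise with a
    standard deviation of 7000 Pa to each synthetic pressure sample."""
    rng = np.random.default_rng(seed)
    return np.asarray(p_synthetic) + rng.normal(0.0, std_pa,
                                                size=np.shape(p_synthetic))
```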

[39] We also investigated the effect of observing the gas saturation in addition to the pressure. As for pressure, synthetic data obtained by simulating the model with the “true” parameters were used. For the gas saturation, we use an error model that is log-normally distributed with a standard deviation of 0.01. An explanation of this value is provided in Appendix D.

[40] In order to assess how much and what kind of observed data are needed to successfully estimate the plume position for the different cases, we ran multiple calibrations with different observation types (pressure only, and pressure and gas saturation together) and with different numbers of monitoring locations (single depth, or multidepth with one or two monitoring wells). The rationale behind the choice of location in the formation for the observation data is detailed in Appendix D. In practice, the observation well used most frequently, denoted “OW” (“observation well”) in Tables 3-5, is 50 m from the injection well. We monitor from “OW” at layers 2, 4, and 7 as discussed in Appendix D. It was necessary to place it this close to the injection well in order to see a gas saturation response by the end of the calibration period. For a greater amount of injection and a resulting larger plume, the monitoring well could be farther from the injection site. We also use a second observation well in Cases 2 and 3, denoted “far OW” in Tables 3-5, which is 400 m from the injection well; there we again monitor at layers 2, 4, and 7. This observation well is only used for pressure information, because the well is outside the plume and the gas saturation at that location at the end of the calibration period is still zero (the CO2 plume has not yet reached that point). The pressure response at the “far OW” is nevertheless positive, since the pressure field extends much farther than the CO2 plume (i.e., into the brine), as discussed by Zhou et al. [2008]. Note that the two observation wells are located on the same line from the injection well.

3.4. Optimization-Calibration

[41] We assume that the calibration period is the first 1.5 years of the CO2 injection period. The observation data (denoted z* in equation (3) and equal to the true model output plus the measurement error in equation (4)) are used to build the objective function as described earlier.

[42] The optimization algorithm then minimizes the objective function, i.e., finds the set of parameters that makes the simulation model produce an output (denoted zi (p) in equation (3)) that best matches the observed data set z*. For all three cases, we fix the maximum number of simulations used to estimate the parameter values. Additionally, we stop the optimization if there is no improvement over many iterations, because the objective function is then unlikely to be minimized further by additional evaluations. In practice, the minimum value of the objective function found by the optimization algorithm drops quickly and then decreases more slowly as the number of evaluations increases. This does not guarantee that the optimization algorithm has found the global minimum, but it makes it more probable that the best solution is close (e.g., within 1%) to the exact solution. It is worth mentioning that on average the algorithm found the reported minimum of the objective function in half the number of evaluations reported, suggesting that enough simulations were performed for the optimization algorithm to find a solution. Once the optimization algorithm (stochastic RBF) reaches the allotted number of simulations, the best set of parameters is selected, and a forward run is executed with this set of parameters, simulating 7.5 years of injection. Note that only 1.5 years of data at the limited number of monitoring points are used to predict the plume position at a 7.5 year horizon.
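The two stopping criteria described above, a fixed simulation budget plus termination when the best objective value stalls, can be sketched as follows (the patience and tolerance values are illustrative assumptions, not those used in the study):

```python
def should_stop(best_so_far, max_evals, patience=40, tol=1e-6):
    """best_so_far holds the running minimum of the objective after each
    simulation. Stop when the evaluation budget is exhausted, or when the
    best value has improved by no more than tol over the last `patience`
    evaluations."""
    n = len(best_so_far)
    if n >= max_evals:
        return True
    if n > patience and best_so_far[-patience - 1] - best_so_far[-1] <= tol:
        return True
    return False
```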

[43] In order to assess the accuracy with which we determined the plume position, we plot the observed gas saturation for all grid blocks (i.e., the gas saturation produced by the TOUGH2 simulation with the true parameters ptrue) versus the gas saturation for all grid blocks generated by running TOUGH2 with the calibrated set of parameters (called the calibrated gas saturation for convenience) for 7.5 years. We then remove the data points for which both the observed and calibrated gas saturations are zero; otherwise, the correlation coefficient would be inflated by the many points lying at the coordinate (0, 0). Figure 8 shows an example of such a plot.
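The assessment just described can be sketched as a small helper (a hedged illustration of the procedure, not the authors' actual code):

```python
import numpy as np

def plume_r2(sg_true, sg_calibrated):
    """Squared correlation coefficient between true and calibrated gas
    saturations over all grid blocks, excluding blocks where both values
    are zero so that the many (0, 0) points outside the plume do not
    inflate the statistic."""
    sg_true = np.asarray(sg_true, dtype=float)
    sg_cal = np.asarray(sg_calibrated, dtype=float)
    keep = ~((sg_true == 0.0) & (sg_cal == 0.0))
    r = np.corrcoef(sg_true[keep], sg_cal[keep])[0, 1]
    return float(r ** 2)
```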

Figure 8.

Scatterplot of simulated gas saturations at 7.5 years for true and calibrated models, illustrating how the calibration performance is assessed.

4. Results

[44] The results for Case 1, summarized in Table 3, confirm that the low-parameter model can produce a good forecast with little spatial data available by using the proposed method, including the surrogate global optimization. The permeabilities of the sand and shale facies can be successfully determined, which is necessary for establishing the position of the CO2 plume. With only two unknown parameters, the correlation coefficient was about 0.68–0.77 for the plume at the end of the calibration period (R2 reported using the calibration assessment technique described in section 3.4). Forecasting the plume 6 years into the future yielded R2 ranging from 0.63 to 0.73. For two parameters, we notice that few data are needed. Monitoring the pressure at two locations (the bottom of the injection well and an observation well 300 m from the injection well, just underneath the top shaly layer) is sufficient to correctly estimate the geological parameters. Hence, the shale lenses are identified as such (low permeability, 1.9 µD on average for the three realizations), and the rest of the geology is identified as a higher-permeability region (453 mD on average for the three realizations). Measuring the gas saturation for Case 1 results in a gain of only 2% in the correlation coefficient, which is negligible relative to the cost of sampling the fluid. Recall that Case 1 is not really a stringent test of the inverse method because the continuity of the shale lenses is fixed.

Table 3. Calibrated Parameter Set and Goodness of Fit for Case 1 (Figure 5)a

Observed data: 200 data points, P only at IW L10, OW L2
                                 Real. A        Real. B        Real. C
Number of TOUGH2 runs            50             50             50
Estimated shale permeability     3.6 µD         0.28 µD        1.7 µD
Estimated sand permeability      385 mD         475 mD         498 mD
Normalized Obj.                  1.67           3.70           3.02
R2 at t = 1.5 years (SSE)        0.77 (9.56)    0.70 (12.26)   0.68 (9.65)
R2 at t = 7.5 years (SSE)        0.73 (7.79)    0.63 (13.38)   0.64 (10.86)

Observed data: 400 data points, P and SG at IW L10, OW L2
                                 Real. A        Real. B        Real. C
Number of TOUGH2 runs            50             50             50
Estimated shale permeability     5.9 µD         0.25 µD        2.6 µD
Estimated sand permeability      390 mD         477 mD         478 mD
Normalized Obj.                  1.51           3.57           2.34
R2 at t = 1.5 years (SSE)        0.82 (8.12)    0.70 (11.85)   0.70 (9.52)
R2 at t = 7.5 years (SSE)        0.75 (8.12)    0.64 (11.85)   0.67 (9.47)

a P, pressure; SG, gas saturation; IW, injection well; OW, observation well; L2, layer 2; Real., realization; Observed data, data points measured and included in the objective function as stated in equation (4); Number of TOUGH2 runs, number of simulations; Normalized Obj., objective function as stated in equation (4) divided by the number of data points; R2 (SSE), correlation coefficient computed as explained in section 3.4 and corresponding sum of squared errors.

[45] The difference between Case 1 and Case 2 is that the permeability of the channels in the shaly layers is estimated separately from the shale and the sand, so the calibration has the opportunity to determine the continuity of the shale lenses. In Case 2 (Table 4), we observe that the plume position is determined accurately and that reasonable permeabilities are identified for all materials. Multilayer measurements are needed for the calibration in Case 2, and more function evaluations must be performed because of the increased number of parameters to estimate. In Case 2, pressure measurements alone suffice for the optimization algorithm to find a good solution, with R2 ranging from 0.78 to 0.83 for the current plume and from 0.74 to 0.78 for the future plume. This drop in R2 is expected, given that any deviation from the true parameters is amplified as time passes. We can also conclude that adding gas saturation data only improves the fit slightly (2% on average), while significantly increasing the cost of fluid sampling and analysis. It is noteworthy that Case 2 (three parameters) with 400 pressure measurements yields a substantial improvement in fit compared to Case 1 (two parameters) with 400 pressure and gas saturation measurements. Another important conclusion is that meaningful aggregation of the parameters keeps the objective function sensitive enough for the optimization algorithm to find a good solution with limited data. In this case, the successful aggregation is based on recognizing that the channels between the shale lenses are more important to characterize than the many types of high-permeability sands.

Table 4. Calibrated Parameter Set and Goodness of Fit for Case 2 (Figure 6)a

Observed data: 400 data points total, P at IW L10, far OW L2, 4, 7
                                 Real. A        Real. B        Real. C
Number of TOUGH2 runs            120            120            120
Estimated shale permeability     0.12 µD        2.8 µD         0.4 µD
Estimated channel permeability   67 mD          66 mD          27 mD
Estimated sand permeability      392 mD         402 mD         431 mD
Normalized Obj.                  1.32           1.72           1.55
R2 at t = 1.5 years (SSE)        0.83 (5.23)    0.78 (6.19)    0.80 (6.37)
R2 at t = 7.5 years (SSE)        0.78 (6.21)    0.74 (7.30)    0.75 (7.71)

Observed data: 800 data points total, P and SG at IW L10, OW L2, 4, 7
                                 Real. A        Real. B        Real. C
Number of TOUGH2 runs            120            120            120
Estimated shale permeability     3.6 µD         1.0 µD         5.6 µD
Estimated channel permeability   73 mD          28 mD          45 mD
Estimated sand permeability      373 mD         395 mD         452 mD
Normalized Obj.                  1.11           1.41           1.35
R2 at t = 1.5 years (SSE)        0.84 (4.44)    0.81 (5.07)    0.84 (4.85)
R2 at t = 7.5 years (SSE)        0.80 (5.51)    0.77 (6.32)    0.77 (6.40)

a P, pressure; SG, gas saturation; IW, injection well; OW, observation well; L2, layer 2; Real., realization; Observed data, data points measured and included in the objective function as stated in equation (4); Number of TOUGH2 runs, number of simulations; Normalized Obj., objective function as stated in equation (4) divided by the number of data points; R2 (SSE), correlation coefficient computed as explained in section 3.4 and corresponding sum of squared errors.

[46] Case 3 (Table 5) is the most general case. It does not assume that all the shale lenses or all the channels have the same permeability; thus, the continuity of shale lenses within each shaly layer is determined independently. This brings the number of parameters to seven (three layers with two permeabilities (shale and channel) each and the sand permeability). In order to obtain a fit that is comparable to Cases 1 and 2, more data and more function evaluations are needed for Case 3 because the number of parameters has increased.

Table 5. Calibrated Parameter Set and Goodness of Fit for Case 3 (Figure 7)a

Observed data: 700 data points, P at IW L10, OW L2, 4, 7 and P at far OW L2, 4, 7
                                   Real. A        Real. B        Real. C
Number of TOUGH2 runs              200            200            200
Estimated shale L1 permeability    1.7 mD         1.7 mD         1.4 mD
Estimated shale L3 permeability    0.31 µD        1.4 µD         0.1 µD
Estimated shale L7 permeability    0.22 µD        1.2 µD         0.1 mD
Estimated channel L1 permeability  3 mD           19 mD          5 mD
Estimated channel L3 permeability  32 mD          33 mD          150 mD
Estimated channel L7 permeability  44 mD          51 mD          27 mD
Estimated sand permeability        324 mD         360 mD         347 mD
Normalized Obj.                    0.97           1.46           1.15
R2 at t = 1.5 years (SSE)          0.94 (2.13)    0.90 (3.72)    0.91 (2.25)
R2 at t = 7.5 years (SSE)          0.83 (4.53)    0.77 (6.26)    0.80 (5.94)

Observed data: 1100 data points total, P and SG at IW L10, OW L2, 4, 7 and P at far OW L2, 4, 7
                                   Real. A        Real. B        Real. C
Number of TOUGH2 runs              200            200            200
Estimated shale L1 permeability    1.2 µD         1.1 µD         2.5 µD
Estimated shale L3 permeability    0.4 µD         1.5 µD         5 µD
Estimated shale L7 permeability    0.6 µD         3.2 µD         0.01 mD
Estimated channel L1 permeability  20 mD          32 mD          13 mD
Estimated channel L3 permeability  42 mD          55 mD          73 mD
Estimated channel L7 permeability  65 mD          47 mD          38 mD
Estimated sand permeability        355 mD         378 mD         354 mD
Normalized Obj.                    0.87           1.10           0.99
R2 at t = 1.5 years (SSE)          0.95 (2.09)    0.90 (2.58)    0.91 (2.91)
R2 at t = 7.5 years (SSE)          0.87 (3.82)    0.81 (6.32)    0.87 (6.33)

a P, pressure; SG, gas saturation; IW, injection well; OW, observation well; L2, layer 2; Real., realization; Observed data, data points measured and included in the objective function as stated in equation (4); Number of TOUGH2 runs, number of simulations; Normalized Obj., objective function as stated in equation (4) divided by the number of data points; R2 (SSE), correlation coefficient computed as explained in section 3.4 and corresponding sum of squared errors.

[47] With 1100 data points and 200 function evaluations, the calibration of Case 3 exceeds the fit level obtained in Case 2, with up to a 10% gain in correlation coefficient for the plume position at the end of the prediction period, t = 7.5 years (see Table 5). The estimate of the plume at t = 1.5 years in Case 3 is excellent (R2 ≥ 0.90 for all three realizations) with only 700 data points (only pressure data from two monitoring wells and the injection well). Much more data are needed than for the previous cases because the Case 3 model has more parameters; in particular, another observation well is needed for additional pressure measurements. With pressure data only (700 data points), the average fit at the end of the prediction period (t = 7.5 years) is improved from 0.76 to 0.80 compared to Case 2, but not by as much as with additional gas saturation data.

[48] Figure 9 shows the forecasted plume at t = 7.5 years, at layer 4 for realization C, for three inputs: the true input ptrue, the calibrated Case 3 parameter set (seven estimated parameters, 1100 data points of pressure and gas saturation data), and a Case 3 parameter set chosen randomly. The vertical slice in Figure 10 shows the agreement between the forecasted plume generated with realization C using input ptrue and the plume generated with Case 3 using the calibrated input. As shown in Figures 9 and 10, a correlation coefficient of 0.87 (Table 5, realization C, t = 7.5 years) provides a good level of detail about the plume shape. In particular, preferential pathways are well identified in Figure 10, and the surface area occupied by the plume is well identified in Figure 9.

Figure 9.

Plume at layer 4 at t = 7.5 years. (left to right) True plume, plume resulting from calibrated parameters, and plume from randomly generated parameters for Case 3, realization C. (top) Gas saturation distribution. (bottom) Pressure change distribution.

Figure 10.

Gas saturation distribution in a vertical cross section, passing through the injection well, at t = 7.5 years. (top) True plume. (bottom) Plume generated from calibrated parameters for Case 3 with 1100 data points (pressure and gas saturation), realization C.

5. Discussion

[49] The results indicate that the proposed method gives very good estimates of the future plume (R2 > 0.83) as well as the current plume (R2 > 0.91) without the need for exact estimates of the permeabilities of each of the many facies in each layer (Case 3, 700 data points). The total number of facies-layer combinations is 60 (10 × 6) in the examples given here, and there could be as many as 60 permeability parameters without aggregation. There would be even more permeability parameters if we considered spatial variability within the shale in a single layer. However, the amount of observed data will be limited by the cost and drawbacks of installing many monitoring sites. Our study indicated that with limited data we could get better results with fewer parameters, by lumping many facies-layer permeability combinations into just a few parameters. Even with only one monitoring well and two parameters (Case 1 in Tables 2 and 3), and only 50 simulations for calibration, we were able to get a reasonable average R2 of 0.72 for the plume at t = 1.5 years and 0.69 for the future plume at t = 7.5 years.

[50] As the number of parameters increases, a successful calibration depends on having a sufficient amount of data. The amount of data presented in each case is close (within about 100 data points) to the minimum amount necessary for the calibration. When fewer data than presented were used for Cases 1–3, the result was a poor parameter solution that did not generate a plume anywhere close to the plume generated with ptrue. Once the minimal amount of data is found, the modeler knows that no fewer data should be collected at the real site. Additionally, notice that for each realization (A, B, or C), we have two data sets (with different amounts of data) that we use to invert the model (see Tables 3-5). In other words, for a fixed number of parameters, we show the impact of the amount and type of data on the resulting calibrated plume.

[51] Our results support the optimistic view that it is possible to obtain a good or very good estimate of CO2 plume location with only one or two monitoring wells, without knowing the precise permeabilities of different facies in each of the nonshaly layers, as long as the continuity of the shale lenses in the shaly layers can be determined by the calibration. The best results for our three realizations (A–C) are for Case 3 (with two monitoring wells) that only attempts to estimate an average shale permeability and channel permeability for each shaly layer and then computes one average permeability over all nonshaly layers (Tables 2 and 5).

[52] It is also worthwhile to note that Case 3 with two monitoring wells (one of which was outside the CO2 plume) obtained an average R2 of 0.92 for the plume at the end of the calibration period and 0.80 for the plume at the end of the prediction period (t = 7.5 years), based only on pressure information. When CO2 saturation data were added, the R2 for the forecast plume increased somewhat to R2 = 0.85. Measuring CO2 saturation is difficult for several reasons, including the fact that it can interfere with CO2 injection during measurement. Also, the amount of error in the CO2 gas saturation measurement is unknown but is probably larger than what was assumed in Case 3 (as discussed in section 3.3). Therefore, the R2 = 0.87 for Case 3, realization A, with CO2 gas measurements is probably an upper bound on the best improvement that could be obtained with the addition of CO2 gas saturation measurements to the data on pressure only. Hence, in this study and for the particular examples we have presented, there does not seem to be much benefit to CO2 gas saturation monitoring especially considering how difficult it is to obtain accurate CO2 gas saturation data.

[53] It is interesting to note from Table 5 that the estimated shale permeabilities range from the actual value (used in realizations A–C to generate the observation data) of 1 µD to just over 1 mD, indicating that even 1 mD shales act as effective barriers to flow. This is a significant finding for the application of our methodology to a variety of geological settings and spatial scales. As larger spatial scales are considered, grid-block resolution becomes much coarser, necessitating effective-property permeabilities that will be larger for the shale and smaller for the nonshales. During the early stages of a CO2 injection, one would probably never need to consider spatial scales much larger than 10 km.

[54] Even with these scale approximations, we expect that our calibration method will be successful for shale/nonshale permeability contrasts of 2.5–3 orders of magnitude or larger. This is primarily because, for our heterogeneous-aquifer model and the relatively short time scales considered (7.5 years), a 1 mD shale acts much like a 1 µD shale in providing a local flow barrier, with the overall pressure change not much affected, because there are continuous paths of nonshale facies in both cases. In fact, a forward simulation of the 7.5 year injection period with a 1 mD shale produces a pressure response and CO2 plume evolution very similar to those obtained with the models in Figures 2-4 (realizations A–C, respectively) with input parameter ptrue. However, for a geological setting with shale/nonshale contrasts smaller than two orders of magnitude, a more sophisticated conceptualization than shale/nonshale will probably be required.

[55] In application of this method at actual sites there will be additional errors (e.g., in the exact location of channels in the shaly layers) that will likely result in lower accuracy and R2 values. However, our ability to obtain quite good results with few data, even in Cases 1 and 2, would indicate that a good forecasting model does not need to incorporate everything precisely and that the most important information is the permeability and continuity of the shale lenses, which we are estimating. Hence, we conclude that the method described here has the potential to be very useful for estimating the CO2 plume evolution, monitoring design, and risk assessment. The results also confirmed the value of pressure measurements, even without saturation measurements, especially in the nonshaly layers that are directly under a shaly layer. Pressure measurements can be taken at several depths in a monitoring well, so this information is helpful in order to know where to measure pressure vertically.

[56] The following step-by-step procedure describes how our method could be employed at an actual geological carbon sequestration site. Step 1: Build several numerical models with different levels of complexity, all approximating the real formation from seismic data, core samples, and geologists' experience. For this study, these are the models in Figures 5-7, corresponding to Cases 1–3, respectively. The models could also be used to explore the benefits of alternative monitoring sites before their construction. Step 2: Inject CO2 and record the pressure at some monitoring wells over time. Note that for this study, we used the models in Figures 2-4 to generate the data since we do not have field observations. Step 3: Use an optimization method to estimate the parameters of the numerical model based on the data collected so far. For this study, stochastic RBF [Regis and Shoemaker, 2007] was an efficient global optimization algorithm. Step 4: Estimate the current CO2 plume and forecast its evolution over multiple years (e.g., 1, 3, 5, and 7 years later) using the best calibrated numerical model. For this study, using 1.5 years of data, the model that best matches the observation data is Case 3, so we use it to predict the CO2 plume at 7.5 years. Use this information to determine the spatial locations (currently and in the future) that should have surface or freshwater aquifer monitoring as a risk aversion strategy, and also to help understand whether the system is working as expected. Step 5: Continually update the estimates for the current and future plumes as new data come in, as frequently as desired; this means that steps 3 and 4 are repeated with longer measurement time series. Note that several numerical models in step 4 can be assessed to statistically estimate the locations at which risk assessment should focus.
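Steps 3 and 4 of this procedure can be summarized schematically; the callables below are placeholders for a site-specific simulator wrapper and optimizer, not an actual interface:

```python
def calibrate_and_forecast(models, observations, calibrate, forecast,
                           horizons=(1.0, 3.0, 5.0, 7.0)):
    """Calibrate every candidate model (e.g., Cases 1-3) against the
    monitoring data collected so far, keep the best-fitting one, and
    forecast the plume at several future horizons with it.

    calibrate(model, observations) -> (misfit, calibrated_params)
    forecast(model, params, t)     -> plume estimate at time t
    """
    fits = [(calibrate(m, observations), m) for m in models]
    (best_misfit, best_params), best_model = min(fits, key=lambda f: f[0][0])
    plumes = {t: forecast(best_model, best_params, t) for t in horizons}
    return best_model, plumes
```

As monitoring continues (step 5), the same call is simply repeated with the longer observation series.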

6. Conclusions and Future Work

[57] We have investigated inverse methodology for GCS in order to give researchers tools to provide more accurate parameter, pressure distribution, and plume location estimates. Because the flow process is multiphase and the geological setting is likely to be highly heterogeneous, the inverse problem is highly nonlinear, making the vast literature on optimization of simpler linear problems merely a starting point. Critical issues that can make or break such a calibration include the number and choice of unknown parameters and the type and amount of observation data needed. Much of the paper is devoted to addressing these issues, taking into account (1) that observation wells are very expensive to drill so will be minimal in number, and (2) that downhole pressure measurements are much easier to make than gas saturation measurements. Our findings should be broadly applicable to GCS in saline formations, not just to the particular site studied.

[58] The essential components of our proposed method are (1) a preliminary geological model based on all available site characterization data; (2) a fully coupled numerical simulator for the forward problem; (3) an efficient nongradient-based global optimization algorithm for the inverse problem; (4) careful parameter aggregation based on hydrogeological understanding; and (5) critical assessment of the value of different types of observation data obtained from inverting synthetic data. For the idealized example of a heterogeneous fluvial-deltaic saline aquifer, the method is computationally efficient and sufficiently accurate to approximate the CO2 plume migration and pressure response. We showed how the zonation strategy used to lump parameters can improve the quality of the estimates as well as reduce the computational cost needed to solve the problem. We also demonstrated that determining the continuity of the shale lenses should be emphasized more than estimating individual permeabilities of other facies, even if they are predominant in the formation. We also found that it is much more useful to measure pressure than CO2 gas saturation.

[59] Insofar as our geological model is broadly representative of potential storage formations in many sedimentary basins, we expect that both the methodology and these findings will be widely applicable to GCS in heterogeneous saline aquifers. In particular, the lumping of unknown lithologies into just two groups—shale and nonshale—works very well when the contrast in permeabilities is greater than two orders of magnitude, but if all permeabilities lie in a smaller range, more sophisticated methods will be needed.

[60] Estimation of the plume migration is expected to be very helpful in identifying where the CO2 saturation or pressure change is sufficiently high to need to be incorporated into risk assessment (e.g., for abandoned wells in the plume region) and in sensing if the CO2 sequestration system is not operating properly. Additionally the method will aid in the design of monitoring strategies, including estimating the minimal amount of observation data needed for successful calibration.

[61] This study is a significant first step for using detailed and computationally expensive models to help identify regions for risk assessment. We specifically chose the present problem to be a basis on which further studies can be built. Future work will consider other factors besides permeability that are typically poorly known for potential GCS sites, such as characteristic curve parameters (e.g., parameters from the Van Genuchten [1980] function) and permeability anisotropy; these properties can easily be considered within the present methodology. Other fluvial-deltaic depositional sequences such as upward coarsening can be readily constructed. Different sedimentary settings being considered for GCS should be examined as well, to verify that our conclusions are broadly applicable. A more difficult issue is allowing the calibration to identify the locations of the high-permeability channels in the shaly layers, not just their existence; this will be the topic of future work. Brute-force approaches in which the permeability of each grid block in the shaly layers is an unknown parameter lead to a number of unknowns that is not feasible to estimate given limited data, but there are attractive alternatives that make use of hydrogeological information to minimize the number of unknowns. One promising method is the iterated function system (IFS) approach [Doughty et al., 1994], which bases permeability distributions on fractal geometry, enabling complex, natural-looking permeability distributions to be generated with relatively few parameters. The IFS method has been used to look for channels in low-permeability layers in a petroleum exploration application [Doughty, 1995]. Another promising method is the pilot point approach, which creates geostatistically generated property fields that are conditioned on property values at pilot points, which are the unknown parameters of the inversion [Finsterle and Kowalsky, 2006].
Future work will also include uncertainty analysis, both for the estimated parameters and for the forecast CO2 plume position.

Appendix A: TOUGH2 Simulator and ECO2N Module

[62] In order to simulate the CO2 plume, we use the simulator TOUGH2 [Pruess et al., 1999; Pruess, 2004] and the equation-of-state module ECO2N [Pruess and Spycher, 2007]. TOUGH2 is a numerical simulation program for nonisothermal flows of multiphase, multicomponent fluids in permeable (porous or fractured) media. Fluid flow is modeled with Darcy's law extended for two-phase flow via relative permeability and capillary pressure functions. TOUGH2 employs the integral-finite-difference method [Narasimhan and Witherspoon, 1976] for spatial discretization. For regular geometries, this method is equivalent to a simple finite-difference method, whereas for complicated geometries, it has all the flexibility of a finite-element method. TOUGH2 solves fully coupled fluid-flow and heat-flow equations, using implicit time stepping. The resulting discrete nonlinear algebraic equations for mass and energy conservation are written in residual form and solved using Newton-Raphson iteration. ECO2N is a fluid property module for the TOUGH2 simulator (Version 2.0) that was designed for applications involving geological storage of CO2 in saline aquifers. It includes a comprehensive description of the thermodynamics and thermophysical properties of H2O-NaCl-CO2 mixtures that reproduces fluid properties largely within experimental error for the temperature, pressure, and salinity conditions of interest. In particular, CO2 can exist as a separate supercritical phase (which we refer to as the gas phase) or dissolved in the aqueous phase, and water can evaporate into the gas phase. Brine containing dissolved CO2 is heavier than brine without dissolved CO2, so natural convection in the aqueous phase can develop. Pruess et al. [2004] have shown that results on test problems using the TOUGH2 code and other CO2 sequestration simulation codes, such as the Subsurface Transport Over Multiple Phases (STOMP) simulator [Oostrom and White, 2006], have a high level of agreement, reducing the possibility of programming errors and numerical instabilities. For the present problem we neglect diffusive fluid fluxes, which is appropriate for the short time period being simulated, and use TOUGH2 in isothermal mode, which has been shown to be reasonable when injected CO2 is at the same temperature as the storage formation [Doughty and Freifeld, 2012].

Appendix B: Generating a Geological Formation for Application

[63] The method is applied to multiple realizations of a hypothetical geological formation based on properties of the Frio Formation [Hovorka et al., 2001; Doughty et al., 2001], the host formation for the Frio Brine Pilot, an early test of GCS in saline formations conducted at the South Liberty oil field operated by Texas American Resources in Dayton, Texas, USA [Hovorka et al., 2006; Doughty et al., 2008]. There, 1600 metric tons of CO2 were injected over a period of 10 days into a steeply dipping brine-saturated sand layer at a depth of 1500 m. At this depth, free phase CO2 is supercritical.

[64] Our geological formation represents the fluvial/deltaic depositional environment of the Frio Formation in the upper Texas gulf coast [Galloway, 1982; Hovorka et al., 2001] but does not include any structural dip (i.e., our formation is flat). The basic building blocks (i.e., the layers) of the formation are generated stochastically using the program TProGS (Transition Probability Geostatistical Software) [Carle and Fogg, 1996, 1997; Fogg et al., 2001]. This program uses transition probability theory to construct multiple two-dimensional (the layers) stochastic representations of each depositional setting consistent with its schematic representation. Each realization honors the proportions, characteristic lengths, and juxtapositions of facies present in the schematic representation. More details on the stochastic generation process may be found in Doughty and Pruess [2004]. The layers are then superimposed to represent typical depositional sequences, in this case a fining-upward storage formation, i.e., rocks with larger permeabilities at the bottom of the formation and lower permeabilities at the top, on average. The stochastic layer generation is based on schematic representations of three fluvial depositional settings found in this part of the Frio Formation: barrier bars (continuous, very high-permeability sands), distributary channels (intermingled sands and shales, with a large high-permeability sand component), and interdistributary bayfill (predominantly low-permeability discontinuous shale lenses, interspersed with moderate-permeability sand), which we refer to as "shaly layers." In order to draw meaningful conclusions, we created three stochastically generated realizations of the storage formation, all fining upward and with the same properties (see Tables 1a and 1b). The three realizations (A–C) are shown in Figures 2-4.
In each realization, layers 8 and 10 are barrier bars, layers 2, 4, 5, 7, and 9 are distributary channels, and layers 1, 3, and 6 are shaly layers. In the following, we collectively refer to barrier bars and distributary channels as “nonshaly layers,” because although they do contain some shale lenses, they are predominantly composed of higher-permeability facies.

Appendix C: Comment on Representation of Heterogeneity

[65] Mathematical models of hydrogeological processes are necessarily simplifications of natural systems, in which heterogeneity usually occurs at all scales from basin scale to pore scale. For inverse models in particular it is important to carefully choose the level of heterogeneity included. The choice is almost always a compromise between competing goals. Too simple a model (e.g., a homogeneous storage formation), while computationally efficient and more easily inverted, will miss too much of the physics to be useful, whereas too complex a model (e.g., meter-scale heterogeneity for a multikilometer-scale problem) will be computationally expensive and have so many unknowns as to form an ill-posed inverse problem. We must therefore carefully examine the flow and transport behavior in typical hydrogeological settings considered for GCS and tailor the level of heterogeneity incorporated into the model accordingly.

[66] We have chosen a model of a hypothetical saline aquifer for the storage formation, based on well logs from a few wells plus a regional geological understanding of a fluvial-deltaic depositional setting. We explicitly resolve heterogeneity on the facies scale (300 m to 1 km), with material properties chosen to implicitly account for smaller-scale heterogeneity. This level of heterogeneity resolution seems optimal for the analysis of the injection period of a large-scale GCS project, where we are primarily concerned with mobile CO2 and its movement due to imposed pressure gradients and buoyancy forces through a formation in which permeability varies by orders of magnitude. Note that we do not model the postinjection period, during which capillary trapping, dissolution trapping, and mineral trapping become the dominant processes.

[67] To check the effect of smaller-scale heterogeneity, we did some additional forward simulations, using an intrafacies heterogeneity at the smallest scale possible for our relatively coarse grid (50 m). This heterogeneity was introduced by creating a log-normal distribution of random numbers with mean zero and specified standard deviation and by adding a random number drawn from it to the log-permeability value of each grid block. Grid-block porosity and capillary pressure were then modified as described in this appendix. Standard deviations of log-base-10 of permeability of 0.10, 0.25, and 0.50 were used. This study confirmed that for standard deviations up to 0.25, the addition of small-scale heterogeneity within facies had only a minimal impact on the pressure response and gas saturation changes accompanying CO2 injection, supporting our simpler conceptualization that did not resolve it. For a standard deviation of 0.50, the pressure changes were all noticeably smaller, as the high-permeability grid blocks began to dominate, but the spatial distribution of the pressure response was still quite similar to the less heterogeneous cases, so the calibration would be likely to work for this case also.
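The perturbation scheme described above can be sketched as follows; the grid shape, base permeability, and random seed are illustrative choices, not values from the study:

```python
import numpy as np

def add_intrafacies_heterogeneity(log10_k, sigma, seed=0):
    """Perturb a grid of log10-permeability values with independent
    zero-mean normal noise of standard deviation sigma, mimicking
    unresolved intrafacies (grid-block scale) heterogeneity."""
    rng = np.random.default_rng(seed)
    return log10_k + rng.normal(0.0, sigma, size=log10_k.shape)

# Example: a uniform 100 mD facies perturbed at the three levels used in
# the study (standard deviations of log10 permeability)
base = np.full((10, 10), np.log10(100.0))   # log10 of permeability in mD
for sigma in (0.10, 0.25, 0.50):
    k = 10.0 ** add_intrafacies_heterogeneity(base, sigma, seed=42)
    print(sigma, round(float(k.min()), 1), round(float(k.max()), 1))
```

Porosity and capillary pressure would then be updated from the perturbed permeability of each grid block, as described in Appendix F.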

[68] Note that this treatment of heterogeneity is known a priori to be applicable only to sandstone formations in fluvial-deltaic depositional settings. For other situations, other treatments of heterogeneity and other choices of lumped parameters could be necessary.

Appendix D: Observation Data: Pressure, Gas Saturation, and Locations of Wells

[69] A preliminary study was carried out in order to investigate optimal locations for monitoring. We ran several forward simulations and looked at the pressure profiles in all the layers. Results of this study are illustrated in Figures D1 and D2, which show the pressure response for the true set of parameters ptrue. Figures D1 and D2 show that the pressure response propagates throughout the formation, including shaly layers (i.e., layers with over 50% shale), and does not obviously identify the formation heterogeneities. This is not to say that the pressure response is homogeneous. Figures D1 and D2 suggest that the pressure response might be strongest at the top of the formation, but given the noise level, a clear pressure signal (a good signal-to-noise ratio with a pressure response of at least 1 bar and measurement noise approximately 0.1 bar) will be detected practically anywhere in the formation. Although Figures D1 and D2 are generated with the true parameter set ptrue, the pressure response also propagates throughout the formation for random sets of parameters. The conclusion is that wherever the monitoring wells are located, we will see a pressure response that is not too noisy, i.e., the signal will be large enough to be distinguishable from the measurement error.

Figure D1.

Distribution of pressure change at all depths for realization A at the end of the calibration period (1.5 years) generated with the true parameter values. Layer numbers refer to Figure 2.

Figure D2.

Distribution of pressure change for realization A at the end of the calibration period (1.5 years) generated with the true parameter values.

[70] Next, we examined the relationship between pressure response and gas saturation. This is a critical step because we want to use the pressure response as the observed data, but ultimately it is the gas saturation distribution that identifies where the CO2 plume is located. Figure D3 illustrates the relationship between pressure change, permeability, and gas saturation SG for all the grid blocks in the model. For high-permeability materials in which SG > 0, pressure change is positively correlated with gas saturation, indicating that the pressure response does provide information on plume location, as stated by Zhou et al. [2008]. However, for the low-permeability shales, large pressure increases can occur where no CO2 is present (SG = 0). Thus, it will be important to formulate the inverse problem carefully, in terms of both where to locate observation locations (described later) and how to choose the unknown parameters for the calibration (described in section 3.2).

Figure D3.

Pressure response (DP, in bars) versus gas saturation (SG) for all grid blocks of realization A, generated with true parameter values; color scale shows permeability of the grid block.

[71] In practice, we have seen that the most effective depth to sample from is just below a shaly layer, because the pressure response differs most from one parameter set to another at these locations. That is why our samples are always drawn from layers 2, 4, and 7 (see Figures 2-4). Such a strategy allows the samples to carry more information about the shale permeability (which we show to be critical in the plume position prediction process). In fact, the pressure response will be very different if the shale permeability chosen by the optimization algorithm is large (say more than 10 mD), allowing the CO2 to pass through the layer, than if it is small (say less than 0.1 mD), forcing the CO2 to accumulate underneath it.

[72] Gas saturation is another source of information, in addition to pressure. In the field, the gas saturation is not measured directly; instead, we assume that a fluid sample is taken to the surface, where its density ρ is measured. The gas saturation, denoted SG, is then deduced by applying the following formula:

$$S_G = \frac{\rho_{\mathrm{brine}} - \rho}{\rho_{\mathrm{brine}} - \rho_{\mathrm{CO_2}}} \qquad \text{(D1)}$$

[73] In addition to uncertainty in SG arising from uncertainty in pressure, temperature, and salinity, which affect ρCO2 and ρbrine, there is an intrinsic uncertainty in measuring ρ. Assuming typical errors of 7000 Pa (0.7 bars) for pressure, 0.02°C for temperature, 0.005 for salinity, and 5 kg/m3 for ρ yields a standard deviation of about 0.01 for SG. For actual field data, further uncertainty in SG comes from the fact that heat loss occurs during the sampling process, causing additional CO2 to dissolve into the brine, hence changing SG slightly. One possibility would be to account for this added uncertainty by using a much larger standard deviation for SG (a simple calculation suggests that the error could be as large as 0.3). However, this heat loss does not introduce a random error to SG measurements, so it would be much better to include the cooling process in the numerical model and keep the standard deviation small. Because we are considering synthetic data and there is no actual heat loss affecting our ρ, we choose to add a zero-mean log-normal noise with standard deviation 0.01, which gives a noise-to-signal ratio of the same order as the one used for the pressure observation. By using this smaller error level, we can estimate an upper bound on the beneficial value of gas saturation measurements. It has been shown that the benefit of measuring gas saturation with a measurement error of 0.1 is quite small, so benefits with a larger error would be even smaller. Other possibilities for indirectly measuring the gas saturation also exist, such as doing repeated well logging to detect changes in a property that differs strongly between brine and CO2. However, all options for determining SG yield relatively large measurement errors and are far more labor intensive than measuring pressure.
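The density-to-saturation conversion and its error propagation can be sketched numerically. The mixing rule and the in-situ density values below are our assumptions for illustration; with brine and CO2 densities roughly 450 kg/m3 apart, a 5 kg/m3 density error propagates to an SG error of about 0.01, consistent with the value quoted above:

```python
def gas_saturation(rho_sample, rho_brine, rho_co2):
    """Volumetric mixing assumption: rho = SG*rho_co2 + (1-SG)*rho_brine,
    solved for SG."""
    return (rho_brine - rho_sample) / (rho_brine - rho_co2)

def sg_error_from_density(d_rho, rho_brine, rho_co2):
    """First-order propagation of a density measurement error d_rho into SG."""
    return abs(d_rho / (rho_brine - rho_co2))

# Illustrative in-situ densities (assumed values, not from the paper):
rho_brine, rho_co2 = 1050.0, 600.0          # kg/m^3
sg = gas_saturation(960.0, rho_brine, rho_co2)
err = sg_error_from_density(5.0, rho_brine, rho_co2)
print(round(sg, 2), round(err, 3))          # prints: 0.2 0.011
```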

[74] Gas saturation, unlike pressure, cannot be measured everywhere. In particular, it became obvious from preliminary simulations that shaly layers do not hold any CO2 and the extent of CO2 away from the injection well during the 1.5 year calibration period is limited. Hence, a more careful choice of location for this kind of data is needed, i.e., we need to monitor where SG becomes greater than zero before the end of the calibration period. We note that knowing in advance where a gas saturation response will occur before the end of the calibration period is optimistic. This information would be more challenging to obtain in a commercial implementation of GCS, another reason why the pressure response is our preferred observation. The widespread propagation of the pressure response in advance of the CO2 plume itself also may provide an "early warning" of potential leakage paths out of the storage formation, enabling timely remediation to be done before actual CO2 leakage occurs.

Appendix E: Stochastic RBF Global Optimization Algorithm

[75] To optimize equation (1), we use the global optimization method developed by Regis and Shoemaker [2007], which introduced the multistart local stochastic RBF algorithm, here called "stochastic RBF." RBF stands for radial basis function, the spline used in the surrogate surface. A goal of this global optimization method is to use a response surface to approximate the expensive objective function f (p), thereby reducing the number of evaluations of f (p) (as defined in equation (1)) required to obtain an accurate answer.

[76] The CO2 parameter estimation problem as described in the earlier section has multiple local minima. Hence, local optimization algorithms like Levenberg-Marquardt or Newton-Raphson cannot necessarily find the best solution. As a global optimization algorithm, stochastic RBF searches for the global optimum, which is the best among all the local minima. Stochastic RBF does not require derivatives of the TOUGH2 simulation model, which is also an advantage because the derivatives would need to be computed numerically, requiring many additional simulations, and the resulting derivatives may not be accurate. Most importantly, a simulation with our 3-D model takes up to 2 h; hence, a feasible optimization algorithm cannot require as many function evaluations as heuristic global optimization methods, such as simulated annealing or genetic algorithms, often do. Stochastic RBF was shown by Regis and Shoemaker [2007] to find accurate solutions with relatively few simulations on many test problems with multiple local minima in comparison to a number of other global optimization methods and also performed well in comparisons by Espinet and Shoemaker [2013].

[77] A response surface approximating the objective function over the whole parameter domain is updated each time a simulation is made. This gives an ever-improving approximation of the objective function curve. This way, no information is lost, reducing the number of simulations needed. The stochastic RBF algorithm was previously applied to a complex nonlinear bioremediation groundwater model [Mugunthan et al., 2005] that took 3 h/simulation. In this application, the Regis-Shoemaker method was compared to a number of other optimization algorithms, and it was much more efficient. However, the mathematical equations in this earlier example included population growth rates and single-phase flow and hence were very different from the equations of multiphase flow.

[78] Given that f (p) has been previously evaluated at n points Cn = {p1, …, pn}, the response surface that we use is an RBF interpolation model sn (p) that approximates the objective function as

$$s_n(p) = \sum_{i=1}^{n} \lambda_i \,\phi\!\left(\lVert p - p_i \rVert\right) + L(p) \qquad \text{(E1)}$$

where ‖·‖ is the Euclidean norm and φ is the radial basis function; we have found the cubic form φ(r) = r³ to perform well. A linear polynomial L (p) with coefficients t is also part of the response surface. The surface sn (p) is computed by calculating the values of λi and ti so that the interpolation constraints sn (pi) = f (pi), i = 1, …, n, are satisfied. This calculation involves solving a set of linear equations.

[79] A summary of the stochastic RBF method is that initially, an experimental design (Latin hypercube) is used to select m = 2d + 3 points Cm = {p1, …, pm}, where d is the number of parameters. Then the first surface sn (p) is created using the points in Cm, and stochastic RBF picks the next point pm+1 for evaluation. This selection of pm+1 is done by (a) randomly generating "candidate" points p around the current best solution and (b) picking the candidate point that minimizes a weighted sum of the surrogate value s (p) and the negative of the minimum distance from p to the previously evaluated points in Cm. The search for the best of the candidate points is computationally inexpensive since all candidates are evaluated on the inexpensive response surface. Then f (pm+1) is computed (an expensive calculation), and pm+1 is added to Cm to create Cm+1. An updated response surface is then computed using the points in Cm+1 to pick the next simulation point pm+2. This continues until the point pM is computed, where M is the maximum number of simulations. Regis and Shoemaker [2007] show that this algorithm converges in probability to the global optimum.
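The loop described in paragraph [79] can be sketched as below. This is a simplified stand-in, not the authors' implementation: the cubic RBF with linear tail, the Gaussian candidate generation, the 50/50 weighting, and the random (rather than Latin hypercube) initial design are illustrative choices, and a cheap analytic function replaces the TOUGH2-based objective:

```python
import numpy as np

def fit_cubic_rbf(P, F):
    """Fit s(p) = sum_i lam_i * ||p - p_i||^3 + linear tail, by solving the
    standard augmented interpolation system [[Phi, Q], [Q^T, 0]]."""
    n, d = P.shape
    Phi = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2) ** 3
    Q = np.hstack([np.ones((n, 1)), P])                # columns: 1, p_1..p_d
    A = np.block([[Phi, Q], [Q.T, np.zeros((d + 1, d + 1))]])
    coef = np.linalg.solve(A, np.concatenate([F, np.zeros(d + 1)]))
    lam, t = coef[:n], coef[n:]
    return lambda p: (np.linalg.norm(p - P, axis=1) ** 3) @ lam + t[0] + p @ t[1:]

def stochastic_rbf(f, lb, ub, n_init, max_evals, n_cand=200, weight=0.5, seed=0):
    """Candidates are perturbations of the current best point, scored by a
    weighted sum of surrogate value and (negative) distance to evaluated points."""
    rng = np.random.default_rng(seed)
    d = len(lb)
    P = lb + rng.uniform(size=(n_init, d)) * (ub - lb)  # stand-in initial design
    F = np.array([f(p) for p in P])
    while len(F) < max_evals:
        s = fit_cubic_rbf(P, F)
        best = P[np.argmin(F)]
        cand = np.clip(best + rng.normal(0, 0.1, (n_cand, d)) * (ub - lb), lb, ub)
        sv = np.array([s(c) for c in cand])
        dist = np.min(np.linalg.norm(cand[:, None, :] - P[None, :, :], axis=2), axis=1)
        def unit(x):  # rescale each criterion to [0, 1] so they are comparable
            r = x.max() - x.min()
            return (x - x.min()) / r if r > 0 else np.zeros_like(x)
        score = weight * unit(sv) + (1 - weight) * unit(-dist)
        p_new = cand[np.argmin(score)]                 # next expensive evaluation
        P = np.vstack([P, p_new])
        F = np.append(F, f(p_new))
    return P[np.argmin(F)], F.min()

# Usage on a cheap multimodal test function (a stand-in for a TOUGH2 run):
f = lambda p: float(np.sum(p**2) + 2.0 * np.sin(3.0 * p).sum())
xbest, fbest = stochastic_rbf(f, np.array([-3., -3.]), np.array([3., 3.]),
                              n_init=7, max_evals=40)
```

Each iteration refits the surrogate to all evaluated points, so no information is discarded as the search proceeds.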

Appendix F: Sensitivity and Choice of Parameters

[80] A preliminary study was carried out in order to establish which parameters are the most critical in determining the plume position. A Monte Carlo experiment was set up for a simpler homogeneous model with the following parameters: the permeability (horizontal and vertical permeability assumed equal), the porosity, and the irreducible gas saturation for the van Genuchten-Mualem relative permeability function [Van Genuchten, 1980; Mualem, 1976]. Each parameter was given a uniform distribution spanning a feasible interval of values. We repeatedly drew a value of each parameter randomly from its interval, simulated the CO2 injection process, and computed the value of the objective function (using pressure data) for each Monte Carlo sample. This gave us a set of objective function values Y, for which we can compute a variance Var (Y). Then we repeated this procedure, except that we kept one of the parameters xi constant. This gives us a second set of objective function values CYi, for which we can compute a variance Var (CYi). The sensitivity Sens(i) of the objective function to the parameter xi is then defined as follows:

$$\mathrm{Sens}(i) = \mathrm{Var}(Y) - \mathrm{Var}(CY_i) \qquad \text{(F1)}$$

[81] The most sensitive parameter is the permeability (sensitivity as defined in equation (F1) is 1.19e12 Pa2), followed by the irreducible gas saturation (7.36e11 Pa2) and, finally, the porosity (4.06e11 Pa2). Because the effect of the irreducible gas saturation is smaller than that of the permeability, we assume it is known for the purpose of this study. The permeability produced a greater sensitivity than the other parameters and hence was chosen to be calibrated in this study. This result is confirmed for homogeneous formations in other studies by Law and Bachu [1996] and Lindeberg [1997].
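A minimal version of this Monte Carlo screening can be sketched as below; fixing each parameter at its interval midpoint and reusing the same sample set are our assumptions about the procedure's details:

```python
import numpy as np

def mc_sensitivity(objective, bounds, n_samples=500, seed=0):
    """Compare the variance of the objective when all parameters vary
    against the variance when parameter i is held fixed; the variance
    removed by fixing parameter i is taken as its sensitivity."""
    rng = np.random.default_rng(seed)
    lb, ub = np.array(bounds, dtype=float).T
    d = len(lb)
    X = lb + rng.uniform(size=(n_samples, d)) * (ub - lb)
    var_full = np.var([objective(x) for x in X])
    sens = np.empty(d)
    for i in range(d):
        Xi = X.copy()
        Xi[:, i] = 0.5 * (lb[i] + ub[i])   # hold parameter i constant
        sens[i] = var_full - np.var([objective(x) for x in Xi])
    return sens

# Toy objective in which the first parameter dominates (a stand-in for
# the TOUGH2-based pressure-misfit objective):
obj = lambda x: 10.0 * x[0]**2 + 3.0 * x[1]**2 + 0.1 * x[2]**2
s = mc_sensitivity(obj, [(-1, 1), (-1, 1), (-1, 1)])
```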

[82] We further studied the sensitivity of the permeability by separating it into two categories for the heterogeneous model: the low-permeability parameter, consisting only of the shale, and the high-permeability parameters, consisting of all other facies (six in total), as shown in Figures 2-4. We first kept the shale permeability constant, set to its "true value" (i.e., the value with which we generated the synthetic observations), and allowed the others to vary, sampling uniformly from 0.0001 to 800 mD. Then we simulated the model with these randomly drawn parameters (two trials) and compiled the results in the first two rows (trials 1 and 2) of Table F1. It shows that the correlation coefficient was on average 0.71, 0.61, and 0.62 for realizations A, B, and C, respectively. The conclusion is clear and twofold: (1) neither the CO2 plume position nor the pressure response is sensitive to the permeabilities of the nonshaly facies taken separately, as shown in Figure F1; and (2) knowing the magnitude of the shale permeability enables us to obtain a strong correlation coefficient between the plume generated with input parameter ptrue and the plume generated with random parameters. In the third row of Table F1, we allowed the shale permeability to vary over the same range as the other facies permeabilities. We obtained a correlation coefficient of zero, even when the nonshaly facies were set to their true permeabilities from ptrue. However, by lumping the high-permeability facies together as one parameter, the pressure response becomes sensitive to changes in the high-permeability facies, as shown in Figure F2. By additionally varying the shale permeability, the pressure response differs even more when we use a random set of parameters.

Figure F1.

Pressure profile at different locations: (1) observed; (2) generated by running the simulator with a calibrated set of nonshale facies permeabilities (six in total); and (3) by assigning a random value to these permeabilities. The shale permeability is kept to its true value in all three cases. Note the pressure profile is not sensitive to the permeabilities of the nonshale facies, when calibrated or randomly assigned independently.

Table F1. Correlation Coefficient for Realizations A–C at t = 7.5 Years Showing the Importance of the Shale Permeability

Scenario | Realization A | Realization B | Realization C
Shale permeability set to true value, other parameters randomly set (trial 1) | SSE = 8.53, R2 = 0.72 | SSE = 13.21, R2 = 0.63 | SSE = 10.52, R2 = 0.63
Shale permeability set to true value, other parameters randomly set (trial 2) | SSE = 8.2, R2 = 0.74 | SSE = 13.4, R2 = 0.63 | SSE = 10.89, R2 = 0.61
Shale permeability randomly set, other parameters set to their true value | SSE = 26.5, R2 = 0.0 | SSE = 31.2, R2 = 0.0 | SSE = 28.2, R2 = 0.0
Figure F2.

Pressure profile at different locations: (1) observed; (2) generated by lumping the nonshale facies permeabilities together as one parameter and running the simulator with the calibrated lumped nonshale facies permeability (only one parameter); and (3) by assigning a random value to this parameter. The shale permeability is kept to its true value in all three cases. Note the pressure profile is very sensitive to the permeability of the nonshale facies, when lumped together as one.
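For reference, the plume-match statistic reported above can be computed as the squared Pearson correlation between two gridded gas-saturation fields; the paper does not specify any masking or weighting, so this unweighted whole-grid form is our reading:

```python
import numpy as np

def plume_r2(sg_true, sg_est):
    """Squared Pearson correlation (R^2) between two gridded
    gas-saturation fields, flattened over all grid blocks."""
    a, b = np.ravel(sg_true), np.ravel(sg_est)
    r = np.corrcoef(a, b)[0, 1]
    return r * r

# Identical plumes give R^2 = 1; a spatially shifted plume scores lower.
true = np.zeros((20, 20))
true[5:10, 5:10] = 0.4                 # a square plume of SG = 0.4
shift = np.roll(true, 3, axis=0)       # same plume, displaced 3 cells
print(round(plume_r2(true, true), 2), round(plume_r2(true, shift), 2))
```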

[83] For the present problem, in which only the early stage of the CO2 injection period is considered, residual gas saturation is not thought to play an important role (it is critical during postinjection periods, when the flow process switches from drainage to imbibition and capillary trapping becomes important). However, there are other features of relative permeability functions that are important, and one in particular has been examined in a preliminary sensitivity study. This is the van Genuchten m parameter (similar to the exponent in a power-law function), which determines how quickly liquid relative permeability decreases as liquid saturation decreases (as the CO2 plume arrives). This is clearly an important parameter controlling flow, and it is generally poorly known. The study indicates that the spatial distribution of pressure response does depend somewhat on the value of m, so it would be valuable to do calibrations in which it was treated as an unknown parameter. This is within the capability of our methodology, but beyond the scope of the present paper.

[84] The reason so few parameters are used is that the time series pressure and CO2 gas saturation data for the GCS problem will be spatially very limited (only one or two wells, with at least one well located outside the CO2 plume). We experimented with a larger number of parameters for the forecasting model with monitoring data and obtained worse results than for the parameterizations in Cases 1–3. In other words, more parameters did not improve forecasting over the models in Cases 1–3. With few spatially distributed data, one cannot expect to accurately estimate a large number of parameters from the monitoring time series. We obtained the best results for our example problems with seven parameters. Hence, the limitation is not with the optimization method but with the spatially sparse data available in a GCS implementation. (In fact, the algorithm used here has been used for up to 150 decision variables [Regis and Shoemaker, 2013].)

[85] In conclusion, we decided to estimate the permeability distribution, denoted p in equation (4). Secondary parameters that are functions of the permeability are changed each time a value of permeability is changed. In particular, the capillary pressure strength, as well as the maximum value the capillary pressure can take, is allowed to vary: the capillary pressure strength is made inversely proportional to the square root of the permeability, and the maximum capillary pressure is set to 500 times the capillary strength. Additionally, the porosity dependence on permeability is obtained by inverting the Carman-Kozeny equation [Bear, 1972], incorporating different constants for shale and nonshale facies. It is possible that using a different relationship between permeability and porosity would affect the results. However, this use of secondary parameters reduces the number of parameters to be estimated, and given the amount of data, it is not clear that porosity could be estimated accurately as an independent parameter.
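These secondary-parameter relations can be sketched as below; only the inverse-square-root capillary scaling and the factor of 500 come from the text, while the Carman-Kozeny form, its constant c, and the reference values are illustrative assumptions:

```python
def porosity_from_permeability(k, c):
    """Invert a Carman-Kozeny-type relation k = c * phi^3 / (1 - phi)^2
    for porosity phi by bisection. The constant c differs for shale and
    nonshale facies; the form and values used here are assumptions."""
    lo, hi = 1e-6, 0.6
    f = lambda phi: c * phi**3 / (1.0 - phi)**2 - k
    for _ in range(60):          # the relation is monotonic on (0, 0.6)
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def capillary_strength(k, p0_ref, k_ref):
    """Capillary strength scaled inversely with sqrt(k), with maximum
    capillary pressure set to 500 times the strength."""
    p0 = p0_ref * (k_ref / k) ** 0.5
    return p0, 500.0 * p0

# Illustrative values (assumed, not the paper's): a 100 mD facies
phi = porosity_from_permeability(100.0, c=5000.0)
p0, pmax = capillary_strength(100.0, p0_ref=0.2, k_ref=100.0)
```

With such a coupling, each trial permeability drawn by the optimizer deterministically fixes the porosity and capillary pressure parameters, so only the permeabilities remain as unknowns.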

Acknowledgments

[86] We thank three anonymous reviewers and Kyle Perline, whose comments greatly improved this paper. Thanks are due to Tianfu Xu, Karsten Pruess, and Stefan Finsterle from the LBNL for their technical support while A.E. visited the LBNL during summer 2009 and the Cornell Atkinson Center for a Sustainable Future (ACSF) and grants to C.S. (NSF EAR 0711491, CISE 1116298, and DOE SCIDAC) for their partial financial support for C.S. and A.E. C.D. acknowledges financial support from the Assistant Secretary for Fossil Energy, Office of Sequestration, Hydrogen, and Clean Coal Fuels, National Energy Technology Laboratory, and Lawrence Berkeley National Laboratory under U.S. Department of Energy contract DE-AC02–05CH11231.
