Confidence region estimation techniques for nonlinear regression in groundwater flow: Three case studies

Abstract

[1] This work focuses on different methods to generate confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F test method, and a log likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well test results. The confidence regions for each case study are analyzed and compared. Although the F test and log likelihood methods result in similar regions, there are differences between these regions and the regions generated by the linear approximation method for nonlinear problems. The differing results, capabilities, and drawbacks of all three methods are discussed.

1. Introduction

[2] Nonlinear models are frequently used to model physical phenomena and engineering applications. In this paper, we refer to a nonlinear model very broadly: the output of the model is a nonlinear function of the parameters [Draper and Smith, 1998]. Thus nonlinear models can include systems of partial differential equations (PDEs). Some examples include CFD (computational fluid dynamics) [Oberkampf and Barone, 2005], groundwater flow [Beauheim and Roberts, 2002], etc. Nonlinear models also include functional approximations of uncertain data via regression or response surface models. Many nonlinear models require the solution of some type of optimization problem to determine the optimal parameter settings for the model. Statisticians have long worried about how to determine the optimal parameters for nonlinear regression models [Seber and Wild, 2003]. In the case of nonlinear regression, optimization methods have been used to determine the parameters which “best fit” the data, according to minimizing a least squares expression. The optimization methods may not always converge and find the true solution, although advances in the optimization methods have improved nonlinear least squares solvers.

[3] In this paper, we are concerned about determining confidence intervals around the parameter values in a nonlinear model. The parameters may be parameters in an approximation model such as a regression model, or physics modeling parameters which are used in physical simulation models such as PDEs. We refer to data as separate from parameters: data are physical data which are input either to a regression or physical simulation, and parameters are variables which are used in the representation and solution of the nonlinear model. For example, in groundwater flow modeling, parameters include hydraulic conductivity, specific storage, etc. Data may include measured flow rates from well tests.

[4] The focus of this paper is on calculating and evaluating joint confidence intervals. A joint confidence interval is one that simultaneously bounds the parameters; it is also called a simultaneous confidence interval. It is not a set of individual confidence intervals for each parameter in the problem. Individual confidence intervals on parameters usually assume independence of the parameters, which may lead to large errors if one is trying to infer a region where the parameters may jointly exist, for example, with 95% confidence. A joint confidence interval for a problem with two parameters may be an ellipse, where all points within the ellipse represent potential combinations of the parameters that fall within the confidence region.

[5] We are interested in calculating joint confidence intervals to understand the range or potential spread in the parameter values. Given various sources of uncertainty, it is unlikely that the optimal parameters which minimize some least squares formulation are the only reasonable parameters. There are several sources of uncertainty that can contribute to difficulty in identifying optimal parameter values in nonlinear problems. The data itself may have significant uncertainties: there may be missing values, measurement error, systematic biases, etc. Nonlinear inverse problems may involve discontinuities which result in multiple values for the optimal parameters, due to complexities in the underlying physics (e.g., significant heterogeneities in material models). The parameters themselves may have significant variability as part of their inherent randomness. Finally, model form can also influence the parameter settings [Sun, 1999]. For these reasons, one should not always trust the optimal parameter values obtained by a nonlinear least squares solution. Examining the joint confidence intervals on parameter values gives a more complete picture of the optimal values for the parameters and their correlation.

[6] This paper examines three methods for determining joint confidence intervals in nonlinear models and compares them in case studies. In two of our cases, the nonlinear model is a groundwater flow code; it is a set of PDEs. The first section of this paper provides background to nonlinear regression and determination of parameters in nonlinear models. The second section describes three methods which are used to determine joint confidence intervals for parameters in nonlinear models. The third section outlines the example problems used in the case studies, and the fourth section discusses the results, with a fifth section providing conclusions.

2. Nonlinear Regression

[7] Nonlinear regression extends linear regression for use with a much larger and more general class of functions [Bates and Watts, 1988]. Almost any function that can be written in closed form can be incorporated in a nonlinear regression model. Unlike linear regression, there are very few limitations on the way parameters can be used in the functional part of a nonlinear regression model. The way in which the unknown parameters in the function are estimated, however, is conceptually the same as it is in linear least squares regression. In nonlinear regression, the nonlinear model of the response y as a function of the n-dimensional inputs x is given as

yi = f(xi; θ) + εi,    (1)

where f is the nonlinear model, θ is a vector of parameters, and εi is the random error term, with E[εi] = 0, V[εi] = σ², and the εi independent and identically distributed (iid). As an example, one could have yi = θ1[1 − exp(xiθ2)]. Note that for nonlinear functions, the derivative of f with respect to the parameters depends on at least one of the parameters in the vector θ. The goal of nonlinear regression is to find the optimal values of θ to minimize the error sum of squares function S(θ), also referred to as SSE:

S(θ) = Σi [yi − f(xi; θ)]².    (2)
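As a concrete illustration, the error sum of squares S(θ) can be evaluated directly for the exponential example above. This is a minimal sketch; the data are a hypothetical, noise-free stand-in and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of evaluating the error sum of squares S(theta) for the
# exponential example y_i = theta1 * (1 - exp(x_i * theta2)). The data
# below are hypothetical, noise-free stand-ins for illustration.
def model(theta, x):
    return theta[0] * (1.0 - np.exp(x * theta[1]))

def sse(theta, x, y):
    """S(theta): sum of squared residuals between data and model."""
    r = y - model(theta, x)
    return float(r @ r)

x = np.linspace(0.0, 2.0, 20)
y = model([2.0, -1.5], x)        # synthetic observations, no noise added
print(sse([2.0, -1.5], x, y))    # exact parameters: S = 0.0
print(sse([1.9, -1.5], x, y))    # perturbed parameters: S > 0
```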

[8] Nonlinear regression requires an optimization algorithm to find the least squares estimator θ̂ of the true minimum θ*. This is often difficult. Nonlinear least squares optimization algorithms have been designed to exploit the structure of a sum-of-squares objective function. If S(θ) is differentiated twice, terms involving the residuals ri(θ), their second derivatives ri″(θ), and the squared first derivatives [ri′(θ)]² result. By assuming that the residuals ri(θ) are close to zero near the solution, the Hessian matrix of second derivatives of S(θ) can be approximated using only first derivatives of ri(θ).

[9] Note that these optimization methods work both in the case where the function f is an analytical model OR in the case where f is a computational simulation model and we are trying to find the optimal value of the parameters which minimizes the differences between the model predictions and experimental data. An algorithm that is particularly well suited to the small-residual case and the above formulation is the Gauss-Newton algorithm. This formulation and algorithm combination typically requires the user to explicitly formulate each term in the least squares (e.g., n terms for n data points) along with the gradients for each term. This may be very expensive for computationally intensive evaluations of f. The number of necessary calculations also will increase as the number of parameters increases. Additionally, the approximation of gradients in the presence of errors in the problem is problematic. Often, the gradient approximation has larger errors than the objective function approximation [Borggaard et al., 2002].
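The small-residual approximation and the resulting Gauss-Newton step can be sketched for the exponential example above. This is an illustrative implementation with an analytic Jacobian, not the formulation used in the paper; for a black-box simulator, finite differences would replace J, with the accuracy caveats discussed in the text.

```python
import numpy as np

# Sketch of a Gauss-Newton iteration for the exponential example
# f(x; theta) = theta1 * (1 - exp(x * theta2)), with an analytic Jacobian.
def gauss_newton(x, y, theta0, iters=20):
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        t1, t2 = theta
        e = np.exp(x * t2)
        r = y - t1 * (1.0 - e)                  # residuals r_i(theta)
        # Jacobian columns: df/dtheta1 = 1 - e, df/dtheta2 = -t1 * x * e.
        J = np.column_stack([1.0 - e, -t1 * x * e])
        # Small-residual approximation: Hessian of S built from first
        # derivatives only, giving the step (J^T J)^-1 J^T r.
        theta = theta + np.linalg.solve(J.T @ J, J.T @ r)
    return theta

x = np.linspace(0.1, 2.0, 30)
y = 2.0 * (1.0 - np.exp(-1.5 * x))              # noise-free data
theta_hat = gauss_newton(x, y, [1.5, -1.2])     # converges to (2.0, -1.5)
```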

[10] Because of the expense and questionable accuracy of computing gradient approximations, we choose an optimization algorithm that does not require gradients. The Shuffled Complex Evolution Method (SCEM) is a hybrid of the Nelder-Mead algorithm and evolutionary algorithms that was developed specifically for nonlinear hydraulic parameter identification problems [Duan et al., 1992]. Vugrin [2005] contains a complete description of our application of the SCEM.
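The SCEM itself is not reproduced here. As a simple gradient-free stand-in, Nelder-Mead (one ingredient of the SCEM hybrid) can minimize S(θ) using function values only, avoiding derivative approximations entirely; the model and data below are the same hypothetical exponential example as above.

```python
import numpy as np
from scipy.optimize import minimize

# Gradient-free minimization of S(theta) via Nelder-Mead, as a stand-in
# for the SCEM on a hypothetical exponential model. No derivatives of the
# model are approximated at any point.
def sse(theta, x, y):
    r = y - theta[0] * (1.0 - np.exp(x * theta[1]))
    return float(r @ r)

x = np.linspace(0.1, 2.0, 30)
y = 2.0 * (1.0 - np.exp(-1.5 * x))
res = minimize(sse, x0=[1.0, -1.0], args=(x, y), method="Nelder-Mead")
```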

3. Joint Confidence Regions

[11] The term confidence region is often misused, but most people have the sense that they want to understand the uncertainty in a data point or in a prediction. Confidence regions are sometimes called inference regions, indicating that these are regions where one infers something about the likelihood of the parameters existing. This section outlines three methods for constructing simultaneous confidence regions on parameters from nonlinear models. These methods can be found in the work of Seber and Wild [2003] and Donaldson and Schnabel [1987]. The methods are confidence regions based on linear approximation, the F test, and the likelihood ratio.

3.1. Linear Approximation Method

[12] We implement the linear approximation method described by Rooney and Biegler [2001] and Bates and Watts [1988]. The linear approximation confidence region is based on a linear approximation of f. In linear models, the sum of squares function is quadratic, and contours of constant sums of squares are ellipses or ellipsoid surfaces. In nonlinear models, if one approximates the nonlinear function with a linear Taylor series expansion about the parameter estimate θ̂, the sum of squares approximation is then quadratic. This results in ellipsoid contours centered at θ̂.

[13] In order to implement the linear approximation method, an estimate of the Hessian matrix of the sum of squares function is necessary. The first derivative of the numerical model with respect to each parameter is approximated using a forward finite difference approximation. The matrix of first partial derivatives at θ̂ is the Jacobian matrix J. Then, the Hessian matrix is approximated by

H ≈ JᵀJ.    (3)

The variance σ² of εi must also be approximated in order to create a linear approximation confidence region. The estimate is

s² = S(θ̂)/(n − p),    (4)

where n is the number of data points and p is the number of parameters. The confidence region is then

{θ : (θ − θ̂)ᵀ H (θ − θ̂) ≤ p s² Fp,n−p(α)},    (5)

where Fp,n−p(α) is the upper α percentage point of the F distribution with p and n − p degrees of freedom.

[14] Note that some versions of (5) use the variance-covariance matrix Σ instead of the Hessian. The relationship is Σ = s²H⁻¹. The Cramer-Rao lower bound for the covariance of the parameters may be used to provide an estimate for the confidence region. In the Cramer-Rao bound, the reciprocal of the Fisher information is the lower bound on the variance. This bound may be calculated using Monte Carlo methods. We have not investigated using Cramer-Rao estimates for the variance in this paper.
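To make the mechanics of the linear approximation region concrete, the following sketch builds a forward-difference Jacobian and the Hessian approximation for the test problem 1 model (section 4.1) and checks whether a point satisfies (5), using s² = 0.00252 and F2,99(0.05) = 3.098 from Table 1. The helper functions are our own illustrative constructions, not paCalc or nSIGHTS code.

```python
import numpy as np

# Sketch: forward-difference Jacobian, Hessian approximation H = J^T J,
# and a membership test for the linear approximation region (5).
def jacobian_fd(f, theta, x, h=1e-6):
    theta = np.asarray(theta, dtype=float)
    f0 = f(theta, x)
    J = np.empty((f0.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += h                      # forward difference in parameter j
        J[:, j] = (f(tp, x) - f0) / h
    return J

def in_linear_region(theta, theta_hat, H, s2, p, F_crit):
    d = np.asarray(theta, dtype=float) - np.asarray(theta_hat, dtype=float)
    return float(d @ H @ d) <= p * s2 * F_crit

# Test problem 1 model and estimate (Table 1 values).
f = lambda t, x: t[0] * x**2 + t[1] * x**5
x = np.linspace(0.0, 1.0, 101)
theta_hat = [0.121205, 0.008131]
J = jacobian_fd(f, theta_hat, x)
H = J.T @ J
print(in_linear_region([0.15, -0.05], theta_hat, H, 0.00252, 2, 3.098))
# True: the unperturbed solution lies inside the region, as in Figure 2.
```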

[15] For linear regression models, the linear approximation confidence regions are exact. However, for nonlinear models, this approximation may not be very accurate, especially with small data sets and/or if the model is very nonlinear with respect to one or more of the parameters. Bates and Watts [1988, p. 65] state: “We hasten to warn the reader that linear approximation regions can be extremely misleading.”

[16] In addition to the concerns about nonlinearity in the model, the Hessian approximation may cause the linear approximation method to result in incorrect confidence regions. The Hessian approximation does not include any second-order terms. In spite of this, Donaldson and Schnabel [1987] reported that the linearization method based solely on the Jacobian matrix appears preferable to variants that use the full Hessian matrix since it is less expensive, more numerically stable, and at least as accurate. Note that the use of finite differences for derivative approximation in finite element models has been shown to result in very poor approximations of derivatives [Borggaard et al., 2002]. Since the nonlinear regression modeling technique contains sources of errors that are largely unavoidable, residuals are often not close to 0, and finite difference derivatives should be used with caution. Clearly, errors in the Hessian approximation will impact the computed confidence region. Thus the confidence regions computed with the linear approximation method may be compromised by highly nonlinear models, small data sets, or poor Hessian approximations.

3.2. F Test Method

[17] The second method we investigate is based on the F distribution. This confidence region is based on the assumption that the error terms εi are jointly normally distributed, or spherically normal. The confidence region is the intersection of the expectation surface f(θ) with a sphere centered at y [Bates and Watts, 1988]. In the case of the linear model, this is an ellipse as outlined in the section above. In the nonlinear case, however, it is not. The confidence region is the set of points for which S(θ) is less than or equal to a constant. The formula is given by

S(θ) ≤ S(θ̂)[1 + (p/(n − p)) Fp,n−p(α)].    (6)

[18] Seber and Wild [2003] claim that equations (5) and (6) both yield confidence regions for θ which have the asymptotic confidence level of 100(1 − α)%. However, these regions may be different. Additionally, equation (6) does not require the approximation of derivatives or Hessian matrices, so this source of error is avoided. Note that the F test and linear approximation confidence regions are the same for linear models.
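As a sketch, the F test threshold in (6) is a single scalar computed from quantities already reported in Table 1 (S(θ̂) = 0.2492, n = 101, p = 2, F2,99(0.05) = 3.098); membership then costs one evaluation of S(θ) per candidate point.

```python
# F test region (6): theta is inside when
# S(theta) <= S_hat * (1 + p/(n - p) * F_crit).
# Values below are those of test problem 1 (Table 1).
def f_test_threshold(S_hat, n, p, F_crit):
    return S_hat * (1.0 + p / (n - p) * F_crit)

S_hat, n, p = 0.2492, 101, 2
thresh = f_test_threshold(S_hat, n, p, 3.098)
is_inside = lambda S_theta: S_theta <= thresh   # one S(theta) evaluation per test
print(round(thresh, 4))   # -> 0.2648
```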

3.3. Log Likelihood Method

[19] There is a discussion by Lyman [2003] and in sections 5.3 and 5.9 of Seber and Wild [2003] of a different confidence region based on likelihood functions. In this approach, L(γ) is the log likelihood function for a general model with an unknown p-dimensional parameter vector γ. In the log likelihood method, contours of the likelihood function map out confidence regions for the parameters. A hypothesis test can be constructed for the likelihood ratio statistic LR = 2[L(γ̂) − L(γ0)], where γ̂ is the maximum likelihood estimate of γ, which maximizes the likelihood L(γ). The test statistic LR is approximately distributed as χ²p when the null hypothesis γ = γ0 is true. This can be used to obtain an approximate confidence interval for γ: {γ : 2[L(γ̂) − L(γ)] ≤ χ²p(α)}. Finally, under the assumption of normally distributed residuals, a similar transformation can be used to express this confidence interval in terms of the SSE:

S(θ) ≤ S(θ̂) exp(χ²p(α)/n),    (7)

where χ²p(α) is the upper α percentage point of the χ² distribution with p degrees of freedom.

[20] Equation (7) can include a Bartlett correction factor. We use a value of one for this investigation. See Rooney and Biegler [2001] for further discussion. Note that the F test and log likelihood method produce the same confidence regions for linear models.
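For comparison, the log likelihood threshold in (7) for the test problem 1 values (S(θ̂) = 0.2492, n = 101, χ²2(0.05) = 5.991) can be computed the same way; for these inputs it comes out slightly below the F test threshold, consistent with the slightly larger F test regions observed in section 5.

```python
import math

# Log likelihood region (7): theta is inside when
# S(theta) <= S_hat * exp(chi2_p(alpha) / n).
# Values below are those of test problem 1 (Table 1).
def log_likelihood_threshold(S_hat, n, chi2_crit):
    return S_hat * math.exp(chi2_crit / n)

ll_thresh = log_likelihood_threshold(0.2492, 101, 5.991)
f_thresh = 0.2492 * (1.0 + 2.0 / 99.0 * 3.098)   # F test threshold (6)
print(round(ll_thresh, 4), round(f_thresh, 4))   # -> 0.2644 0.2648
```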

4. Test Problems

[21] We calculate confidence regions for three test problems. The first test problem is a linear parameter identification problem with synthetic data. The other two test problems are nonlinear groundwater parameter identification problems based on experimentally collected data.

4.1. Test Problem 1: Polynomials

[22] The first test problem is simulation of a fifth degree polynomial. The set of synthetic data points d1 is generated by evaluating the polynomial p(x) = 0.15x² − 0.05x⁵ at 101 evenly spaced points between 0 and 1. This set of data points is perturbed by adding a noise term to each data point. The noise terms are chosen from a normal distribution with a mean of zero and a standard deviation of 0.05. Figure 1 is a plot of the noisy data points and underlying polynomial.

[23] The noisy data set d1 is modeled with the fifth degree polynomial ps(θ1, θ2; x) = θ1x² + θ2x⁵. The optimization problem is then

minθ S(θ) = Σi [d1,i − ps(θ1, θ2; xi)]².

[24] Because of the construction of this test problem, we know that θ* = (0.15, −0.05) and S(θ*) = 0 for the unperturbed problem. However, when optimization algorithms are applied to the perturbed data set, the termination point is unlikely to be equal to θ*. In order to generate the value of θ̂, the SCEM is applied to the problem. We use paCalc, a parameter identification framework developed by INTERA Engineering, for all calculations and application of the SCEM optimization algorithm.
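The construction of test problem 1 is straightforward to reproduce. The sketch below uses an arbitrary seed, so the fitted values will differ from the θ̂ reported in Table 1, which came from the original noise draw and the SCEM; because ps is linear in the parameters, an ordinary linear least squares solve suffices here.

```python
import numpy as np

# Regenerate a test-problem-1-style data set: p(x) = 0.15 x^2 - 0.05 x^5
# at 101 points in [0, 1], plus N(0, 0.05) noise. The seed is arbitrary,
# so theta_hat will not match the Table 1 value.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 101)
d1 = 0.15 * x**2 - 0.05 * x**5 + rng.normal(0.0, 0.05, x.size)

# p_s(theta; x) = theta1 x^2 + theta2 x^5 is linear in theta, so the
# least squares estimate has a direct solution.
A = np.column_stack([x**2, x**5])
theta_hat, resid, _, _ = np.linalg.lstsq(A, d1, rcond=None)
S_hat = float(resid[0])    # error sum of squares at theta_hat
```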

[25] For all three test problems, we compute confidence regions at the 95% confidence level; therefore α = 0.05. The percentage points of the F and χ2 distributions at α = 0.05 are taken from Zar [1984]. Table 1 is a summary of the factors that influence confidence region calculation for test problem 1.

Table 1. Test Problem 1: Values Used for Confidence Region Calculations

Variable       Value
p              2
n              101
θ̂1             0.121205
θ̂2             0.008131
S(θ̂)           0.2492
s²             0.00252
F2,99(0.05)    3.098
χ²2(0.05)      5.991

Hessian Approximation
       θ1       θ2
θ1     20.50    13.01
θ2     13.01    9.60
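The Hessian approximation in Table 1 can be checked independently: for ps(θ1, θ2; x) = θ1x² + θ2x⁵, the Jacobian columns are simply x² and x⁵ at the 101 sample points, and JᵀJ reproduces the tabulated entries (this assumes, as the table suggests, that H is formed as JᵀJ).

```python
import numpy as np

# Independent check of the Table 1 Hessian approximation H = J^T J for
# test problem 1, where J has columns x^2 and x^5 at the sample points.
x = np.linspace(0.0, 1.0, 101)
J = np.column_stack([x**2, x**5])
H = J.T @ J
print(np.round(H, 2))   # matches the Table 1 entries to two decimals
```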

4.2. Groundwater Flow Model Parameters

[26] The final two test problems are identification of hydraulic parameters for geologic formations through well test analysis. Values of the hydraulic parameters are estimated using the numerical well test analysis code nSIGHTS (n-dimensional Statistical Inverse Graphical Hydraulic Test Simulator), developed by Sandia National Laboratories. The nSIGHTS code integrates the ability to simulate the solution to the numerical model and the SCEM optimization capability. Although the numerical model is quite complicated, there are two primary governing equations. The first governing equation is the “generalized radial flow equation” from Barker [1988]:

Ss ∂p/∂t = (K/r^(nf−1)) ∂/∂r [r^(nf−1) ∂p/∂r],

where Ss is specific storage (1/m), p is pressure (kPa), r is radial distance from the borehole (m), t is time (s), K is hydraulic conductivity (m/s), and nf is the flow dimension (dimensionless). This model is based on conservation of mass and Darcy's law [Freeze and Cherry, 1979]:

Q = −[K A(r)/(ρg)] ∂p/∂r,

where Q is flow rate (m3/s), ρ is the density of water (kg/m3), g is the acceleration due to gravity (m/sec2), and A(r) is the flow area at a distance r from the borehole (m2). A complete description of the nSIGHTS governing equations and initial and boundary conditions can be found in the work of Pickens et al. [1987], Savage and Kesavan [1979], and Avis [1996].

4.3. Test Problem 2: Groundwater Parameters

[27] Test problem 2 is based on a well test performed as part of Ontario Power Generation's Moderately Fractured Rock (MFR) experiment conducted within a 100,000 m3 volume of fractured crystalline plutonic rock at the 240-m level of Atomic Energy of Canada's Underground Research Laboratory. A 30-min constant pressure (CP) test was followed by a pressure buildup (PB) test [Roberts, 2002]. We estimate hydraulic parameters by matching the measured transient flow rates d2 from the CP portion of the test.

[28] The two hydraulic parameters that we identify for test problem 2 are conductivity K and specific storage Ss. The flow dimension nf has a value of 3.39 for this formulation. Let yn(K, Ss; t) be the nSIGHTS solution to the numerical model of the problem. The optimization problem is then

minK,Ss S(K, Ss) = Σi [d2,i − yn(K, Ss; ti)]².

[29] The SCEM optimization algorithm is used to identify K̂, Ŝs, and S(K̂, Ŝs). Table 2 is a summary of the values used in confidence region calculations.

Table 2. Test Problem 2: Values Used for Confidence Region Calculations

Variable        Value
p               2
n               566
K̂               2.93E-12
Ŝs              1.96E-6
S(K̂, Ŝs)        17.482
s²              0.031
F2,564(0.05)    3.01
χ²2(0.05)       5.991

Hessian Approximation
       K          Ss
K      7.52E21    8.92E15
Ss     8.92E15    1.19E10

4.4. Test Problem 3: Groundwater Parameters

[30] The third test problem is based on data collected as part of the Swedish Nuclear Fuel and Waste Management Company's (SKB) Tracer Retention Understanding Experiments (TRUE) at the Äspö Hard Rock Laboratory [Winberg et al., 2000]. The testing sequence consisted of a 30-min CP withdrawal test followed by a 30-min PB test in a borehole drilled in fractured crystalline rock. Hydraulic parameter estimates were obtained by matching the CP flow rates, the PB pressure data, and the derivative of the PB pressure data. The vector d3 represents the data points from all three sources.

[31] The two hydraulic parameters that we identify for test problem 3 are again conductivity K and specific storage Ss. The flow dimension nf is 6.55 for this formulation. Let yn(K, Ss; t) be the nSIGHTS solution to the numerical model of the problem. The optimization problem is then

minK,Ss S(K, Ss) = Σi [d3,i − yn(K, Ss; ti)]².

Again, the SCEM is used to identify K̂, Ŝs, and S(K̂, Ŝs). Table 3 is a summary of the values used in confidence region calculation.

Table 3. Test Problem 3: Values Used for Confidence Region Calculations

Variable        Value
p               2
n               317
K̂               4.09E-10
Ŝs              1.39E-4
S(K̂, Ŝs)        37.528
s²              0.119
F2,315(0.05)    3.024
χ²2(0.05)       5.991

Hessian Approximation
       K           Ss
K      1.23E21     −8.68E14
Ss     −8.68E14    7.58E8

5. Results and Discussion

5.1. Visualization of Confidence Regions

[33] For each test problem, equations (5), (6), and (7) are used to compute limits that define each type of confidence region. To be within the confidence region for the linear approximation method, the parameter values must satisfy (5). To be within the confidence region for the F test and log likelihood methods, the parameter values must have a fit value lower than the values identified by equations (6) and (7), respectively. Note that it is not a trivial problem to find the set of parameter values that lie exactly on the boundary of the confidence region. We did not use any computational geometry techniques, but instead enumerated various combinations of parameters until we obtained combinations that were very close to the boundary. The boundary points of the plots of each confidence region depicted in this section all have fit values less than the identified boundary value and within 0.035% of the target boundary value. Thus the visualized confidence regions are slightly smaller than the true regions. However, every point on the boundary and interior of the visualized confidence regions is within the calculated confidence region.
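The enumeration strategy described above can be sketched with a toy quadratic objective; the threshold, grid, and 5% boundary band below are illustrative stand-ins for the problem-specific values used in the paper (which tightened the band to 0.035%).

```python
import numpy as np

# Brute-force enumeration: evaluate S(theta) on a parameter grid, keep
# points under the confidence threshold, and treat points near the
# threshold as approximate boundary points.
def sse(theta):
    # Toy quadratic objective standing in for a real fit function.
    return (theta[0] - 1.0) ** 2 + 4.0 * (theta[1] - 2.0) ** 2

threshold = 1.0
grid1 = np.linspace(-1.0, 3.0, 201)
grid2 = np.linspace(0.0, 4.0, 201)
inside = [(a, b) for a in grid1 for b in grid2 if sse((a, b)) <= threshold]
# Points whose fit value is within 5% of the threshold trace the outline.
boundary = [pt for pt in inside if sse(pt) >= 0.95 * threshold]
```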

5.2. Test Problem 1

[34] Figure 2 is a summary of the results for confidence region calculation for test problem 1. The minimum θ* of the unperturbed problem and the SCEM termination point θ̂ of the perturbed problem are both included, along with results from the three different techniques for estimating confidence regions. For this linear test problem, the three techniques produced confidence regions that are almost identical.

[35] All three confidence regions contain θ̂, which is guaranteed by the construction methods in equations (5), (6), and (7). All three regions are also exactly ellipses, since test problem 1 is linear. All three regions also contain θ*, the true solution to the unperturbed problem. This is a welcome result, since confidence regions are used to identify a range of parameters that might be good solutions to the underlying (and generally unknown) unperturbed problem. Clearly, θ* is one such parameter value.

[36] Figure 3 is a plot of the objective function for test problem 1. The pink contour lines are the contours of the objective function that correspond to the confidence regions. As expected for this linear test problem, the contours of the objective function are elliptical.

5.3. Test Problem 2

[37] Figures 4 and 5 are summaries of the results for confidence region calculation for test problem 2. The value for (K̂, Ŝs) is included in the plots. Figure 4 is a plot of the confidence regions produced by the F test and log likelihood methods. The two regions are almost identical, although the F test region is slightly larger than the log likelihood region. Both confidence regions are composed solely of positive values for each parameter. Although the true values for K* and Ss* are unknown, the physics of groundwater flow require that each of these parameters take on a positive value.

[38] Figure 5 is a plot of all three confidence regions for test problem 2. The difference in scale between Figures 4 and 5 is very large, as can be seen by the size of the F test and log likelihood regions. The linear approximation confidence region is so large that the details of the other two confidence regions are not readily visible when the regions are plotted on the same scale. Most of the points in the linear approximation region have negative values for conductivity or specific storage. The immense size of the region and the inclusion of negative parameter values strongly suggests that the linear approximation method overestimates the confidence region.

[39] Figure 6 is a plot of the objective function for test problem 2. The pink contour lines are the contours of the objective function that correspond to the F test and log likelihood confidence regions. The objective function contours are the same shape as the confidence regions calculated by these two methods. The confidence region computed by the linear approximation method does not correspond to any of the objective function contours. The confidence region is an ellipse, and the contours of the objective function are not elliptical for this nonlinear problem.

5.4. Test Problem 3

[40] Figure 7 is a summary of the results for confidence region calculation for test problem 3, and Figure 8 is a plot of the objective function. Again, (K̂, Ŝs) is included in the confidence region plot. As before, the F test region is slightly larger than the log likelihood region. Both confidence regions are composed of positive values for each parameter. For these two methods, the confidence region for test problem 3 is disjoint. Test problem 3 has at least two local minima, which are outlined in pink in Figure 8. In between these two minima is an area of parameters with fit values above the computed thresholds for the boundaries of the F test and log likelihood confidence regions. The pink contour lines correspond to the boundaries of the F test and log likelihood confidence regions.

[41] The region created by the linear approximation method is not disjoint. This method is not able to create disjoint confidence regions, since members of the confidence region are included only based on their parameter values and not on their fit values. The elliptical region calculated by the linear approximation method does not correspond to the contours of the objective function, which are not elliptical. Although the linear approximation confidence region overlaps with the other two confidence regions, there are obvious differences between the linear approximation region and the other two regions. The regions are all on the same scale, and none of the regions contain negative values for either of the two parameters.

6. Conclusions

[42] For all three test problems, the F test and log likelihood methods have very similar results. Although we cannot conclude that these two methods will always produce similar confidence regions, we are unable to observe differences in performance between these two methods based on our case studies. Both of these methods depend upon the objective function contours for the shape and size of the confidence region. Each method has the ability to create disjoint confidence regions in the presence of local minima. Each method performed well on our linear and nonlinear test problems.

[43] The linear approximation method resulted in a reasonable confidence region for the linear test problem. However, the linear approximation regions are noticeably different from the other two methods for the two nonlinear test problems. In test problem 2, the linear approximation region is very large. Since it includes negative values for the hydraulic parameters, we know that this region includes values that are physically meaningless. Thus, in this case, the region is too big. Note that the Donaldson and Schnabel [1987] study of confidence region approximation methods on 20 nonlinear functions showed that the F test and log likelihood methods performed very reliably, with high observed coverage of the true confidence region. In contrast, they found that the linear approximation method often grossly underestimated the confidence region. We hypothesize that the linear approximation method overestimated the confidence region in test problem 2 because of strong nonlinearity in the problem and/or a poor Hessian approximation at θ̂. The F test and log likelihood methods produce confidence regions that follow the contours of the objective function, and these regions contain a reasonable range of values for test problem 2.

[44] The linear approximation method fails on test problem 2 and yet produces a reasonable region on test problem 3. The region is a different size and orientation than the region produced by the F test and log likelihood methods. Despite an area of overlap, the linear approximation region contains points that are not within the regions produced by the other two methods, and vice versa. The most obvious difference between the linear approximation region and the other two regions is that the linear approximation region is an ellipse, while the other methods produce a disjoint confidence region. Given the uncertainties inherent in test problem 3, the true confidence region is unknown, and we are unable to evaluate which method produces the more accurate confidence region.

[45] There are several open questions in the field of confidence region estimation. One is the determination of which confidence region method to use for general nonlinear models that are “black box” simulation models. We first thought that the linear approximation method might be sufficient for a large number of situations, but after performing the case studies, we believe the F test and log likelihood methods produce more correct results because they do not rely on a calculation of the Hessian matrix, they are more reflective of the objective function, and they are able to capture disjoint confidence regions. There is a tradeoff between accuracy and computational effort: using a linear approximation, one can calculate the confidence regions with minimal additional calculation given the gradients; however, they are likely to be inaccurate. The F test and log likelihood methods, however, require that one evaluates the sum squared error term for each potential θ considered, which requires a large number of function evaluations. We have not seen case studies on the efficacy of any of the methods for high-dimensional problems.

[46] Most realistic computational models involve dozens to hundreds of parameters. We easily see the potential for very ill conditioned Hessian matrices in application of the linear approximation method. For the F test and log likelihood methods, the problem of finding combinations of parameters which satisfy the constraints in equations (6) and (7) could become extremely difficult: this is a one-to-many inverse problem which can be very challenging to solve. The brute force enumeration approach that we used may not be computationally feasible for high-dimensional problems. Another issue that complicates the visualization of confidence regions occurs when more than two parameters are identified. Although a three-dimensional graph is possible, the question of how to visualize relationships among more than three parameters is likely to arise. Although a simple calculation reveals whether a particular parameter combination is inside or outside the confidence region for all three methods, total representation of the confidence region is a significant challenge.

[47] The need for calculating joint confidence intervals around parameters in nonlinear models is common for many scientists and engineers running simulation models. The state of the art is limited at this point. While some general approaches are available, we do not see that they can scale up to realistic problem sizes and be computationally feasible to solve for the interval boundaries. Approximation methods or better ways of “partitioning” the parameter space so that only subsets of parameters are examined simultaneously may be paths forward. One approach is to evaluate a conditional likelihood function for a pair of parameters while holding the others fixed at their least squares estimates. Another approach is to evaluate what Bates and Watts [1988] call the profile likelihood function, which involves finding the minimum sum of squares over all other parameters for each pair of parameters plotted on a 2-D grid. This approach may become very expensive computationally, since evaluating the profile likelihood function requires solving a (p − 2)-dimensional nonlinear least squares problem for each of the points on p(p − 1)/2 grids. Bates and Watts propose the idea of profile traces and profile pair sketches. These approaches also look at pairs of parameters conditional on the remaining parameters. However, they use some efficient interpolation schemes to plot the contours defining confidence regions for each pair of parameters. We have not seen widespread use of the profile trace or profile sketch, but they warrant further investigation for parameter spaces of higher dimension.

Acknowledgments

[48] The first three authors all work at Sandia National Laboratories, and the final two authors were summer interns at Sandia in 2005. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the U.S. Department of Energy under contract DE-AC04-94AL8500. This work was supported by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA), part of the Department of Energy. The authors wish to thank Jon Helton and Bill Oberkampf for technical reviews. We also thank Scott Mitchell and Tom Kirchner for providing valuable background information. We thank two anonymous referees for their improvements to our paper.