Standard Article

# 79 Assessing Uncertainty Propagation through Physically Based Models of Soil Water Flow and Solute Transport

Part 6. Soils

Published Online: 15 APR 2006

DOI: 10.1002/0470848944.hsa081

Copyright © 2005 John Wiley & Sons, Ltd

Book Title

## Encyclopedia of Hydrological Sciences


#### How to Cite

Brown, J. D. and Heuvelink, G. B. M. 2006. Assessing Uncertainty Propagation through Physically Based Models of Soil Water Flow and Solute Transport. Encyclopedia of Hydrological Sciences. 6:79.


### Introduction

- Top of page
- Introduction
- What is Uncertainty and How Can We Quantify it?
- Uncertainties in Model Inputs
- Uncertainties in Models
- Uncertainty Propagation
- Evaluating the Contribution of Different Uncertainties
- Scale and Uncertainty
- Conclusions
- References

Soils are an elementary yet highly complex component of many natural, modified, and managed ecosystems with physical and hydrological properties that vary both systematically through space and time and in a seemingly erratic way (Webster, 2000). Soil properties are a product of underlying physical, chemical, and biological processes and, in turn, affect these underlying processes. The relationships between soil properties and processes are usually complex and often involve nonlinear interactions and scale-dependencies that are difficult to explain or to predict (Addiscott and Tuck, 2001). These complexities are evident when modeling soil water flows and solute transport. Thus, while hydrological data and models have continued to improve over recent years, they are rarely certain or “error-free”. Rather, the combined effects of unpredictable variations in soil properties and simplified representations of complex hydrological processes lead to errors in model outputs (e.g. Zhang *et al.*, 1993; Leenhardt, 1995; Dillah and Protopapas, 2000; Dubus and Brown, 2002). These errors may be sufficiently large to result in poor decisions about the exploitation and management of soils. For example, poor estimates of soil hydraulic properties may encourage excess irrigation, leading to soil erosion and salinization. While it is generally accepted that soil data and models are not “error-free”, these errors may be difficult to quantify in practice. Indeed, the quantification of error implies that the “true” character of soils is known (i.e., error is a specific departure from “reality”). In the absence of such confidence, we are uncertain about the “true” properties and processes that characterize soils. 
Understanding how these uncertainties affect model predictions is important for: (i) establishing the utility of data and models as decision-support tools; (ii) directing resources towards improving data and models; and (iii) seeking alternative ways of managing soils when the opportunities for accurate modeling are limited.

Uncertainties in soil data combine with uncertainties in hydrological models and lead to uncertainties in model predictions. This article focuses on uncertainty propagation through physically based hydrological models and, specifically, models describing soil water flow and solute transport through the unsaturated zone. Solute transport is important because many practical applications require knowledge about the distribution of chemicals in soils (e.g. soil acidification, heavy metal pollution, nutrient availability, salinization, and nitrate leaching), and much of the research on uncertainty propagation in soil hydrology has focused on solute transport (e.g. McKone, 1996; Foussereau *et al.*, 2001; Seuntjens *et al.*, 2002; Sohrabi *et al.*, 2002; Vachaud and Chen, 2002a; De Vries *et al.*, 2003). While there are longstanding literatures on uncertainty propagation in other areas of hydrology, including groundwater hydrology (e.g. Christensen and Cooley, 1999; Wang and McTernan, 2002) and surface hydrology (e.g. Beven and Binley, 1992; Brazier *et al.*, 2001; Beven and Freer, 2001), these are reviewed elsewhere (*see* articles 131, 77, and 156), and are only referred to in discussing methods for uncertainty propagation.

Uncertainties in soil data are discussed below under “Uncertainties in Model Inputs”, while uncertainties in models are discussed in a later section. In the section “Uncertainty Propagation”, the statistical procedures available for propagating uncertainties in model inputs and models through to uncertainties in model outputs are reviewed. Techniques for evaluating the contribution of different sources of uncertainty to the overall uncertainties in model predictions are discussed in the section “Evaluating the Contribution of Different Uncertainties”. Finally, the section on “Scale and Uncertainty” considers the impacts of scale, and changes between scales, on the outcomes of an uncertainty analysis. First, however, it is useful to consider the nature of uncertainty and how it might be quantified.

### What is Uncertainty and How Can We Quantify it?


Uncertainty is an expression of confidence about what we “know”, both as individuals and communities, and is, therefore, subjective. Different people can reach different conclusions about how uncertain something is, based on their own personal experiences and world-view, as well as the amount and quality of information available to them (Cooke, 1991). It is not an inherent property of the environment (Quantum Theory can be ignored here), but may be encouraged in people by some aspects of the environment. For example, the environment may appear more complex than our abstractions and simplifications imply (e.g. kinetic processes in pesticide sorption), too variable for us to capture (e.g. infiltration rates in Mediterranean soils), too large and interconnected (open) for us to observe everything at once (e.g. global weathering of minerals), too small to observe at practical scales (e.g. soil pore volume in the field), too opaque for observation (e.g. hydraulic conductivity), or simply beyond our capacity to observe (e.g. the soil matrix over large areas). Uncertainty differs from ignorance, because ignorance involves a lack of awareness about our imperfect knowledge (Smithson, 1989). It also differs from error, because error involves a specific departure from “reality” (Heuvelink, 1998a).

#### Representing Uncertainty with Probability Distributions

Ideally, our representations of reality would be perfectly accurate or “error-free”. In practice, however, they are not, and we are aware of this (otherwise we would be ignorant). Rather, we are uncertain about the errors in our representations. However, we may be able to specify some boundaries for our uncertainty, which would allow us to explore its impacts on the outcomes of a decision-making exercise (e.g. a hydrological model). For example, the nitrate concentration in a sample of soil water may be measured as 68.6 g m^{−3}. The “true” value remains unknown, but control experiments in a laboratory suggest that the measuring instrument is unbiased and has a standard deviation of 5 g m^{−3}. If measurement error is the only source of error in the sampled value and the laboratory measurements are relevant to the current situation, we know that the “true” value (*t*) lies within 58.8 g m^{−3} < *t* < 78.4 g m^{−3} 95% of the time. Thus, we can express our lack of confidence (uncertainty) about the “true” nitrate concentration with a probability distribution of possible nitrate concentrations (Heuvelink, 1998a). A probability distribution function (pdf) is characterized by its shape (e.g. Gaussian, exponential, and uniform) and by its parameter values. One important parameter is the variance, as this denotes the average magnitude of uncertainty in the variable of interest.
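The arithmetic of this worked example can be checked in a few lines of Python (the values are those quoted above; the factor 1.96 is the two-sided 95% quantile of the standard normal distribution):

```python
measured = 68.6   # observed nitrate concentration (g m^-3)
sigma = 5.0       # instrument standard deviation from lab control experiments

z95 = 1.96        # two-sided 95% quantile of the standard normal
lower = measured - z95 * sigma
upper = measured + z95 * sigma

print(f"95% interval for the 'true' value: {lower:.1f} < t < {upper:.1f}")
```

The interval 58.8 < *t* < 78.4 g m^{−3} follows directly from treating the measurement error as an unbiased Gaussian pdf.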

In order to represent uncertainty with a pdf, it is necessary to specify the “domain” of the probability model, to identify the parameters of the model, and to estimate the values of those parameters. In principle, a number of subjective decisions must be introduced at each stage in order to uniquely and completely define the pdf for a given variable (see Cooke, 1991). In practice, however, many of these “decisions” are assumed implicitly. It is, therefore, instructive to consider the range of conditions and assumptions required. The “domain” of the probability model includes a set of conditions that describe: (i) the times for which the model is valid; and (ii) the locations or areas for which the model is valid. For example, the probability model may apply for a time frame of one week (e.g. a period of constant calibration for a measuring instrument) and for a spatial domain of one land use parcel. The second set of conditions govern the application of probabilities within this domain, and include: (i) the pattern of uncertainties in time; (ii) the pattern of uncertainties in space; (iii) the relationship between the size of the uncertainties and the size of the measured variable; and (iv) any restrictions on sizes and patterns imposed by other variables (“cross-correlation”). The patterns of uncertainty in time and space are important because the impacts of correlated error (or bias) may differ substantially from those associated with random error in environmental research. Similarly, when many different variables are used in a model, “cross-correlation” is important because extreme values in one variable may coincide with extreme values in another. For example, the cadmium, zinc, and lead concentrations in a polluted soil are strongly correlated, and, hence, the uncertainties associated with spatial interpolation of these properties are also correlated (Leenaers *et al.*, 1990). Examples of pdf*s* for uncertain environmental variables are shown in Figure 1. 
Figure 1(a) shows a pdf for a continuous numerical variable while Figure 1(b) shows a pdf for a discrete numerical variable. Figure 1(c) shows a joint pdf for two uncertain numerical variables, where the deviation from a circular shape denotes statistical dependence or “cross-correlation” between the variables. Figure 1(d) shows a semivariogram model, which describes the magnitude of variation as a function of distance (Goovaerts, 1997).
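As an illustration of the kind of model shown in Figure 1(d), the following sketch implements a spherical semivariogram; the sill, range, and nugget values are illustrative assumptions, not taken from the text:

```python
import numpy as np

def spherical_semivariogram(h, sill=1.0, range_=100.0, nugget=0.0):
    """Spherical semivariogram: magnitude of variation as a function of
    separation distance h. Rises from the nugget to the sill at the range."""
    h = np.asarray(h, dtype=float)
    gamma = np.where(
        h < range_,
        nugget + (sill - nugget) * (1.5 * h / range_ - 0.5 * (h / range_) ** 3),
        sill,
    )
    return np.where(h == 0.0, 0.0, gamma)  # gamma(0) = 0 by definition
```

The function reaches the sill (total variance) at the range and is zero at zero separation, as in Figure 1(d).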

The conditions and parameters that specify a probability model may be determined through a data-driven approach based on “validation” or (geo)-statistical estimation or a people-driven approach based on expert elicitation. When more accurate data are readily available or observations of one variable can be used to diagnose uncertainties in another, expert opinions on the uncertainties in data and models may converge. In contrast, when a “people-driven approach” is required, estimates of uncertainty may differ substantially between individuals, and also between groups of scientists.

### Uncertainties in Model Inputs


As indicated above, the conditions and parameters that define a pdf may be estimated through a data-driven approach or through expert elicitation (“people-driven” approach). Data-driven approaches are considered first.

#### Data-Driven Approach

Inputs to soil hydrological models include the “forcing inputs” required to generate flow (e.g. meteorological data), the boundaries in which flow occurs (positions of the air-soil and groundwater interfaces) and the characteristics of the soil within these boundaries. Many inputs are defined through measurements in the field or in the laboratory, which introduces measurement uncertainty. While soil properties vary continuously through space and time, measurements always occupy a limited number of space-time points (even remote sensing). When exhaustive inputs are required but only partial observations are available, they must be interpolated, which leads to interpolation uncertainty. Interpolation uncertainty generally increases with sample distance and soil variability, but also depends on the interpolation algorithm employed (Goovaerts, 2001). Measurement uncertainties can be estimated through comparisons with more accurate data, through laboratory testing of measurement instruments, or through repeat measurement with the same instrument (assuming constant environmental conditions). While the former provides an indication of accuracy or “bias”, the latter two approaches only indicate precision.

In order to interpolate soil data, and to assess the uncertainties associated with space-time interpolation, some assumptions must be made about the behavior of soils at unmeasured times and locations. A common approach to sampling soil properties involves separating the field site into “homogeneous” units, sampling these units, and calculating a within-unit sample mean (“best estimate”) and variance (uncertainty) (Voltz and Webster, 1990). This approach forms the basis for so-called “pedotransfer functions”, which allow soil properties to be estimated at arbitrary locations within a “homogeneous” sample unit (Schaap and Leij, 1998; Wösten *et al.*, 2001; Minasny and McBratney, 2002). In practice, however, it may not be possible, or appropriate, to separate soil properties into “homogeneous” units; instead, it may be necessary to assume that soil properties vary continuously in space and time. Alternative statistical techniques, such as time-series analysis and geostatistics, are available to interpolate continuous data from partial measurements and to estimate the uncertainties associated with this interpolation (e.g. Angulo *et al.*, 1998; Goovaerts, 1997, 2001). In soil hydrology, geostatistics has been widely used to estimate interpolation uncertainty (e.g. *see* Goovaerts, 2001 for a review), both in spatial applications and in space-time interpolations (e.g. Kyriakidis and Journel, 1999; Snepvangers *et al.*, 2003). Spatial dependence between samples can be modeled with the sample semivariogram, and this information can be used for optimal prediction of soil properties at unmeasured locations through kriging (Goovaerts, 1997). The kriging variance provides an explicit measure of interpolation uncertainty.
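A minimal numpy sketch of ordinary kriging may make the mechanics concrete; the exponential semivariogram and the four sample points are illustrative assumptions, not values from the text. The kriging system is solved for the weights and a Lagrange multiplier, and the kriging variance follows directly:

```python
import numpy as np

def exp_variogram(h, sill=1.0, range_=50.0):
    # Exponential semivariogram model (one common choice; parameters illustrative)
    return sill * (1.0 - np.exp(-3.0 * h / range_))

def ordinary_kriging(coords, values, target, variogram=exp_variogram):
    """Ordinary kriging prediction and kriging variance at one target point."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(d)            # variogram between sample points
    A[n, :n] = A[:n, n] = 1.0           # unbiasedness (Lagrange) row/column
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=1))
    b[n] = 1.0
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    prediction = w @ values
    kriging_variance = w @ b[:n] + mu   # explicit measure of interpolation uncertainty
    return prediction, kriging_variance

# Hypothetical sample locations and values (e.g. soil moisture)
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([1.2, 0.9, 1.5, 1.1])
pred, kvar = ordinary_kriging(coords, z, np.array([5.0, 5.0]))
```

With a zero-nugget variogram, kriging is an exact interpolator: predicting at a sampled location returns the sampled value with zero kriging variance, while the variance grows away from the data.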

Where soil characteristics are grouped into classes, categorical data may be used to describe soil properties for hydrological studies. In an uncertain categorical distribution, each location has a single “true” outcome (as far as the categories are identifiable and sufficient) but the precise outcome remains unknown. The probability of each outcome can be described with a discrete pdf for that location. A discrete pdf is simply a table listing each possible outcome and its associated probability. In practice, however, the distribution at one location may be sensitively dependent upon the distributions at surrounding locations, because classification errors are often statistically correlated (and, thus, dependent) in space and time. Capturing this dependence is important, but not straightforward, because multivariate discrete pdfs are characterized by a large number of parameters. In recent years, geostatistical techniques have been developed and applied for handling spatial autocorrelation in uncertain categorical data (e.g. Bierkens and Burrough, 1993; Finke *et al.*, 1999; Kyriakidis and Dungan, 2001), but identifying realistic pdfs remains inherently difficult in practice. Often, categorical data are not used directly in hydrological models, but, rather, continuous variables are related to categories of environmental variables using statistical models such as “pedotransfer” functions (see above). Here, uncertainties in the continuous variable become dependent upon (correlated with) those in the categorical variable. These uncertainties must be represented by a joint pdf when the parameters are used together in a hydrological model, otherwise the propagated uncertainties may become unrealistic.
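A discrete pdf for an uncertain soil class at a single location can be sketched as a simple probability table; the classes and probabilities below are hypothetical, and the sketch deliberately ignores the spatial dependence between locations discussed above:

```python
import numpy as np

# A discrete pdf: a table listing each possible outcome and its probability.
classes = ["sand", "loam", "clay"]
probs = np.array([0.6, 0.3, 0.1])
assert abs(probs.sum() - 1.0) < 1e-12   # probabilities must sum to one

# Drawing realisations from the pdf (Monte Carlo sampling of the class)
rng = np.random.default_rng(42)
realisations = rng.choice(classes, size=1000, p=probs)
freq = {c: float(np.mean(realisations == c)) for c in classes}
```

Sampled frequencies converge on the tabled probabilities; a realistic spatial model would additionally need to correlate the draws at neighboring locations.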

#### People-Driven Approach

Estimating the parameters of a probability model will always require “expert” judgment, because data must be processed and interpreted by people. However, in the absence of more accurate data, expert judgment may be the only means of estimating these parameters. So-called “expert elicitation”, which formalizes the processes of estimating probability models through expert judgment (e.g. Cooke, 1991; Kaplan, 1992), has not been widely used in soil hydrology, but has been used successfully in other areas of environmental research (e.g. Morgan *et al.*, 2001). Here, the probability models were found to be highly sensitive to the questions posed in eliciting a parameter value and to the people estimating those values (e.g. Morgan and Henrion, 1990). In principle, therefore, expert elicitation should aim to canvass a range of informed opinion about uncertainties in data, but in practice, this may not be possible, and excessive optimism or pessimism will not simply be confined to individuals. This is important because uncertainty is inherently a social construct, and will vary between people regardless of the focus of an argument (but not independently from it). For example, in a survey of soil scientists, Heuvelink and Bierkens (1992) found that respondents were overly optimistic about the predictive power of general-purpose soil maps. Nevertheless, it will often be necessary to quantify the uncertainties associated with input data for soil hydrological models through a “people-driven” approach (e.g. Keller *et al.*, 2002; Kroeze *et al.*, 2003).

### Uncertainties in Models


Environmental models are inherently imperfect because they abstract and simplify real patterns and processes that are themselves imperfectly known and understood. In physically based modeling, it is important to distinguish between the predictive performance of a model and its ability to explain environmental phenomena (Beven, 2001). Indeed, it is widely acknowledged that the ability of a model to predict environmental patterns satisfactorily does not mean that its explanations of these patterns are also satisfactory (e.g. Oreskes *et al.*, 1994; Rykiel, 1996; Anderson and Bates, 2001). This chapter focuses on the predictive power of models, but it is important to acknowledge that models may perform well for bad reasons, and explanatory uncertainty should, therefore, be investigated where possible (i.e., representing structural uncertainty as statistical noise is not sufficient).

Model uncertainties are “case-dependent” because hydrological models do not perform consistently for all applications involving soil water flow and solute transport. The definition of a “case” is important here, because experience is often used to “validate” models and to estimate the uncertainties associated with model parameters. In essence, a “case” refers to the circumstances in which a model performs consistently without needing to modify its underlying structure and parameter values. In accepting this case-dependence, it follows that model predictions can only be assessed through some form of comparison with direct observations or experience and, specifically, a comparison with observations or experience from the same “case” (which need not imply the same times or locations). Since observations are themselves uncertain, departures between predicted and observed outputs cannot simply be related to uncertainties in model predictions, but must also include uncertainties in empirical observations (Heuvelink and Pebesma, 1999).

#### Components of Model Uncertainty

Model uncertainty includes uncertainties in the structure of the model (conceptual or logical uncertainties), uncertainties in model parameters, and uncertainties in the solution of the model (Addiscott *et al.*, 1995). For example, a soil hydrological model may ignore macropore flow (structural uncertainty), may use uncertain estimates of hydraulic conductivity (parameter uncertainty), and may solve a set of continuous partial differential equations using a discrete numerical scheme (solution uncertainty). Uncertainties in model structure may originate from a perceived lack of knowledge about the real processes operating or a belief that the model intentionally abstracts and simplifies known processes. Model parameters are inherently uncertain because they do not refer to real, measurable quantities, but are empirical quantities that allow general models to be applied to specific cases. For example, varying the friction coefficient in a hydraulic model allows the same model structure to be applied under different soil conditions, but the friction coefficient accounts for “surface roughness” at finer scales (among other things, in simple models) and cannot be measured at large spatial scales. In practice, therefore, it is difficult to define a single, optimal, set of parameters *a priori*. Moreover, nonuniqueness or “equifinality” of parameter values is common in environmental modeling because model structures only approximate reality. This leads to uncertainties in model predictions and explanations, and is particularly important where models are complex (many degrees of freedom) and observations are limited (few degrees of constraint).

If models cannot be identified uniquely, they must be represented by a probability distribution of possible models, each with a certain chance of performing well. Given uncertainties in model inputs and models, an uncertainty analysis aims to identify how these uncertainties “propagate” to model outputs (the forward problem). In practice, however, as model uncertainties are difficult to estimate *a priori*, it may be useful to compare the results of the “forward problem” to empirical observations. If these observations allow some of the original models to be rejected as improbable, the original assessment of model uncertainty should be improved (the inverse problem). When uncertainties in model inputs are known, solving the inverse problem allows model uncertainty to be identified explicitly, but only for that “case”. However, model uncertainties cannot be disaggregated further into structural uncertainty and parameter uncertainty, because model parameter values do not refer to real quantities and cannot, therefore, be delimited by physical arguments or by comparisons with field observations.

#### Inverse Modeling

As initial estimates of model uncertainty may be no more than informed guesses, it is useful to update these estimates by comparing model predictions with empirical observations (see “Uncertainty Propagation” also). So-called “inverse modeling” has been widely used in uncertainty schemes over recent years (e.g. Abbaspour *et al.*, 1999, 2000; Schmied *et al.*, 2000; Beven and Freer, 2001; Vrugt *et al.*, 2003, and articles 77 and 156). Most of these schemes use Bayes' theorem, as this allows a “prior” distribution of uncertain input and parameter values to change in response to the amount and perceived (weighted) value of information available (French and Smith, 1997; Bernardo and Smith, 2001). The prior is sampled to provide a range of possible models for simulating the same problem, and uncertainties are propagated to model outputs by implementing the sample (running the models) and recording the results. This is known as *Monte Carlo simulation* because different possible models are sampled randomly from the prior distribution (Hammersley and Handscomb, 1979). Once the sample has been implemented, the results are compared with empirical observations and either accepted as possible or rejected as improbable (Beven and Binley, 1992). If some results are rejected, the corresponding sample can be eliminated from the prior and the resulting, calibrated distribution (posterior) used to reassess predictive uncertainty.
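The sample-run-compare-reject logic described above can be sketched on a toy problem; the exponential-decay “model”, the parameter range, the noise level, and the acceptance threshold below are all illustrative assumptions, not taken from any of the cited schemes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "truth": soil water content decays as theta(t) = theta0 * exp(-k * t).
# k is the uncertain parameter; the observations are synthetic and noisy.
t = np.linspace(0.0, 10.0, 20)
theta0, k_true = 0.40, 0.25
observed = theta0 * np.exp(-k_true * t) + rng.normal(0.0, 0.005, t.size)

# 1. Sample the prior (uniform here) -- standard Monte Carlo simulation.
prior = rng.uniform(0.05, 0.60, size=5000)

# 2. Run the model for every sampled parameter value.
predicted = theta0 * np.exp(-np.outer(prior, t))

# 3. Compare with observations; reject samples that fit improbably badly.
rmse = np.sqrt(np.mean((predicted - observed) ** 2, axis=1))
behavioural = prior[rmse < 0.01]        # the threshold is a subjective choice

# The retained sample (the "posterior") re-expresses predictive uncertainty.
```

The retained parameter values cluster around the value that generated the observations, so the calibrated (posterior) distribution is narrower than the prior.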

Two approaches are available for sampling the prior, namely: (1) a predefined probability sample from the entire distribution, which may consist of a simple random sample leading to standard Monte Carlo (e.g. the Generalised Likelihood Uncertainty Estimation (GLUE) approach of Beven and Binley, 1992), stratified random sampling, involving “Latin Hypercube” techniques (e.g. Stein, 1987), or fully stratified sampling, involving the “Sectioning Method” (Addiscott and Wagenet, 1985); or (2) an iterative sample (optimization) based on a random walk approach (Markov Chain Monte Carlo or MCMC). Standard Monte Carlo and MCMC are not fundamentally different in principle. However, in practice, MCMC is more efficient because it attempts to minimize the sample required to “adequately” represent the posterior. It can, therefore, be distinguished from predefined probability sampling using arguments of acceptable risk; that is, the risk of taking too few samples from the prior to adequately determine the (posterior distribution of) model uncertainties. An important disadvantage of standard Monte Carlo, and Monte Carlo in general, is the time taken to perform an uncertainty analysis, and other types of propagation tools are available for simple models (see below). An important disadvantage of MCMC is that its predictions may be systematically biased, as the sampling algorithm may become caught in localized pockets of good performance without accommodating the full range of equifinality implied by the prior distribution (see Page *et al.*, 2003). The risk of bias can be reduced by employing a variant of standard MCMC, known as “*shuffled complex evolution*”, which evolves different areas of the parameter space simultaneously (Vrugt *et al.*, 2003).
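A minimal Latin Hypercube sampler, for the stratified variant of option (1), might look as follows; this is a generic sketch of the technique, not code from any of the cited schemes:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """Latin Hypercube sample of the unit hypercube: each variable's range is
    divided into n_samples equal strata, and each stratum is sampled exactly once."""
    sample = np.empty((n_samples, n_vars))
    strata = np.arange(n_samples) / n_samples                # lower stratum edges
    for j in range(n_vars):
        points = strata + rng.random(n_samples) / n_samples  # one point per stratum
        sample[:, j] = rng.permutation(points)               # decouple the variables
    return sample

rng = np.random.default_rng(1)
lhs = latin_hypercube(10, 2, rng)
```

Each marginal is guaranteed to cover every stratum once, which is why Latin Hypercube sampling typically needs fewer runs than simple random sampling to represent the input distribution adequately.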

Another important application of inverse modeling in soil hydrology is for reducing “errors” in model predictions without an uncertainty analysis. Kalman filtering (e.g. Cahill *et al.*, 1999) is an example of this approach. Kalman filtering is used in dynamic modeling to update state variables and parameter values through time as new information becomes available, but has not, in general, been used to evolve different plausible parameter values (equifinality) within an uncertainty framework (although see Wendroth *et al.*, 1999).
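A single Kalman filter update step for a scalar state can be sketched as follows; the soil-moisture numbers are hypothetical, and a full filter would also include a forecast step between updates:

```python
def kalman_update(prior_mean, prior_var, obs, obs_var):
    """One Kalman filter update for a scalar state (e.g. soil moisture):
    combine a model forecast with a new observation, weighted by uncertainty."""
    gain = prior_var / (prior_var + obs_var)        # Kalman gain in [0, 1]
    post_mean = prior_mean + gain * (obs - prior_mean)
    post_var = (1.0 - gain) * prior_var             # updating always reduces variance
    return post_mean, post_var

# Hypothetical numbers: model forecast 0.30 (var 0.004), observation 0.26 (var 0.001)
m, v = kalman_update(prior_mean=0.30, prior_var=0.004, obs=0.26, obs_var=0.001)
```

The updated estimate lies between the forecast and the observation, pulled towards whichever is less uncertain, and the updated variance is smaller than either input variance.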

#### Structural Uncertainty

To date, uncertainty analyses of environmental models have typically focused on model inputs and parameter values, as evidenced by the range of schemes available for input uncertainty propagation and for evaluating plausible parameter values (e.g. Janssen *et al.*, 1994; Bennett *et al.*, 1998; Clausnitzer *et al.*, 1998; Duke *et al.*, 1998; Haan *et al.*, 1998; Heuvelink, 1998a; Hanson, 1999; Beven and Freer, 2001; Dillah and Protopapas, 2000; Christiaens and Feyen, 2001, 2002; Vrugt *et al.*, 2003). The focus here partly reflects a consensus that model inputs and parameters are an important source of uncertainty in simulation predictions. However, it also reflects the relative ease with which uncertainty in model inputs and parameters can be quantified in comparison to assessments of structural uncertainty in models. In practice, structural uncertainty may be more important than parameter uncertainty in evaluating model performance, but such uncertainties are difficult to assess explicitly or to separate from other uncertainties during the calibration process (Beven and Binley, 1992). For example, in a study of the Uhlirska Catchment, Czech Republic, Blazkova *et al.* (2002) found that uncertainties in the water table depths predicted by TOPMODEL could be attributed to uncertainties in the topographic data or to structural uncertainties in the model, for which further interpretation was difficult. However, Mackay and Robinson (2000) successfully evaluated the internal contradictions or “semantic errors” (one form of structural problem) between hydrological submodels in predicting water table depths with TOPMODEL.

As indicated above, uncertainties in model structure may originate from a perceived lack of knowledge about the real processes operating, or a belief that the model abstracts and simplifies known processes. Both of these are relevant in modeling soil water flows and solute transport. At an elementary level, structural uncertainty need not exist in hydraulic modeling, because Newton's laws (the general) cannot be improved, nor expressed more accurately in practice (Navier–Stokes equations). Rather, it is the simplification of these equations, their discretization in space and time, and their numerical solution that lead to uncertainty in specific cases. In this context, some types, scales or directions of flow may be deemed “less important” than others and either removed from the numerical model or their effects on the mean (important) flow incorporated through one or more empirical parameters. In practice, however, the distinction between “important” and “less important” flow requires “expert” judgment in specific cases, for which uncertainty (as well as ignorance) may be substantial. For example, while an empirical diffusion/dispersion parameter may be sufficient for describing the “mean flow” of water through a control volume (e.g. Darcy-Richards), it may not be appropriate for describing the “mean transport of solutes” within the fluid, as solute transport is sensitive to the “local” distribution of fluid velocities. Moreover, Newton's laws only resolve transfers of energy, matter, and momentum through the environment. They do not resolve the processes that lead to changes in the storage and transfer of these elementary units. For example, they cannot predict changes in rainfall inputs or in the hydraulic properties of the soil caused by vegetation growth. 
In practice, we are usually interested in the processes, as well as the transfers, because dominant process controls change through time and space, and, thus, Newton's laws alone cannot predict the future to an arbitrary degree of accuracy. In this context, structural uncertainty arises because our best representations of environmental processes are deemed insufficient.

In principle, the impacts of structural uncertainty can (and should) be evaluated by exploring different process formulations (explanatory uncertainty) or, less ideally, by adding correlated noise to model structures. In practice, however, identifying alternative process formulations or appropriate levels and patterns of noise is not straightforward.

### Uncertainty Propagation


The combination of uncertainties in model structure, inputs, parameter values, and solution leads to a distribution of “models” for any given case, which expresses our lack of confidence about a “correct” model for that case. This leads to uncertainty in model outputs, as predictions will vary according to model inputs, structure, parameter values, and solution method (Figure 2). When uncertainties in input data and models lead to uncertainties in model output, the original uncertainties are said to have “propagated” through the modeling system. The Monte Carlo method, described above, is one (very useful) approach to this problem, but not the only one.

The problem of uncertainty propagation can be formulated generically as follows (Heuvelink, 1998a). Let y be the output of a model g that incorporates any number of certain elements, but m uncertain elements *x*_{i}:

  *y* = *g*(*x*_{1}, *x*_{2}, …, *x*_{m})        (1)

The elements *x*_{i} represent input uncertainties as well as model uncertainties. Hence, they might refer to uncertainties in soil organic matter or porosity, but could also refer to uncertainties in van Genuchten parameters or in model structure. In terms of the latter, *x*_{i} could be a residual noise superimposed upon a deterministic model or a binary random variable that distinguishes between two alternative model structures. The aim here is to determine the uncertainty in the output *y*, given the operation g and the uncertainties in the inputs *x*_{i}. The output *y* will have a probability distribution, the variance of which is a measure for the propagated uncertainty in *y*. When *g* is linear and all uncertainties *x*_{i} are quantitative and quantified, the variance of *y* can be derived analytically. However, linear models are rare in soil hydrology, and it is, therefore, useful to consider numerical methods for solving equation 1. Two methods are considered below and, for simplicity, it is assumed that the uncertainties *x*_{i} are real numbers and are unbiased, although generalizations to the biased case are relatively straightforward, even if the quantification of bias is not.

#### Taylor Series Method

The Taylor Series Method (TSM), also known as *first-order analysis*, approximates g with a truncated Taylor series (Figure 3). It greatly simplifies the process of propagating uncertainty through a soil hydrological model. However, it also introduces an approximation error, which is proportional to the divergence of g from a linear form. The TSM yields analytical expressions for the mean and variance of the model output (Heuvelink, 1998a). The variance expression contains the correlations and standard deviations of the uncertain inputs and model parameters, as well as the mathematical derivatives of the model (which are assumed to exist). These derivatives reflect the sensitivity of the model to changes in each of the inputs and parameters. Rosenblueth (1975) proposed an alternative, but similar, approach to the TSM, which allows the moments (e.g. mean and variance) of a function to be estimated from 2^{m} model outputs, evaluated at all 2^{m} corners of a “hypercube” representing the input space for m model inputs. The need to evaluate all 2^{m} outputs is a significant restriction for complex models, but can be reduced to 2*m* outputs by using points on the diameters of a hypersphere, rather than the corners of the inscribed hypercube (Christian and Baecher, 2002).
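To make the TSM concrete, the sketch below propagates the mean and variance of two correlated, uncertain inputs through a toy model using central-difference derivatives evaluated about the input means. The model (a Darcy-type flux) and all numerical values are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Hypothetical model g: steady Darcy flux q = K * dH / L, with uncertain
# saturated conductivity K and head difference dH (assumed for illustration).
def g(x):
    K, dH = x
    L = 1.0  # flow-path length (m), treated as certain here
    return K * dH / L

mean = np.array([1.5, 0.8])        # means of K (m/day) and dH (m)
cov = np.array([[0.09, 0.01],      # covariance matrix of the uncertain
                [0.01, 0.04]])     # inputs, including their correlation

# Central-difference estimates of dg/dx_i about the mean
eps = 1e-6
grad = np.zeros(2)
for i in range(2):
    up, dn = mean.copy(), mean.copy()
    up[i] += eps
    dn[i] -= eps
    grad[i] = (g(up) - g(dn)) / (2 * eps)

# First-order (TSM) approximations of the output mean and variance:
# var(y) ~= grad' C grad, which carries the input correlations through
mean_y = g(mean)
var_y = grad @ cov @ grad
```

The quality of these two numbers degrades as *g* departs from a linear form over the range of the input uncertainties, which is precisely the approximation error discussed above.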

#### Monte Carlo Method

The Monte Carlo method adopts an entirely different approach to TSM, as it retains the original model *g*, but randomly samples from the joint distribution of possible input and model uncertainty for g (Hammersley and Handscomb, 1979). The Monte Carlo method involves:

1. Repeat *N* times steps (a) and (b):

    a. Generate a set of realizations of the uncertain inputs and model parameters, structure, and solution.

    b. For this set of realizations, compute and store the model output.

2. Compute and store sample statistics from the *N* outputs.
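The steps above can be sketched as follows; the model, distributions, and parameter values are illustrative assumptions, not prescribed by the method.

```python
import numpy as np

rng = np.random.default_rng(42)

def g(K, dH):
    # Toy model standing in for a soil hydrological model: a Darcy-type
    # flux proportional to conductivity K and head difference dH (assumed).
    return K * dH

N = 100_000  # number of Monte Carlo runs

# Step (a): generate N realizations of the uncertain inputs from an
# assumed joint distribution (sampled independently here for simplicity)
K = rng.lognormal(mean=0.0, sigma=0.3, size=N)   # conductivity (m/day)
dH = rng.normal(loc=0.8, scale=0.2, size=N)      # head difference (m)

# Step (b): compute and store the model output for each realization
y = g(K, dH)

# Final step: sample statistics summarize the propagated uncertainty
summary = {"mean": y.mean(), "sd": y.std(),
           "p05_p95": np.percentile(y, [5, 95])}
```

Because the original model *g* is retained, nothing in this procedure depends on *g* being linear or differentiable, which is the essential difference from the TSM.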

A representative sample from the joint distribution of uncertain inputs and model parameters, structure, and solution can be obtained using an appropriate pseudorandom number generator and a sufficiently large sample size (Ross, 1990; Van Niel and Laffan, 2003). The error of the Monte Carlo method is inversely proportional to the square root of the number of runs *N* and, therefore, accuracy increases only gradually with *N*. As such, the method is computationally expensive, but can reach an arbitrary level of accuracy, unlike the TSM or Rosenblueth's method. Of course, the time taken to perform an uncertainty analysis must influence the choice of propagation tool (Heuvelink, 2002). However, the Monte Carlo method is generic, invokes fewer assumptions, and requires less user-input than other propagation tools. Moreover, the accuracy of the method can be fixed in advance according to the level of risk associated with a decision. The standard Monte Carlo method can be adapted to reduce the number of samples (model simulations) required for a given level of accuracy in the computed uncertainty. Examples of modified sampling schemes include Latin Hypercube sampling (e.g. Rossel *et al.*, 2001; Christiaens and Feyen, 2002; Sohrabi *et al.*, 2002; Minasny and McBratney, 2002) and MCMC (e.g. Soulsby *et al.*, 2003; Vrugt *et al.*, 2003; see also the discussion in “Uncertainties in Models”). The Monte Carlo method has been widely used to propagate uncertainties through soil hydrological models and has become increasingly popular over recent years as the computational overheads of performing such an analysis have declined. Examples in soil hydrology can be found in Petach *et al.* (1991); Bennett *et al.* (1998); Duke *et al.* (1998); Hansen *et al.* (1999); Kros *et al.* (1999); Dillah and Protopapas (2000); Thorsen *et al.* (2001); Keller *et al.* (2002); De Vries *et al.* (2003).
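As an illustration of one such modified scheme, the sketch below implements basic Latin Hypercube sampling for two uniformly distributed inputs: each margin is divided into *N* equal-probability strata, one point is drawn per stratum, and the strata are then shuffled independently across dimensions. The variable names and ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, dims, rng):
    """Draw n points in [0,1)^dims with one point per marginal stratum."""
    # Stratified uniforms: before shuffling, point j lies in stratum j
    # of every margin...
    u = (rng.random((n, dims)) + np.arange(n)[:, None]) / n
    for j in range(dims):
        rng.shuffle(u[:, j])  # ...then decouple the strata across margins
    return u

n = 200
u = latin_hypercube(n, 2, rng)
K = 0.5 + 2.0 * u[:, 0]      # conductivity ~ U(0.5, 2.5) m/day (assumed)
theta = 0.2 + 0.2 * u[:, 1]  # porosity ~ U(0.2, 0.4) (assumed)
y = K / theta                # toy model output, e.g. a pore-water velocity
```

Because every marginal stratum is sampled exactly once, the scheme covers each input's distribution far more evenly than *n* purely random draws, which is why it typically needs fewer model runs for a given accuracy.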
For distributed modeling of soil water flows and solute transport, the sampling algorithm must accommodate spatial and temporal dependencies in model inputs and parameters (De Roo *et al.*, 1992; Endreny and Wood, 2001). This can be achieved through sequential simulation of distributed input or parameter values (e.g. Goovaerts, 1997).
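Full sequential simulation is beyond the scope of a short example, but the sketch below conveys the underlying idea by generating a one-dimensional, spatially autocorrelated input field from an assumed exponential covariance via Cholesky factorization; the variogram parameters and the porosity interpretation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 points along a transect; exponential covariance C(h) = sill * exp(-h/a)
x = np.arange(50, dtype=float)
a, sill = 10.0, 0.25                 # assumed range and sill parameters
h = np.abs(x[:, None] - x[None, :])  # pairwise separation distances
C = sill * np.exp(-h / a)

# The Cholesky factor maps iid standard normals onto a realization with
# the desired spatial covariance (small jitter keeps C positive definite)
L = np.linalg.cholesky(C + 1e-10 * np.eye(50))
field = 0.35 + L @ rng.standard_normal(50)  # e.g. porosity, mean 0.35
```

Nearby points in `field` vary together while distant points are nearly independent, which is exactly the dependence structure that a naive, point-by-point random sampler would fail to reproduce.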

### Evaluating the Contribution of Different Uncertainties


#### Prioritizing the Key Sources of Uncertainty

As indicated above, soil data and models are inherently imperfect because they abstract and simplify “real” soil properties and processes. However, it is neither possible nor desirable to eliminate all of the uncertainties associated with data and models, because resources are always limited and must be used effectively (Van Rompaey and Govers, 2002). Rather, an uncertainty analysis should aim to focus on those inputs and model entities that are likely to contribute most to the uncertainties associated with model predictions. The contribution of a specific, uncertain variable to the overall uncertainty in model predictions will depend upon the sensitivity of the model to that variable and the uncertainties associated with it. In general, models of soil water flow and solute transport will be highly sensitive to the specification of physical parameters, such as bulk density and hydraulic conductivity (e.g. Schaap and Leij, 1998; Abbaspour *et al.*, 2000; Christiaens and Feyen, 2001; Lenhart *et al.*, 2002; Vachaud and Chen, 2002b). As a calibration parameter in soil hydrological models, hydraulic conductivity may also be associated with the largest uncertainties (e.g. Rong and Wang, 2000; Schaap *et al.*, 2001; Foussereau *et al.*, 2001; Minasny and McBratney, 2002), although other parameters may be more important in specific cases (e.g. Seuntjens *et al.*, 2002; Dubus and Brown, 2002).

Evaluating the contribution of different sources of uncertainty to the overall uncertainties in model predictions is important for: (i) understanding where the greatest sources of uncertainty reside; and, therefore, (ii) directing efforts towards these sources. For example, De Vries *et al.* (2003) found that denitrification in the soil column was the main source of uncertainty in predicting nitrogen inflow to groundwater in Dutch agricultural soils, and suggested that improvements in field-based observations and process studies would lead to the greatest reductions in these uncertainties. Of course, too normative an emphasis on reducing uncertainty is inappropriate when the outcomes of an uncertainty analysis are highly sensitive to the assumptions made in performing that analysis (e.g. model uncertainty). Similarly, it is problematic to assess the contributions of model structure and parameter uncertainties separately because: (i) model parameters are empirical quantities and are not inherently uncertain; and (ii) model uncertainties can only be assessed in total, by comparing model predictions with independent observations. However, it should be possible to assess the contribution of different input uncertainties and the total model uncertainty to the overall uncertainties in model outputs.

In most cases, it is neither conceptually appropriate nor theoretically feasible to resolve every source of uncertainty in model predictions, but it is also important to consider practical arguments when addressing specific sources of uncertainty in models. In particular, there is a need for pragmatism when improving model inputs that do not contribute significantly to the overall uncertainties in model outputs. For example, Loague *et al.* (1989) suggest that improvements in a model of pesticide leaching should focus on soil organic carbon rather than bulk density, as model predictions were more sensitive to uncertainty in organic carbon content than in bulk density. Equally, attempts to reduce uncertainty in model outputs must be balanced against the range of inputs for which uncertainties are poorly defined or remain unknown, and against the practical benefits of improving model inputs or structures. For example, high quality input data are of little use when the models themselves are poor or inherently uncertain. Similarly, improvements in models should be justified against the need for additional complexity, the accuracy of new concepts, and their relevance, as well as the resources required to implement them (Van Rompaey and Govers, 2002).

#### Partitioning Property

Under the fairly strict assumptions that model and input uncertainties are mutually uncorrelated, only quantifiable uncertainties are involved, *g* is continuously differentiable with respect to all uncertain inputs, and the TSM approximation error is relatively small, the following holds (Heuvelink, 1998a):

  Var(*y*) ≈ Σ_{i=1}^{m} [∂*g*/∂*x*_{i}]^{2} Var(*x*_{i})        (2)

where the derivative of *g* with respect to each of the inputs *x*_{i} is evaluated about its mean. That is, the variance of *y* is the sum of the products of the variances from each uncertain input and the squared derivative of *g* with respect to that particular input. This partitioning property allows the user to determine which sources of uncertainty are the main contributors to uncertainty in model predictions. It also allows an assessment of how the output variance will decline for a given reduction in the variance from one or more inputs. Clearly, the output variance will mainly improve from a reduction of the input variance that contributes most to the uncertainties in model output (other factors being equal). However, this may not correspond to the input with the largest variance, because the hydrological model will display different sensitivities to each uncertain input. Similarly, the contributions of the input uncertainties, as well as the magnitude of the output uncertainty, will depend upon the specific “case” considered and the types of output predicted. While it is instructive to consider equation 2, its assumptions of near-linearity in g and zero correlation between input uncertainties are rarely met in practice. Rather, for most practical applications in soil hydrology, where these assumptions are deemed unacceptable, more advanced analysis techniques, often based on Monte Carlo simulation, may be employed (*see* Jansen *et al.*, 1994; Jansen, 1999; Chan *et al.*, 2000).
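The sketch below evaluates equation (2) for a toy model with two uncertain inputs, showing that the input with the largest variance need not contribute most to the output variance; the model and all numbers are illustrative assumptions.

```python
# Toy model: y = g(P, E) = P - c * E (e.g. recharge from precipitation P
# and evaporation E); the model and all numbers are assumed for illustration.
c = 0.3
dP, dE = 1.0, -c          # derivatives of g about the means (analytic here)

var_P = 400.0             # variance of P (mm^2 yr^-2)
var_E = 2500.0            # E is the *more* uncertain input

# Equation (2): each term is one input's contribution to Var(y)
contrib_P = dP**2 * var_P      # 1.00 * 400  = 400
contrib_E = dE**2 * var_E      # 0.09 * 2500 = 225
var_y = contrib_P + contrib_E  # 625

# Despite its larger variance, E contributes less, because the model is
# less sensitive to it: sensitivity and input variance matter jointly.
```

Reducing `var_P` would therefore improve the output variance more than the same proportional reduction in `var_E`, even though `var_E` is the larger of the two.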

Examples from soil hydrology where the contributions of different sources of uncertainty were compared include Zhang *et al.* 1993; Finke *et al.* 1996; Piggott and Cawlfield 1996; Tiktak *et al.* 1999; Barlund and Tattari 2001; Dubus and Brown 2002; Keller *et al.* 2002; Sohrabi *et al.* 2002; De Vries *et al.* 2003.

### Scale and Uncertainty


Issues of scale have received increasing attention in soil hydrology over recent years (e.g. Blöschl and Sivapalan, 1995; De Vries *et al.*, 1998; Bierkens *et al.*, 2000). This partly reflects a consensus that hydrological patterns and processes are strongly scale-dependent (Blöschl and Sivapalan, 1995) and that the implications of scale, and changes between scales, have not been fully acknowledged in hydrological modeling (Beven, 1995, 2001). It also reflects an increasing demand for “policy-relevant” research in hydrology (Beven, 2000; Clifford, 2002), where there is a need to integrate social and physical perspectives on environmental problems that traverse a range of political and geographic scales (e.g. Dumanski *et al.*, 1998; Kros *et al.*, 1999; Kros *et al.*, 2002).

The need to operate at a range of scales, or to change between scales, introduces uncertainty because the dominant patterns and process controls may not be known at all scales, or incorporated practically in models at the scales of interest (e.g. Heuvelink and Pebesma, 1999; Vachaud and Chen, 2002b; Hennings, 2002). For example, while macropore and preferential flow are dominant at the “pedon scale”, they remain important contributory processes to flow patterns at “field”, “catchment”, and “regional” scales (Heuvelink, 1998b; Heuvelink and Pebesma, 1999; Zehe *et al.*, 2001), but are difficult to parameterize from general-purpose soil survey data and, hence, to incorporate in large-scale models of soil water flow (Simmonds and Nortcliff, 1998).

#### Scale Dependence of Model Inputs

When model inputs, or the observations used to test model outputs, are defined with a different control volume or “support” from that required by the model, these data must be aggregated or disaggregated, or the model must be redefined at an appropriate scale (Heuvelink and Pebesma, 1999; Bierkens *et al.*, 2000). The basic elements of “support” include the domain chosen to represent a real entity, together with the size (resolution), shape, and orientation of the “building blocks” or space-time units used to discretize that entity (Webster and Oliver, 1990). An important consequence of representing model inputs as stochastic quantities is that changes in support will affect uncertain quantities (e.g. a variance) more than deterministic ones (e.g. an average). Thus, aggregating or disaggregating data may greatly affect the probability distribution for those data, and, particularly, its width, without significantly affecting the mean value (Heuvelink and Pebesma, 1999; Heuvelink, 2002). For example, in a study of soil acidification with the SMART2 model, Kros *et al.* 2002 found that predicted concentrations of aluminum and nitrate in the soil solution were highly dependent upon the support size of model entities. In practice, space-time aggregation should lead to a reduction in uncertainty and to an increase in the spatial autocorrelation of model inputs because much of the variability at finer scales is lost and, thus, disappears as a source of uncertainty (e.g. Heuvelink and Pebesma, 1999). By contrast, space-time disaggregation will lead to an increase in uncertainty and to a reduction in the spatial autocorrelation of model inputs because the attribute variability is increased at finer scales.
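The effect of support size on an input's probability distribution can be illustrated numerically: averaging “point” values over larger blocks leaves the mean essentially unchanged but shrinks the variance. The sketch below assumes spatially uncorrelated point values (for autocorrelated soil data the variance reduction is smaller, but the direction of the effect is the same); the distribution and its interpretation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fine-support "point" values of an input, e.g. organic matter content (%)
points = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)

# Aggregate to a coarser support by averaging blocks of 100 points
blocks = points.reshape(100, 100).mean(axis=1)

mean_shift = abs(points.mean() - blocks.mean())  # essentially zero
var_ratio = blocks.var() / points.var()          # roughly 1/100 here
```

Disaggregation runs in the opposite direction: attributing a block average to its constituent points understates the point-support variance by the same mechanism.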

#### Scale Dependence of Models

Uncertainties in model structure (dominant process controls) and parameter values are also sensitive to changes in scale. However, unlike aggregation or disaggregation of data, model parameters cannot be transformed using the original quantities alone because they do not, in general, refer to real, measurable things. Rather, upscaling or downscaling of model parameters can only be achieved by (re)-calibrating the model at another scale, and may require adaptation of the original model because functional relationships are typically nonlinear and process controls usually change with scale (Addiscott and Tuck, 2001). More fundamentally, upscaling or downscaling of model parameters may hide underlying problems with model structure if process controls are not simultaneously evaluated for their relevance and sufficiency at other scales (i.e., uncertainty in explanation).

### Conclusions


A mismatch between the complexity of soil patterns and processes and our ability to capture them adequately for some practical purpose leads to uncertainty in the predictions of soil hydrological models. These uncertainties originate from a lack of confidence in model inputs, which include measurement and interpolation errors, and in models, which include conceptual, logical, and computational errors. Uncertainties in model inputs and models can be described with probability distribution functions, for which a number of conditions and parameters must be estimated *a priori*. In principle, this can be achieved with a “data-driven” approach for model inputs. Where observations are lacking, a “people-driven” approach is required, but the resulting estimates may introduce bias into the uncertainty analysis. For estimating uncertainties in model parameters, a data-driven approach cannot be used because model parameters do not, in general, refer to real, measurable quantities. Here, a “people-driven” approach is useful for making an initial assessment of parameter uncertainties, which may be updated later through inverse modeling. In practice, uncertainties in model concepts (structural or explanatory uncertainties) are difficult to estimate *a priori* or to isolate through inverse modeling, but may ultimately determine the utility of model predictions.

Uncertainties in model inputs and models combine and propagate through a hydrological model, leading to uncertainties in model outputs, which can be quantified using a range of statistical techniques. In soil hydrology, the TSM and Monte Carlo Simulation (MCS) have been widely used for uncertainty propagation. In recent years, MCS has largely replaced the TSM, as it is generic, invokes fewer assumptions, and requires less user-input than other propagation tools. Moreover, continued improvements in computing hardware and sampling techniques (e.g. MCMC) have allowed MCS to be applied to increasingly complex models, including spatially and temporally distributed models of soil water flow and solute transport. Nevertheless, uncertainty analyses remain complicated when uncertain inputs are “autocorrelated” in space or time or “cross-correlated” between variables. More fundamentally, autocorrelation and cross-correlation will introduce more degrees of freedom than can be constrained uniquely through comparisons of model predictions and independent observations (i.e., inverse modeling), leading to the characteristic equifinality of hydrological models, but also implying sensitivity to the specific range of observations available. Indeed, while it has long been recognized that mathematical models should not be more complicated than specific applications require, recent research in soil hydrology has consistently reiterated the need to balance model complexity against the “testability” of model predictions. It has also argued for improvements in field observations as a key source of information for reducing uncertainties in soil hydrological models. In this context, it is helpful to identify those variables that contribute most to uncertainties in model outputs, because resources are always limited and must be used effectively.

In evaluating the contribution of different sources of uncertainty to the overall uncertainties in model predictions, it is instructive to consider the partitioning property for a multivariate distribution. In practice, however, more advanced analysis techniques based on Monte Carlo simulation are necessary for investigating the sources of uncertainties in complex hydrological models. While analysis of variance (ANOVA) techniques should assist in targeting resources towards those inputs and parameters that contribute most to uncertainties in model outputs, they are ultimately constrained by the interdependence of model inputs, structure, and parameters following calibration and the lack of physical reasoning attached to parameter uncertainties and, hence, the difficulties in separating model parameter and structural uncertainties. When evaluating predictive uncertainties, it is, therefore, important to distinguish between the predictive performance of a model and its ability to explain environmental phenomena. Indeed, estimates of uncertainty in model predictions will be unreliable if the explanations upon which they are based are inappropriate. In principle, this implies a detailed analysis of the “contingencies of place” associated with a particular modeling study and careful inspection of any modeling assumptions introduced within this context. In practice, however, this assumes a depth and breadth of expertise, including a detailed knowledge of mathematical modeling and field techniques, that is rarely, if ever, available (Blöschl, 2001). Thus, while decision-makers, and those affected by decisions, would benefit most from improved access to uncertainty tools, they currently benefit least because of the inherent limitations of existing tools and the practical disadvantages of performing such analyses. These issues must be addressed if an important aim of developing uncertainty tools is to encourage more widespread criticism of data and models in soil hydrology.

Given that uncertainty analyses are not benign instruments, with the capacity to both encourage unreasonable decisions and impede reasonable ones, there is a need to balance the complexity of an uncertainty analysis against the expertise of the user and the risks associated with bad decisions. While MCS dramatically simplifies the problem of propagating uncertainties through soil hydrological models, there are many problems of equal or greater complexity that remain. These include assessments of uncertainty in space-time categorical data, analyses of statistical dependence within and between uncertain inputs and parameters (autocorrelation and cross-correlation), assessments of structural uncertainty in models, and analyses of the scale-dependence of model input and model uncertainties. In evaluating modeling uncertainties, the importance of accounting for different modeling scales and changes between scales can hardly be over-emphasized because the consequence of using wrong (combinations of) support is to invalidate the uncertainty analysis. In soil hydrology, and in environmental science more generally, the need for interdisciplinary, “policy-relevant” research will only increase this problem in future. Here, the benefits of performing an uncertainty analysis are clear (e.g. Beven, 2000), but the challenges of balancing physical and statistical realism against the need for pragmatism in extending their application are considerable.

### References


- 2000) Inverse parameter estimation in a layered unsaturated field soil. Soil Science, 165, 109–123. , and (
- 1999) Uncertainty in estimation of soil hydraulic parameters by inverse modeling: example lysimeter experiments. Soil Science Society of America Journal, 63, 501–509. , and (
- 1985) A simple method for combining soil properties that show variability. Soil Science Society of America Journal, 49, 1365–1369. and (
- 1995) Critical evaluation of models and their parameters. Journal of Environmental Quality, 24, 803–807. , and (
- 2001) Non-linearity and error in modelling soil processes. European Journal of Soil Science, 52, 129–138. and (
- 2001) Model Validation: Perspectives in Hydrological Science, John Wiley & Sons: Chichester. and (
- 1998) Semi-parametric statistical approaches for space-time process prediction. Environmental and Ecological Statistics, 5, 297–316. , , and (
- 2001) Ranking of parameters on the basis of their contribution to model uncertainty. Ecological Modelling, 142, 11–23. and (
- 1998) On uncertainty in remediation analysis: variance propagation from subsurface transport to exposure modeling. Reliability Engineering and System Safety, 62, 117–129. , , and (
- 2001) Bayesian Theory, John Wiley & Sons: Chichester. and (
- 1995) Linking parameters across scales: subgrid parameterizations and scale-dependent hydrological models. Hydrological Processes, 9, 251–290. (
- 2000) On model uncertainty, risk and decision making. Hydrological Processes, 14, 2605–2606. Direct Link: (
- 2001) On explanatory depth and predictive power. Hydrological Processes, 15, 3069–3072. (
- 1992) The future of distributed models: model calibration and uncertainty prediction. Hydrological Processes, 6, 279–298. and (
- 2001) Equifinality, data assimilation, and uncertainty estimation in mechanistic modelling of complex environmental systems using the GLUE methodology. Journal of Hydrology, 249, 11–29. and (
- 1993) The indicator approach to categorical soil data. I. Theory. Journal of Soil Science, 44, 361–368. and (
- 2000) Upscaling and Downscaling Methods for Environmental Research, Kluwer: Dordrecht. , and (
- 2002) Testing the distributed water table predictions of TOPMODEL (allowing for uncertainty in model calibration): The death of TOPMODEL? Water Resources Research, 38, 1257. , , and (
- 2001) Scaling in hydrology. Hydrological Processes, 15(4), 709–711. (
- 1995) Scale issues in hydrological modeling - a review. Hydrological Processes, 9, 251–290. and (
- 2001) Implications of model uncertainty for the mapping of hillslope-scale soil erosion predictions. Earth Surface Processes and Landforms, 26, 1333–1352. , , and (
- 1999) Combined spatial and Kalman filter estimation of optimal soil hydraulic properties. Water Resources Research, 35, 1079–1088. , , , and (
- 2000) Winding stairs: a sampling tool to compute sensitivity indices. Statistics and Computing, 10, 187–196. , and (
- 1999) Evaluation of prediction intervals for expressing uncertainties in groundwater flow model predictions. Water Resources Research, 35, 2627–2639. and (
- 2001) Analysis of uncertainties associated with different methods to determine soil hydraulic properties and their propagation in the distributed hydrological MIKE SHE model. Journal of Hydrology, 246, 63–81. and (
- 2002) Constraining soil hydraulic parameter and output uncertainty of the distributed hydrological MIKE SHE model using the GLUE framework. Hydrological Processes, 16, 373–391. and (
- 2002) The point-estimate method with large numbers of variables. International Journal for Numerical and Analytical Methods in Geomechanics, 26, 1515–1529. and (
- 1998) Parameter uncertainty analysis of common infiltration models. Soil Science Society of America Journal, 62, 1477–1487. , and (
- 2002) Hydrology: the changing paradigm. Progress in Physical Geography, 26, 290–301. (
- 1991) Experts in Uncertainty. Opinion and Subjective Probability in Science, Oxford University Press. (
- 1992) Estimating the effects of spatial variability of infiltration on the output of a distributed runoff and soil erosion model using Monte Carlo methods. Hydrological Processes, 6, 127–143. , and (
- 1998) The use of upscaling procedures in the application of soil acidification models at different spatial scales. Nutrient Cycling in Agroecosystems, 50, 225–238. , , , and (
- 2003) Uncertainties in the fate of nitrogen II: a quantitative assessment of the uncertainties in major nitrogen fluxes in the Netherlands. Nutrient Cycling in Agroecosystems, 66, 71–102. , , and (
- 2000) Regional-scale leaching assessments for Tenerife: effect of data uncertainties. Journal of Environmental Quality, 29, 835–847. and (
- 2000) Uncertainty propagation in layered unsaturated soils. Transport in Porous Media, 38, 273–290. and (
- 2002) Sensitivity and first-step uncertainty analyses for the preferential flow model MACRO. Journal of Environmental Quality, 31, 227–240. and (
- 1998) Parameter-induced uncertainty in modeling vadose zone transport of VOCs. Journal of Environmental Engineering-ASCE, 124, 441–448. , and (
- 1998) Relevance of scale dependent approaches for integrating biophysical and socio-economic information and development of agroecological indicators. Nutrient Cycling in Agroecosystems, 50, 13–22. , and (
- 2001) Representing elevation uncertainty in runoff modelling and flowpath mapping. Hydrological Processes, 15, 2223–2236. and (
- 1999) Quantification and simulation of errors in categorical data for uncertainty analysis of soil acidification modelling. Geoderma, 93, 177–194. , , , and (
- 1996) Effects of uncertainty in major input variables on simulated functional soil behaviour. Hydrological Processes, 10, 661–669. , and (
- 2001) Solute transport through a heterogeneous coupled vadose-saturated zone system with temporally random rainfall. Water Resources Research, 37, 1577–1588. , , , and (
- 1997) The Practice of Bayesian Analysis, Arnold: 152–171. and (
- 1997) Geostatistics for Natural Resources Evaluation, Oxford University Press. (
- 2001) Geostatistical modelling of uncertainty in soil science. Geoderma, 103, 3–26. (
- 1998) Effect of parameter distributions on uncertainty analysis of hydrologic models. Transactions of the ASAE, 41, 65–70. , , , , and (
- 1979) Monte Carlo Methods, Chapman & Hall: London. and (
- (1999) Uncertainty in simulated nitrate leaching due to uncertainty in input data: a case study. Soil Use and Management, 15, 167–175.
- (1999) A framework for assessing uncertainties in simulation predictions. Physica D, 133, 179–188.
- (2002) Accuracy of coarse-scale land quality maps as a function of the upscaling procedure used for soil data. Geoderma, 107, 177–196.
- (1998a) Error Propagation in Environmental Modelling with GIS, Taylor & Francis: London.
- (1998b) Uncertainty analysis in environmental modelling under a change of spatial scale. Nutrient Cycling in Agroecosystems, 50, 255–264.
- (2002) Analysing uncertainty propagation in GIS: why is it not that simple? In Uncertainty in Remote Sensing and GIS, Foody, G. M. and Atkinson, P. M. (Eds.), Wiley: Chichester, pp. 155–165.
- (1992) Combining soil maps with interpolations from point observations to predict quantitative soil properties. Geoderma, 55, 1–15.
- (1999) Spatial aggregation and soil process modelling. Geoderma, 89, 47–65.
- (1999) Analysis of variance designs for model output. Computer Physics Communications, 117, 35–43.
- (1994) Monte Carlo estimation of uncertainty contributions from several independent multivariate sources. In Predictability and Nonlinear Modelling in Natural Sciences and Economics, Grasman, J. and Van Straten, G. (Eds.), Kluwer: Dordrecht, pp. 334–343.
- (1994) UNCSAM: a tool for automating sensitivity and uncertainty analysis. Environmental Software, 9, 1–11.
- (1992) ‘Expert information’ versus ‘expert opinions’: another approach to the problem of eliciting/combining/using expert knowledge in PRA. Reliability Engineering and System Safety, 35, 61–72.
- (2002) Assessment of uncertainty and risk in modeling regional heavy-metal accumulation in agricultural soils. Journal of Environmental Quality, 31, 175–187.
- (2003) Uncertainties in the fate of nitrogen I: an overview of sources of uncertainty illustrated with a Dutch case study. Nutrient Cycling in Agroecosystems, 66, 43–69.
- (2002) Assessment of the prediction error in a large-scale application of a dynamic soil acidification model. Stochastic Environmental Research and Risk Assessment, 16, 279–306.
- (1999) Uncertainty assessment in modelling soil acidification at the European scale: a case study. Journal of Environmental Quality, 28, 366–377.
- (1988) On the validity of first-order prediction limits for conceptual hydrologic models. Journal of Hydrology, 103, 229–247.
- (2001) A geostatistical approach for mapping thematic classification accuracy and evaluating the impact of inaccurate spatial data on ecological model predictions. Environmental and Ecological Statistics, 8, 311–330.
- (1999) Geostatistical space-time models: a review. Mathematical Geology, 31, 651–684.
- (1990) Comparison of spatial prediction methods for mapping floodplain soil pollution. Catena, 17, 535–550.
- (1995) Errors in the estimation of soil water properties and their propagation through a hydrological model. Soil Use and Management, 11, 15–21.
- (2002) Comparison of two different approaches of sensitivity analysis. Physics and Chemistry of the Earth, 27, 645–654.
- (1989) Uncertainty in pesticide leaching assessment in Hawaii. Journal of Contaminant Hydrology, 4, 139–161.
- (2000) A multiple criteria decision support system for testing integrated environmental models. Fuzzy Sets and Systems, 113, 53–67.
- (1996) Alternative modeling approaches for contaminant fate in soils: uncertainty, variability, and reliability. Reliability Engineering and System Safety, 54, 165–181.
- (2002) Uncertainty analysis for pedotransfer functions. European Journal of Soil Science, 53, 417–429.
- (1990) Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press: New York.
- (2001) Elicitation of expert judgments of climate change impacts on forest ecosystems. Climatic Change, 49, 279–307.
- (1994) Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263, 641–646.
- (2003) Investigating the uncertainty in predicting responses to atmospheric deposition using the model of acidification of groundwater in catchments (MAGIC) within a generalised likelihood uncertainty estimation (GLUE) framework. Water, Air and Soil Pollution, 142, 71–94.
- (1991) Regional water flow and pesticide leaching using simulations with spatially distributed data. Geoderma, 48, 245–269.
- (1996) Probabilistic sensitivity analysis for one-dimensional contaminant transport in the vadose zone. Journal of Contaminant Hydrology, 24, 97–115.
- (2000) Monte Carlo vadose zone model for soil remedial criteria. Soil and Sediment Contamination, 9, 593–610.
- (1975) Point estimates for probability moments. Proceedings of the National Academy of Sciences of the United States of America, 72, 3812–3814.
- (1990) A Course in Simulation, Macmillan: New York.
- (2001) Assessment of the production and economic risks of site-specific liming using geostatistical uncertainty modelling. Environmetrics, 12, 699–711.
- (1996) Testing ecological models: the meaning of validation. Ecological Modelling, 90, 229–244.
- (1998) Database-related accuracy and uncertainty of pedotransfer functions. Soil Science, 163, 765–779.
- (2001) ROSETTA: a computer program for estimating soil hydraulic parameters with hierarchical pedotransfer functions. Journal of Hydrology, 251, 163–176.
- (2000) Inverse estimation of parameters in a nitrogen model using field data. Soil Science Society of America Journal, 64, 533–542.
- (2002) Sensitivity analysis of physical and chemical properties affecting field-scale cadmium transport in a heterogeneous soil profile. Journal of Hydrology, 264, 185–200.
- (1998) Small-scale variability in the flow of water and solutes, and implications for lysimeter studies of solute leaching. Nutrient Cycling in Agroecosystems, 50, 65–75.
- (1989) Ignorance and Uncertainty: Emerging Paradigms, Springer-Verlag: New York.
- (2003) Soil water content interpolation using spatio-temporal kriging with external drift. Geoderma, 112, 253–271.
- (2002) Uncertainty in nonpoint source pollution models and associated risks. Environmental Forensics, 3, 179–189.
- (2003) Identifying and assessing uncertainty in hydrological pathways: a novel approach to end member mixing in a Scottish agricultural catchment. Journal of Hydrology, 274, 109–128.
- (1987) Large sample properties of simulations using Latin hypercube sampling. Technometrics, 29, 143–151.
- (2001) Assessment of uncertainty in simulation of nitrate leaching to aquifers at catchment scale. Journal of Hydrology, 242, 210–227.
- (1999) Uncertainty in a regional-scale assessment of cadmium accumulation in the Netherlands. Journal of Environmental Quality, 28, 461–470.
- (2002a) Sensitivity of computed values of water balance and nitrate leaching to within soil class variability of transport parameters. Journal of Hydrology, 264, 87–100.
- (2002b) Sensitivity of a large-scale hydrologic model to quality of input data obtained at different scales; distributed versus stochastic non-distributed modelling. Journal of Hydrology, 264, 101–112.
- (2003) Gambling with randomness: the use of pseudo-random number generators in GIS. International Journal of GIS, 17, 49–68.
- (2002) Data quality and model complexity for regional scale soil erosion prediction. International Journal of GIS, 16, 663–680.
- (1990) A comparison of kriging, cubic splines and classification for predicting soil properties from sample information. Journal of Soil Science, 41, 473–490.
- (2003) Toward improved identifiability of soil hydraulic parameters: on the selection of a suitable parametric model. Vadose Zone Journal, 2, 98–113.
- (2002) The development and application of a multilevel decision analysis model for the remediation of contaminated groundwater under uncertainty. Journal of Environmental Management, 64, 221–235.
- (2000) Is soil variation random? Geoderma, 97, 149–163.
- (1990) Statistical Methods in Soil and Land Resource Survey, Oxford University Press: Oxford.
- (1999) State-space prediction of field-scale soil water content time series in a sandy loam. Soil and Tillage Research, 50, 85–93.
- (2001) Pedotransfer functions: bridging the gap between available basic soil data and missing soil hydraulic properties. Journal of Hydrology, 251, 123–150.
- (2001) Modeling water flow and mass transport in a loess catchment. Physics and Chemistry of the Earth, Part B: Hydrology, Oceans and Atmosphere, 26, 487–507.
- (1993) An approach to estimating uncertainties in modeling transport of solutes through soils. Journal of Contaminant Hydrology, 12, 35–50.