Dynamic rating curve assessment in unstable rivers using Ornstein-Uhlenbeck processes

Authors


Abstract

[1] The procedure of fitting rating curves in channels where the stage-discharge relationship is subject to changes driven by morphological processes remains one of the major unsolved problems in hydrometry. This paper addresses this issue by formulating the stage-discharge relationship as a steady flow, one-segmented power law model with parameters that are viewed as stochastic processes with characteristics associated with the temporal instabilities of the channel elements governing the stage-discharge relationship. A Bayesian analysis with informative priors and time-stage-discharge measurements as forcing data is used to determine the most plausible model and its posterior parameter distributions using Markov chain Monte Carlo simulation techniques and particle filtering. The proposed framework is applied to data from gauging stations in two unstable rivers and one stable river in Norway.

1. Introduction

[2] Rating curves are used for transforming stage data recorded at gauging stations into discharge so that streamflow time series can be obtained and applied in hydrological, hydraulic, and geomorphologic analysis as well as for water resources management and in climatic studies. Rating curves are, in most cases, obtained by fitting a predetermined stage-discharge model to measurements of discharge and concurrent values of stage.

[3] It is well known in hydrometry that rating curve assessment in natural rivers can be plagued by several complicating factors [Di Baldassarre and Montanari, 2008], of which unstable channel conditions are the most difficult to handle and also the most widespread. Temporal changes in the elements that govern the stage-discharge relationship at the gauge, such as channel geometry, roughness characteristics, and approach conditions, cause the stage-discharge relationship to vary with time. Shifting control behavior is a result of complex river processes that are both diverse and difficult to interpret. There is a large body of literature on the causes of change in natural watersheds, e.g., the flood regime and climatic changes, and the mechanisms behind river adjustment have been investigated extensively. Channel instability induced by human activity in the catchment (e.g., river regulation, land use change, mining, etc.) is also well covered in the literature. It is beyond the scope of this paper to give a satisfactory review of the geomorphologic literature. Works such as those by Schumm [1977] and Knighton [1998] provide a thorough treatment of the fluvial processes that cause channel change through time, together with long lists of references.

[4] The procedure of setting up rating curves for unstable channel sections has not been adequately solved in the literature. Virtually all standard hydrometric literature [e.g., Rantz et al., 1982; Herschy, 1995; International Standards Organization, 1998] tackles the problem by segmenting the available stage-discharge measurements into time periods for which the hydraulic control is assumed to be stable. Rating curves with unique sets of parameters for each of the time periods are then set up. Some measurements are sometimes used across two or more time periods when it is believed that instability is affecting only a limited range, typically the lower one, of the stage-discharge relationship. The rating curves are usually derived from one of two methods. The first way is to fit the rating curves to the measurements using statistical or manual methods. Another approach is to start with the last valid rating curve, or a base rating curve, and then adjust some, often one, of its parameters until the resulting curve fits adequately to the measurements taken after the date when the shift is deemed to have taken place. Burkham and Dawdy [1970] illustrate this approach with a practical example and an error analysis.

[5] There are several problems with this common practice. First and foremost, capturing all changes of the rating curve with measurements is, in most cases, not possible because of economic and personnel constraints. This introduces considerable uncertainty about the segmentation dates and the validity of the current and past rating curves. Also, the information about the stage-discharge relationship contained in the measurements will vary, sometimes greatly, between consecutive time periods. As a result, the computed streamflow time series characteristics will more or less jump from one time period to the next. The traditional approach does not rectify this problem appropriately, especially in cases where channel changes vary continuously with time. It is similar to approximating a complex nonlinear function with linear segments, where the number of segments available is constrained by the number of observations of the nonlinear function. Also, a general and clear definition of when to introduce a shift is lacking. A common method is to assess the deviation between the current rating curve and recent consecutive validation measurements; if the deviations appear systematic and larger than one would expect from measurement imprecision alone, a shift is indicated. This assessment can be supported by experience and by direct observations of the changing control. Clearly, the success of this technique for detecting rating shifts is highly variable and depends on factors such as measurement frequency, knowledge about the site-specific conditions dictating the discharge measurement accuracy, and information about the characteristics of the changing control, as well as the skill and experience of the analyst assessing the problem. Even if the hydraulic control is stable within the chosen time periods, there will normally be few measurements available for fitting or adjusting the rating curves.
Consequently, modelers using the streamflow data are faced with a large and varying degree of rating curve uncertainty. It should be mentioned that some studies propose methods where a theoretical flow equation (e.g., a uniform flow formula or a critical flow equation) is applied to leveled channel geometry and then fitted to measurements. Leonard et al. [2000] utilized this approach to model a stage-fall-discharge relationship in an unstable river using two stage gauges. To overcome the problems of varying geometry, they assumed that changes in cross-sectional geometry were manifested in the slope measurement characteristics. Although they found evidence for this assumption in results from several geometrical surveys of the channel reach, it can hardly be recognized as a general rule. Nonetheless, this method might be a viable but costly way for setting up rating curves in some unstable rivers.

[6] The aforementioned criticism implies that a better approach would be to model the channel changes continuously in the time domain. Explicitly, this means that the rating curve parameters should be viewed as appropriate functions of time t. The simplest way is to formulate some of the parameters as deterministic functions of time, e.g., polynomials in t with unknown coefficients that can be obtained through standard regression methods. This naïve approach is incomplete because it assumes an overly simplistic course of change and requires accurate prior knowledge about the progress of the changing control. A better approach is to view the rating curve parameters as continuous time stochastic processes. This paper proposes a method where the rating curve parameters are assumed to follow Ornstein-Uhlenbeck processes, which are described in section 3. The analysis is performed using a Bayesian framework and Markov chain Monte Carlo simulation techniques. The appropriateness and computational feasibility of the method are examined in three Norwegian case studies.

2. Power Law Rating Curve Model

[7] The cross-sectional geometry of a natural section control or the reach-averaged geometry of a channel control might be formulated as a so-called power law:

$$w = c\,d^{\beta} \qquad (1)$$

where w is the water surface width, d is the effective flow depth, and c and β are parameters related to the scale and shape of the cross section, respectively. Since d normally is not directly identifiable in this idealized description and also varies in an unstable channel, it is replaced by h − h0, where h is the stage, or the water level in relation to a local datum, and h0 is a parameter representing the effective stage of zero flow. The value of h0 will normally be relatively close to the stage of the lowest point of the control. If the hydraulic control has compound characteristics, several segments with unique parameters have to be invoked, leading to a multisegmented rating curve and considerably more involved modeling procedures [Reitan and Petersen-Øverleir, 2009]. Because a compound rating curve implies a framework that is significantly more complicated than the one-segmented case, it is not considered in this study. The dynamic model described here is, at least in theory, applicable in multisegmented cases, although it might not be computationally feasible in practice.

[8] If the stage-discharge relationship is governed by channel control, a simple friction law equation, such as the Manning formula, would normally be applicable for steady or quasi-steady flow. Assuming steady flow conditions and a wide channel (w ≫ d), one obtains

$$Q = \frac{F\,c}{(\beta+1)^{5/3}}\,(h - h_0)^{\beta + 5/3}\,\sqrt{s_f} \qquad (2)$$

where Q is the discharge, F is a coefficient reflecting channel friction, and s_f is the friction slope. The friction law exponent 5/3 could be replaced by an unfixed value, say m, as it is known to vary between different types of channels.

[9] The stage-discharge relationship for a natural section control, e.g., a ledge of rocks or gravels, can be formulated as for a broad-crested weir. Neglecting velocity head upstream at the gauge and assuming a power law depth-width relationship, one can postulate the following stage-discharge relationship:

$$Q = \sqrt{\frac{g}{\alpha}}\;\frac{c}{(\beta+1)^{3/2}}\left(\frac{2\beta+2}{2\beta+3}\right)^{\beta+3/2}(h - h_0)^{\beta + 3/2} \qquad (3)$$

where α is the Coriolis energy coefficient and g is the acceleration of gravity.

[10] One notes that equations (2) and (3) are, in fact, the power law stage-discharge rating curve model recommended in virtually all standard hydrometric literature:

$$Q = \exp(a)\,(h - h_0)^{b} \qquad (4)$$

where exp(a), b, and h0 and all derived quantities will be called “hydraulic parameters” to set them apart from parameters in a statistical model.
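To make equation (4) concrete, a minimal sketch of evaluating the power law rating curve is given below; the function name and parameter values are ours and purely illustrative, not taken from the case studies.

```python
import math

def rating_curve(h, a, b, h0):
    """Power law rating curve, equation (4): Q = exp(a) * (h - h0)^b.

    h  : stage (m)
    a  : log of the hydraulic scale parameter, so exp(a) is the constant
    b  : hydraulic shape (exponent) parameter
    h0 : effective stage of zero flow (zero-plane displacement, m)
    """
    if h <= h0:
        return 0.0  # at or below the cease-to-flow stage there is no discharge
    return math.exp(a) * (h - h0) ** b

# hypothetical parameter values: exp(1.0) * (2.5 - 0.5)^2 = 4e ≈ 10.87 m^3/s
print(rating_curve(2.5, a=1.0, b=2.0, h0=0.5))
```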

[11] Concerning channel change, one can draw some conclusions by considering equations (2)–(4). First, it is observed that changes in channel shape affect the exponent in equation (4) and also the constant exp(a). Second, it is readily evident that changes in channel scale, i.e., a narrowing or enlargement of the cross-sectional depth-width relationship, affect the constant. Considering channel control, one sees that changes in hydraulic loss properties and changes in channel slope will alter the hydraulic parameter exp(a). In cases of local control, it is evident that changes in the crest of the section control affecting the degree of flow nonuniformity across the section will change exp(a) through changes in the energy coefficient α. Channel changes affecting the choking losses (which are neglected in equation (3)) will most likely also induce changes in exp(a). It is unclear how changes in approach velocity affect equation (4). Most likely, they will have an impact on both b and exp(a) and perhaps also on h0. Finally, one sees that changes in river bed elevation will substantially impact the cease-to-flow hydraulic parameter h0, also called the zero-plane displacement.

[12] The aforementioned considerations are based on a simplistic hydraulic framework and several simplifying assumptions and are therefore approximate. Nonetheless, they show that direct observations of measurable changes can be used to a priori determine what hydraulic parameters are most likely to be affected and, to some extent, the possible range of hydraulic parameter alterations. Obviously, this can be utilized in a Bayesian rating curve analysis. More specifically, one can use it to elicit prior distributions for the hydraulic parameters and to set up the prior probability structure for a selection of plausible rating curve models that are based on equation (4).

3. Rating Curve Parameters as Ornstein-Uhlenbeck Processes

[13] The hydraulic parameters in equation (4) are usually assumed to be constant in time. However, in an unstable channel section, the relationship between stage and discharge will change over time. Such changes are caused by scour-and-fill processes, by river bed material being moved from one location in the river to another, and by aquatic plant growth. Therefore, the zero-plane displacement h0, the hydraulic scale parameter exp(a), and the hydraulic shape parameter b may all take different values at different times. It is also possible that the major changes affect only one or two of the hydraulic parameters.

[14] The mathematical nature of these changes is seldom apparent. In some cases, a steady increase in the zero-plane displacement might occur, yielding a linear relationship between zero-plane displacement and time. In other cases, this hydraulic parameter could change more or less instantaneously from one value before a flood incidence to another value after that event. If such changes are possible, then all types of changes with intermediate characteristics could occur, e.g., a gradual scouring during a long-lasting spring flood followed by a gradual filling period during summer low flow, which, in turn, is followed by several scour-and-fill episodes caused by brief and heavy rain floods in the autumn. If a hydraulic parameter changes, it may do so in complex sequences where the value both rises and falls. Since it is usually not possible to know the precise mathematical form of the change a priori, we propose to describe it with a stochastic process. Such a process should be parsimonious, as parsimony usually implies a tractable analysis that remains applicable in data-sparse cases. However, there should be some structure in the process, so that knowledge about a hydraulic parameter value at one time is informative for the same parameter value a short time into the future or the past. The hydraulic parameters have a physical interpretation, and intervals of plausible values can be formed. Methods using such prior information have been studied by Árnason [2005], Moyeed and Clarke [2005], and Reitan and Petersen-Øverleir [2008, 2009]; in these studies, the parameters were given informative Bayesian prior distributions. Thus, the stochastic process should be characterized by the following properties: (1) The process should have a stable prior distribution; that is, it should typically vary around a central value with a finite spread.
(2) The process should have some memory; that is, there should be dependency between time-shifted versions of the hydraulic parameter values, and the dependency should decrease with an increase in lag time. (3) The process should be a function of continuous time, as the measurements are typically not performed at equidistant time intervals and the state of the stage-discharge relationship may be of interest at arbitrary times. (4) Since instantaneous changes seem physically unrealistic, the process should be continuous.

[15] Requirement 4 may be a little too strict, as changes happening over time intervals smaller than those in the stage time series may be possible. Thus, such changes could for all practical applications be considered instantaneous and will give rise to discontinuous processes. The method used here could possibly be expanded to allow for instantaneous changes, but this will not be the focus of the analysis in this paper.

[16] Requirement 1 could readily be interpreted as meaning that the process should have an expectation and a variance. The maximum entropy distribution given these two quantities, and thus the most parsimonious way of using them, is the normal distribution. Requirement 2 suggests that the process should have some autocorrelation, such as a first-order autoregressive (AR(1)) process for an equidistant time series. However, since requirement 3 states that the process should have continuous time as an argument, a stochastic differential equation (SDE) is suggested. A linear SDE of this kind has a normal stationary distribution and is known as the Ornstein-Uhlenbeck (OU) process, or the mean-reverting process: it contains two elements, a stochastic part that adds noise in continuous time and a deterministic (differential equation) part that pulls the process toward its mean value. In SDE terms, the process is written as

$$dX_t = -\lambda\,(X_t - \mu)\,dt + \sigma\,dB_t \qquad (5)$$

where B_t is the Wiener process, i.e., a continuous time random walk, and X_t is the resulting OU process. A description of the Wiener process and the OU process without the full SDE formalism is given by Taylor and Karlin [1998, chapter 8]. Typically, λ is called the drift parameter, σ the diffusion parameter, and μ the expected value of the process. A realization of an OU process is shown in Figure 1. A time series of equidistant measurements of this process with time difference u will be AR(1) distributed:

$$X_{t+u} = \mu + \rho_u\,(X_t - \mu) + \varepsilon_t \qquad (6)$$

with ρ_u = exp(−λu) and independent stochastic contributions ε_t ~ N(0, σ²(1 − exp(−2λu))/(2λ)). Equation (6) can also be used when one is calculating the relationship between any two points in time in the process. Since the OU process is Markovian, this relationship is sufficient to express the probability density of any number of points in time. Thus, a likelihood function can be formed.
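Because equation (6) holds for any lag u, an OU path can be sampled exactly at irregularly spaced times, with no numerical SDE integration. The sketch below is our own, written directly from equations (5) and (6) with drift λ, diffusion σ, and mean μ.

```python
import math
import random

def simulate_ou(times, mu, lam, sigma, rng=random.Random(1)):
    """Exact OU sampling at the given (possibly irregular) times via the
    AR(1)-type transition of equation (6): over a lag u, X_{t+u} given X_t is
    normal with mean mu + exp(-lam*u)*(X_t - mu) and variance
    sigma^2 * (1 - exp(-2*lam*u)) / (2*lam)."""
    s = sigma / math.sqrt(2.0 * lam)   # stationary standard deviation
    x = rng.gauss(mu, s)               # start in the stationary distribution
    path = [x]
    for t0, t1 in zip(times, times[1:]):
        rho = math.exp(-lam * (t1 - t0))
        x = mu + rho * (x - mu) + rng.gauss(0.0, s * math.sqrt(1.0 - rho * rho))
        path.append(x)
    return path

# one realization at four irregular time points
path = simulate_ou([0.0, 0.5, 2.0, 2.1], mu=0.0, lam=1.0, sigma=1.0)
```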

Figure 1.

A realization of a stationary Ornstein-Uhlenbeck process with μ, s, and Δt shown.

[17] Equation (5) can be reparameterized using the variance of the stationary distribution instead of the diffusion, s² = σ²/(2λ). Also, the drift parameter λ can be reparameterized into a quantity that has dimension time, rather than frequency, Δt = 1/λ. This parameter will be called the characteristic time. It is the time needed for the correlation between two points in the process to dip below 1/e, since corr(X_t, X_{t+u}) = exp(−u/Δt). Figure 1 shows a realization of an OU process with an illustration of this parameterization. While the process may look like a random walk on small time intervals, it will have, in the long term, a stationary normal distribution centered around μ. The process has only three parameters, two describing the mean and variance of the stationary distribution and one describing the autocorrelation structure. It can therefore be regarded as a minimal way of fulfilling requirements 1–4.

[18] Note that when net erosion or net sedimentation is digging out or filling in the riverbed, stationarity for h0 can no longer be assumed, at least not on the time scales we are looking at. However, since there are limits to how far down and how high up a river bed can be found, we would like the process to have a stationary distribution on very long time scales. This could be achieved by a linear combination of OU processes or by having one linear SDE track another. While this would conceptually be a small expansion of the current model, it could make the numerical routines more complicated and the inference less tractable. A single OU process for h0 with large Δt and s should adequately describe such a situation.

[19] In this paper, we will thus apply the OU process to one or more of the hydraulic parameters in equation (4). This allows for a range of different types of changes in the stage-discharge relationship. Using the reparameterized version of equations (5) and (6), the probability density of a series of states (x1, x2,…, xn) at times (t1, t2,…, tn) can be expressed as

$$f(x_1,\ldots,x_n) = N\!\left(x_1;\,\mu,\,s^2\right)\prod_{i=2}^{n} N\!\left(x_i;\;\mu + \rho_i\,(x_{i-1} - \mu),\;s^2\,(1 - \rho_i^2)\right) \qquad (7)$$

where ρ_i = exp[−(t_i − t_{i−1})/Δt] and N(x; m, v) denotes the normal probability density with mean m and variance v, evaluated at x.

[20] The changing hydraulic parameters form a hidden process underlying the stage-discharge measurements. Thus, the model proposed is in the family of hidden variable models, where observations react to a hidden state. The evolution of the state is described by equation (7) for the various processes found behind the hydraulic parameters. Unless otherwise stated, the assumption is that the observational discharge is log normally distributed and centered around the real discharge with the noise proportional to the discharge [Venetis, 1970], so that

$$q_t = a_t + b_t\,\log(h_t - h_{0,t}) + \sigma\,\varepsilon_t \qquad (8)$$

where q_t = log(Q_t) is the measured log discharge and h_t the measured stage at time t, a_t, b_t, and h_{0,t} are the hidden states, σ is the size of the observational noise on the log scale, and ε_t is independent standard normally distributed noise. Using the Venetis model, no observational noise is assumed for the stage values. In real measurements, there will, of course, also be errors in the stage, but for typical situations (excluding low flow), it is assumed that the effect of stage measurement errors is much smaller than the effect of the discharge errors. Taking equations (7) and (8) together, a probability density for the observations and states can be formed. Zero-plane measurements can also be incorporated into the likelihood by assuming that a measured zero-plane value is normally distributed around h_{0,t}, with υ_t denoting the corresponding independent standard normal noise.
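As an illustration of this hidden-state structure, the sketch below generates synthetic time-stage-discharge data from equations (6) and (8), with a and b held static and only h0 following an OU process; all numerical values are hypothetical.

```python
import math
import random

rng = random.Random(42)

# hypothetical OU parameters for the zero-plane displacement h0
mu_h0, s_h0, dt_h0 = 0.3, 0.05, 2.0  # mean (m), stationary spread (m), characteristic time (yr)
a, b, sigma = 1.2, 1.8, 0.05         # static hydraulic parameters and log-scale obs. noise

times = sorted(rng.uniform(0.0, 10.0) for _ in range(20))  # measurement epochs (yr)

h0 = rng.gauss(mu_h0, s_h0)          # stationary initial state
data, prev_t = [], times[0]
for t in times:
    rho = math.exp(-(t - prev_t) / dt_h0)  # OU transition over the elapsed lag
    h0 = mu_h0 + rho * (h0 - mu_h0) + rng.gauss(0.0, s_h0 * math.sqrt(1.0 - rho ** 2))
    prev_t = t
    h = rng.uniform(1.5, 3.0)              # stage, assumed error free
    q = a + b * math.log(h - h0) + sigma * rng.gauss(0.0, 1.0)  # equation (8)
    data.append((t, h, math.exp(q)))       # (time, stage, measured discharge)
```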

4. Bayesian Analysis

4.1. Hierarchical Structure

[21] In this paper, we consider the hidden states at the times of the observations as parameters with distributions described by equation (7). Thus, this is a hierarchical Bayesian model, with the OU parameters at the top, the hidden states in the middle, and the observations at the bottom. We denote the data by D and the set of system states at all time points of interest by X = {X_i}_{i=1,…,n} = {(a_i, b_i, h_{0,i})}_{i=1,…,n}, where n is the number of measurements and X_i is the state at the time t_i of the ith measurement. The top parameters are defined as θ = (μ_a, s_a, Δt_a, μ_b, s_b, Δt_b, μ_{h0}, s_{h0}, Δt_{h0}, σ) (see equations (4) and (7)). Thus, we have the structure θ → X → D; that is, the top parameters affect the states, which affect the data.

[22] The prior of the topmost parameters was set as independent for each parameter, f(θ) = f(μ_a) f(s_a) f(Δt_a) f(μ_b) f(s_b) f(Δt_b) f(μ_{h0}) f(s_{h0}) f(Δt_{h0}) f(σ). The μ parameters were given normal distributions, with credibility bands taken from Reitan and Petersen-Øverleir [2008]. The noise parameters s_a, s_b, s_{h0}, and σ were given probability densities belonging to the inverse gamma distribution, with 95% credibility intervals ranging from 0.1 times the standard deviation of the corresponding μ prior to 2 times that; for s_{h0}, the 95% credibility band was simply set between 0.1 and 2.0 m. The Δt parameters were assigned gamma distributions, all with 95% credibility bands going from 0.1 to 100 years. The hyperparameters of all these distributions were set so that the 95% credibility bands conformed to those shown in the last column of Table 2.

[23] Since neither the top parameters nor the states are directly available, statistical analysis is needed for this set of unknown quantities, which we define as Ψ = (X, θ). We may think of this set as the total parameter set, or we may keep the states and topmost parameters separate. These two ways of looking at the unknown quantities inspire two different ways of doing numerical inference, as we shall see.

4.2. Markov Chain Monte Carlo Sampling on State Plus Topmost Parameters

[24] The first method, which follows the thinking of Carlin et al. [1992], treats Ψ, i.e., both the latent variables X and the top parameters θ, as a single set of parameters that is handled with standard Markov chain Monte Carlo (MCMC) methods. In a Bayesian analysis, the posterior parameter density is the main object of interest. In such an analysis, one needs a prior parameter and state probability density f(Ψ) = f(X, θ) = f(X|θ) f(θ). The first term is described by equation (7); the second term is the prior for the top parameters.

[25] The posterior density then uses the likelihood to update the prior, f(Ψ|D) = f(D|Ψ) f(Ψ)/∫ f(D|Ψ′) f(Ψ′) dΨ′. Often, including for the model studied here, this density is not analytically available, as the normalization constant f(D) = ∫ f(D|Ψ′) f(Ψ′) dΨ′ cannot be calculated analytically. However, an MCMC algorithm can be implemented to sample from the posterior parameter-plus-state density f(Ψ|D) = f({a_{t_i}, b_{t_i}, h_{0,t_i}}, μ_a, s_a, Δt_a, μ_b, s_b, Δt_b, μ_{h0}, s_{h0}, Δt_{h0}, σ | {q_{t_i}, h_{t_i}}), where the index i runs over the measurements. In our case, we used a random walk Metropolis algorithm. The random walk steps were adapted during the burn-in phase so that the acceptance rate was approximately 0.33 [see, e.g., Roberts et al., 1997]. A parallel tempering scheme [Geyer, 1991] was used in order to deal with possible multimodality in the posterior parameter distribution.
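The sampler just described can be sketched generically. The code below is our own minimal scalar random walk Metropolis with step adaptation toward a 0.33 acceptance rate during burn-in; it omits parallel tempering and uses a standard normal target purely as a smoke test.

```python
import math
import random

def rw_metropolis(logpost, x0, n_iter=5000, burn_in=2000, step=1.0,
                  target=0.33, rng=random.Random(0)):
    """Random walk Metropolis with step-size adaptation during burn-in."""
    x, lp = x0, logpost(x0)
    samples, accepted = [], 0
    for i in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        if math.log(rng.random() + 1e-300) < lp_prop - lp:  # Metropolis accept
            x, lp = prop, lp_prop
            accepted += 1
        if i < burn_in and (i + 1) % 100 == 0:
            rate = accepted / (i + 1)
            step *= math.exp(rate - target)  # enlarge step if accepting too often
        if i >= burn_in:
            samples.append(x)
    return samples

# smoke test: target a standard normal posterior
draws = rw_metropolis(lambda x: -0.5 * x * x, x0=0.0)
```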

[26] Implementing such an algorithm was fairly straightforward, but the efficiency was low. An execution time of at least 1 day was necessary in order to get stable results for all models (all combinations of static and dynamic hydraulic parameters); that is, 1 day was needed for each analysis in order to get approximately the same results each time in terms of the two leading digits of the model likelihoods and parameter estimates. Even for such long execution times, the stability of the results for the most complicated model (with all hydraulic parameters dynamic) could be called into question. For the less complicated models, however, a stable result could be reached, which means that a different, more complex methodology could be implemented and tested against this simpler approach.

4.3. Particle Filtering

[27] The second numerical method we tested is particle Markov chain Monte Carlo (PMCMC) [see, e.g., Andrieu et al., 2010]. This method combines MCMC for handling the topmost parameters θ with the particle filter method [see, e.g., Arulampalam et al., 2002] for handling the state X. The particle filter estimates f(D|θ), which then enables sampling from f(θ|D) using MCMC. It also allows one to sample from f(X|D,θ), which means that the combined distribution f(X,θ|D) = f(θ|D) f(X|D,θ) will be sampled from. Thus, this algorithm is equivalent to MCMC sampling on f(Ψ|D) = f(X,θ|D), which was described in section 4.2 and contains the interaction between states and top parameters.

[28] Since the parameters are treated in a Monte Carlo framework, the Bayesian inference is not made more complicated by the fact that each likelihood estimate from the particle filter is stochastic in nature. Thus, the MCMC part of the algorithm can be kept simple. The efficiency of the technique then hinges on finding an efficient particle filter method.

[29] A standard particle filter delivers inference on the state at a given time, given the process parameters and the observations up to that time. This is typically done in an iterative fashion, starting with samples (particles) {X_{i−1}^{(j)}}_{j=1,…,N} from the previous state given the observations up to that state, where N is the number of particles. A proposal probability density q(X_i | X_{i−1}, D_i) is then used for making a proposed set of new particles {X_i^{(j)}}_{j=1,…,N}. An importance sampling weight w^{(j)} = f(D_i | X_i^{(j)}) f(X_i^{(j)} | X_{i−1}^{(j)}) / q(X_i^{(j)} | X_{i−1}^{(j)}, D_i) can be associated with each particle. In the most general case, the importance weights of the previous particle state should also be used in updating the particle weights, but this is not necessary if a resampling step is performed, where the set of particles is resampled with replacement with probabilities proportional to w^{(j)}. In such a case, the resampled particles are samples from the state at the given time, given all observations up to and including that time.

[30] Before the resampling, the average probability density of the observation at the given time, given the previous observations, can be calculated as f(D_i | D_{1,…,i−1}) = ∫ f(D_i | X_i) f(X_i | D_{1,…,i−1}) dX_i ≈ Σ_j w^{(j)}/N [see, e.g., Pitt, 2002]. This enables us to estimate the likelihood, conditioned only on the topmost parameters, as f(D|θ) = f(D_1) f(D_2 | D_1) ⋯ f(D_n | D_{1,…,n−1}).
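The likelihood estimate above can be sketched with a generic bootstrap (SIR) filter, where the average unnormalized weight at each step estimates f(D_i | D_1,…,i−1). This is our own toy implementation (in practice log weights should be used for numerical stability, and the improved proposal of this section would replace the prior transition):

```python
import math
import random

def sir_loglik(obs, trans_sample, obs_logpdf, n_particles=500,
               rng=random.Random(3)):
    """Bootstrap particle filter estimate of log f(D | theta).

    trans_sample(None) draws from the stationary prior of the state;
    trans_sample(x) propagates a particle one step with the state equation.
    obs_logpdf(d, x) is the log observation density f(d | x)."""
    particles = [trans_sample(None) for _ in range(n_particles)]
    loglik = 0.0
    for d in obs:
        weights = [math.exp(obs_logpdf(d, x)) for x in particles]
        loglik += math.log(sum(weights) / n_particles)    # log f(D_i | D_1..i-1)
        particles = rng.choices(particles, weights=weights, k=n_particles)
        particles = [trans_sample(x) for x in particles]  # propagate to next step
    return loglik

# toy state-space model: AR(1) state observed with Gaussian noise
mu, s, rho, sd_obs = 0.0, 1.0, 0.8, 0.5
state_rng = random.Random(7)

def trans(x):
    if x is None:
        return state_rng.gauss(mu, s)
    return mu + rho * (x - mu) + state_rng.gauss(0.0, s * math.sqrt(1.0 - rho ** 2))

def obs_ll(d, x):
    return -0.5 * ((d - x) / sd_obs) ** 2 - math.log(sd_obs * math.sqrt(2.0 * math.pi))

ll = sir_loglik([0.1, -0.3, 0.2], trans, obs_ll)
```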

[31] A smoothing algorithm, i.e., an inference on the state at a given time given all observations, can be derived by tracking the resampling of each particle for the last state backward in time [see, e.g., Kitagawa, 1996]. Tracking the particles back in time may yield only a few or even only one initial particle. This method is deemed not very efficient for smoothing purposes, but since the particle filter is implemented within an MCMC sampler, one does not need a lot of different smoothing samples for each particle filtering.

[32] While a particle filter may yield an unbiased and consistent estimator of the likelihood, this estimator need not be particularly efficient. The efficiency of the filter can depend heavily on the proposal density q(X_i | X_{i−1}, D_i). The original particle filter, known as sequential importance resampling (SIR) [see Gordon et al., 1993], used only the state equation f(X_i | X_{i−1}) for proposing new particle states. This can be efficient when the previous state gives more information about the next state than the observation does. Where this is not the case, it is best to choose a proposal that is as close as possible to the distribution of the state given the previous state and the current observation, which is proportional to f(X_i | X_{i−1}) f(D_i | X_i) [see, e.g., Maskell et al., 2003, pp. 932–933].

[33] In this study, the state equation f(X_i | X_{i−1}) is linear and Gaussian. If we can linearize the observation equation f(D_i | X_i) as a function of the state, we can then use the equation for combining normal information to create a proposal density. If f(x | I_1) = N(μ_1, σ_1²) and f(x | I_2) = N(μ_2, σ_2²), then

$$f(x \mid I_1, I_2) = N\!\left(\frac{\mu_1/\sigma_1^2 + \mu_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2},\;\left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right)^{-1}\right) \qquad (9)$$
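As a sanity check, the precision-weighted combination of two normal information sources in equation (9) is easy to code directly; the function below is a trivial sketch with our own naming.

```python
def combine_normal(mu1, var1, mu2, var2):
    """Combine two normal information sources, equation (9): precisions add,
    and the mean is the precision-weighted average of the two means."""
    prec = 1.0 / var1 + 1.0 / var2
    mean = (mu1 / var1 + mu2 / var2) / prec
    return mean, 1.0 / prec

# equal variances: the combined mean is the simple average, the variance halves
print(combine_normal(0.0, 1.0, 2.0, 1.0))  # (1.0, 0.5)
```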

[34] Equation (9) can be used for combining the information about a state coming from the previous state and the current observation if the observational information is linearized. We thus need to study f(X_i | D_i, X_{i−1}) ∝ f(X_i | X_{i−1}) f(D_i | X_i). If the linear parts of the state, a_i and b_i, are integrated out, this yields

equation image

where

equation image

and similar expressions can be found for a and b. While the second term inside the exponent is quadratic in h_0 and can thus be identified with a normal distribution, the first term contains nonnormal contributions. However, this term can be approximated by a quadratic expression by finding its minimum, μ_{h2} = h_i − exp[(q_i − a)/μ_b], and the inverse of its second derivative, s_{h2}², which approximate the mean and variance of the normal distribution, respectively. In this study, s_{h2}² was calculated using numerical second-order differentiation of the first term in equation (11). These two moments then form the normal approximation for the information about h_0 given the current observation. Taken together with the information about the current state given the previous state (see equation (11)), we get an approximate distribution for h_0 given both the previous state and the current observation using equation (9). This is our proposal density for h_0.

[35] The distribution of b given the previous states, the current observation, and the current sample of h0 can be shown to be

equation image

[36] The distribution of a given the previous states, the current observation, and the current sample of h0 and b is

equation image

[37] With these formulas, it is possible to construct a proposal density for the state Xi that is fairly close to the distribution of the state given the previous state and the current observation. Practical experiments on the data sets in this study showed that this proposal density gave results with about 1/16 of the variance of the original SIR particle filter, a marked improvement in efficiency.

4.4. Inference on Unobserved Time Points

[38] In addition to being able to do statistical inference on the parameters and the states at time points with observations, it is also possible to expand the measurement set with time points not associated with direct measurements of discharge. This enables inference on the state at unmeasured time points. However, it may reduce the efficiency of the filter, as many steps in the filter will then behave as a simple SIR filter. Instead, the OU process can be used to interpolate between measurements, for each sample of the parameters together with the corresponding smoothing samples at the observed time points. For a time point t between measurement i − 1 and measurement i, one has

equation image

where the topmost parameters are sampled from the posterior distribution using the PMCMC approach and the states come from the corresponding smoothing samples for the given parameter sample. Alternatively, the state plus parameter samples could come from the straight MCMC approach. The treatment of b and h0 will be equivalent.
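In the spirit of this interpolation, sampling a state between two conditioned states can be sketched using the Gaussian conditionals of an OU process. This is a generic illustration, not a reproduction of equations (12) and (13); it assumes the common parameterization by stationary mean, stationary standard deviation, and a characteristic time entering as exp(−Δt/Δtc), and all numerical values are arbitrary.

```python
import numpy as np

def ou_bridge_sample(x0, x1, mu, s, tau, dt1, dt2, rng):
    """Sample X_t given X_{t-dt1} = x0 and X_{t+dt2} = x1 for an OU process with
    stationary mean mu, stationary sd s, and characteristic time tau. Combines the
    forward conditional f(x_t|x0) with x1's Gaussian information about x_t."""
    p1, p2 = np.exp(-dt1 / tau), np.exp(-dt2 / tau)
    m1, v1 = mu + (x0 - mu) * p1, s**2 * (1.0 - p1**2)        # forward conditional
    m2, v2 = mu + (x1 - mu) / p2, s**2 * (1.0 - p2**2) / p2**2  # x1 as info about x_t
    prec = 1.0 / v1 + 1.0 / v2
    mean, var = (m1 / v1 + m2 / v2) / prec, 1.0 / prec
    return rng.normal(mean, np.sqrt(var))

rng = np.random.default_rng(1)
# interpolate midway between two bracketing states, one year to each side
draws = np.array([ou_bridge_sample(0.4, 0.5, 0.44, 0.18, 13.4, 1.0, 1.0, rng)
                  for _ in range(4000)])
```

Repeating this for each posterior parameter sample and its smoothing samples yields the interpolation uncertainty bands discussed in section 5.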

[39] What remains is to handle f(at|X(j), θ(j)). Since the processes are independent Markov chains, this reduces to

equation image

[40] Combining equations (12) and (13) with the results from the PMCMC or the MCMC approach on state plus parameters will yield posterior samples for the states at any unobserved time points. Similarly, samples from the posterior distribution of the discharge given the stage Qt = exp[at + bt log(ht−h0t)] can be generated. If the stage is unknown, an inference on the stage-discharge relationship can nonetheless be formed. An example of such an interpolation and extrapolation analysis will be shown in section 5.
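Turning posterior samples of the states into posterior discharge samples at a given stage is then a direct application of Qt = exp[at + bt log(ht − h0t)]. In the sketch below, the parameter samples are hypothetical independent normals standing in for actual posterior output, with values loosely inspired by Table 2.

```python
import numpy as np

def discharge_samples(a, b, h0, h):
    """Posterior discharge samples Q = exp[a + b*log(h - h0)] at a given stage h."""
    a, b, h0 = map(np.asarray, (a, b, h0))
    return np.exp(a + b * np.log(h - h0))

# hypothetical posterior samples of the hydraulic parameters at one time point
rng = np.random.default_rng(2)
a = rng.normal(4.6, 0.1, 5000)
b = rng.normal(1.75, 0.03, 5000)
h0 = rng.normal(0.44, 0.05, 5000)

q = discharge_samples(a, b, h0, h=1.94)
q_median = float(np.median(q))
q_lo, q_hi = np.quantile(q, [0.025, 0.975])  # 95% credibility band at this stage
```

Evaluating this over a grid of stage values, for samples drawn at different time points, produces the time-indexed rating curves with credibility bands shown in Figure 5.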

[41] The marginal distribution of the OU parameters can be obtained by looking at the samples without conditioning on the states. Estimates of the hidden states, with corresponding credibility intervals, can also be found. Examples of this will be shown in section 5.

4.5. Models and Model Selection

[42] Since not all hydraulic parameters in equation (4) need to be treated as processes, there is a range of submodels of the full process model, in which one or more hydraulic parameters are held constant. The total number of models is eight, counting also the model with only constant hydraulic parameters. The models are given equal prior probability. A model with h0 fixed will be a submodel of one where h0 is a process; with a fixed h0, the diffusion is zero with measure one, while with h0 dynamic, the diffusion has a continuous (inverse gamma) distribution with zero probability of the diffusion being exactly zero. Bayesian hypothesis testing follows standard Bayesian inference schemes, using Bayes' formula for models [see, e.g., Berger, 1985, chapter 4.3.3]. An importance sampling method described by Reitan and Petersen-Øverleir [2009] is then used for calculating the posterior model probability by estimating the marginal data density f(D|M) = ∫ f(D|θ, M) dθ. This method constructs an importance sampling proposal distribution by calculating the first- and second-order moments of the MCMC samples and using a multinormal distribution with these moments as the proposal. When the marginal data density has been calculated, Bayes' formula gives the posterior model probabilities P(M|D) = f(D|M)P(M)/ΣM′ f(D|M′)P(M′). The model with the highest posterior probability is chosen. Note that in a Bayesian setting this will not always be the most complex model, since the marginal data density f(D|M) indicates predictive strength rather than goodness of fit. Note also that Bayesian model probabilities may be sensitive to the choice of prior distribution for each model; this seems to be the case for the process diffusion parameters here.
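Once log marginal data densities log f(D|M) are available, the posterior model probabilities follow from Bayes' formula. A minimal sketch with hypothetical log marginal values for eight models and equal priors follows; the log-sum-exp step is a standard numerical safeguard, not part of the paper's method description.

```python
import numpy as np

def posterior_model_probs(log_marginals, prior=None):
    """P(M|D) = f(D|M)P(M) / sum_M' f(D|M')P(M'), computed from log f(D|M)
    with the log-sum-exp trick for numerical stability."""
    lm = np.asarray(log_marginals, dtype=float)
    prior = np.full(lm.size, 1.0 / lm.size) if prior is None else np.asarray(prior)
    w = lm + np.log(prior)
    w -= w.max()          # stabilize before exponentiating
    p = np.exp(w)
    return p / p.sum()

# hypothetical log marginal data densities for eight candidate models
probs = posterior_model_probs([-290.0, -265.1, -280.3, -288.7,
                               -286.2, -260.8, -265.0, -270.5])
```

Working on the log scale matters in practice: marginal densities of long measurement series underflow double precision long before the ratios between models become meaningless.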

5. Case Studies

5.1. Haga bru

[43] The Gaula River at Haga bru drains an unregulated catchment of 3062 km2. The station was established in 1908 and provides one of the longest time series of unregulated streamflow in middle Norway. The runoff is dominated by spring snowmelt floods and winter low flow. Short-lived floods generated by heavy rainfall occur frequently in spring and summer. The peak values of these flashy rain floods are often significantly larger than those of the spring floods.

[44] A rather steep-sloped channel section stretching 200–300 m downstream from the station controls the stage-discharge relationship at Haga bru (A. Bjøru, former responsible field hydrometrist, personal communication, 2009). The length of the channel control varies with stage; at large floods the governing reach stretches farther down the river. The river bed comprises gravels and cobbles. Bars of bed material build up between floods, altering the elevation and the cross-sectional form of the channel control. This generates a scour-and-fill regime, causing frequent rating shifts.

[45] The result of the Bayesian model comparison was that a model with variation in a and h0, but not in the exponent b, was preferred (see Table 1). In practice, this implies that the zero-plane displacement and the scale of the discharge (which can be associated with the width of the river and with the slope and friction) are changing, while the shape, as characterized by b, is relatively constant. The evidence for this model versus the others is very strong: its model probability changed from the prior 1/8 = 12.5% to a posterior of about 98%, corresponding to a Bayes factor of 343 for this model versus the other seven models combined.

Table 1. Posterior Model Probabilities for Haga bru
Model | Variable a | Variable b | Variable h0 | Posterior Probability (%)
1 | No | No | No | 1.3 × 10−124
2 | No | No | Yes | 3.4 × 10−5
3 | Yes | No | No | 1.0 × 10−41
4 | No | Yes | No | 5.5 × 10−99
5 | Yes | Yes | No | 1.4 × 10−99
6 | Yes | No | Yes | 98.6
7 | No | Yes | Yes | 1.4
8 | Yes | Yes | Yes | 1.4 × 10−2

[46] Sampling from the posterior density of this model gives the results for the topmost parameters shown in Table 2. As can be seen, the posterior distributions of most parameters are narrower than their priors. The most uncertain parameters, the characteristic times of a and h0, are on the order of a few years to a decade. This means that the state of the hydraulic parameters will start to become inaccurate within a year or two and be “forgotten” after a decade or three. Changing from one fixed set of hydraulic parameters to another fixed set would be very difficult in such a case. This model, however, allows for gradual changes in each of these two hydraulic parameters.

Table 2. Parameter Estimates and Credibility Intervals for Haga bru
Parameter | Median | 95% Posterior Credibility Interval | 95% Prior Credibility Interval
μh0 (m) | 0.44 | 0.04–0.84 | −2500 to 2500
sh0 (m) | 0.18 | 0.097–0.333 | 0.100–2.000
Δth0 (years) | 13.4 | 3.3–37 | 0.1–100
μa | 4.6 | 4.4–4.8 | 0.135–5.54
sa | 0.13 | 0.072–0.24 | 0.138–2.76
Δta (years) | 11.2 | 2.7–29 | 0.1–100
b | 1.75 | 1.69–1.82 | 0.66–4.26
σ | 0.028 | 0.024–0.033 | 0.020–0.150
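The “forgetting” over a characteristic time can be made concrete under the standard OU autocorrelation function exp(−Δt/Δtc); that this is the parameterization used here is our assumption. Evaluated at the Table 2 median of 13.4 years for h0:

```python
import numpy as np

def ou_corr(dt, char_time):
    """Stationary OU autocorrelation between states separated by dt."""
    return np.exp(-np.asarray(dt, dtype=float) / char_time)

char_time = 13.4  # median characteristic time of h0 at Haga bru (Table 2), in years
corr_2yr = float(ou_corr(2, char_time))    # still high: the state has barely drifted
corr_30yr = float(ou_corr(30, char_time))  # mostly "forgotten"
```

The numbers match the qualitative statement in the text: correlation is still about 0.86 after 2 years but only about 0.11 after 30 years.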

[47] These gradual changes can be illustrated by looking at the estimates and uncertainties of the states of a and h0, shown in Figure 2. The zero plane seems to have been high around 1910–1930, then passed through intermediate values to reach a low point in the mid-1990s. Large changes are also seen for a in the 1910–1930 period. The interpolation between measurements can be seen in these two plots. The 95% credibility bands are also shown, and for some time points these bands do not overlap with the bands belonging to other time points. This is a fairly clear indication that the states of a and h0 are changing. Some of the more distinct fluctuations in Figure 2 can be associated with great floods, such as the major autumn flood in 1940 and the combined snowmelt and rain flood in 1918, by far the two largest floods ever witnessed in the Gaula River basin. The mechanisms behind other fluctuations are more difficult to interpret. Some might be caused by sudden and/or strong drifting of ice scouring the river bed during spring floods, but information about such historical conditions is difficult to obtain. The period from 1975 to 1982 was noticeably dry in middle Norway, which might be related to the drop in the parameter a during this period. Note how the uncertainty is reduced around observational points in time and widens between the measurements. This is a typical effect of OU processes: observations reduce the variance, but the process then slowly reverts to its stationary distribution.

Figure 2.

Median estimate for Haga bru for hydraulic parameters (top) h0 and (bottom) a. The time points where measurements have been done are marked by points. A 95% credibility band is shown with gray dotted lines.

[48] The residuals of the best model according to the model selection, namely, the one where both h0 and a are dynamic, are shown in Figure 3, together with the residuals of a static rating curve model for comparison. As can be seen, the residuals of the dynamic model are much smaller than those of the fixed model. This can also be seen in Figure 4, where fitted values are shown together with the measurements; the dynamic model produces discharges much closer to the measurements than does the static model. Perhaps the most interesting feature, however, is that the funnel shape of the fixed model residuals is no longer found in the dynamic model. Heteroscedasticity in rating curve regression has often been mentioned as a problem [Petersen-Øverleir, 2004; Árnason, 2005; Whitfield and Hendrata, 2006]. This analysis shows that at least some of the heteroscedasticity may be due to unmodeled dynamics in the stage-discharge relationship.

Figure 3.

Residuals for a model with dynamic h0 and a (black circles) together with residuals for a model with fixed hydraulic parameters (gray crosses) for Haga bru. The residuals are plotted with (top) stage and (bottom) time as arguments. Note the “inverse trumpet” shape of the fixed rating curve residuals in Figure 3 (top).

Figure 4.

Measurements (black circles) and fitted discharges using the best dynamic model (gray crosses), with a fitted static model shown as a line, for Haga bru.

[49] With a continuous time process model, samples of the hydraulic parameters can be obtained for any time point, even time points with no observations (see section 4). One thus gets the mean, median, percentiles, or any other statistic of interest, not only for the parameters but also for the discharge at any given stage. An estimated curve with credibility bands can therefore be formed for different time points (see Figure 5). We can see that the credibility bands for the different time points do not overlap. This is an additional indicator, aside from the Bayesian model probabilities, that there are dynamics in the stage-discharge relationship. It is important to note that since we sample curve parameters, we account for dependence in the stage-discharge relationship rather than simply dealing with one stage value at a time.

Figure 5.

Rating curves with 95% credibility bands for the years 1913 (black solid lines), 1958 (gray dotted lines), and 2007 (light gray dashed lines) at Haga bru.

[50] Figure 6a shows the autocorrelation in the discharge as a function of the time lag. Since we have posterior samples of the parameters controlling the correlation in the curve parameters, we can form credibility bands not only for the autocorrelation of the curve parameters but also for the discharge at a given stage. Moreover, we can look at the cross correlation between the discharge at one given stage and the discharge at another stage at a later time. This is shown in Figure 6b. As can be seen, the correlation between the two discharges does not reach 100% even as the time interval between them goes to zero, since at any given time there is curve uncertainty.
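That the lag-zero cross correlation stays below 100% can be reproduced with any joint sample of curve parameters, since both discharges are deterministic but different functions of the same uncertain curve. The samples below are hypothetical independent normals, meant only to illustrate the computation, with stage values matching the Figure 6 caption.

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical joint posterior samples of the curve parameters at one time point
a = rng.normal(4.6, 0.1, 20000)
b = rng.normal(1.75, 0.03, 20000)
h0 = rng.normal(0.44, 0.05, 20000)

# discharge at two different stages under the same sampled curves
q1 = np.exp(a + b * np.log(1.94 - h0))
q2 = np.exp(a + b * np.log(2.44 - h0))

# high, but strictly below 1: curve uncertainty remains even at lag zero
corr = float(np.corrcoef(q1, q2)[0, 1])
```

The shared parameter samples induce a strong but imperfect dependence between the two discharges, which is exactly the lag-zero behavior seen in Figure 6b.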

Figure 6.

Autocorrelation for discharge at a given stage value (h = 1.94) and cross correlation for two discharges (h1 = 1.94, h2 = 2.44), for the station Haga bru. Black solid lines show median estimates, and gray dotted lines show the lower and upper parts of the 95% credibility band.

5.2. Håkkådalsbrua

[51] The station Håkkådalsbrua, established in 1969, is located in the Stjørdal River in middle Norway. The natural upstream catchment is 2125 km2. The river takes in water from a neighboring catchment (146 km2) via an upstream hydropower plant. Although the runoff regime in the Stjørdal River is altered, it still follows more or less the natural regime, which is much like that of the Gaula River.

[52] The stage-discharge relationship at Håkkådalsbrua is governed by both channel and section control. A reach of about 200 m, stretching from the station down to an artificial overfall made of wood, induces significant friction losses. The characteristics of the overfall, where critical flow occurs, act in concert with the reach in controlling the water level at the station. It is believed that the significance of the overfall increases with discharge. The stage-discharge relationship at Håkkådalsbrua is stable for long periods of time. However, during particularly heavy floods, the bed material, which consists of sand and gravel, is washed away, revealing a compact layer of clay. The stability of the station has been analyzed (A. Bjøru, former responsible field hydrometrist, personal communication, 2009). The analysis, which drew information from several independent sources, confirmed that the last rating shift was caused by a heavy flood in 2006. During this incident, parts of the artificial section control were destroyed. Before 2006, it is believed that a lesser but significant shift occurred in 1990. The analysis also pointed out another possible shift in 1973. However, the period prior to 1990 is poorly covered by stage-discharge measurements.

[53] The result of the analysis suggested either a model with variations in h0 and b but not in a or one with variation only in h0. The first model, which is marginally more probable, is shown in Table 3.

Table 3. Parameter Estimates and Credibility Intervals for Håkkådalsbrua
ParameterMedian95% Posterior Credibility Interval95% Prior Credibility Interval
Parameter | Median | 95% Posterior Credibility Interval | 95% Prior Credibility Interval
μh0 (m) | −0.57 | −0.770 to 0.261 | −2500 to 2500
sh0 (m) | 0.15 | 0.093–0.239 | 0.100–2.000
Δth0 (years) | 6.6 | 1.48–13.3 | 0.1–100
a | 3.38 | 2.97–4.01 | 0.135–5.54
μb | 2.38 | 1.83–3.04 | 0.66–4.26
sb | 0.16 | 0.072–0.36 | 0.092–1.84
Δtb (years) | 4.3 | 0.22–16 | 0.1–100
σ | 0.021 | 0.015–0.030 | 0.020–0.150

[54] The information gained from the data concerning the topmost parameters was not as great as for Haga bru. Still, the analysis was able to home in on a model more complex than the simplest one, namely, one with variations in b and h0. The reason can be seen in Figure 7, where it seems very unlikely that h0 was the same in 1977 as in 2009. It must be said that the model comparison was less clear about what constituted the best model in this case than in the other examples (see Table 4). For many of the parameters, the posterior credibility bands are fairly wide, indicating that the data do not contain much more information than the prior.

Figure 7.

Median (black solid line) and 95% credibility interval (gray dotted lines) for Håkkådalsbrua for parameter h0 for a model where only h0 is dynamic. The time points where measurements have been done are marked by points.

Table 4. Posterior Model Probabilities for Håkkådalsbrua
Model | Variable a | Variable b | Variable h0 | Posterior Probability (%)
1 | No | No | No | 1.9 × 10−20
2 | No | No | Yes | 46.1
3 | Yes | No | No | 0.011
4 | No | Yes | No | 4.7 × 10−13
5 | Yes | Yes | No | 0.15
6 | Yes | No | Yes | 3.62
7 | No | Yes | Yes | 49.4
8 | Yes | Yes | Yes | 0.72

[55] A fairly small change in the prior, such that neither the previous nor the present prior seemed unreasonable, could conceivably make models 2 and 7 switch places in the model ranking. Thus, in cases with moderate amounts of data of moderate quality, the results will not be as clear as in the preceding case. Still, there seems to be a consensus in the results, namely, that there are dynamics in the zero plane h0. The four most probable models all have this property, with a combined posterior probability of about 99.9%. The dynamics in h0 can be further seen in the credibility intervals in Figure 7, which is made for a model with dynamic h0 and static a and b.

5.3. Fiskum

[56] The station in the Fiskum River, southeast Norway, drains an unregulated catchment of 49.9 km2. The stage gauge is located in the approach channel just upstream of an artificial V-shaped concrete structure, covering the whole range of observed stage, where critical flow occurs. The structure has been considered stable since the establishment of the station in 1976. This implies that a fixed rating curve model is the most appropriate, and this is what the analysis should suggest if it is working properly.

[57] Indeed, this was the result of the model selection, as the model with only fixed hydraulic parameters got 95% of the posterior probability. These results are encouraging, as they suggest that overly complex models are being penalized in the Bayesian model selection, just as they should be.

[58] The median posterior parameter values were a = 1.60, b = 2.70, h0 = −1.9 cm, and s = 0.079. The 95% credibility bands were a ∈ (1.56, 1.65), b ∈ (2.62, 2.80), h0 ∈ (−2.8 cm, −1.1 cm), and s ∈ (0.060, 0.107).

6. Concluding Remarks and Scope for Further Studies

[59] As has been seen, both theoretically in the Bayesian analysis in section 4 and in the case studies in section 5, many analytical insights can be gained from a process-based analysis of the type described here. Earlier analyses could potentially take the dynamics into account by making new rating curves for different time periods, but such an approach would not account for gradual change, change within a period, or uncertainty about the boundaries between periods. Hence, the novel framework presented here should be useful in hydrometry. The Bayesian analysis described in this paper would also be beneficial in subsequent hydrological analysis. The impact of rating curve variability in flood frequency analysis has been widely studied in the literature [e.g., Potter and Walker, 1985; Kuczera, 1996; Petersen-Øverleir, 2008; Petersen-Øverleir and Reitan, 2009; Lang et al., 2010], but none of these studies provides methods that satisfactorily incorporate rating curve imprecision caused by channel changes. The effects of uncertain streamflow data have been given some attention in rainfall-runoff modeling [e.g., McMillan et al., 2010; Renard et al., 2010]. Few, if any, of the existing studies address the problem satisfactorily, even for steady state rating curve situations, and the uncertainty of unstable stage-discharge relationships is far from being incorporated into the total error analysis for hydrological models.

[60] It should be noted that if stable results are wanted, the execution time of this type of analysis can be fairly high. With a full Bayesian analysis of all eight models, including model comparison, a modern office computer took about 7 hours for Haga bru (208 measurements). Håkkådalsbrua, with its 44 measurements, could, however, be analyzed robustly in an hour. The large difference may be due as much to a more complex structure in the larger data set as to the number of measurements. This means that for moderate data sets it could be possible to use this kind of analysis as a practical tool, while for large data sets it could be somewhat cumbersome in a practical setting, depending on the resources and the need for fast results. Multicore or multiprocessor machines and parallel computing could, however, reduce the execution time considerably.

[61] While we think that the method shows great promise, there are some considerations that have not been taken into account. First is the possibility of rapid, approximately instantaneous changes. While a rapid change is possible in an OU process, the more rapid it is, the less likely it is, and thus the more evidence one needs for it. If enough data around the change point existed, one could expect the framework to accommodate the change. Even so, models that allow only for rapid change, and models that combine rapid change with an OU process, could be contemplated as part of the analysis framework. One way of doing this would be to expand the process model with the possibility of instantaneous changes at times determined by a Poisson process. Another would be to make the volatility of the model a function of the discharge or stage. This would make changes during floods more probable and could decrease the uncertainty about when a change took place. It would, however, mean that instead of equation (5), a nonlinear SDE with a noise term depending on stage or discharge would be used. This SDE may not be analytically solvable, but the simple SIR particle filter depends only on the ability to simulate from the process. Thus, inference would still be possible, though it would be difficult to perform efficiently.
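The Poisson-jump extension suggested above requires only forward simulation, which is all a SIR filter needs from the process model. A minimal Euler-Maruyama sketch follows; the parameterization (stationary sd s, characteristic time tau) and all numerical values are our own illustrative choices.

```python
import numpy as np

def simulate_ou_with_jumps(x0, mu, s, tau, jump_rate, jump_sd, t_end, dt, rng):
    """Euler-Maruyama simulation of an OU process with additive jumps occurring
    at Poisson-distributed times (approximated per time step)."""
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    diff = s * np.sqrt(2.0 / tau)  # diffusion coefficient giving stationary sd s
    for i in range(n):
        drift = -(x[i] - mu) / tau
        # probability jump_rate*dt of an instantaneous shift in this step
        jump = rng.normal(0.0, jump_sd) if rng.random() < jump_rate * dt else 0.0
        x[i + 1] = x[i] + drift * dt + diff * np.sqrt(dt) * rng.normal() + jump
    return x

rng = np.random.default_rng(3)
path = simulate_ou_with_jumps(0.0, 0.0, 0.2, 10.0, jump_rate=0.1, jump_sd=0.5,
                              t_end=100.0, dt=0.1, rng=rng)
```

Because only simulation is needed, swapping this process into a SIR filter is mechanical; the efficiency concerns raised in the text come from the proposal no longer being analytically tractable.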

[62] The segmentation issue was not dealt with in this paper. The models from Reitan and Petersen-Øverleir [2009] could be expanded in a fashion similar to that done for the one-segment model here. Another way of dealing with this problem is to treat both time and stage with the same toolbox, using SDEs for changes in the hydraulic parameters both in time and stage.

[63] Stage-discharge relationships that exhibit seasonal variations are often observed in vegetated rivers. Variable aquatic plant growth will alter the hydraulic losses and the geometric characteristics of the channel, as well as the cease-to-flow stage. Unless many measurements are made every year, the model framework will not adapt to this situation. If one allowed for models with recurrent seasonal components or increased correlation at 1-year time intervals, it could be possible for a model to “learn” about the seasonal variation from the variation found in earlier years.

[64] Existing stage time series can be utilized as a secondary information source in dynamic rating curve inference. Assuming that the stage is always above the zero plane, the time series gives an upper limit for the zero-plane process. The information content of each stage value is relatively low, though, as it only introduces a step function into the likelihood. Stage time series can also have very fine resolution, and handling a large data source with low information content may not be worth the effort; aggregated monthly minimum stage values could, however, be worthwhile. If it could be assumed that the real discharge is homogeneous in time, a further information source could be contemplated: the fitted curves should then be such that the stage time series, transformed into a discharge time series, is homogeneous. Again, the information content is low. Still, yearly aggregates could perhaps be useful, as time dependency and seasonal variation would make homogeneity hard to assess at finer time resolutions.

[65] The method could be used to set up a discharge hydrograph in cold-climate rivers during periods when the stage-discharge relationship is influenced by river ice (see Melcher and Walker [1991] for a description of existing methods for estimating winter discharge). One could then apply the procedure to available field measurements of discharge taken during icy conditions. However, it might be wise to use the dynamic model only during the winter season, when ice in the river channel is known to affect the hydraulics, and use a fixed rating curve (or less dynamics) for the rest of the season.

7. Source Code

[66] Source codes for the two implemented inference methods can be found at http://folk.uio.no/trondr/dynamic_rating_curves.
