A resampling method for generating synthetic hydrological time series with preservation of cross-correlative structure and higher-order properties

Authors

  • C. J. Keylock

    Department of Civil and Structural Engineering, University of Sheffield, Sheffield, UK

Corresponding author: C. J. Keylock, Department of Civil and Structural Engineering, University of Sheffield, Mappin Street, Sheffield S1 3JD, UK. (c.keylock@sheffield.ac.uk)

Abstract

[1] Based on existing techniques in nonlinear physics that work in the Fourier domain, we develop a multivariate, wavelet-based method for the generation of synthetic discharge time series. This approach not only retains the cross-correlative structure of the original data (which makes it preferable to principal component methods that merely preserve the correlations) but also replicates the nonlinear properties of the original data. We argue that the temporal asymmetry of the typical hydrograph is the most important form of nonlinearity to preserve in the synthetic data. Using the derivative skewness as a measure of asymmetry and an example data set of 35 years of daily discharge data from 107 gauging stations in the United States, we compare two approaches that preserve the asymmetry of the original records. We generate synthetic data and then study the properties of fitting a generalized extreme value distribution to the annual maxima for a total flux time series. The synthetic series provides error bands for the fitted distribution that give a different way of assessing credible return periods. It is found that the best approach for studying extremes is to match the asymmetry of each series individually, rather than to formulate a global threshold criterion.

1. Introduction

[2] The generation of synthetic hydrological time series for model testing or water resource planning is an important research area in stochastic hydrology. This is because such data sets permit more robust testing of hydrological models, can be used to construct water planning scenarios, and can also be used to place confidence limits on hydrological forecasts based on a bootstrap methodology. For univariate time series, there are a great many applications of autoregressive (AR) and autoregressive moving average (ARMA) models [Box et al., 1994] in hydrology, as well as nonparametric methods that avoid imposing the assumption of Gaussianity [Lall and Sharma, 1996]. Of course, for many hydrological and hydroclimatological series, it is the relation between discharges on neighboring rivers or between rainfall and discharge that is of significant interest (e.g., for determining flood risk due to the sequencing of peak discharges on different tributaries or lags in the catchment system from input to output, respectively). Hence, multivariate generalizations of these methods are required that, at least, retain the correlation/covariance structure between series. Classical approaches to this problem have a long history [Pegram and James, 1972; Valencia and Schaake, 1973; Grygier and Stedinger, 1988].

[3] However, given the difficulty of estimating the parameters of a multivariate ARMA model, a useful trick is to transform the original data into decorrelated time series, meaning that univariate estimation may be attempted on the transformed series. Hence, one procedure for multivariate synthetic data generation is as follows:

[4] Transform the multivariate data array, Z, into a set of decorrelated series, PC, using a technique such as principal component analysis (PCA), which is discussed in section 1.1;

[5] Perform some form of randomization method (e.g., bootstrapping) on each decorrelated series;

[6] Invert the PCA to generate a set of synthetic series, where the covariance structure in the original data will be largely retained, as the covariance matrix underpins the PCA method.

[7] A very important development to this framework was introduced by Westra et al. [2007], who replaced PCA with independent component analysis (ICA), which is described in section 1.2. Thus, one moves from decorrelated variables to independent variables, which means that potential alternative forms of association between the extracted components are identified and, therefore, may be reimposed during the final part of the algorithm. As shown by Westra et al. [2007] and in section 2, the ICA method is clearly superior to the PCA approach, which captures neither the joint nor the marginal behavior appropriately. To provide some insight into why ICA is advantageous, we briefly outline the PCA and ICA techniques.

1.1. Principal Component Approach to Synthetic Data Generation

[8] Jolliffe [2004] provides a detailed explanation of PCA and its use in the geosciences. Consider a matrix Z consisting of m hydrological time series (columns), each of length N (rows). If we find the columnwise mean values, z̄_i (i = 1, …, m), then these may be subtracted from Z to give the mean-centered matrix Ẑ. The covariance matrix is then C = Ẑ^T Ẑ/(N − 1), and singular value decomposition may be used to derive the unit-norm eigenvectors, e_i, ordered such that their eigenvalues are in descending rank order (i.e., e_1 contains the rotations associated with the axis in the principal component space that explains the greatest variability). Each principal component may then be extracted using the following equation:

$$\mathbf{PC}_i = \hat{\mathbf{Z}}\,\mathbf{e}_i \quad (1)$$
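To make the transform, randomize, and invert procedure outlined above concrete, the following minimal sketch (in Python/NumPy; the function name and the use of a simple permutation bootstrap are our assumptions rather than details taken from the studies cited above) extracts the principal components, randomizes each one independently, and inverts the rotation:

```python
import numpy as np

def pca_synthetic(Z, rng=None):
    """Sketch of the PCA-based generator: decorrelate via equation (1),
    randomize each component independently, then invert the rotation."""
    rng = np.random.default_rng(rng)
    Z = np.asarray(Z, dtype=float)
    z_bar = Z.mean(axis=0)
    Z_hat = Z - z_bar                              # mean-centered matrix
    C = Z_hat.T @ Z_hat / (len(Z) - 1)             # covariance matrix of the m series
    eigvals, E = np.linalg.eigh(C)                 # unit-norm eigenvectors (columns of E)
    E = E[:, np.argsort(eigvals)[::-1]]            # order by descending eigenvalue
    PC = Z_hat @ E                                 # principal components, equation (1)
    # Any univariate randomization could be used here; a permutation bootstrap
    # of each decorrelated series stands in as the simplest possibility.
    PC_rand = np.column_stack([rng.permutation(PC[:, i]) for i in range(PC.shape[1])])
    return PC_rand @ E.T + z_bar                   # invert the PCA to give synthetic series
```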

1.2. Independent Component Approach to Synthetic Data Generation

[9] The PCA method produces components that are uncorrelated, but other forms of dependence may still exist. To produce components that are truly independent, a more advanced method is required. The ICA [Jutten and Herault, 1991] uses a PCA with the variances of the extracted components normalized to unity (a whitening matrix) as a precursor step. With E organized such that each eigenvector e_i occupies a different column, and D the diagonal matrix of the eigenvalues of C, the whitening matrix is as follows:

$$\mathbf{W} = \mathbf{D}^{-1/2}\,\mathbf{E}^{\mathrm{T}} \quad (2)$$

The whitened data, w, may be obtained by multiplying the mean-centered data, Ẑ, by W, and the independent components, s, are derived from the mixing model w = As, where A is known as the mixing matrix. The central limit theorem states that the application of a linear transform to a set of independent random variables results in variables that tend toward Gaussianity. Given the linear transform used to derive W, it then follows that a means must be found to optimize the non-Gaussianity of the extracted components to move from merely uncorrelated variables to independent ones. Hence, it is assumed that the components, s, are maximally non-Gaussian, and a mixing matrix, A, is sought that yields appropriate s. The most common approach to characterizing non-Gaussianity in ICA is the minimization of mutual information or, equivalently, the maximization of the negentropy [Comon, 1994], which is the difference in the entropy for a random variable, y, and for a Gaussian variable with the same covariance matrix:

$$J(y) = H(y_{\mathrm{Gauss}}) - H(y) \quad (3)$$

where

$$H(y) = -\int p(y) \log p(y)\, dy \quad (4)$$

and p(y) is the marginal probability distribution for y, although for computational simplicity, an approximation to the negentropy is often adopted [Hyvarinen and Oja, 1998]. A worked example of the technique in a hydrological context is provided by Westra et al. [2007].
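For readers wishing to experiment with the ICA variant, the sketch below relies on the FastICA implementation in scikit-learn as an off-the-shelf negentropy-based ICA (an assumption on our part; it is not the implementation used by Westra et al. [2007], and the whitening option shown requires a recent scikit-learn release):

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_synthetic(Z, rng=None):
    """Sketch of an ICA-based generator: estimate maximally non-Gaussian
    sources s, randomize each one independently, and remix them with the
    estimated mixing matrix A."""
    rng = np.random.default_rng(rng)
    ica = FastICA(n_components=Z.shape[1], whiten="unit-variance", random_state=0)
    S = ica.fit_transform(Z)                 # independent components, s
    S_rand = np.column_stack([rng.permutation(S[:, i]) for i in range(S.shape[1])])
    return ica.inverse_transform(S_rand)     # remix the randomized sources (A s, plus the mean)
```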

1.3. Bootstrap-Based Approaches to Generating Hydroclimatological Data

[10] So far we have reviewed classical ARMA-type models and their multivariate representation in the form of principal or independent components. Alternative approaches to producing appropriate generators of hydroclimatological variables are commonly based on either the use of a Markovian representation of the temporal behavior or the notion of a bootstrap. Thus, in the former case, the problem of estimating a full ARMA model over multiple sites is dealt with by assuming that the value for a hydrometeorological parameter on day t is conditional on the value for t − 1, rather than on, potentially, the full previous record [Mehrotra and Sharma, 2007]. Such models may be implemented in various ways, as evaluated by Mehrotra et al. [2006].

[11] Our method combines autocorrelative and bootstrap principles, and, hence, it is worthwhile briefly reviewing recent bootstrap approaches, although note that our approach is not restricted to a Markovian assumption. In one example of this approach, Clark et al. [2004a] first constructed a synthetic record by selecting, in a uniformly random way, the current day's value from the 15 days of the historical record that extend to ±7 days either side of the current day. The appropriate cross-correlative structure was then reimposed using a "Schaake shuffle" method [Clark et al., 2004b]:

[12] A matrix, G, is formed from each of these synthetic series, for each variable and each field station, meaning that randomized versions of the original data are generated for each station, but that cross-correlative properties (within the 15 day window) are destroyed;

[13] An additional matrix, H, is formed from the original observations, with the difference being that the same date is used to populate records across all stations and variables, thereby retaining the cross-correlative structure. Thus, while values in a given column of G are sampled independently of other columns, values in H retain correlations between columns (data sets);

[14] Variants of G and H are produced by placing each columnwise set of data into descending rank order, denoted here as inline image and inline image. inline image is a matrix containing the positions of the elements in inline image in the original matrix H;

[15] Given these matrices, the final step is to generate the synthetic matrix inline image by reshuffling G with respect to inline image.

[16] See the graphical example in Figure 2 of Clark et al. [2004a] for a visual explanation. A similar rank-order matching technique is employed in the methods used in this article, although it is combined with a Fourier spectrum method to ensure better preservation of all the periodicities in a time series.
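A compact sketch of this rank-order matching step is given below (Python/NumPy; the function name is ours, and ascending rather than descending ranks are used, which yields the same reordering):

```python
import numpy as np

def schaake_shuffle(G, H):
    """Reorder each column of G (independently randomized series) so that its
    rank structure matches that of the corresponding column of H (observations
    sampled on common dates), thereby reimposing the observed cross-correlation."""
    G_sorted = np.sort(G, axis=0)                            # each column of G in rank order
    H_ranks = np.argsort(np.argsort(H, axis=0), axis=0)      # rank of each element of H within its column
    return np.take_along_axis(G_sorted, H_ranks, axis=0)     # G's values arranged with H's rank pattern
```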

1.4. An Alternative Approach to the Generation of Synthetic Hydrological Time Series

[17] Our approach to generating synthetic series has two major advantages compared to the methods reviewed above:

[18] Improved preservation of the full cross-correlation function, rather than the simple variable intercorrelation, and

[19] The introduction of a control parameter that can be used to tune the extent to which higher-order properties of the original data (which may be selected independently) are preserved in the synthetic data.

[20] The following section of this article reviews the mathematical and algorithmic basis for the relevant techniques used to develop our approach and goes on to develop a multivariate version of these techniques in section 2.3. This method is tested and compared to PCA and ICA methods in section 3 and then applied to a data set of daily discharge data from 107 U.S. gauging stations in section 4.

2. Existing Methods in the Fourier and Wavelet Domains

2.1. Fourier Transform-Based Approaches

[21] The PCA-based and ICA-based methods seek orthogonal components in a data set based on the structure of the correlation or covariance matrix. As such only the correlation at zero lag is preserved. A greater proportion of the cross-correlation function may be preserved by adding phase-shifted variants of the original data into the analysis, but this can rapidly result in the generation of an extremely large matrix on which to undertake PCA/ICA. However, preservation of the cross-correlation function is, via the Wiener-Khintchine theorem, identical to preserving the Fourier cross-spectrum between two time series [Chatfield, 2003]. Hence, a randomization scheme in the frequency domain has the advantage that the full cross-correlative structure will be preserved and avoids the need to work with large covariance matrices. To explain how such a method works, it is useful to first consider a problem in nonlinear physics that was resolved in the 1990s and provides the starting block for the tools that are required.

[22] A difficulty facing nonlinear physics in the 1980s, with the explosion of interest in chaos and other forms of nonlinearity (see Rodriguez-Iturbe et al. [1989] and Liu et al. [1998] as examples of the use of such concepts in rainfall and streamflow-forecasting applications, respectively), was determining if a value for a metric of nonlinearity/chaoticity applied to data gave a result that was significantly different to that for linear/nonchaotic data. The approach taken by Theiler and Prichard [1992] was to compare the real data to synthetic variants that were built from a linear generator. If these surrogate data are built from the same values in the original data set and retain the same Fourier spectrum, then such surrogates could be obtained from a linear autocorrelative process, with nonlinear features only preserved by chance. Thus, if the value for the metric of nonlinearity differs between the observed data and the surrogates, then a significant difference may be deemed to exist. This method was enhanced by Schreiber and Schmitz [1996] to give the iterated amplitude adjusted Fourier transform (IAAFT) method for generating surrogate data.

[23] The IAAFT algorithm proceeds from a Fourier transform of a discretely and regularly sampled time series, x(t), with a sampling interval, Δt. Recalling the equivalence, via the Euler identity, between the exponential function and the sum of sine and cosine waves, we may write the Fourier transform of x(t) as follows:

$$X(f) = \sum_{n=0}^{N-1} x(n\,\Delta t)\, e^{-i 2\pi f n \Delta t} = A(f)\, e^{i\phi(f)} \quad (5)$$

The Fourier amplitudes, A(f), of the original signal are stored, while the original phases, φ(f), are replaced with the phases from a random sort of the data. Given that a Fourier power spectrum is derived from the amplitudes, this process preserves the spectrum, while randomizing the phases, and is equivalent to well-known phase-randomization techniques [e.g., Nikora et al., 2001]. The next stage is to replace the new values with the original to remove this source of variation.

[24] The inverse Fourier transform is taken, and a rank-order matching procedure is used to substitute the original values for the new. That is, the new values are placed in rank order, and their values are replaced by the values in the original time series with the same rank (as in the Schaake shuffle discussed in section 1.3). In this way, the synthetic series has identical values to the original data set. This change to the values will have decreased the fidelity of the Fourier spectrum for the surrogate series. Hence, the process is repeated until a convergence criterion is satisfied, where the stored amplitudes are substituted each time and the phases are retained from the last iteration. Because a data set is finite, there is always convergence to some minimum state where a reordering of the substituted values no longer occurs. If the error remaining in the Fourier representation is still too high, it is possible to use a rejection sampling approach to accept or reject a synthetic series (we have rarely found this to be necessary in practice). An alternative is to use a more sophisticated convergence framework. One means to do this is the gradual imposition of the original values in a stochastic fashion, as introduced by Venema et al. [2006] (see also the discussion by Keylock [2008]). A further alternative is to abandon this gradient-descent framework and make use of a generic optimization tool such as simulated annealing, albeit at the expense of computational efficiency [Schreiber, 1998]. It should be noted that, with the Fourier transform method and a strongly periodic signal, the end points of the time series need to be chosen with care to prevent harmonic distortion in the surrogates [Keylock, 2007]. As shown by Keylock [2008], implementation in the wavelet domain can help here owing to the time-frequency localization property of the wavelet transform.
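A minimal sketch of the basic IAAFT iteration, in Python/NumPy, is given below (function and variable names are ours; the convergence test simply checks whether the rank ordering has stopped changing, as described above):

```python
import numpy as np

def iaaft(x, max_iter=200, rng=None):
    """Iterated amplitude adjusted Fourier transform surrogate [Schreiber and
    Schmitz, 1996]: alternately impose the original Fourier amplitudes and the
    original values (by rank-order matching) until the ordering stops changing."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    amplitudes = np.abs(np.fft.rfft(x))       # stored amplitudes, A(f), of the data
    x_sorted = np.sort(x)                     # original values, for rank-order matching
    s = rng.permutation(x)                    # start from a random sort of the data
    prev_ranks = None
    for _ in range(max_iter):
        phases = np.angle(np.fft.rfft(s))                             # keep current phases
        s = np.fft.irfft(amplitudes * np.exp(1j * phases), n=len(x))  # impose the amplitude spectrum
        ranks = np.argsort(np.argsort(s))
        s = x_sorted[ranks]                                           # impose the original values
        if prev_ranks is not None and np.array_equal(ranks, prev_ranks):
            break                                                     # no reordering occurred: converged
        prev_ranks = ranks
    return s
```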

[25] The IAAFT algorithm has the advantage over standard phase randomization that the values of the original data are also preserved. Hence, it is possible to use IAAFT surrogates for hypothesis testing, as the linear parts of the original signal (values and autocorrelative structure) are both preserved. We introduce a simple example of the method in Figures 1 and 2, which we will return to below when we examine the more advanced methods used in this study. Figure 1 shows two signals, M1 and M2, and their Fourier amplitude spectra. The upper signal is a simple model for a hydrological discharge series: a truncated sine wave with simple Gaussian noise added with zero mean and a standard deviation that is 10% of the amplitude of the sine wave. Note that the time scale is arbitrary. White noise has a flat Fourier spectrum, and it is clear in Figures 1b and 1d that the higher frequencies yield a flat spectrum. The spectrum in Figure 1b does not yield a sharp spike owing to the fact that the departure of a half-sinusoid from a sinusoid smears the energy and because the time series is truncated such that the length of the series is not a precise number of wavelengths of the peak signal. A clearer departure from a linear series (the type that can be represented by an autocorrelative model) is given in Figure 1c with the addition of a rectangular discharge pulse. Rectangular pulses are difficult to capture in the Fourier domain, as their energy is spread out over a wide range of frequencies. This can be seen in Figure 1d, where a set of harmonics appear in the spectrum between the low-frequency pulse and the high-frequency white noise.

Figure 1.

(a, c) Example signals, M1 and M2, respectively, are shown, along with (b, d) their accompanying amplitude spectra. The unit for the abscissa in Figures 1b and 1d is a dimensionless frequency in radians, such that π radians correspond to the Nyquist frequency.

Figure 2.

Three synthetic variants of the signals given in Figure 1 constructed using the IAAFT algorithm. Surrogate series for M1 are shown in gray and those for M2 in black.

[26] Figure 2 shows three surrogate series for the data from Figure 1 produced using the IAAFT algorithm. Some important observations can be made. First, the gray series look very much like M1 in Figure 1, except for some noise spreading across into what are flat, zero-value segments in the original data (this occurs because phases are randomized with respect to one another). However, note that zero values in the original data are not corrupted by the IAAFT algorithm (or any of its variants discussed here [Keylock, 2006, 2007, 2010]), although randomization can mean that runs of consecutive zero values are broken up, as shown in Figure 2. Second, and again because of the phase randomization, the maxima are shifted with respect to the original data, which limits the utility of such an algorithm in hydrology, where annual maxima are, in many cases, restricted to a particular time of the year. It is clear from Figure 2 that it is highly improbable that a feature similar to the rectangular pulse can be replicated using simple autocorrelative or Fourier-based randomization. The Fourier modes contributing to the rectangle in the original data are smeared out across the time series, resulting in something that departs significantly from the original series, despite the fact that the representation of the surrogates in the frequency domain is good (Figure 3). Our approach to handling this issue to produce more realistic synthetic series is explained in section 2.3. First, we explain how the IAAFT method can be extended to handle multivariate series.

Figure 3.

Amplitude spectra for the synthetic series shown in Figure 2 plotted against one another on the same plot: (a) the IAAFT surrogates for M1 and (b) the surrogates for M2.

2.2. Multivariate IAAFT Methods

[27] This IAAFT technique may be extended to preserve the cross correlation:

$$R_{ab}(\Delta) = \left\langle x_a(t)\, x_b(t+\Delta) \right\rangle \quad (6)$$

or, equivalently in the Fourier domain, the cross-spectrum:

$$S_{ab}(f) = X_a(f)\, X_b^{*}(f) \quad (7)$$

between time series, x_a(t) and x_b(t) [Prichard and Theiler, 1994], where Δ is some time lag and the asterisk denotes complex conjugation. If the difference in the original phases is preserved when the IAAFT algorithm is applied to x_a(t) and x_b(t), the cross correlation between the series will be retained in the synthetic data. Thus, this approach to multivariate surrogate series has the advantage over the PCA/ICA methods discussed by Westra et al. [2007] that the full cross correlation is retained, rather than just the correlation at zero lag. This algorithm may be formulated as follows:

[28] 1. Identify a particular time series to start the algorithm, which we term y_1;

[29] 2. Generate an IAAFT surrogate for this series, termed y'_1;

[30] 3. Take the Fourier transform and find the phases for both the data, φ_1, and its surrogate, φ'_1;

[31] 4. For each of the remaining original data series, y_g, where g = 2, …, m and m is the number of time series:

(i) Find the phase difference between the data series: Δφ_g = φ_g − φ_1;

(ii) Generate an IAAFT surrogate to give y'_g;

(iii) Take the Fourier transform of y'_g, store the amplitudes, A_g, and find the phases, φ'_g;

(iv) Calculate the phase difference for the surrogates: Δφ'_g = φ'_g − φ'_1;

(v) Form the new phases for the surrogate series using φ''_g = φ'_g + (Δφ_g − Δφ'_g);

(vi) Generate an appropriately correlated synthetic series from the inverse Fourier transform of A_g exp(iφ''_g);

(vii) Run a final IAAFT step (rank-order matching and subsequent (minimal) phase adjustment) to ensure convergence.

[32] The phase difference Δφ_g contains the cross-correlation information between data series y_g and y_1, while Δφ'_g is similar but for the surrogates. By subtracting them in (v), we can adjust the phases for the surrogate series, φ'_g, in such a way that the synthetic data are offset from one another in a way that retains the original cross-correlative structure.
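The phase bookkeeping of steps (i)-(vi) can be sketched as follows (Python/NumPy; this assumes the iaaft() helper from the sketch in section 2.1 is in scope and, for brevity, omits the final IAAFT convergence step (vii)):

```python
import numpy as np

def multivariate_iaaft(Y, rng=None):
    """Multivariate IAAFT sketch: Y is an (N x m) array with one time series per
    column; column 0 plays the role of y_1. Surrogate phases are offset so that
    the surrogate phase differences match those of the original data."""
    rng = np.random.default_rng(rng)
    N, m = Y.shape
    S = np.empty_like(Y, dtype=float)
    S[:, 0] = iaaft(Y[:, 0], rng=rng)                      # surrogate for y_1
    phi1 = np.angle(np.fft.rfft(Y[:, 0]))                  # phases of the data, y_1
    phi1_s = np.angle(np.fft.rfft(S[:, 0]))                # phases of its surrogate
    for g in range(1, m):
        dphi = np.angle(np.fft.rfft(Y[:, g])) - phi1       # (i) phase difference for the data
        surr = iaaft(Y[:, g], rng=rng)                     # (ii) univariate IAAFT surrogate
        F = np.fft.rfft(surr)
        A_g, phi_g_s = np.abs(F), np.angle(F)              # (iii) amplitudes and phases
        dphi_s = phi_g_s - phi1_s                          # (iv) phase difference for the surrogates
        new_phases = phi_g_s + (dphi - dphi_s)             # (v) adjust to the original offsets
        S[:, g] = np.fft.irfft(A_g * np.exp(1j * new_phases), n=N)  # (vi) correlated synthetic series
    return S
```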

[33] As an example, consider the four time series shown in Figure 4. The signal in Figure 4a is the Doppler test signal of Donoho and Johnstone [1994], x1(t) = [t(1 − t)]^{1/2} sin[2π(1 + ε)/(t + ε)] with ε = 0.05, while Figure 4b is a phase-shifted version of the former. The third and fourth series are stochastically derived multifractional Brownian motion series with a sinusoidal variation in their Hölder exponents [Keylock, 2008], where these sinusoids are also subject to a phase difference. Hence, distinct forms of nonlinearity are present, including intermittency in x3 and x4, and temporally specific frequency content and heteroskedasticity in x1 and x2.

Figure 4.

A set of four signals used to demonstrate the behavior of different synthetic series algorithms. (a) The equation for x1 is given in the text; (b) x2 is a phase-shifted variant of x1. (c, d) Signals x3 and x4, respectively, are multifractional Brownian motion signals with a sinusoidal variation in their Hölder exponents.

[34] An example set of surrogate signals based on the IAAFT algorithm for multivariate data is shown in Figure 5 as the upper gray line, along with those generated from the PCA and ICA approaches of Westra et al. [2007] (black and lower gray line, respectively). Here, instead of using a block-based bootstrapping scheme to randomize the PC scores, we avoided discontinuities introduced at the edge of blocks by applying the joint IAAFT algorithm [Prichard and Theiler, 1994; Schreiber and Schmitz, 1996]. Note that the nonlinear nature of the test signals used means that the linear methods (PCA and joint IAAFT) cannot adequately replicate key properties of the original data. The ICA surrogate data (the lower gray lines in Figure 5) are clearly preferable in this respect. Table 1 contains both the correlations at zero lag between the time series and the maximum and minimum of each cross-correlation function (equation (6)) for the data in Figures 4 and 5. The maxima and minima are included so that lagged relations between the time series may be assessed. The inability to represent the nonlinear properties affects the robustness of the linear statistics, but the multivariate IAAFT method has the advantage over the ICA technique that preservation of the full cross-correlation function is enhanced. The PCA-based method still performs reasonably at capturing the appropriate correlation coefficients (root-mean-square error (RMSE) of 0.087), but, unsurprisingly, results for the cross-correlation maxima and minima are poor (RMSE of 0.266 and 0.135 for the maximum and minimum off-diagonal values, respectively). The multivariate IAAFT method has corresponding RMSE values of 0.042 for the correlations, 0.071 for the off-diagonal maxima, and 0.070 for the off-diagonal minima. The advantages of ICA over PCA as shown by Westra et al. [2007] are evident visually not only in Figure 5 but also in terms of the correlation coefficients, which, because of the accounting for nonlinear interactions in the transform, can be reconstructed with a very low error (RMSE < 10^−8). However, preservation of the cross-correlation function is no better than for PCA (RMSE of 0.270 for the off-diagonal maxima and 0.178 for the off-diagonal minima).

Figure 5.

Synthetic realizations of the set of time series shown in Figure 4, with the gray lines undergoing a vertical displacement for clarity. Lines in black are produced using the PCA-based method; the upper gray line is the multivariate IAAFT algorithm, while the lower gray line is the ICA-based method. (a–d) correspond to the realizations in Figures 4a–4d, respectively.

Table 1. Correlations (Above the Diagonal) and Maxima and Minima of the Cross-Correlation Function (Below the Diagonal) Among the Four Series in Figure 4 (Data) and Synthetic Data Generated Using Three Methods. The values in parentheses are the minima.

         x1                x2                x3                x4

Data
x1   1.000 (−0.193)    −0.104             0.116            −0.368
x2   0.991 (−0.210)     1.000 (−0.213)   −0.064            −0.136
x3   0.273 (−0.364)     0.255 (−0.323)    1.000 (−0.427)   −0.474
x4   0.366 (−0.370)     0.349 (−0.349)    0.403 (−0.480)    1.000 (−0.484)

PCA Method
x1   1.000 (−0.318)    −0.117             0.031            −0.424
x2   0.387 (−0.371)     1.000 (−0.346)   −0.197            −0.008
x3   0.446 (−0.420)     0.313 (−0.265)    1.000 (−0.336)   −0.505
x4   0.515 (−0.547)     0.415 (−0.297)    0.406 (−0.511)    1.000 (−0.330)

ICA Method
x1   1.000 (−0.206)    −0.104             0.116            −0.368
x2   0.452 (−0.331)     1.000 (−0.190)   −0.064            −0.136
x3   0.550 (−0.407)     0.491 (−0.438)    1.000 (−0.395)   −0.474
x4   0.295 (−0.744)     0.384 (−0.417)    0.499 (−0.510)    1.000 (−0.367)

IAAFT Method
x1   1.000 (−0.174)    −0.106             0.099            −0.318
x2   0.874 (−0.231)     1.000 (−0.292)   −0.042            −0.301
x3   0.336 (−0.300)     0.296 (−0.301)    1.000 (−0.270)   −0.356
x4   0.375 (−0.410)     0.393 (−0.436)    0.334 (−0.495)    1.000 (−0.478)

2.3. Wavelet Methods and Gradual Wavelet Reconstruction

2.3.1. Overview

[35] We have seen that, using well-known methods in statistical physics, it is possible to generate synthetic hydrological time series that preserve the linear properties of the original data and the full cross-correlation function rather effectively. However, it is clear from Figures 2, 4, and 5 that preservation of the linear properties of the original data is generally not sufficient. Hence, in this section, we present the gradual wavelet reconstruction approach to capturing appropriate nonlinearity. To do this, one needs to move from a representation of the signal in the Fourier domain to one in the joint time-frequency plane, so that particular features in the original data can be fixed in place (such as the rectangular feature in Figure 1). The easiest way to do this is to work with wavelets and, while wavelet denoising techniques retain those wavelet coefficients that exceed a threshold and set the rest to zero, we fix in place those that exceed a threshold and randomize the rest in an appropriate fashion, which in fact turns out to be by applying the IAAFT algorithm to each frequency band (wavelet scale) in the time-frequency wavelet decomposition. This method was presented by Keylock [2007] and was used as the basis for a systematic means of exploring the properties of nonlinear time series using synthetic data by Keylock [2010]. Wavelet transforms have seen significant use in hydrology for the analysis of hydrological and meteorological data in recent years [Venugopal and Foufoula-Georgiou, 1996; Smith et al., 1998; Venugopal et al., 2006; Labat, 2010], although this has largely been for analysis (decomposition) rather than decomposition and reconstruction as used here (although, see Westra and Sharma [2006] and Nowak et al. [2011]). Hence, the continuous transform, typically used for analysis, is inappropriate in our case owing to its inexact reconstructive property.

2.3.2. Nonlinearity and Asymmetry

[36] Perhaps the key nonlinear aspect of many hydrological time series is their asymmetry. Hydrographs tend to have a steep-rising limb and a more gradual recessional component. This asymmetry may be expressed in terms of the derivative skewness or increment skewness:

$$\lambda(\Delta) = \frac{\left\langle \left[x(t+\Delta)-x(t)\right]^{3} \right\rangle}{\left\langle \left[x(t+\Delta)-x(t)\right]^{2} \right\rangle^{3/2}} \quad (8)$$

where the angled braces indicate a time average and Δ is a selected time interval, which, in section 4.1, is chosen to correspond to daily, weekly, and seasonal scales. Thus, while a standard coefficient of skewness measures the skewness of the histogram, this measure characterizes the asymmetry in a time series. A Gaussian autocorrelative process may well have a non-Gaussian histogram, but it will be Gaussian in the increments. Our algorithm captures non-Gaussianity in the histogram because the values of the original data (hence, all moments) are preserved, but the representation of nonlinear properties such as λ will depend on our choice of threshold parameter, ρ. In section 4, λ is used for characterizing the quality of different synthetic series with respect to hydrograph shape. Hence, appropriate synthetic series may be generated by selecting ρ such that hydrograph shape is preserved.
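A direct implementation of equation (8) is short enough to state in full (Python/NumPy; the lag is given in samples, so for daily data lags of 1, 7, and roughly 91 correspond to the scales used in section 4.1):

```python
import numpy as np

def increment_skewness(x, delta):
    """Asymmetry measure of equation (8): skewness of the increments
    x(t + delta) - x(t) at a chosen lag delta (in samples)."""
    x = np.asarray(x, dtype=float)
    dx = x[delta:] - x[:-delta]
    return np.mean(dx**3) / np.mean(dx**2) ** 1.5
```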

2.3.3. Thresholding the Wavelet Coefficients

[37] If a threshold parameter, ρ, is defined to range in value from 0.0 (an IAAFT surrogate) to 1.0 (the original data), then the wavelet algorithm can be used to fix in place those wavelet coefficients, w_{j,k}, that exceed a particular threshold, where k is an index for time and j for wavelet scale (period), as detailed in Appendix A. We adopt a maximal overlap discrete wavelet transform (MODWT) [Percival and Walden, 2000]. Like the continuous wavelet transform, this is an undecimated transform (for a time series of N points, N coefficients are derived at each scale j), but, unlike the continuous transform, perfect reconstruction is easily implemented. Like a discrete wavelet transform, it is dyadic in nature owing to the use of a hierarchical set of filters and a downsampling procedure after the extraction of the detail coefficients at each scale. However, the MODWT has the advantage that the length of the analyzed signal is not constrained to N = 2^J, where J is a positive integer. In addition, the variance of the w_{j,k} for a chosen j is proportional to the Fourier energy in that frequency band if the MODWT is adopted. Thus, w_{j,k}^2 is proportional to the local, time-frequency energy of the signal. See Keylock [2010] for more detail concerning the MODWT and related wavelet transforms. As noted in Appendix A, we use a wavelet transform with a large number of vanishing moments. The reason for this is that we treat each wavelet scale separately when we undertake the IAAFT randomization. Hence, we do not wish to spread energy across multiple frequencies/scales. The uncertainty principle means that a choice must be made between a wavelet that is precise in time/space but poor in frequency (such as the Haar wavelet), one with a larger number of vanishing moments that is precise in frequency but poorer in space, or a compromise case. For the reasons just given, we opt for the second choice. Figure 6 illustrates the wavelet spectrum that results from the wavelet decomposition of a simple sine wave with a wavelength of 186 data samples. It is clear that the Daubechies least asymmetric wavelet with 16 vanishing moments preserves energy at the correct scale more appropriately than the Haar wavelet (two vanishing moments) or the two intermediate cases with four and eight vanishing moments.

Figure 6.

The wavelet spectrum for a sine wave with a wavelength of 186 data samples displayed on a (a) linear and (b) log scale. The wavelets used have 2, 4, 8, and 16 vanishing moments, which are shown with gray, dotted, dashed, and solid black lines, respectively.

[38] As we wish to retain the linear properties in our surrogates, proportionality to a Fourier representation (hence, the wavelet spectrum) is advantageous. Hence, we define our threshold criterion, ρ, with respect to the squared coefficients, w_{j,k}^2. Following Keylock [2007], we define an energy function, E, as follows:

$$E = \sum_{j}\sum_{k} w_{j,k}^{2} \quad (9)$$

and ρ as some chosen fraction of E. If Wc is the set of squared coefficients, w_{j,k}^2, placed in descending rank order, then, with i acting as an index for Wc, the set of fixed coefficients is given by the first n elements of Wc that fulfill the condition:

$$\frac{1}{E}\sum_{i=1}^{n} W_c(i) \geq \rho \quad (10)$$

Hence, the fixed coefficients are the smallest number of coefficients that fulfill the energy proportion, ρ. For ρ = 0, there are no fixed coefficients, and the resulting surrogate will be similar to that obtained using the IAAFT (although with an improved handling of edge effects [Keylock, 2008]). Trivially, for ρ = 1, all coefficients are fixed, no randomization occurs, and the surrogates and data are identical.
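Equations (9) and (10) amount to a simple selection rule, sketched below for an array of wavelet coefficients arranged as scales by time (the array w is assumed to hold MODWT detail coefficients; the function name is ours):

```python
import numpy as np

def fixed_coefficient_mask(w, rho):
    """Return a boolean mask marking the smallest set of coefficients whose
    summed squared magnitude first reaches the fraction rho of the total
    wavelet energy E (equations (9) and (10))."""
    energy = np.asarray(w, dtype=float) ** 2
    E = energy.sum()                                    # equation (9)
    order = np.argsort(energy, axis=None)[::-1]         # descending rank order of w_{j,k}^2
    cumulative = np.cumsum(energy.ravel()[order])
    mask = np.zeros(energy.size, dtype=bool)
    if rho > 0:
        n = np.searchsorted(cumulative, rho * E) + 1    # first n elements satisfying equation (10)
        mask[order[:n]] = True                          # these coefficients are fixed in place
    return mask.reshape(energy.shape)
```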

2.3.4. Gradual Wavelet Reconstruction

[39] Given that, in general, not all w_{j,k} for some choice of j are fixed, randomization is necessary. Here, we follow Keylock [2006], who noted that if the IAAFT algorithm is applied to all w_{j,k} for a specific j, the values in the data and surrogate wavelet coefficients are identical, meaning that the Fourier proportionality of the MODWT is preserved. After the IAAFT algorithm applied to each scale has converged, with the imposition of the fixed coefficients, the inverse wavelet transform can be used to reconstruct a time series with some aspects of the original data fixed in place. For detail on how the algorithm works, the reader is referred to the step-by-step visual representation in Figures 2 and 4 of Keylock [2007], the description provided in Appendix A, or Keylock [2010].

[40] It was noted by Keylock [2010] that, by systematically varying the control parameter, ρ, it was possible to determine the relative complexity of particular time series with respect to some metric of nonlinearity: the more complex series would converge to a statistically insignificant difference between original data and surrogates at a higher choice for ρ. This technique was applied for studying the properties of nonlinear time series and the sensitivity of metrics of complexity [Keylock, 2010] and as a means of generating synthetic inlet conditions for large-eddy simulations of turbulent velocity fields [Keylock et al., 2011]. In this latter application, ρ values were chosen to constrain the simulations in a controlled fashion. In this article, we determine empirically, for each discharge time series, the value for ρ that first gives no significant difference between data and surrogates according to the asymmetry measure, λ. At this choice, the Fourier spectrum and values in the discharge series are preserved by construction, and, in addition, the asymmetry is preserved to a sufficient degree. To ensure a reasonable number of simulations, we iterated ρ in 0.1 increments.
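A simplified sketch of the randomization step is given below; it reuses the iaaft() and fixed_coefficient_mask() helpers from the earlier sketches and, unlike the full algorithm of Keylock [2007], re-imposes the pinned coefficients only once after the per-scale IAAFT has converged rather than within each iteration. The inverse MODWT of the returned array, using whichever wavelet toolbox performed the decomposition, gives the surrogate series; ρ can then be stepped up in 0.1 increments until the asymmetry criterion is met:

```python
import numpy as np

def gwr_randomize(w, rho, rng=None):
    """Gradual wavelet reconstruction sketch: pin the coefficients selected by
    fixed_coefficient_mask(w, rho) and apply the IAAFT to each wavelet scale
    (row of w) so that the coefficient values, and hence the wavelet spectrum,
    are retained."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w, dtype=float)
    mask = fixed_coefficient_mask(w, rho)
    w_new = np.empty_like(w)
    for j in range(w.shape[0]):                 # treat each wavelet scale separately
        w_new[j] = iaaft(w[j], rng=rng)         # randomize the coefficients at scale j
        w_new[j, mask[j]] = w[j, mask[j]]       # re-impose the pinned (fixed) coefficients
    return w_new
```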

[41] As an example of how this method works, we return to the series M1 and M2 treated in Figures 1–3. An examination of Figure 7 shows that when ρ = 0.1, that is, when the largest wavelet coefficients that account for 10% of the wavelet energy in the time series, as defined in equation (10), are fixed in place, clear differences emerge when compared to the IAAFT (ρ = 0) method in Figure 2. These differences are small for the simpler signal, M1, shown in the upper three panels, with some stabilization of the positions of the maxima. However, the major problem with the lower panels in Figure 2, the loss of the rectangular feature and the spreading of its Fourier modes throughout the signal, has largely been avoided, even though an insufficient amount of energy has been fixed to replicate the shape of this feature. Figure 8 shows that, at a higher value of ρ, the original signals are largely preserved, with the exception of some parts of the added random noise, which are still smeared out over the signal (as should be the case).

Figure 7.

Three synthetic variants of the signals given in Figure 1 constructed using the wavelet-based gradual wavelet reconstruction algorithm for ρ = 0.1. Surrogate series for M1 are shown in gray and those for M2 in black.

Figure 8.

Three synthetic variants of the signals given in Figure 1 constructed using the wavelet-based gradual wavelet reconstruction algorithm at a higher value of ρ. Surrogate series for M1 are shown in gray and those for M2 in black.

[42] One advantage of the wavelet transform is its flexibility, meaning that, if one wishes, certain wavelet coefficients could always be fixed or randomized. In that case, ρ should be interpreted as the energy proportion defined over the subset of coefficients made available to randomization, and the limits for equation (9) would need to be adjusted appropriately. For example, to ensure that the low-frequency behavior is always the same between data and surrogates (e.g., the annual pattern to discharge is fixed, with individual storms randomized), the large wavelet scales could always be fixed and the thresholding only applied to those corresponding to the subannual frequencies. Similarly, if one was interested in exploring the effect of antecedent conditions on a particular observed discharge, the coefficients over all scales representing one event in time could be fixed, with all others potentially randomized (depending on ρ). This is part of the beauty of the approach advocated in this article. However, in this article, we choose to potentially randomize all coefficients so that ρ = 0 preserves an equivalence to the IAAFT algorithm. Hence, we retain the meaning of the gradual wavelet reconstruction continuum: ρ = 0.0 is a linear type of randomization, while ρ = 1.0 is the original data set.

2.3.5. A Comparison to Wavelet Autoregressive Modeling

[43] There are some strong similarities between the work presented here and the wavelet autoregressive modeling (WARM) approach [Kwon et al., 2007; Nowak et al., 2011]: Both involve a wavelet decomposition and then a modeling of the wavelet coefficients before a reconstruction step. Briefly, WARM involves a decomposition using an undecimated wavelet transform and modeling of the time series of wavelet coefficients at each isolated scale using a simple AR process. However, a wavelet spectrum is used to identify the scales that are to be modeled, with the rest combined into a noise term that is itself modeled as an AR process. WARM also commonly uses the continuous wavelet transform rather than the MODWT used here. There does not appear to be a good reason for this, as many of the difficulties in replicating the wavelet spectrum of the original series using WARM [Kwon et al., 2007] probably result from the adoption of the continuous transform. Nowak et al. [2011] suggested another approach to capture nonstationarity in the spectra more effectively. We note that gradual wavelet reconstruction gives much better replication of the underlying spectrum, both because of the enhanced reconstructive property of the MODWT and because we do not change the values of the wavelet coefficients at any scale (the difference between the IAAFT and an AR model). The gradual wavelet reconstruction framework also provides a control parameter, ρ, permitting the properties of synthetic series to be varied systematically. On the other hand, WARM is faster to implement (the IAAFT method must fulfill a convergence criterion, effectively running multiple AR models plus the rank-order matching step) and, as shown by Nowak et al. [2011], can also produce effective synthetic data for hydrological applications.

2.4. Gradual Wavelet Reconstruction With Cross-Correlation Preservation

[44] Our approach to producing multivariate synthetic data that can converge on the properties of the original data as a function of ρ is very similar to the multivariate IAAFT algorithm in section 2.2, with the substitution of the pinned wavelet algorithm for the IAAFT:

[45] 1. Identify a particular time series to start the algorithm, y_1. This could be the most complex in terms of nonlinear behavior (meaning a particular effort is made to preserve such features). In this article, we just choose the first series in the data array (which was ordered arbitrarily);

[46] 2. Generate a pinned wavelet surrogate for this series at the selected value of ρ following Keylock [2007, 2010], as explained in Appendix A, termed y'_1;

[47] 3. Take the Fourier transform and find the phases for both the data, φ_1, and its surrogate, φ'_1;

[48] 4. For each of the remaining original data series, y_g, where g = 2, …, m and m is the number of time series:

(i) Find the phase difference between the data series: Δφ_g = φ_g − φ_1;

(ii) Generate pinned wavelet surrogates for the time series at the selected ρ, as explained in Appendix A, to give y'_g;

(iii) Take the Fourier transform of y'_g, store the amplitudes, A_g, and find the phases, φ'_g;

(iv) Calculate the phase difference for the surrogates: Δφ'_g = φ'_g − φ'_1;

(v) Form the new phases for the surrogate series using φ''_g = φ'_g + (Δφ_g − Δφ'_g);

(vi) Generate an appropriately correlated synthetic series from the inverse Fourier transform of A_g exp(iφ''_g);

(vii) Run a final IAAFT step (rank-order matching and subsequent (minimal) phase adjustment) to ensure convergence.

[49] In this article, we run the algorithm such that the final step before termination results in the preservation of the values from the original data set. Of course, if it is deemed desirable to improve the representation of the spectrum, or if the generation of new discharge values is preferable, the order of the steps to the IAAFT algorithm may be exchanged.

3. Algorithm Testing

[50] The synthetic series generated using the new algorithm are shown in Figure 9, where they may be compared directly to the original data and previous algorithms (Figure 5). Even with inline image, there is a good replication of the general variation in frequency seen in the Doppler signals (x1 and x2), although the highest frequency information, which should be located at inline image for x1, is still smeared across the whole signal. Concerning the multifractional Brownian motion signals, the inline image surrogates are aligned with the original data (unlike the IAAFT algorithm in Figure 5), but the high-frequency oscillations responsible for the different Hölder exponents of these series are not captured until inline image.

Figure 9.

Synthetic realizations of the test signal data set. In each subplot, the upper black line is the original data (Figure 4). The gray lines are obtained using the algorithm presented here for values of inline image, for the bottom, intermediate, and top gray lines, respectively.

[51] Table 2 contains similar information to Table 1, but for the new algorithm at the four choices for ρ reported there (0.3, 0.5, 0.7, and 0.9). It can be seen that, by ρ = 0.9, there is a near-perfect replication of the original correlation and cross-correlative structure. In terms of the RMSE over the whole cross-correlation function for the off-diagonal comparisons, indicated by a prime in the remainder of this paragraph, values ranged from inline image to inline image for the multivariate IAAFT method. This is, as expected, an improvement on the PCA and ICA techniques: inline image to inline image for PCA, and inline image to inline image for ICA. For inline image, the range was inline image to inline image; for inline image, inline image to inline image; and for inline image, inline image to inline image. The large drop in the maximum error between inline image and inline image reflects the improvement in the representation of the nonlinear aspects of the Doppler signal between these two thresholds as seen in Figure 9.

Table 2. Correlations (Above the Diagonal) and Maxima and Minima of the Cross-Correlation Function (On and Below the Diagonal) Among the Four Series in Figure 4 Using the Synthetic Data Algorithm Developed in This Article at Four Choices for ρ. The values in parentheses are the minima.

         x1                x2                x3                x4

ρ = 0.3
x1   1.000 (−0.184)    −0.101             0.082            −0.347
x2   0.965 (−0.246)     1.000 (−0.278)   −0.044            −0.152
x3   0.291 (−0.360)     0.275 (−0.409)    1.000 (−0.428)   −0.470
x4   0.361 (−0.347)     0.312 (−0.287)    0.386 (−0.479)    1.000 (−0.494)

ρ = 0.5
x1   1.000 (−0.205)    −0.101             0.110            −0.369
x2   0.990 (−0.227)     1.000 (−0.228)   −0.059            −0.131
x3   0.244 (−0.386)     0.244 (−0.347)    1.000 (−0.439)   −0.476
x4   0.369 (−0.371)     0.350 (−0.349)    0.411 (−0.484)    1.000 (−0.490)

ρ = 0.7
x1   1.000 (−0.191)    −0.102             0.107            −0.352
x2   0.979 (−0.236)     1.000 (−0.251)   −0.055            −0.169
x3   0.256 (−0.356)     0.258 (−0.350)    1.000 (−0.434)   −0.478
x4   0.361 (−0.352)     0.327 (−0.309)    0.383 (−0.477)    1.000 (−0.486)

ρ = 0.9
x1   1.000 (−0.195)    −0.104             0.117            −0.367
x2   0.991 (−0.211)     1.000 (−0.212)   −0.066            −0.136
x3   0.269 (−0.365)     0.252 (−0.324)    1.000 (−0.426)   −0.471
x4   0.365 (−0.369)     0.348 (−0.348)    0.403 (−0.478)    1.000 (−0.484)

4. Application to 35 Years of Daily Discharge Data for 107 U.S. Rivers

4.1. Data Sources, Preprocessing, and Validation

[52] Our data were taken from the National Water Information System online hydrological database (NWISWeb, http://waterdata.usgs.gov/nwis/) maintained by the U.S. Geological Survey (USGS), and we extracted mean daily discharge data from 1 October 1950 to 30 September 1985 from 107 sites, our choice guided by the analysis in Molnar et al. [2006] and subsequent private communications with the lead author of that work. The procedure we followed was to find the discharge records with more than 100 years of daily discharge data and then only consider those for which data were approved for the whole 35 year period.

[53] Example synthetic records for 2 of the 107 records are shown in Figure 10. These data are strongly correlated (the correlation between them lies in the upper 2.25% of all the interdischarge correlations) but were otherwise chosen at random. The data shown in black are from the Passaic River at Little Falls, New Jersey (USGS code 01389500). The data in gray are for the Susquehanna River at Danville, Pennsylvania (USGS code 01540500). It would appear that, for these data, inline image is sufficient to replicate many of the dominant features in the original data.

Figure 10.

(a) A segment of two discharge records (sites 01389500 in black and 01540500 in gray), which have a correlation inline image. Synthetic data generated using the method developed in this article are shown for (b) inline image and (c) inline image. (d) A close comparison of the discharge event occurring from days 3800 to 3900 (shown in Figure 10a) with the synthetic data at inline image for the same period (Figure 10c); the two synthetic series at inline image are displaced downward for clarity.

[54] Formalizing such an assessment more carefully for all 107 rivers using the asymmetry measure, λ, defined in equation (8), for Δ corresponding to scales of 1 day, 1 week, and one quarter of a year, the RMSE between the asymmetries of the 10 surrogates and the asymmetry of the original data was less than 10% of the original value in more than 95% of cases when ρ = 0.6. That is, with 107 rivers and three choices for Δ, this criterion was met in all but 14 of 321 cases (95.6%), while, at the next threshold increment below this, it was met in 303 cases (94.4%). Twelve of the 14 cases where the asymmetry error was high were for Δ = 1 day, indicating that it is only for the highest frequency variations that asymmetry is not preserved adequately in the surrogates at this ρ.

[55] This analysis suggests that, for these data, ρ = 0.6 is a good choice for constraining the synthetic series such that they retain the appropriate asymmetry qualities of the original data. However, an alternative approach is to select a threshold value for ρ for each series individually. We employ both approaches in our analysis of total discharge flux, below. Before doing so, we explore some of the properties of the constant threshold case more thoroughly, to demonstrate the capabilities of synthetic data developed using our method.

4.2. A Comparison to Classical Wavelet Thresholding

[56] The ρ = 0.6 criterion chosen in the previous section still leaves 40% of the wavelet energy of the time series available for randomization. This may be compared to the equivalent value for ρ derived from classical wavelet thresholding for denoising, that is, to the value for ρ that delimits noise from signal based on the techniques developed by Donoho and Johnstone [1994] and Donoho et al. [1995].

[57] Wavelet denoising assumes that the majority of noise is concentrated in the highest frequency band of the data and defines a standard deviation for the noise, σ_noise, based on the median, M, of the absolute values of the wavelet coefficients at scale 1 (highest frequency):

$$\sigma_{\mathrm{noise}} = \frac{M}{0.6745} \quad (11)$$

A threshold is then formulated according to the following equation:

$$T = \sigma_{\mathrm{noise}}\sqrt{2 \ln N} \quad (12)$$

and with soft thresholding, the wavelet coefficients are denoised according to the following equation:

$$\tilde{w}_{j,k} = \operatorname{sgn}(w_{j,k})\, \max\!\left(|w_{j,k}| - T,\ 0\right) \quad (13)$$

Once the w_{j,k} values that fail to exceed T have been found and set to 0 according to equation (13), the proportion of the total wavelet energy retained by the remaining coefficients may be found, expressing the noise threshold in terms of an equivalent ρ. Based on the 107 river discharge records analyzed in the following section, the median value for ρ in this sense was inline image. Clearly, the signals were relatively noise free, and randomization based on such a high threshold choice would not introduce significant variation between surrogates and data. By contrast, the ρ = 0.6 choice is high enough to fix in place the asymmetry observed in the real data, while still providing latitude for randomization. Figure 11 compares the differences in the discharge time series when classic wavelet thresholding is applied and when ρ = 0.6 is adopted. Results are presented for the discharge data that gave the median result. There is close to an order of magnitude difference in these signals, with the standard deviation of the difference signal in Figure 11a at 900 m3 s−1, compared to an average of 6940 m3 s−1 for the ρ = 0.6 surrogates, and with maximum differences at 1.3% and 9.5% of the range in the discharge data, respectively. Hence, noise-based thresholding is too restrictive to explore the variation in discharge appropriately, while, as determined earlier, if ρ is set lower than this, asymmetry is not sufficiently preserved across all data.
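The comparison just described can be sketched as follows (Python/NumPy; w is again a scales-by-time coefficient array with the finest scale in row 0, and the equivalent ρ is taken here to be the energy fraction of the coefficients that exceed T, consistent with the high median value reported above):

```python
import numpy as np

def noise_equivalent_rho(w):
    """Universal threshold of equations (11)-(12), soft thresholding as in
    equation (13), and the energy fraction of the retained coefficients
    expressed as an equivalent value of rho."""
    w = np.asarray(w, dtype=float)
    sigma_noise = np.median(np.abs(w[0])) / 0.6745            # equation (11), from the finest scale
    T = sigma_noise * np.sqrt(2.0 * np.log(w.shape[1]))       # equation (12), N = record length
    w_denoised = np.sign(w) * np.maximum(np.abs(w) - T, 0.0)  # soft thresholding, equation (13)
    rho_equiv = np.sum(w[np.abs(w) > T] ** 2) / np.sum(w**2)  # energy fraction fixed by the threshold
    return w_denoised, rho_equiv
```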

Figure 11.

Differences between the discharge time series at the site with the median value for the Donoho and Johnstone [1994] type of wavelet threshold (01540500, the Susquehanna River at Danville). (a) The results using a Donoho and Johnstone [1994] wavelet threshold; (b–d) those for three surrogates at ρ = 0.6.

4.3. Cross-Correlation Behavior

[58] To assess the cross-correlative behavior of the synthetic data as a function of ρ, Figure 12 shows two coherence spectra for discharges with both a high (R = 0.835, left-hand side) and moderate (R = 0.352, right-hand side) correlation. The selected sites are USGS sites 01031500 and 01049500 (high correlation) and 01031500 and 01315500 (moderate correlation), and the synthetic series clearly capture the main features in each plot. The RMSE statistics for these data are given in Table 3. The mean and standard deviation over 10 surrogate series are calculated, and the results for the technique where the threshold value for ρ is not global, but is calculated and applied individually for each data set, depending on the ρ that preserves asymmetry for each data set, are also shown (denoted by "var"). As the median threshold value for ρ in this latter case was 0.1, it is not surprising that the results lie closer to the ρ = 0 case.
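The coherence comparison behind Table 3 can be reproduced in outline with a Welch-type estimator (Python/SciPy; the segment length is our assumption and is not a value taken from the article):

```python
import numpy as np
from scipy import signal

def coherence_rmse(x, y, x_surr, y_surr, nperseg=512):
    """RMSE between the coherence spectrum of a pair of daily discharge records
    and that of the corresponding pair of surrogates, restricted to frequencies
    of annual scale or higher as in Table 3."""
    f, C_data = signal.coherence(x, y, fs=1.0, nperseg=nperseg)           # fs = 1 sample per day
    _, C_surr = signal.coherence(x_surr, y_surr, fs=1.0, nperseg=nperseg)
    keep = f >= 1.0 / 365.0                                               # annual scale or higher
    return np.sqrt(np.mean((C_data[keep] - C_surr[keep]) ** 2))
```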

Figure 12.

The coherence between (left-hand plots) two highly correlated discharge records and (right-hand plots) two with a moderate correlation, as a function of ρ, for time scales ranging from every 2 days to the annual scale.

Table 3. The Mean and the Standard Deviation of the RMSE Between the Coherence Values for Discharge Data and 10 Surrogates Generated at Varying ρ for the Data Shown in Figure 12. The RMSE is obtained for frequencies of annual scale or higher; the data, as well as selected synthetic series, are shown in Figure 12. "var" indicates that the variable threshold for ρ is adopted.

ρ      Mean of RMSE    Standard Deviation of RMSE

USGS Sites 01031500 and 01049500
0.0    0.217           0.007
0.6    0.080           0.008
var    0.192           0.017

USGS Sites 01031500 and 01315500
0.0    0.217           0.015
0.6    0.052           0.002
var    0.145           0.012

4.4. Replication of Annual Statistics

[59] The mean and standard deviation of each year's discharge data were calculated, and the minimum, median, and maximum values over all 35 years were extracted. These same measures were obtained for each of 10 surrogate sets using the same choices for ρ as in the previous subsection. The RMSE between the 10 synthetic series and the original data was calculated for each measure and then nondimensionalized by the original value to give an error statistic:

$$\varepsilon = \frac{\mathrm{RMSE}\left(\mathrm{measure}_{\mathrm{surrogates}},\ \mathrm{measure}_{\mathrm{data}}\right)}{\mathrm{measure}_{\mathrm{data}}} \quad (14)$$

where the measure is either the minimum, median, or maximum of the annual values, and this measure is applied to either the annual means (μ) or standard deviations (σ). The median, lower, and upper quartiles for these error statistics over the 107 discharge records are given in Table 4 as a function of ρ. Errors are greatest for the minimum annual mean and standard deviation, but the median error statistic over the 107 discharge records for the median and maximum annual values is of the order of 10% for the ρ = 0.6 and the variable ρ methods, and 75% of discharge series have an error for the maximum annual mean of less than 17.5%.
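For one of the measures, the calculation behind equation (14) looks as follows (Python/NumPy; the water-year labels and the choice of the maximum annual mean are illustrative assumptions):

```python
import numpy as np

def annual_error_statistic(data, surrogates, years):
    """Equation (14) for the maximum of the annual means: RMSE between the
    surrogate and observed values of the measure, normalized by the observed
    value. `years` assigns a (water) year label to every daily value."""
    def max_annual_mean(x):
        x = np.asarray(x, dtype=float)
        return max(np.mean(x[years == y]) for y in np.unique(years))
    observed = max_annual_mean(data)
    rmse = np.sqrt(np.mean([(max_annual_mean(s) - observed) ** 2 for s in surrogates]))
    return rmse / observed
```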

Table 4. Error Statistics for the Mean and Standard Deviation of Annual Discharges. The minimum, median, and maximum are defined over 35 years of annual statistics. The error statistic is the RMSE between the 10 surrogates and the original data for these three measures, normalized by the value for the original data. The results reported are the median and lower and upper quartiles of the error statistics over all 107 discharge records.

ρ      Min. Ann. μ   Min. Ann. σ   Median Ann. μ   Median Ann. σ   Max. Ann. μ   Max. Ann. σ

Median Over 107 Records
0.0    0.300         0.416         0.058           0.082           0.151         0.139
0.6    0.209         0.260         0.047           0.051           0.086         0.035
var    0.245         0.345         0.058           0.074           0.120         0.102

Lower Quartile Over 107 Records
0.0    0.156         0.237         0.036           0.054           0.088         0.085
0.6    0.127         0.171         0.035           0.035           0.065         0.023
var    0.150         0.214         0.034           0.048           0.089         0.072

Upper Quartile Over 107 Records
0.0    0.643         0.841         0.104           0.142           0.209         0.212
0.6    0.344         0.408         0.082           0.081           0.114         0.052
var    0.679         0.848         0.091           0.107           0.175         0.143

4.5. Total Discharge Flux

[60] Given the overall quality of the synthetic series in terms of representation of the autocorrelative, cross-correlative, and asymmetry properties of the original data, we then have some confidence in using these surrogate series as a model for the original data. Westra et al. [2007] suggested a range of applications of such synthetic records, including the design and operation of reservoirs, irrigation systems, and hydroelectric systems. One application where the representation of cross-correlative properties is a critical aspect of valid synthetic time series is in placing confidence limits on natural variability in total discharge flux. This is important in the estimation of continental water balance, as well as, at a more local scale, the discharge into a reservoir fed from multiple sources.

[61] Taking our data set of large rivers as an example, Figure 13 shows the combined, total aggregate discharge flux for the 107 gauging stations examined in this study, together with 10 synthetic total discharge series, each derived from an independent set of 107 synthetic records realized using the ρ = 0.6 and the variable ρ criteria. Generation of a number of synthetic records provides a means of placing bounds on expected variability, given these alternative ways of fixing in place sufficient asymmetry. Because the median value for ρ in the variable ρ case is 0.1, the alignment between synthetic and actual time series is not preserved as closely in this case as for ρ = 0.6.

Figure 13.

The total daily discharge flux from all 107 sites (black line) together with 10 synthetic series (gray). Figures 13a and 13b are for the ρ = 0.6 case, while Figures 13c and 13d are for the variable ρ case. Figures 13a and 13c show 8000 days of data; Figures 13b and 13d focus on the highest total daily discharge event in the time interval.

[62] If the annual maxima are of interest, conventional statistical modeling would fit a distribution function to such data and then impose confidence limits based on the error associated with the fitting procedure adopted, such as maximum likelihood [Coles, 2001], probability weighted moments [Greenwood et al., 1979], or L-moments [Hosking, 1990]. However, here we can also fit to the annual maxima of the synthetic data and use this to guide statistical model selection and recommendations made to planners and engineers. Note that this provides a fairly stringent test for a synthetic record generation algorithm in that preservation of the cross correlation between data records is not necessarily sufficient to preserve the behavior of the extreme values.

4.6. Extreme Value Analysis

[63] Stemming from the classic work of Gumbel [1958], the Gumbel or extreme value type I distribution can be shown to be the limiting distribution for extreme values produced under a set of quite general assumptions. However, the generalized extreme value (GEV) distribution provides a rather more flexible fitting procedure through the introduction of a shape parameter, ξ:

$$P(x) = \exp\left\{-\left[1 + \xi\left(\frac{x - a}{b}\right)\right]^{-1/\xi}\right\} \qquad (15)$$

where P(x) is the cumulative distribution function for x, while a and b are the location and scale parameters, respectively. Figure 14 shows the upper tail for the annual total discharge maxima (squares) together with their estimated probabilities, where the values for the data are derived from the Weibull plotting positions, $P(x_j) = r_j/(N + 1)$, where $r_j$ is the rank of datum $x_j$ (ascending order) and N is the number of years analyzed. Probabilities for the fitted models are evaluated from maximum likelihood fits to equation (15) where, in our example, ξ > 0, meaning that the data are Fréchet distributed.
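A minimal sketch of this analysis is given below, assuming 365-day years and an aggregate daily series named `total_flux`. It uses scipy's GEV implementation, which parameterizes the shape as c = -ξ relative to equation (15), so a Fréchet tail corresponds to a fitted c < 0; this is an illustration of the fitting step rather than the exact procedure used in the study.

```python
import numpy as np
from scipy.stats import genextreme

def annual_maxima_gev(total_flux):
    """Weibull plotting positions and a maximum likelihood GEV fit to annual maxima."""
    x = np.asarray(total_flux, dtype=float)
    n_years = len(x) // 365
    annual_max = x[: n_years * 365].reshape(n_years, 365).max(axis=1)

    # Weibull plotting positions P(x_j) = r_j / (N + 1) for the ranked annual maxima
    ranked = np.sort(annual_max)
    weibull_p = np.arange(1, n_years + 1) / (n_years + 1)

    # Maximum likelihood GEV fit (scipy shape c = -xi) and the implied return period
    # of the largest observed annual maximum
    c, loc, scale = genextreme.fit(annual_max)
    return_period = 1.0 / genextreme.sf(ranked[-1], c, loc=loc, scale=scale)
    return ranked, weibull_p, (c, loc, scale), return_period
```

The same fit can be repeated on the annual maxima of each set of synthetic totals to produce the family of gray curves described for Figure 14.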

Figure 14.

Fitted GEV distributions to the annual maxima of the total daily discharge flux from Figure 7. The squares are derived from the original data, plotted using Weibull positions; the black line is the GEV fit to these data, while the gray lines are the GEV fits for 10 sets of 107 synthetic time series. Figure 14a is for the ρ = 0.6 case, while Figure 14b shows the results for variable ρ.

[64] It would appear that, based on the elementary analysis conducted here (i.e., no division of the record is undertaken using covariates with partial models then fitted to subsets of the time series [Coles et al., 2003] and no point process representation is attempted [Coles and Tawn, 2005]), the largest event, which occurred on day 7938, is rather exceptional. On that day, 13 rivers had discharges greater than 60% of the maximum recorded discharge on that river: 5 of these were in Pennsylvania, with 2 each in Georgia and New York State and individual rivers in Massachusetts, New Jersey, North Carolina, and, surprisingly, Washington State (the Spokane river at Spokane, USGS site 12422500). Given the high quality of the fit to the rest of the upper part of the distribution, it would appear that the return period estimated from the Weibull formula is an underestimate. The fitted GEV model yields a 146 year return period for the event on day 7938, which is reduced to a more conservative 126 years for the minimum of the 10 series in the ρ = 0.6 case and to 62 years for the variable ρ case. Hence, while the ρ = 0.6 case yields synthetic time series that clearly map onto the original record, this would appear to overconstrain the estimates for the extremes in Figure 14. Given that both approaches constrain the asymmetry appropriately, the recommendation from this work would be, for extreme value analysis, to choose ρ so as to preserve the asymmetry of each individual time series, rather than to derive a global value that is acceptable across all series in a statistical sense.

5. Conclusion

[65] This article has sought to improve upon existing methods for the generation of synthetic time series in hydrology and water resource management. Westra et al. [2007] presented methods based on PCA that preserve the covariance structure between time series and showed that such approaches could be enhanced by utilizing ICA. Using a method termed gradual wavelet reconstruction, previously proposed by the author for systematically generating univariate synthetic series, we have adapted this approach to the multivariate case.

[66] In the previous section, we presented a manner in which the algorithm developed in this article may be of utility to hydrologists, generating synthetic records that may legitimately be agglomerated to produce total flux estimates. The advantage of our framework for synthetic signal generation is that, in addition to the linear properties of the multivariate data set, salient nonlinear properties may also be preserved. It was suggested that, for typical hydrographs, the asymmetry introduced by a rapid rising limb and a shallower recessional limb is the dominant form of nonlinearity that the synthetic series need to replicate. It was found for the 107 U.S. discharge records studied here that a global threshold of ρ = 0.6 was acceptable for fixing the asymmetry properties in place for more than 95% of cases. This was based on an acceptability criterion and three choices for the lag Δ in equation (7) applied to the 107 discharge records. However, when studying the properties of the extremes, choosing the threshold ρ for each series individually appeared to be the preferable approach.

[67] This method is an advance on those used previously, as the hydrologist can, if so desired, ensure that the synthetic series also preserve other specified nonlinear properties. Of course, applications can readily be made to precipitation, groundwater, or other relevant hydrological data to enhance estimates for resource provision, or to test the behavior of process-based or statistical models for hydrological time series, along the lines of that presented by Keylock et al. [2011].

Appendix A: The Pinned Wavelet-Based Surrogate Data Algorithm

[68] The method used to generate the surrogate time series detailed in section 2.3 is based on that of Keylock [2007]. See Keylock [2010] for more detail on the method and its applications, as well as further information on the differences among the discrete wavelet transform, the continuous wavelet transform, and the MODWT. The algorithm for the pinned wavelet iterated amplitude adjusted Fourier transform makes use of the innovation introduced by Keylock [2006]. The standard IAAFT algorithm detailed in section 2.1 is applied to each scale, j, of an undecimated MODWT. Hence, because the values of the wavelet coefficients at each scale are preserved, there is no impact on their variance, which is proportional to the Fourier spectral energy in this frequency band [Percival and Walden, 2000]. Because the periodicities of the coefficients are preserved, the resulting coefficients are a legitimate realization from a wavelet filter at this scale. Given the energy function (equation (9)) and a choice for the threshold parameter, ρ, we may then refine this approach to control the extent to which the surrogates are randomized. The algorithm proceeds as follows (a simplified code sketch is given after the list):

[69] 1. Take the MODWT (we use a least asymmetric Daubechies wavelet [Daubechies, 1993] with a high number of vanishing moments for precision in scale) of a time series and, for a choice of ρ, determine which of the wavelet coefficients are fixed in place, as explained in section 2.3;

[70] 2. For a particular scale, j, determine if any of the N wavelet coefficients are to be fixed. If they are not, apply the IAAFT algorithm (section 2.1) to give a randomized realization of the coefficients at this scale. If they are:

(i) Fit a piecewise cubic Hermite polynomial exact interpolator [Fritsch and Carlson, 1980] through the fixed coefficients and the end values;

(ii) Add to this function the randomly shuffled, unfixed coefficients at this scale and use this as the starting point for the IAAFT algorithm;

(iii) Run the IAAFT algorithm until convergence, reimposing the fixed values in the correct positions at each rank-order matching step for that method.

[71] 3. Once all scales have been analyzed, invert the MODWT to produce a new time series;

[72] 4. Because the original values of the time series have been lost during these steps, we repeat the final steps of the IAAFT algorithm to ensure convergence:

(i) Take the Fourier transform of the surrogate series and replace the squared amplitudes with those for the original data, while retaining the phases for the surrogate series;

(ii) Replace the values in this new series by those from the original series using a rank-order matching process.

[73] Note that if exact preservation of the original data values and of the precise Fourier amplitude spectrum is deemed less important than ensuring that the target ρ is attained effectively (which may be a concern at values of ρ much higher than those used here), then this final IAAFT step may be abandoned.
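The sketch referred to above is given here, under several simplifications that should be stated plainly: PyWavelets' stationary wavelet transform (pywt.swt) is used as a stand-in for the MODWT (and requires the series length to be a multiple of 2^level); the fixed coefficients at each scale are simply taken to be the largest-magnitude ones carrying a fraction ρ of that scale's detail energy, rather than being selected via the energy function of equation (9); and the Hermite interpolation of step 2(i) is omitted. It is therefore a conceptual illustration of the algorithm's structure, not the published implementation.

```python
import numpy as np
import pywt


def iaaft(x, n_iter=100, fixed_idx=None, fixed_vals=None, rng=None):
    """IAAFT surrogate of x, optionally re-imposing pinned values at each iteration."""
    rng = np.random.default_rng() if rng is None else rng
    amplitudes = np.abs(np.fft.rfft(x))        # target Fourier amplitude spectrum
    sorted_vals = np.sort(x)                   # target value distribution
    y = rng.permutation(x)                     # random shuffle as the starting point
    for _ in range(n_iter):
        # impose the amplitude spectrum while keeping the current phases
        phases = np.angle(np.fft.rfft(y))
        y = np.fft.irfft(amplitudes * np.exp(1j * phases), n=len(x))
        # rank-order match back onto the original values
        y = sorted_vals[np.argsort(np.argsort(y))]
        # re-impose any pinned coefficients in their correct positions (step 2(iii))
        if fixed_idx is not None:
            y[fixed_idx] = fixed_vals
    return y


def pinned_wavelet_surrogate(x, rho=0.6, wavelet="sym8", level=4, n_iter=100):
    """Surrogate of x with the highest-energy detail coefficients pinned at each scale.

    len(x) must be a multiple of 2**level for pywt.swt.
    """
    coeffs = pywt.swt(np.asarray(x, dtype=float), wavelet, level=level)
    new_coeffs = []
    for cA, cD in coeffs:
        # simplified pinning rule: fix the largest coefficients holding a fraction
        # rho of this scale's detail energy (the paper selects them via equation (9))
        order = np.argsort(np.abs(cD))[::-1]
        cum_energy = np.cumsum(cD[order] ** 2) / np.sum(cD ** 2)
        fixed_idx = order[: np.searchsorted(cum_energy, rho) + 1]
        new_coeffs.append((cA, iaaft(cD, n_iter, fixed_idx, cD[fixed_idx])))
    y = pywt.iswt(new_coeffs, wavelet)
    # final step 4: re-impose the original amplitude spectrum and the original values
    phases = np.angle(np.fft.rfft(y))
    z = np.fft.irfft(np.abs(np.fft.rfft(x)) * np.exp(1j * phases), n=len(x))
    return np.sort(x)[np.argsort(np.argsort(z))]
```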

Acknowledgments

[74] The author is grateful to the associate editor and five referees for their critical comments that improved this manuscript and hopefully made it more intelligible to hydrologists.
