Geophysical Research Letters

Extended Kalman Filter framework for forecasting shoreline evolution


  • Joseph W. Long (corresponding author)
    U.S. Geological Survey Coastal and Marine Geology Program, St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida, USA
  • Nathaniel G. Plant
    U.S. Geological Survey Coastal and Marine Geology Program, St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida, USA

Corresponding author: J. W. Long, U.S. Geological Survey Coastal and Marine Geology Program, St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL 33701-4846, USA.


[1] A shoreline change model incorporating both long- and short-term evolution is integrated into a data assimilation framework that uses sparse observations to generate an updated forecast of shoreline position and to estimate unobserved geophysical variables and model parameters. Application of the assimilation algorithm provides quantitative statistical estimates of the combined model-data forecast uncertainty, which is crucial for developing hazard vulnerability assessments, evaluating prediction skill, and identifying future data collection needs. Significant attention is given to the estimation of four non-observable parameter values and to separating two scales of shoreline evolution using only one observable morphological quantity (i.e., shoreline position).

1. Introduction

[2] Coastal managers have an increasing need for predictions of shoreline evolution in order to evaluate vulnerability and protect coastal infrastructure, human safety, and habitats. Computationally efficient models are required that are capable of predicting the shoreline response to seasonal, storm, and longer-term forcing that either prograde or erode the beach on a variety of temporal and spatial scales. However, over time, prediction errors resulting from errors in (1) model parameterizations, (2) initial and (3) boundary conditions may grow, rendering a model prediction meaningless for management applications and vulnerability assessments. This necessitates that forecasts of shoreline evolution be based on the combination of a computationally efficient model (requiring a trade-off between the amount of process parameterization and an acceptable level of model detail) and on-going observations of shoreline position to guide, calibrate, and re-initialize the model forecast. Hence, a framework for the combination of these two pieces of information is needed. The framework must be capable of minimizing forecast error by using information contained in the model and the data, dynamically estimating unobservable, poorly constrained model parameters, separating important time scales of shoreline evolution pertinent for different management needs, and statistically quantifying forecast error.

[3] It is clear from the existing literature that progress in the development of empirical [e.g., Frazer et al., 2009] and process-based models [e.g., Yates et al., 2009; Roelvink et al., 2009] and observational techniques [e.g., Stockdon et al., 2002; Plant et al., 2007] has occurred and continues to occur. Rather than provide a complete review of shoreline models or observational techniques, here we develop a framework that efficiently combines model- and data-derived shoreline positions to generate more reliable forecasts as well as quantitative estimates of the forecast uncertainty. The three generic components of an assimilation framework of this type are (1) measured data that are updated occasionally, (2) a numerical model capable of predicting morphologic evolution, and (3) a formal assimilation scheme that can optimally blend (1) and (2). Assimilation methods vary in complexity but can help to estimate model parameters [e.g., Feddersen et al., 2004], boundary conditions [e.g., Wilson et al., 2010], and evolution rates (including changes in parameters/rates), as well as quantify the uncertainty in the forecasted state (e.g., shoreline position). Determining the uncertainty in the forecast provides guidance for planning purposes, identifies requirements for data collection (e.g., when uncertainty exceeds certain limits), and highlights shortcomings in the model formulation. As shown here, a data assimilation framework can provide more than an estimate of the shoreline position driven by a combination of processes that occur on different temporal scales (as would be seen by data alone). The method can separate the shoreline motions and essentially recast what is considered noise at one time scale (e.g., scatter in a linear regression model) into model skill when placed in the context of a forcing mechanism that acts on a different timescale.

2. Methods

2.1. Shoreline Change Model

[4] Empirical, equilibrium shoreline change models that relate wave conditions to shoreline change without explicitly modeling the complex physical process interactions make skillful predictions of observed shoreline change over time spans of several years at a temporal resolution of O(hours to days) [Miller and Dean, 2004; Yates et al., 2009; Davidson et al., 2010]. The models have 3 [Miller and Dean, 2004] or 4 [Yates et al., 2009] free parameters which all rely on observations for site-specific calibration and, when calibrated, can reproduce observations over O(years). These equilibrium models address the seasonal changes that occur in shoreline position, and to some degree the storm response. Long-term trends in position due to processes like sea-level rise or alongshore gradients in sediment transport are not explicitly considered but can be incorporated by the addition of a linear trend to the equilibrium change rate. The slope of the trend relies on a regression of historical data with no updates for future conditions [e.g., Davidson et al., 2010]. Long-term rates and parameter values that fit previous observations may, however, require continual updating due to possible changes in storminess, the rate of sea-level rise, or human intervention (e.g. coastal structures, nourishment).

[5] We selected the equilibrium shoreline evolution model of Yates et al. [2009] to include in our assimilation framework; however, we expand their approach by adding a long-term component (Xlt), formulated as a linear trend, which represents shoreline change related to processes that are not considered by equilibrium change models unless, for example, there is a long-term increase or decrease in wave energy [e.g., Ruggiero et al., 2010]. We define the shorter-term shoreline response (Xst) as the position and change in position driven on the timescale of changing wave energy (O(hours to days)), which is modeled with the equilibrium formulation. Hence, in the most basic form, the total shoreline position and change in position are expressed as

$$X = X_{lt} + X_{st} \tag{1a}$$
$$\frac{dX}{dt} = v_{lt} + C E^{1/2}\, \Delta E \tag{1b}$$

where vlt represents the long-term rate of change of shoreline position (assumed constant or slowly varying) and the second term in equation (1b) is the wave-driven rate of change of shoreline position given by Yates et al. [2009].

[6] Equilibrium theory (and the model applied here for short-term shoreline evolution) assumes that for a given wave energy (defined in Yates et al. [2009] as E = H², where H is the significant wave height), there exists a shoreline position such that the beach would remain in equilibrium (i.e., remain fixed under stationary wave forcing). In this particular model, ΔE = E − Eeq represents the disequilibrium of the existing short-term (wave-driven) shoreline position from the equilibrium energy (Eeq) expected for the instantaneous wave energy. Yates et al. [2009] define the equilibrium condition from historical observations as Eeq = aXst + b, where the free parameters a and b are the slope and y-intercept of the linear best-fit line relating surveyed shoreline positions to the average wave energy observed between surveys. Following the more recent work of Yates et al. [2011], who found only a 10% increase in root-mean-square error when reducing their model to three free parameters, we use a change rate coefficient (C) that does not vary between accretive and erosive conditions. This short-term evolution model has been applied to four different sites [Yates et al., 2009, 2011] with root-mean-square errors in hindcasted shoreline position of approximately 5 m and correlations between observed and modeled shoreline positions of R² = 0.61 to 0.94, indicating skill in predicting shoreline evolution.
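As a concrete illustration of the equilibrium dynamics just described, the short-term response can be integrated with a simple forward-Euler step. The sketch below is our own illustrative construction (the synthetic wave-energy signal, time span, and integration scheme are assumptions for demonstration, not the configuration of the experiments reported later):

```python
import numpy as np

def equilibrium_rate(X_st, E, C, a, b):
    """Wave-driven rate dX_st/dt = C * sqrt(E) * (E - E_eq), with E_eq = a*X_st + b."""
    return C * np.sqrt(E) * (E - (a * X_st + b))

# Forward-Euler integration over one month of hourly synthetic wave energy.
dt = 1.0                                           # time step [hr]
t = np.arange(24.0 * 30.0)                         # hours
E = 0.5 + 0.3 * np.sin(2 * np.pi * t / (24 * 7))   # toy energy record, E = H^2 [m^2]
X = np.zeros_like(t)                               # short-term shoreline position [m]
for k in range(1, t.size):
    X[k] = X[k - 1] + dt * equilibrium_rate(X[k - 1], E[k - 1],
                                            C=-1.25, a=-0.008, b=0.075)
```

With a < 0 and C < 0, the feedback on the position is stabilizing, so the shoreline relaxes toward the energy-dependent equilibrium X = (E − b)/a, eroding while the instantaneous energy exceeds the energy in equilibrium with the current position.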

2.2. Assimilation Algorithm

[7] Kalman filtering is a simple, computationally efficient, and widely used data assimilation method with extensions applicable to nonlinear problems [Kalman, 1960; Wan and Van Der Merwe, 2001]. Here, we use the joint extended Kalman Filter (hereinafter referred to as eKF), which uses the general Kalman Filter algorithm but performs a first-order linearization of the forecast equations at each time step [e.g., Kopp and Orford, 1963; Haykin, 2001]. Most recent applications of Kalman filtering techniques to coastal geophysical problems use ensemble approaches, which are necessitated by the complexity of the numerical models [e.g., Chen et al., 2009; Wilson et al., 2010]. Few, if any, studies have applied assimilative techniques to the range of simple predictive models that exploit empirical relationships between forcing and response (e.g., sand bars, dune erosion, wave runup) and that are needed to forecast at large spatial and temporal scales.

[8] Based on equation (1), there are three state variables (Xlt, vlt, Xst) and three parameters (C, a, b) that we aim to estimate by assimilating the model and the observations of instantaneous shoreline position. Concatenating these variables into one state vector, ψ, gives

$$\psi = \left[ X_{lt},\; v_{lt},\; X_{st},\; C,\; a,\; b \right]^{T} \tag{2}$$

To propagate each variable of the state vector through time we define a set of discrete state-space equations, f:

$$\dot{\psi} = f(\psi) = \left[ \dot{X}_{lt},\; \dot{v}_{lt},\; \dot{X}_{st},\; \dot{C},\; \dot{a},\; \dot{b} \right]^{T} = \left[ v_{lt},\; 0,\; C E^{1/2}\left( E - a X_{st} - b \right),\; 0,\; 0,\; 0 \right]^{T} \tag{3}$$

where the overdot represents the time derivative and k is the discrete time step index. The a priori state estimate is determined from $\psi_k^- = \psi_{k-1} + f(\psi_{k-1})\,\Delta t$, where the superscript − denotes the a priori quantity (not yet corrected by the eKF) and Δt is the discrete time step (such that t = t0 + kΔt). The a priori error covariance is given by

$$P_k^- = J_k P_{k-1} J_k^T + Q \tag{4}$$

where Q is the matrix of noise inherent in the model (“process noise”) which is assumed constant here, and J is the Jacobian matrix of partial derivatives of the state-space model with respect to ψ and implements the linearization required by the eKF:

$$J_{ij} = \frac{\partial\left[\psi_i + f_i(\psi)\,\Delta t\right]}{\partial \psi_j}\bigg|_{\psi_{k-1}} = \delta_{ij} + \Delta t\,\frac{\partial f_i}{\partial \psi_j}\bigg|_{\psi_{k-1}} \tag{5}$$

In equation (5), i and j represent the row and column indices. The measurement update equation for the state vector is

$$\psi_k = \psi_k^- + K_k \left( d_k - H \psi_k^- \right) \tag{6}$$

where ψk is the posterior (corrected) physical state. Equation (6) is actually the linear Kalman Filter measurement update equation, which can be applied here because our measurement equation (e.g., equation (1a)) is linear. The quantity in parentheses represents the difference between the observation, $d_k$, and the corresponding modeled state, $H\psi_k^-$, and is commonly referred to as the innovation. Note that the filter does not require that the observed state (total shoreline position, X) and the forecasted state be the same, only that they are linearly related by H. For this set of state-space equations, H = [1, 0, 1, 0, 0, 0], indicating that the observed shoreline should be compared to the sum of the forecasted short- and long-term positions. The innovation is weighted by the Kalman gain, which is computed using the following equation:

$$K_k = P_k^- H^T \left( H P_k^- H^T + R_k \right)^{-1} \tag{7}$$

Therefore, the innovation is weighted according to the error covariance of the predicted state vector, P, and the observation error covariance, Rk. For small values of Rk (very accurate measurements) the Kalman gain tends toward unity and the posterior state approaches the observation. Alternatively, when the observations are noisy or inaccurate and Rk is large, the forecast will be dominated by the model prediction. After the forecast has been updated with the available data, the error covariance of the posterior state (the state including information from both the model and the data) is updated by

$$P_k = \left( I - K_k H \right) P_k^- \tag{8}$$

where I is the identity matrix. At each time step when data are available, the eKF minimizes the mean-square error of the forecast (based on knowledge of model and data errors), and this posterior covariance quantifies the combined uncertainty that remains in the forecast.
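The complete predict/update cycle of section 2.2 can be condensed into a short script. The following is a minimal sketch (variable names and the use of NumPy are our own, not from the original study), with the nonzero partial derivatives written out analytically for this particular six-element state vector:

```python
import numpy as np

H = np.array([[1.0, 0.0, 1.0, 0.0, 0.0, 0.0]])  # observe X = X_lt + X_st

def f(psi, E):
    """State-space model f(psi): positions evolve; v_lt, C, a, b are held constant."""
    X_lt, v_lt, X_st, C, a, b = psi
    return np.array([v_lt, 0.0, C * np.sqrt(E) * (E - a * X_st - b),
                     0.0, 0.0, 0.0])

def jacobian(psi, E, dt):
    """Linearization of the discrete update psi + f(psi)*dt (cf. eq. (5))."""
    X_lt, v_lt, X_st, C, a, b = psi
    sqE = np.sqrt(E)
    J = np.eye(6)
    J[0, 1] = dt                               # dX_lt / dv_lt
    J[2, 2] += dt * (-C * sqE * a)             # feedback of X_st on itself
    J[2, 3] = dt * sqE * (E - a * X_st - b)    # sensitivity to C
    J[2, 4] = dt * (-C * sqE * X_st)           # sensitivity to a
    J[2, 5] = dt * (-C * sqE)                  # sensitivity to b
    return J

def ekf_predict(psi, P, E, dt, Q):
    """A priori state and covariance (eqs. (3)-(5))."""
    J = jacobian(psi, E, dt)
    return psi + f(psi, E) * dt, J @ P @ J.T + Q

def ekf_update(psi, P, d, R):
    """Assimilate one shoreline observation d with error variance R (eqs. (6)-(8))."""
    S = (H @ P @ H.T).item() + R             # innovation covariance (scalar)
    K = (P @ H.T / S).ravel()                # Kalman gain, eq. (7)
    psi = psi + K * (d - (H @ psi).item())   # measurement update, eq. (6)
    P = (np.eye(6) - np.outer(K, H)) @ P     # posterior covariance, eq. (8)
    return psi, P
```

Because the observation is a scalar here, the innovation covariance is a scalar and the gain computation requires no matrix inversion.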

3. Results

[9] The field-tested and calibrated model of Yates et al. [2009] and a dense observational time series of wave height were used to generate a synthetic time series of Xst. A 10-year wave height time series is taken from a buoy record that contains seasonal variations in wave energy along with characteristic noise (Figure 1). Given this time series, the synthetic shoreline position is determined using equation (1b) with a time step of 1 hour, vlt = 1.4 × 10⁻⁴ m/hr, C = −1.25 m hr−1/m3, a = −0.008 m2/m, and b = 0.075 m2. These are typical values from the multiple sites considered by Yates et al. [2009, 2011], and the result represents a potential time series of shoreline position given the input wave energy. The baseline, highly resolved, modeled shoreline is then subsampled to provide monthly shoreline positions, and normally distributed noise with a standard deviation of 0.5 m (typical horizontal error of GPS measurements) is added to each subsampled synthetic observation.

Figure 1.

(top) Time series of squared wave height (H2) and (bottom) simulated shoreline position using equation (1) with C = −1.25 m hr−1/m3, a = −0.008 m2/m, b = 0.075 m2.
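The observation-generation step described above (subsample the dense synthetic shoreline, then add survey noise) can be sketched as follows. The smooth stand-in "truth" signal here is a hypothetical placeholder; in the actual experiment it comes from integrating equation (1b) with buoy-derived wave energy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in hourly "truth": a weak linear trend plus a seasonal cycle [m].
t_hr = np.arange(24.0 * 365.0 * 2.0)               # two years, hourly
x_true = 1.4e-4 * t_hr + 3.0 * np.sin(2 * np.pi * t_hr / (24 * 365))

# Subsample monthly and add N(0, 0.5 m) noise, mimicking GPS survey error.
month = 24 * 30
idx = np.arange(0, t_hr.size, month)
x_obs = x_true[idx] + rng.normal(0.0, 0.5, size=idx.size)
```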

[10] The eKF is initialized with the following values for the initial state vector, the a priori error covariances, and the covariance of process noise (note that the initial vector represents a first-guess and is not equal to the initial conditions used to generate the synthetic time series):

$$\psi_0^- = \left[ X_{lt,0},\; v_{lt,0},\; X_{st,0},\; C_0,\; a_0,\; b_0 \right]^T, \qquad P_0 = \mathrm{diag}\left( p_1, \ldots, p_6 \right), \qquad Q = \mathrm{diag}\left( q_1, \ldots, q_6 \right) \tag{9}$$

[11] The optimal choices of Q and P depend on knowledge of the true process noise and error covariance, which are unknown. Our choice of the initial error covariance is based on published field results where the model has been implemented and represents how certain we are about the initial conditions in the state vector. We assume that an observation of shoreline position is available at t = 0, and the initial errors of the long- and short-term shoreline positions were set equal to the measurement noise. For the initial errors in the three parameters governing short-term shoreline change, we use twice the average standard deviation of the calibrated parameter values reported by Yates et al. [2009], except for the value of b, which is entirely site dependent and unknown and is assigned an error covariance of unity (i.e., high uncertainty). Finally, while we could have set the long-term rate to zero and assigned a high value of uncertainty, it is likely that at least a few past observations will be available to guide an initial estimate of the long-term rate [e.g., Hapke et al., 2006]. We assumed an error in the long-term rate of approximately twice the initial rate provided to the model, also indicating fairly high uncertainty. Because the long-term rate and the three free parameters in the short-term evolution model are typically assumed constant, we assign a small but finite amount of process noise (Q values in equation (9)); this mainly ensures filter stability. The impact of all these choices is discussed further in section 4.
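In code, the initialization amounts to choosing diagonal covariance matrices. The numbers below are hypothetical placeholders that mirror the reasoning above (measurement variance for the two positions, inflated spreads for the short-term parameters, unit variance for b, and small process noise), not the values actually used in the experiments:

```python
import numpy as np

sigma_obs = 0.5   # shoreline measurement noise [m]

# Initial error covariance for psi = [X_lt, v_lt, X_st, C, a, b].
P0 = np.diag([sigma_obs**2,      # X_lt: anchored by an observation at t = 0
              (2 * 1.4e-4)**2,   # v_lt: ~twice the first-guess rate (hypothetical)
              sigma_obs**2,      # X_st: also anchored by the t = 0 observation
              0.5**2,            # C: ~2x published parameter spread (hypothetical)
              0.005**2,          # a: ditto (hypothetical)
              1.0])              # b: site dependent, high uncertainty

# Small, constant process noise; mainly ensures filter stability.
Q = 1e-8 * np.eye(6)
```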

[12] The time history of the scale-separated shoreline position and model parameters is given in Figure 2. We show only the first half of the time series to highlight the convergence characteristics. The model alone, initialized with the incorrect physical conditions given in equation (9), would have produced an erroneous forecast of the shoreline position. However, when the monthly samples are assimilated with the eKF, the estimates of the model parameters and of the individual short- and long-term components of shoreline position converge to near the correct values within two years. The filtering routine was also able to extract the long-term shoreline position and rate, despite the model being initialized with an inaccurate value. Given the set of filter parameters used here, the long-term shoreline change rate required the longest convergence time; both the short-term shoreline position and the relationship between wave height and equilibrium shoreline position converged faster than the long-term trend. Once the parameter values converged on the true values, the uncertainty also converged to the minimum levels corresponding to the estimates of process noise provided to the eKF.

Figure 2.

Results from the model-data assimilation algorithm. (top to bottom) Long-term shoreline position (Xlt), long-term shoreline rate (vlt), short-term shoreline position (Xst), C, a, b with “true” (solid) and modeled (dashed) results and data (asterisks) used in the assimilation process. The shaded area represents the forecast uncertainty (i.e. bounds of the root-mean-square forecast error).

[13] We ran the numerical model (including the baseline model and the sampling of noisy observations) and assimilation routine ten times and averaged the convergence time over all ten runs. The average convergence times (with standard deviations) of vlt, C, a, and b were 27.6 (7.9), 4.0 (2.6), 13.7 (0.7), and 1.0 (0) months, respectively. Here, convergence is defined as the point in the time series after which all future values have a relative error of less than 20% of the true value.
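The convergence criterion used here (the first time after which every subsequent estimate is within 20% relative error of the true value) can be written as a small helper; this is a sketch with hypothetical names:

```python
import numpy as np

def convergence_time(estimates, truth, t, tol=0.2):
    """Return the earliest t[k] such that all estimates from k onward have
    relative error below `tol`; return None if the series never converges."""
    rel_err = np.abs(np.asarray(estimates, dtype=float) - truth) / abs(truth)
    within = rel_err < tol
    for k in range(within.size):
        if within[k:].all():
            return t[k]
    return None
```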

4. Discussion

[14] Applications of the eKF using a variety of choices for the process noise, Q, and error covariance, P, show that convergence occurs for almost all initial values, but at different rates. Convergence is also affected by the quality of the data, as can be seen in equation (7): increasing the data error term (R) decreases the Kalman gain and slows convergence. When poor-quality data are available the Kalman gain is small, and the eKF weights the forecast more toward the model estimate. Increasing the value of the process noise, Q, raises the lower limit of the forecast uncertainty (after convergence) and results in a forecast with increased variance. Also, there are correlations between parameters that allow some sub-optimal combinations of parameter estimates to perform well when the noise terms are larger or the sampling is sparser; this can be seen between b (the short-term equilibrium offset, which essentially shifts the time series up and down) and vlt (the long-term rate). We find that realistic values of the initial uncertainty of the model parameters are required; one cannot initialize with all parameters equal to zero, apply large values of initial error covariance, and expect the algorithm to converge. Too much error on too many parameters results in an unstable filter (convergence to an incorrect combination of parameters) for all sampling intervals longer than hourly observations of the shoreline and wave height inputs.

[15] The sensitivity to different sampling rates was examined by sampling the synthetic time series at intervals ranging from hourly to once every four years, with 18 different sampling rates in total. The error estimates of the parameters and shoreline positions are reduced over time by the assimilation of shoreline observations, regardless of the sampling rate. Four of the sampling rates (monthly, semiannually, annually, and biennially) are shown in Figure 3, illustrating the convergence characteristics. Each step decrease in the error indicates the reduction of forecast error due to information extracted from the data. The relative density of the data is apparent in the error estimates through the degree to which errors are reduced gradually (dense data) or in pronounced steps (sparse data). Note that even when sampling biennially, the parameters associated with the equilibrium shoreline position (a and b) converge the fastest (less than 5 years, only two data points). The erosion coefficient (C) cannot converge with such sparse observations and, hence, its error remains large. We note that at some sites, Yates et al. [2011] could not constrain this parameter to within an order of magnitude during accretionary periods because of the insensitivity of the model to changes in the parameter. For almost all sampling rates, and with the current values of process noise and initial error covariance, the long-term rate converges more slowly; a biennial sampling strategy would require more than 10 years of data (more than 5 points) because the algorithm focuses on reducing error in the short-term model, given our choices of P and Q.

Figure 3.

Forecasted error estimates from the Kalman filter for the parameters vlt, C, a, b. Line style indicates the data sampling rate: 1 month (dashed), 6 months (solid), 1 year (dotted), 2 years (dashed-dot).

[16] Kalman filters remain optimal estimators provided that the noise is normally distributed. While this assumption is common, the impact of violating it is not well understood for the majority of applications. Because the noise in a natural shoreline data set may not be normally distributed, we repeated the analysis presented here with both uniformly and Rayleigh-distributed noise and found no impact on the convergence characteristics.
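For reference, zero-mean uniform and Rayleigh noise matched to the 0.5 m Gaussian standard deviation can be generated as follows (the scale factors follow from the standard moment formulas for each distribution; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5       # target noise standard deviation [m]
n = 10000

gauss = rng.normal(0.0, sigma, n)

# Zero-mean uniform with matching std: half-width = sigma * sqrt(3).
unif = rng.uniform(-sigma * np.sqrt(3), sigma * np.sqrt(3), n)

# Zero-mean Rayleigh with matching std: var = (2 - pi/2) * scale^2 and
# mean = scale * sqrt(pi/2), so subtract the mean after drawing.
scale = sigma / np.sqrt(2 - np.pi / 2)
rayl = rng.rayleigh(scale, n) - scale * np.sqrt(np.pi / 2)
```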

5. Conclusions

[17] The joint eKF algorithm was applied to the process of shoreline change using a model consisting of long- and short-term shoreline dynamics. The eKF minimizes the mean square error in the predicted state using the available observations. Because it is a recursive filter, it is not necessary to store all of the prior information about the physical state. The data included in the filter can be non-uniform in space and time and inferred from different types of instruments with different noise variances (e.g., shorelines derived from historical photographs or ground surveys, remote sensing, etc.). By combining a process-based model and noisy observations of instantaneous shoreline position using the eKF, four parameters and two scales of shoreline evolution can be estimated from a single observable. Convergence of all six states/parameters occurs within two years given monthly observations (Figure 2) and within several years using biennial observations. Unlike previous methodologies, the approach shown here can explicitly account for temporal variations in parameters, indicates when the parameters have converged, and adds the estimate of a long-term trend, which is often neglected in equilibrium model studies. While most studies treat either long- or short-term evolution in isolation and caution against using calibrated models for long-term forecasts [e.g., Yates et al., 2011], our proposed Kalman filter method provides two advantages: (1) model parameters and states can be updated continuously in time and do not require constant values, and (2) uncertainty estimates quantify the confidence of the forecasts and parameter estimates and can guide data collection intervals and/or convey forecast credibility for use in coastal management. The method is computationally fast, can be applied over a long stretch of coast where parameters and processes are expected to vary, and can be run operationally such that forecast updates are produced as soon as new observations become available.


[18] This work was funded by the Mendenhall postdoctoral program at the U.S. Geological Survey. We thank Peter Howd for his review of multiple versions of the manuscript and two additional journal referees for their constructive comments.

[19] The Editor thanks two anonymous reviewers for their assistance evaluating this manuscript.