cBathy: A robust algorithm for estimating nearshore bathymetry

Abstract

[1] A three-part algorithm is described and tested to provide robust bathymetry maps based solely on long time series observations of surface wave motions. The first phase consists of frequency-dependent characterization of the wave field, in which dominant frequencies are estimated by Fourier transform while corresponding wave numbers are derived from spatial gradients in cross-spectral phase over analysis tiles that can be small, allowing high spatial resolution. Coherent spatial structures at each frequency are extracted by frequency-dependent empirical orthogonal function (EOF) analysis. In the second phase, depths are found that best fit weighted sets of frequency-wave number pairs. These are subsequently smoothed in time in the third phase using a Kalman filter that fills gaps in coverage and objectively averages new estimates of variable quality with prior estimates. Objective confidence intervals are returned. Tests at Duck, NC, using 16 surveys collected over 2 years showed a bias and root-mean-square (RMS) error of 0.19 and 0.51 m, respectively; errors were largest near the offshore limits of analysis (roughly 500 m from the camera) and near the steep shoreline, where analysis tiles mix information from waves, swash and static dry sand. Performance was excellent for small waves but degraded somewhat with increasing wave height. Sand bars and their small-scale alongshore variability were well resolved. A single ground truth survey from a dissipative, low-sloping beach (Agate Beach, OR) showed similar errors over a region that extended several kilometers from the camera and reached depths of 14 m. Vector wave number estimates can also be incorporated into data assimilation models of nearshore dynamics.

1. Introduction

[2] Bathymetry is probably the most critical variable for understanding and modeling the dynamics and variability of the nearshore. Coastal zone management decisions are usually based on understanding sediment budgets and the location and health of beach sand volumes, so they require bathymetry data directly [Davidson et al., 2007]. Prediction of nearshore ocean wave, current and morphologic conditions using a series of increasingly mature numerical models can only succeed if provided with an up-to-date, accurate bottom boundary condition tied to the current bathymetry. Such data are only rarely available.

[3] This need has spurred extensive research into methods for cheap and, for military purposes, clandestine measurements of nearshore bathymetry. Some of these methods are in situ such as traditional leveling, the use of bottom-contacting vehicles such as the Coastal Research Amphibious Buggy (CRAB) [Birkemeier and Mason, 1984] or global positioning system (GPS)-equipped jet-ski systems with attached fathometers [Dugan et al., 2001a]. These methods are accurate, but manpower intensive and expensive. Others are based on remote-sensing methods that exploit various depth signatures. For clear water where the bottom is visible, multi- or hyperspectral sensors show color variations that are correlated to depth in invertible ways [Mobley et al., 2005; Lyzenga et al., 2006] yielding approximate bathymetries from satellite or airborne data. LIDAR has become a powerful and popular tool for airborne sensing of clear waters, providing accurate, albeit expensive, measurements for extensive areas each time an overflight is carried out [Sallenger et al., 2003; Irish and Lillycrop, 1999].

[4] For many areas of the world the bottom is not visible, due either to turbidity or to bubbles in the surf zone, so the above methods do not work and depth must be estimated from ocean surface observables. van Dongeren et al. [2008] developed an assimilative methodology for the estimation of bathymetry based on a number of possible input data streams. Most commonly it exploits the dissipation patterns observed in Argus time exposure images [Lippmann and Holman, 1989] by finding the test bathymetry whose modeled dissipation best matches the Argus observations. The method is complex and requires a quality control procedure for input images but produces good results in the vicinity of the surf zone where dissipation dominates. However, the algorithm is also capable of ingesting estimates of wave celerity and shoreline topography [Plant and Holman, 1997] collected from either optical or radar remote sensors.

[5] Alternately, a number of authors have tested methods based just on the relationship of wave celerity, c, to depth, mathematically described by the dispersion relationship

σ² = g k tanh(kh)    (1)

where σ is the radial frequency (2π divided by the period, T), k the radial wave number (2π divided by the wavelength, L), g the acceleration due to gravity and h the depth; currents and finite amplitude effects have been neglected. This idea was first investigated around World War II when sequences of air photos of enemy-held beaches were manually analyzed to determine wavelengths, wave periods and inferred depths [Williams, 1947]. However, analysis was tedious and expensive and the results poor due to the nonmonochromatic nature of most seas, the difficulty of accurately geolocating images, the sensitivity of the dispersion relationship and the noisy nature of optical images of ocean waves (discussed later).
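Equation (1) can be inverted in closed form for depth whenever σ² < gk. A minimal sketch follows (in Python, which is not the operational implementation; the function name is illustrative):

```python
import numpy as np

G = 9.81  # acceleration due to gravity (m/s^2)

def depth_from_dispersion(freq_hz, k):
    """Invert sigma^2 = g k tanh(k h) for depth h, given frequency (Hz)
    and radial wave number k (rad/m). Returns NaN if the pair implies
    deep-water conditions (sigma^2 >= g k), where depth is unresolvable."""
    sigma = 2.0 * np.pi * freq_hz
    ratio = sigma**2 / (G * k)
    if ratio >= 1.0:
        return np.nan
    return np.arctanh(ratio) / k

# Example: a 10 s wave observed with a 70 m wavelength implies a depth near 5.4 m
print(depth_from_dispersion(0.1, 2.0 * np.pi / 70.0))
```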

[6] Recent decades have seen a renewed interest in this approach due to the importance of bathymetry data to nearshore modeling and to improvements in remote sensing data availability and in signal processing methods to extract information from noisy data. Stockdon and Holman [2000] used frequency-domain empirical orthogonal function (EOF) analysis to estimate wave number as the gradient of the phase of the first EOF for a dominant frequency. The input time series data were taken from 1-D cross-shore arrays of pixels from Argus cameras [Holman and Stanley, 2007] with corrections for non-normal incidence based on directional estimates from small alongshore lag arrays. Results were reasonable [bias and root-mean-square (RMS) error of 0.35 and 0.91 m, respectively] although the signal processing and systematic use of data were less rigorous than in the present work. Piotrowski and Dugan [2002] and Dugan et al. [2001b] more formally extended the analysis to both horizontal dimensions using data from a specially adapted airborne camera system (the Airborne Remote Optical Spotlight System (AROSS)) to measure optical signals over a large nearshore region, while Trizna and others tried similar approaches using X-band radar [e.g., Trizna, 2001]. Results were good (RMS errors were 5-10% of local depth) and included the capability to simultaneously measure currents from their high-frequency Doppler shift. But the method was based on spatial Fourier transforms of the observed image data, so spatial resolution was limited by the requirement of 256 m analysis tiles. Plant et al. [2008] developed and tested two new methods for the robust estimation of ocean wave number from which bathymetry could be derived. Like Stockdon and Holman [2000], the methods were only one-dimensional (1-D, usually cross shore). The first approach was posed as a tomographic problem based on the travel time between all possible pairs of pixels in a 1-D transect. This approach was shown to provide high spatial resolution, a factor of 10 better than Fourier approaches, and could resolve bathymetric features with horizontal scales that are at least 10 times the local depth. Objective error predictions were also computed. A second approach found wave numbers based on spatial gradients of Fourier phase at a set of frequencies, the same basis as is used in this paper but previously only in 1-D. Senet et al. [2008] suggested an alternate algorithm based on complex 3-D fast Fourier transforms (FFTs) that allows for spatial inhomogeneity and so permits better resolution than standard FFT methods.

[7] The goal of this paper is to extend previous work with an algorithm that is fully two-dimensional but retains the high-resolution capabilities of non-Fourier spatial methods. The method must be robust to noise and unanticipated signals like sun glare, passing clouds and rain spots on lenses, and must return confidence estimates that can be used in downstream products such as operational nearshore prediction models. However, in contrast to van Dongeren et al. [2008], this algorithm will not depend on these models. A key aspect of this model that will distinguish it from previous work is the implementation of a Kalman filter formalism (section 2.3) to allow statistically robust integration of new estimates of variable quality with a prior running average. This component requires not only the development of good confidence intervals for new estimates, but also a realistic “process error” function to represent the slow degradation of prior estimates due to ongoing wave action and sediment transport.

[8] The next section describes the nature of the signal processing problems we must address and the details of the algorithm. This will be followed by a description of extensive tests against high-quality survey data from Duck, North Carolina, USA and one survey from Agate Beach, Oregon, USA. This will be followed by discussion and conclusions. The algorithm has been called cBathy due to the primary role of wave celerity, c, in the estimation of bathymetry. But estimates of wave number and wave angle are also returned for a set of desired frequencies.

2. The cBathy Algorithm

[9] The problem of estimating ocean wave properties from optical signals can be surprisingly challenging. Walker [1994] showed that for waves outside the surf zone viewed at the typical low graze angles of coastal cameras, the primary source of light from the ocean comes from specular reflection of skylight by the sea surface and the primary source of wave contrast is variations in sea surface slope and the associated slope dependence of the optical reflection coefficient. Since sea surface slope depends on the wave amplitude, a, times the wave number, k, this mechanism is dominated by high wave number (short wavelength) waves, or ocean chop. This is apparent when viewing any ocean scene. Human observers instinctively filter the observed wave patterns to see the longer, coherent incident wave pattern while ignoring the short wave clutter, but a computer algorithm must understand and properly deal with these sources of noise. In the following algorithm, this will be done through both frequency domain methods (temporal Fourier transforms) and through coherence and EOF-based filtering.

[10] While the disadvantage of optical data is the high noise level, the advantage is the huge volume of data that is available at very low cost. A single camera can deliver around 35 MB per second, a data rate well beyond what is needed for wave characterization and one that requires extensive data reduction. The cBathy analysis described below for the case of Duck, NC, strives to estimate bathymetry over a 420 by 1000 m region with a spatial resolution of 10 by 25 m in the cross shore (x) and alongshore (y), respectively, for a total of 1763 points. The analysis is based on optical intensity time series data [Holman and Stanley, 2007] that are collected at approximately 8600 locations (5 by 10 m spacing) over the same region, a reduction by decimation to 0.14% of the available pixels from the five Argus cameras that span this site. Temporal sampling is done at 2 Hz, a further reduction by a factor of 15 over typical 30 Hz video rates, for record lengths of 1024 s, each hour. Even with this reduction of 4 orders of magnitude in available data usage, 17.6 million intensity samples are collected for each data run, so that there are approximately 10,000 degrees of freedom for every individual bathymetry estimate. Thus, robust signal processing opportunities are available.
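The data-volume arithmetic above can be checked directly from the Duck sampling parameters quoted in the text; a small sketch:

```python
# Rough data-volume arithmetic for the Duck configuration described above.
n_pixels = 8600                  # pixel time series (5 by 10 m spacing)
fs = 2.0                         # sampling rate (Hz)
duration = 1024                  # record length (s)
n_samples = n_pixels * fs * duration                # ~17.6 million samples per run

n_analysis = (420 // 10 + 1) * (1000 // 25 + 1)     # ~1763 analysis points (10 by 25 m)
print(n_samples, n_samples / n_analysis)            # ~10,000 samples per depth estimate
```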

[11] Figure 1 shows the typical pixel sampling array described above (decimated by 2 to reduce clutter) with the blue dots each corresponding to the locations, (x, y), of available 17 min pixel time series data. The analysis is carried out sequentially at a series of user-selected analysis points, [x_m, y_m], one of which is indicated by the red asterisk, and is based on data from the immediately surrounding pixels (green points) within a user-specified range, (L_x, L_y). Within each such tile, the goal is to estimate the wave number, k, for each of a set of candidate frequencies, f, that span the incident wave band (taken here as periods between 4 and 18 s). For each (f, k) pair, a depth, h̃(f), can then be estimated using equation (1), or a single depth, ĥ, can be determined that best fits all frequencies. Estimates may be poor or impossible at times due to weather, sun glare or calm seas, so estimates from hourly data collections are objectively averaged to yield a stable running average depth, h̄. Thus, the final cBathy analysis at each point consists of three stages:

[12] (1) Frequency-dependent analysis of k(f) and h̃(f), where α, the wave angle, is a collateral product.

[13] (2) Frequency-independent estimation of the best single depth, ĥ.

[14] (3) Estimation of the running-averaged depth, h̄.

[15] Each stage is now discussed.

Figure 1.

Example pixel array used for cBathy analysis. The 8600 pixels (half shown) span a 420 by 1000 m region with 5 by 10 m resolution. For each analysis point (example shown by red asterisk), depth is estimated based on cross-spectral phase within a nearby region (green pixels). The background image is a rectified snapshot that merges views from the five available cameras.

2.1. Phase 1: Frequency-Dependent Analysis

[16] The first step is to Fourier transform the optical intensity time series at each pixel, I(x, y, t), yielding complex Fourier coefficients G(x, y, f). Because our interest is in modeling wave phase and neglecting spatial variations in magnitude, the Fourier coefficients are then normalized to unit magnitude, G̃ = G/|G|. The full data set is then subsampled to a local data tile in the region [x_m ± L_x, y_m ± L_y] (example green region in Figure 1) and the cross-spectral matrix computed between all possible pixel pairs for each of the desired frequency bands,

C(x_i, x_j, f) = ⟨ G̃(x_i, f) G̃*(x_j, f) ⟩    (2)

where superscript * indicates the complex conjugate and the expected value is averaged across each frequency band.
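A minimal numpy sketch of this step, assuming detrended intensity time series for a single tile and simple boxcar averaging over adjacent raw FFT bins (the operational code is a separate Matlab implementation; the function name and the band_width parameter are illustrative):

```python
import numpy as np

def cross_spectral_matrix(I, dt, band_width=3, fmin=1.0/18.0, fmax=1.0/4.0):
    """Phase-only cross-spectral matrix between all pixel pairs (equation (2)).

    I  : array (n_pixels, n_samples) of detrended intensity time series
    dt : sampling interval (s)
    Returns band-center frequencies and C with shape (n_bands, n_pixels, n_pixels).
    """
    n_pix, n_t = I.shape
    G = np.fft.rfft(I - I.mean(axis=1, keepdims=True), axis=1)
    G = G / np.abs(G)                                  # keep phase only
    f = np.fft.rfftfreq(n_t, d=dt)

    keep = np.where((f >= fmin) & (f <= fmax))[0]      # incident band, 4-18 s periods
    bands = [keep[i:i + band_width]
             for i in range(0, len(keep) - band_width + 1, band_width)]
    f_bands = np.array([f[b].mean() for b in bands])
    # C[f, i, j] = < G(x_i, f) G*(x_j, f) >, averaged over the bins in each band
    C = np.array([np.einsum('ik,jk->ij', G[:, b], np.conj(G[:, b])) / len(b)
                  for b in bands])
    return f_bands, C
```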

[17] For complex natural seas, the cross-spectral matrix can mix the effects of multiple wave trains from different directions. To extract only coherent motions from this mix, the dominant (complex) eigenvector, v, and associated eigenvalue, λ, are extracted from C. We define the optimum wave number, k, and wave direction, α, as those values that yield the best match between observed and modeled spatial phase structure of v based on a forward model

ṽ′(x, y) = exp{ i [ k (x cos α + y sin α) + φ ] }    (3)

where the search is accomplished using Matlab routines based on the Levenberg-Marquardt algorithm. The scalar phase angle, φ, is of no geophysical value in the subsequent analysis and simply provides an appropriate phase shift to match the observed spatial structure of v.

[18] The eigenvector, v, will usually have a spatially variable magnitude, |v|, that can serve as an appropriate weighting for the search cost function. In addition, the need to localize the cost function to the vicinity of the analysis point, (x_m, y_m), within each tile was accomplished by multiplying the cost function by an additional Hanning filter weighting, Γ, centered on the analysis point, where Γ has magnitude 0.5 at argument 0.5 and goes to zero at argument 1.0. Thus the nonlinear search for the optimum wave number and wave angle at each point is accomplished by minimizing the weighted error between predicted and observed values of v, where the weighting function is given by w = |v| Γ.
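The eigenvector extraction and weighted phase fit might look like the following sketch, with scipy's Levenberg-Marquardt solver standing in for the Matlab routine; the radial form of the Hanning taper and the starting guesses are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_wavenumber(C_f, x, y, xm, ym, Lx, Ly, k0=2*np.pi/100.0, alpha0=0.0):
    """Fit k and alpha to the phase structure of the dominant EOF of one
    frequency's cross-spectral matrix (equation (3)). Weight w = |v| * Hanning."""
    lam, V = np.linalg.eigh(C_f)               # C_f is Hermitian
    v = V[:, -1]                                # dominant complex eigenvector
    lam_norm = lam[-1] / lam.mean()             # normalized eigenvalue, lambda-tilde

    # Hanning-like taper: 0.5 at half the tile scale, zero at the tile edge
    r = np.sqrt(((x - xm) / Lx)**2 + ((y - ym) / Ly)**2)
    hanning = np.where(r < 1.0, 0.5 * (1.0 + np.cos(np.pi * r)), 0.0)
    w = np.abs(v) * hanning
    v_phase = v / np.abs(v)

    def residuals(p):
        k, alpha, phi = p
        model = np.exp(1j * (k * (x * np.cos(alpha) + y * np.sin(alpha)) + phi))
        err = w * (v_phase - model)             # match spatial phase structure only
        return np.concatenate([err.real, err.imag])

    sol = least_squares(residuals, [k0, alpha0, 0.0], method='lm')
    k_fit, alpha_fit, _ = sol.x
    return k_fit, alpha_fit, lam_norm
```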

[19] From the derived wave numbers for each frequency, an equivalent depth, h̃(f), is estimated using equation (1). In addition, for each result, 95% confidence intervals are computed for the depth, wave number and wave angle, and the skill of the fit, s, the number of degrees of freedom (a function of w) and the normalized eigenvalue, λ̃ (the eigenvalue divided by the average eigenvalue), are recorded. λ̃ is used both as part of the weighting of the different frequencies in the phase 2 depth estimation and in a quality control role to determine if an analysis result exceeds a minimum acceptable signal level.

[20] The appropriate spatial scales for smoothing, L_x and L_y, will depend on cross-shore distance, x. Close to the shore, short scales are common so the tile size and Hanning filter size can be small, whereas natural length scales of variability offshore will be larger so more smoothing is allowed. To accommodate this spatial variability, L_x and L_y are linearly increased between the inner and outer limits of the analysis domain by a user-selected factor, κ, so that the maximum stretch at the outer domain boundary is κL_x and κL_y. This increase in tile size will result in the inclusion of an increasing number of pixel locations and an associated worsening of run-time speeds, since the number of calculations varies as the square of the number of pixels. To maintain run-time speed, the number of pixels per tile was limited to a maximum number, with the excess removed by decimation. Appropriate values for the maximum number of pixels per tile are based on the requirement to sample the phase map, v(x, y), well enough to estimate its spatial gradient.
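One plausible implementation of the tile stretching and decimation, assuming a simple linear ramp between the inner and outer analysis limits (the exact functional form and the pixel cap are not specified above, so both are illustrative):

```python
import numpy as np

def stretched_scales(x, x_inner, x_outer, Lx0, Ly0, kappa=2.0):
    """Smoothing scales grow linearly from (Lx0, Ly0) at x_inner to
    (kappa*Lx0, kappa*Ly0) at x_outer."""
    frac = np.clip((x - x_inner) / (x_outer - x_inner), 0.0, 1.0)
    stretch = 1.0 + (kappa - 1.0) * frac
    return Lx0 * stretch, Ly0 * stretch

def decimate_tile(pixel_indices, max_pixels=1500):
    """Cap the number of pixels per tile by regular decimation."""
    if len(pixel_indices) <= max_pixels:
        return pixel_indices
    step = int(np.ceil(len(pixel_indices) / max_pixels))
    return pixel_indices[::step]
```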

[21] Phase 1 analysis can provide results for any of the candidate frequencies, f. However, ocean waves will typically span only a subset of the possible frequencies on any day, with other frequencies providing no useful signal. To retain bandwidth while minimizing unproductive computation, cBathy only considers the frequency bands with the largest total coherence over the tile, retaining a user-defined number of bands (commonly four).
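A small sketch of this band selection, assuming (as an illustration, not the paper's exact measure) that total coherence is the sum of cross-spectral magnitudes over all pixel pairs:

```python
import numpy as np

def select_bands(f_bands, C, n_keep=4):
    """Retain the n_keep frequency bands with the largest total coherence."""
    total_coherence = np.abs(C).sum(axis=(1, 2))
    order = np.argsort(total_coherence)[::-1][:n_keep]
    return f_bands[order], C[order]
```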

[22] To avoid returning aphysical values, several quality control checks are implemented. Estimates for which the skill, s, is less than a tolerance (usually taken as 0.5) or for which the first EOF does not return a high fraction of the variance (normalized eigenvalue λ̃ less than a minimum acceptable value, Table 1) are not accepted. Similarly, returned phase 1 depths, h̃(f), greater than a user-specified maximum (15 m) or less than a minimum depth (0.25 m) are not accepted. Since cBathy estimates depth, not bathymetry, tidal elevations must be subtracted from estimates to yield depths relative to a fixed tidal datum.

2.2. Phase 2: Frequency-Independent Depth Estimation

[23] Phase 1 algorithms provide a suite of frequency-dependent wave number and depth estimates, each with confidence intervals. The goal of phase 2 is to objectively combine these to yield a single depth estimate at each analysis point along with error information. One method to do this would be to simply average the phase 1 depths, but since the dispersion relationship is nonlinear, this would introduce bias (thus phase 1 depths are only used for diagnostic and quality control purposes). Instead, we estimate the single depth value that provides the best weighted fit of the dispersion relationship (1) to all frequency-wave number information. Since bathymetry is expected to vary with a typical scale of L_x, this fit can include information from adjacent analysis locations in a way that is weighted by distance from the estimation location through a Hanning filter, Γ. The final depth estimate is the value that yields the best fit between modeled and observed wave numbers, i.e., between phase 1 observations of k and those predicted by equation (1) (solved iteratively) for each frequency and candidate depth in the nonlinear search. A weighted fit is used (i.e., the fit variable is k w₂, where w₂ is the phase 2 weighting function). Solution again uses a Matlab Levenberg-Marquardt algorithm and returns both the best fit depth and 95% confidence intervals.

[24] The weighting function, w₂, could depend on the distance through Γ, the skill of each fit, s, and the importance of the first EOF, expressed by λ̃. We chose to use the product of all three, so w₂ = Γ s λ̃. Because low-skill and low-eigenvalue fits have already been culled, this weighting is most dependent on distance, then eigenvalue magnitude, then skill.
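A sketch of the phase 2 fit, using a Newton solution of the dispersion relation and a bounded least squares search in place of the Matlab routine (function names and the iteration count are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

G = 9.81  # m/s^2

def k_from_dispersion(sigma, h, n_iter=20):
    """Solve sigma^2 = g k tanh(k h) for k with Newton iterations."""
    sigma = np.asarray(sigma, dtype=float)
    k = sigma**2 / G                                 # deep-water first guess
    for _ in range(n_iter):
        t = np.tanh(k * h)
        f = G * k * t - sigma**2
        fp = G * t + G * k * h * (1.0 - t**2)
        k = k - f / fp
    return k

def phase2_depth(freqs, k_obs, w2, h0=2.0, hmin=0.25, hmax=15.0):
    """Single best-fit depth from weighted (f, k) pairs gathered from the
    analysis point and its neighbors; w2 ~ Gamma * s * lambda-tilde."""
    sigma = 2.0 * np.pi * np.asarray(freqs)

    def residuals(p):
        return w2 * (np.asarray(k_obs) - k_from_dispersion(sigma, p[0]))

    sol = least_squares(residuals, [h0], bounds=([hmin], [hmax]))
    return sol.x[0]
```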

[25] Phase 2 depths, like those of phase 1, must be tide corrected to yield depths relative to the tidal datum.

2.3. Phase 3: Running-Average Depth Estimation

[26] Any long-term analysis must be robust to a variety of data failures including temporary loss of view (camera failure, fog or obscuring raindrops) or low signal to noise (an absence of waves or signal saturation due to sun glare). These problems can cause the loss of part or all of a single cBathy bathymetry. Similarly, the outer edge of the surf zone is known to be a difficult domain, since modulating wave amplitudes can be seen alternately as breaking or nonbreaking, resulting in signals with low spatial coherence. These failures are usually limited in time or space, and gaps can be filled with better estimates at different times or stages of the tide. The goal of phase 3 analysis is to compute a running average that smooths individual (hourly) estimates in a way that objectively weights the confidence in the new estimate against that of the prior running average. This is a Kalman filtering problem and the method follows simple Kalman filtering theory [Kalman, 1960].

[27] In contrast to previous phases, the Kalman filtering is carried out only in the time domain. Thus, at any location, if our current depth estimate and its error standard deviation are ĥ_k and σ_k, and our prior (running average) estimates were h̄_(k−1) and σ̄_(k−1), then the Kalman filter updates the prior as

h̄_k = h̄_(k−1) + K ( ĥ_k − h̄_(k−1) )    (4)

[28] Subscripts k and k−1 represent an adjacent pair of sampling times. Thus, the running average estimate is updated from time k−1 to time k by the innovation (new information, parenthesized component) times a Kalman gain, K, that compares the believability of the new estimate with that of the prior. If K = 0, the new estimate makes no contribution whereas if K = 1, the prior estimate is ignored. The Kalman gain is found by

K = P_k⁻ / ( P_k⁻ + R )    (5)

where P and R are the error variances of the running average and of the current estimate, computed as P = σ̄² and R = σ_k², respectively. The superscript (−) on the prior error variance indicates that it is an estimate of the variance of the estimated depth at time k, but prior to adding in the new information. This estimate is given by the prior variance plus an increase, Q, that has occurred between estimates k−1 and k, presumably due to unmodeled sediment transport processes. Thus,

P_k⁻ = P_(k−1) + Q Δt    (6)

where Δt is the time interval in days between estimates. An appropriate form for this process error, Q, is discussed below. After the k-th averaging step has taken place (equation (4)), the estimated error variance is updated depending on the Kalman gain (equation (5)) as

P_k = ( 1 − K ) P_k⁻    (7)
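Equations (4)-(7) amount to a scalar Kalman update at each grid point; a direct transcription (illustrative only, and applicable element-wise to gridded arrays as well as scalars):

```python
def kalman_update(h_bar_prev, P_prev, h_new, R_new, Q_rate, dt_days):
    """One Kalman step for the running-average depth.
    h_bar_prev, P_prev : prior running-average depth and its error variance
    h_new, R_new       : current hourly estimate and its error variance
    Q_rate, dt_days    : process error growth rate (m^2/day) and elapsed days
    """
    P_minus = P_prev + Q_rate * dt_days              # equation (6): prior variance grows
    K = P_minus / (P_minus + R_new)                  # equation (5): Kalman gain
    h_bar = h_bar_prev + K * (h_new - h_bar_prev)    # equation (4): update with innovation
    P = (1.0 - K) * P_minus                          # equation (7): updated error variance
    return h_bar, P
```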

[29] The process error, Q, represents unmodeled natural variability, the depth changes that occur under the action of waves and currents. While van Dongeren et al. [2008] suggested that Q must grow with elapsed time, there is little guidance in the literature for the form of Q, including potential dependencies on wave height and cross-shore location. Nor is there much bathymetry data available with sufficient temporal and spatial sampling to estimate daily variability under a range of wave conditions.

[30] The best set of natural data for this task is the set of 36 almost daily bathymetric surveys collected over 39 days (22 September 1997–30 October 1997) during the SandyDuck field experiment held at Duck, NC, USA (http://www.frf.usace.army.mil/SandyDuck/SandyDuck.stm). Surveys were carried out using the CRAB [Birkemeier and Mason, 1984] equipped with Real Time Kinematic (RTK) GPS, yielding a vertical accuracy of approximately 5 cm and an alongshore spacing of cross-shore transects of 25 m. The three nonsurvey days were break days when conditions were calm (Hmo < 0.4 m); data for those days were interpolated. Raw survey data were loess-interpolated to a regular grid spanning 500 by 1000 m (cross-shore by alongshore) with a sample spacing of 10 by 25 m, respectively. Profile variability was estimated from the mean-square bathymetric deviation

Q(x, t) = { [ h(x, y, t + Δt) − h(x, y, t) ]² } / Δt    (8)

in units of m²/day. Averaging, indicated by the brace brackets, is over the alongshore.
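Equation (8) translates directly into a few lines; a sketch assuming two gridded surveys on a common (y, x) grid:

```python
import numpy as np

def empirical_process_error(h_t0, h_t1, dt_days=1.0):
    """Q(x) in m^2/day from two depth grids of shape (n_y, n_x),
    averaged over the alongshore (rows), as in equation (8)."""
    return np.nanmean((h_t1 - h_t0)**2, axis=0) / dt_days
```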

[31] Figure 2 shows the resulting measured process error, Q(x, t), along with the wave height for the entire SandyDuck experiment. The peak variability (lower plot) is located seaward of the shoreline (typically at x = 100 m), is associated with variations of a sand bar centered near x = 150 m and appears to be correlated with wave height (upper plot). Variability tapers both seaward and landward in a roughly Gaussian form. Thus we modeled Q in the following form

Q(x, Hmo) = C_Q Hmo^n exp[ −(x − x₀)² / (2 σ_x²) ]    (9)
Figure 2.

Process error, Q, found from bathymetric variability during the SandyDuck field experiment, 1997 (bottom plot). The upper plot shows the corresponding offshore wave height.

[32] For bathymetries collected during SandyDuck, variability was limited to a fairly narrow region centered on the sand bar near x = 150 m. But to accommodate known interannual variability in bar position [Lippmann et al., 1993], the spread of the exponential, σ_x, was chosen to be much larger. The statistics of both linear and quadratic wave height dependencies were found to be comparable, so we chose n = 2 to be more consistent with sediment transport dependencies. The best fit value of C_Q was 0.067/day, chosen to match the cross-shore maximum in variability on each day.
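The modeled process error of equation (9) is then, as a sketch (C_Q and n follow the values quoted above; the center x0 and spread sigma_x are illustrative placeholders rather than the tuned values):

```python
import numpy as np

def process_error_model(x, Hmo, CQ=0.067, n=2, x0=150.0, sigma_x=100.0):
    """Wave-height-dependent, Gaussian-in-x process error (equation (9)), m^2/day."""
    return CQ * Hmo**n * np.exp(-((x - x0)**2) / (2.0 * sigma_x**2))
```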

3. Ground-Truth Tests

[33] The cBathy algorithm was tested at two locations. Extensive tests were carried out at the Field Research Facility (FRF) at Duck, NC, USA, using 16 accurate surveys collected from 2009 to 2011. The algorithm was also tested against a single 2011 survey at Agate Beach, Oregon, USA, a very different environment. In all cases, optical data collection was run hourly for an extensive period; for the Duck cases, cBathy analysis was carried out from 3 days prior through the day of each survey (i.e., Kalman smoothing was initialized 3 days prior to the survey). Kalman seeding for Agate Beach is discussed below.

3.1. Test Results From Duck, NC

[34] The pixel array at Duck is illustrated in Figure 1 and spans a 420 by 1000 m region (cross shore by alongshore) with 5 by 10 m spacing. The fact that the resulting 8600 pixels are spread across five cameras is not a problem, since magnitude differences associated with different camera gains are normalized out in the analysis (section 2.1). At each pixel, time series are collected hourly for 1024 s at 2 Hz. Values of analysis and time series collection parameters are shown in Table 1. The role of sampling array design (selecting an appropriate set of analysis locations, [x_m, y_m]) in resolving bathymetric features at different scales and in modeling the associated hydrodynamics is described in Plant et al. [2009]. Values for L_x and L_y determine the filter attenuation cutoff of the method (morphological features shorter than twice these scales will be attenuated in amplitude by half) and are partially determined by the need to have enough pixel locations per analysis tile (around 50) to constrain the least squares solution in phase 1. Plant et al. [2008] note that features shorter than 10 times the water depth cannot be resolved by these methods, placing a constraint on the shortest supportable length scales for this method.

Table 1. Parameters Involved in the Collection and Analysis of cBathy Data at Duck, NC
Parameter | Value | Description
Δx_p | 5 m | Pixel cross-shore spacing
Δy_p | 10 m | Pixel alongshore spacing
N_pixels | 8600 | Total number of pixel time series
δt | 0.5 s | Time series sampling interval
τ | 1024 s | Pixel time series record length
Δx_m | 10 m | Cross-shore analysis point spacing
Δy_m | 25 m | Alongshore analysis point spacing
L_x | 20 m | Analysis smoothing scale in x
L_y | 50 m | Analysis smoothing scale in y
κ | 2 | Smoothing scale expansion at outer boundary
h_min | 0.25 m | Minimum acceptable depth
h_max | 15 m | Maximum acceptable depth
s_min | 0.5 | Minimum acceptable skill, phase 1
λ̃_min | 10 | Minimum acceptable normalized eigenvalue, phase 1
f | 1/18 to 1/4 Hz | Analysis frequency bins
N_keep | 4 | Number of frequency bins to retain

[35] Wave height data used in the Kalman filter were derived from the 8 m array (FRF instrument 3111) while tidal elevation data were obtained from a NOAA tide gage located at the end of the FRF pier.

[36] Figure 3 shows a good result (fourth lowest RMS error of the 16 tested). The left plot, a CRAB survey, is the best available ground truth [Birkemeier and Mason, 1984]. CRAB data were collected along cross-shore transects, spaced regularly in the alongshore, then interpolated to the analysis locations, (x_m, y_m). The middle plot shows the cBathy Kalman filter result, h̄, that most closely corresponded to the middle of the CRAB survey (assumed to be noon, by default). The right plot shows the difference (survey minus cBathy) between the two estimates. In the central region of the domain a research pier crosses the beach, blocking camera views; this is also a region of known bathymetric anomaly due to scour processes [Miller et al., 1983]. Thus, results from locations closer than 100 m to the pier (marked by the black line in the figure) are shown but will be omitted from further statistical analysis. Similarly, subaerial regions for which no estimates are returned, and data for which the estimated error exceeded 0.5 m (near the shoreline), are omitted from the following statistical analyses.

Figure 3.

Example cBathy bathymetry product (center plot) compared to ground truth (left plot). The right plot shows the cBathy error. All depths are in meters.

[37] The cBathy estimates are excellent. Most errors are less than 0.5 m (right hand plot). The bias and RMS error of the cBathy estimate, computed over the entire (nonpier) area from the shoreline to 500 m offshore, are 0.28 and 0.44 m, respectively. A sand bar centered at x = 200 m is surprisingly well rendered, including trough anomalies at y = 300, 500, and 1000 m alongshore, and an apparent rip channel cuts through the sand bar at y = 900 m. An outer bar is visible, especially to the south.

[38] Figure 4 shows two example transects for this survey, from y = 200 through the double-barred region and from y = 850 through the complex trough and rip channel. For both transects cBathy estimates closely approximate ground truth, although the cBathy estimates for the steep foreshore are biased deep. Bar positions and shape are quite accurate.

Figure 4.

Example cross-shore transects from y = 200 (upper) and 850 (lower) in Figure 3.

[39] Table 2 shows statistics for each of the sixteen CRAB-cBathy comparisons, again computed over the entire 1000 m domain (excluding the pier region) and from the shoreline to 500 m. ε₈₀ and ε₉₅ correspond to the 80th and 95th percentile error exceedance values for each survey. ζ is the percent data return from all phase 2 data runs within each 4 day analysis (the number of valid estimates compared to the requested array size). Wave conditions ranged widely (although actual CRAB surveys are almost always done on low-energy days), and the analyses included many foggy and rainy days (when rain drops obscured the view). Over the entire suite of surveys, the results have only a small deep bias (mean bias is 0.19 m) and an RMS error that averages 0.51 m. 80% of estimates are correct to within 0.64 m and 95% within 1.04 m, averaged over the entire domain.

Table 2. cBathy Performance Statistics for the 16 Survey Comparisons as Well as the Dataset Mean (Bottom Row)
Date | Bias (m) | RMS Error (m) | ε₈₀ (m) | ε₉₅ (m) | ζ (%)
18 Aug 2009 | 0.53 | 0.53 | 0.67 | 1.12 | 98
16 Sep 2009 | 0.06 | 0.53 | 0.63 | 1.05 | 84
21 Oct 2009 | 0.08 | 0.50 | 0.60 | 1.05 | 72
10 Dec 2009 | −0.09 | 0.45 | 0.59 | 0.96 | 75
14 Jan 2010 | 0.08 | 0.46 | 0.57 | 0.96 | 83
22 Feb 2010 | 0.23 | 0.52 | 0.68 | 1.04 | 86
5 Apr 2010 | 0.11 | 0.57 | 0.70 | 1.13 | 83
16 Apr 2010 | 0.18 | 0.54 | 0.66 | 1.14 | 88
4 Jun 2010 | 0.14 | 0.66 | 0.86 | 1.46 | 89
28 Jul 2010 | 0.20 | 0.71 | 0.71 | 1.64 | 92
15 Sep 2010 | 0.23 | 0.44 | 0.58 | 0.92 | 78
19 Oct 2010 | 0.28 | 0.44 | 0.58 | 0.81 | 83
22 Nov 2010 | 0.31 | 0.43 | 0.59 | 0.82 | 91
7 Feb 2011 | 0.32 | 0.43 | 0.57 | 0.84 | 76
18 Mar 2011 | 0.25 | 0.41 | 0.53 | 0.85 | 79
2 May 2011 | 0.26 | 0.52 | 0.71 | 1.04 | 94
Mean | 0.19 | 0.51 | 0.64 | 1.04 | 84

[40] cBathy performance was found to vary with depth. Figure 5 (dashed line) shows the mean bias and RMS error for the entire data set, binned by depth. Both bias and RMS error are largest in the shallowest depth bin of 0-1 m and rapidly improve at intermediate depths before worsening in deep water. The shallow water bias, an overestimation of true depth, is consistent with the expected finite amplitude errors in the dispersion relationship but may also be related to the large fractional depth variations (including curvature) across the data analysis tiles close to the beach. In fact, it was determined that this bias resulted from tiles that spanned the shoreline and mixed unuseful subaerial and acceptable subaqueous signals, yielding tile-averaged statistics that are biased deep (see also Figure 4). Performance statistics were recomputed excluding the shoreline region (defined as anything landward of the location of zero CRAB survey depth plus 20 m, the analysis window width) and are shown by the solid lines. Bias is now less than 0.2 m and RMS errors are less than 0.4 m for depths less than 4 m. Recalculation of the bulk statistics shown in Table 2 revealed only small changes to performance, reflecting the small fraction of depth estimates that lie near the shoreline. The main consequence was a 3% reduction in the average ε₉₅, a statistic affected by the largest errors.

Figure 5.

Depth dependence of cBathy error expressed as bias (upper plot) and RMS error (lower plot). The dashed line is for all estimates while the solid line excludes depth estimates from shoreline tiles that included data from both wet and dry pixels. Error bars indicate the standard deviation over the 16 test cases.

[41] Errors in deep water were found to be dominated by data from a single offshore-looking camera (C1, looking directly offshore). Recalculation of statistics after removal of C1 pixels (only 15% of the total) improved the average cBathy performance by almost a factor of two. Over all depths, the mean bias was reduced from 0.19 to 0.10 m and RMS error reduced from 0.51 to 0.34 m. While C1 has the lowest resolution and lens quality of the five cameras, we know of no reason why this should yield the observed errors.

[42] The algorithm resolved surprising details in the inner bar and trough bathymetry, for instance the trough depth anomalies at y = 350 and 500 and the apparent rip channel at y = 900 (Figure 3). Correlation coefficients were computed between surveyed and estimated bathymetry for the region of the inner sand bar, defined as the region from 20 m seaward of the shoreline to x = 250 m. The average correlation was 0.85 with a standard deviation of 0.09.

[43] To test whether cBathy performance depends on offshore wave conditions, mean statistics were computed for each of the 64 days analyzed (16 surveys, each analyzed for the survey date plus the 3 prior days). Daily means of wave height, Hmo, wave period, cBathy bias and cBathy RMS error were computed for each day and the wave height dependence plotted in Figure 6. Bias varies from slightly positive (cBathy too shallow) under small waves to negative (cBathy too deep) for large waves, perhaps consistent with finite amplitude dispersion effects. RMS error is smallest under exceptionally small waves (sometimes barely visible) and deteriorates during storms (assuming that the errors do not simply represent true bathymetric changes prior to the final survey). No bulk dependence was found on wave period (not shown). Similarly, a careful search for an expected kh dependence showed no trends, presumably because the maximum h/L values for the analyzed depth bins did not exceed 0.06 (assuming daily-averaged wave period and the depth range of 0 to 6 m), well away from the deep water cutoff. Setup has been neglected in this analysis; it could represent an O(10 cm) correction at the shallowest depths but an insignificant contribution farther offshore.

Figure 6.

cBathy error dependence on offshore wave height, both averaged over individual days.

3.2. Kalman Filter Performance

[44] The phase 3 Kalman filtering is an important step for producing a robust product. Figure 7 shows the same cBathy Kalman-smoothed result, h̄, as Figure 3 (left plot). However, the middle plot shows the hourly result, ĥ, taken at 7 am on the morning of the survey. Large gaps in the hourly estimate to the south (bottom of plot) result from morning sun glare and very low wave conditions (Hmo was 0.22 m at the time). However, error estimates provide a good delineation of areas with poor results and help steer the Kalman filtering process. Errors and gaps such as are shown here are also associated with rain and fog and are not uncommon.

Figure 7.

cBathy estimates for an example morning data collection on 19 September 2010. The center and right plots show the current hourly bathymetry estimate and its error, respectively, while the left plot shows the Kalman-filtered result. Gaps in the hourly analysis (indicated graphically by dark red and corresponding to "nan" values, i.e., failed data returns) were caused by sun glitter and very low waves and are not uncommon.

[45] The hourly bathymetry product in Figure 7 failed to return values for 36% of the potential estimation points (successful coverage ζ = 64%). Values of ζ less than 100% are common for a variety of reasons including fog and rain. Figure 8 shows the histogram of percent successful returns for the 671 hourly collections being examined. While most collections had good returns, 20% returned less than 80% coverage and 1.5% returned no valid points at all. In addition, a significant proportion of results have high predicted errors (Figure 7). Thus, the phase 3 Kalman filtering is a required step for averaging through gappy and noisy hourly results.

Figure 8.

Histogram of fraction of successful data return for each hourly analysis in the dataset.

[46] The Kalman filter operation rapidly fills in gaps with consecutive estimates. For the 16 runs considered, coverage reached 98% after an average of 33 h, but fill times varied from 3 h (good conditions) to 72 h (considerable fog). However, this measure is different from the time it takes to reduce noise and stabilize estimates. The latter measure is difficult to determine since predictions stabilize to different error levels within the 4 day averaging window depending on wind and wave conditions. Visually, estimates seem to stabilize in approximately 1 day, faster than had been expected. However, the best estimates were associated with very low wave conditions (perfect survey days for the CRAB), so the final day of each sequence was usually the best.

[47] The role of Kalman filtering in smoothing through natural cycles of errors is made more obvious in Figure 9. The lower plot shows the final Kalman bathymetry result, h̄, for an example transect at y = 850 m for a representative case, 22 November 2010. A single bar is apparent at x = 200 m, while values landward of x = 90 m correspond to dry beach so are invalid. The upper plot shows the deviation of hourly bathymetry estimates from this profile, i.e., ĥ_k − h̄, for the 41 hourly estimates starting 3 days earlier and ending at sunset on the day of the survey. The estimated time of the survey is marked by a horizontal blue line at run number 34.

Figure 9.

Deviation of hourly bathymetry estimates, ĥ, from the Kalman-filtered bathymetry, h̄, for an example transect at y = 850 m for the survey comparison completed on 22 November 2010. The lower plot shows the Kalman result, h̄, including deterioration to invalid values for x < 90 m (the shoreline). The upper plot shows the time series of differences between the 41 hourly cBathy estimates spanning 4 days and this final Kalman result. The time of the actual survey is shown by the horizontal line at run number 34.

[48] Tidal variations are apparent from the change in the width of the dry beach, shown as the blue region of invalid data on the left, wider at low tide and narrower at high tide. The role of breaking over the bar is also apparent. On the first day of analysis (upper part of the figure, run numbers 1-10), the wave height reached 1.4 m and Argus images show significant breaking over the bar. cBathy hourly results show a deep anomaly associated with the onset of breaking and a shallow region in the trough. Vestiges of the breaking signature continue on the second day as wave heights dropped to 0.5 m and in the final 2 days when wave heights were lower. These shorter time scale deviations do not appear to be bathymetric in nature but instead represent failings of the linear dispersion relationship for breaking waves and errors in the phase 1 wave number estimation algorithm near the onset of breaking. Fortunately, these anomalous values are associated with larger error bars (not shown) so pass through the Kalman filter with little impact.

[49] The magnitudes of the error predictions (equation (7)) were found to underpredict the observed errors. For frequency-combined results, ĥ, the underprediction averaged a factor of 3.3 (standard deviation of 0.62), whereas for Kalman-filtered results the ratio of observed to estimated error was 7.17 (standard deviation of 1.47). Plant et al. [2008] found the same problem using a different technique and concluded that predicted error surfaces were still useful indicators of relative merit, but that the underprediction likely arose from insufficient understanding of the effective number of degrees of freedom in the analysis. For our cBathy results, features in the observed error, for example a small mislocation of sand bar position, were also found to correspond well to features in the predicted error even though the magnitudes of the predicted error were too small.

3.3. Agate Beach Test

[50] Agate Beach is a dissipative Oregon beach located 5 km north of the city of Newport. An Argus station is located atop Yaquina Head at an elevation of 128 m, with three cameras providing a view of a large region to the south. Tide data were retrieved from National Ocean Survey (NOS) station 9435380 while wave data came from Coastal-Marine Automated Network (CMAN) buoy 46050.

[51] cBathy data collections were begun in 2009 for development and testing purposes using the landward-most two cameras. A jet ski survey on 13 July 2011 provided an opportunity to test cBathy accuracy on a west coast beach. To allow better offshore coverage, the cBathy pixel array was extended on that day to include an offshore camera (three camera sampling continues to this day).

[52] The high camera elevation allows analysis of a much larger region than at Duck, spanning 1600 by 2500 m in the cross shore and alongshore, respectively. Because the tide range is often 3 m or more, this region includes a substantial intertidal area for which good data are returned only at high tide. The cBathy array included approximately 10,000 pixels with cross-shore and alongshore spacings of 15 and 30 m, respectively. Collection and analysis parameters are the same as in Table 1 with the exceptions that the analysis grid spacing is 25 by 50 m, the analysis smoothing scales are 50 by 100 m and κ was set to 3.0. No data are available to tune the process noise model for this site, so values were chosen subjectively.

[53] Ground truth data were collected by the Morphology Monitoring Group at Oregon State University (Ruggiero, personal communication) using RTK-GPS position and fathometer depths on a jet ski, and consisted of a suite of 15 cross-shore transects that were subsequently interpolated to a grid using loess interpolation. The survey methodology is described as being subdecimeter accurate [Ruggiero et al., 2005]. The region of common coverage between the Argus and jet ski data spanned 1175 by 1995 m in the cross shore and alongshore, respectively.

[54] Figure 10 compares the jet ski survey (left plot) with the last cBathy result for 13 July. In contrast to the Duck results, depths as great as 14 m are measured, a consequence of the greater field of view. The depth plots (left and center plots) are in reasonable agreement, but the difference plot (right plot) shows that cBathy overestimates the true depth in shallow water and underestimates it far from the camera. The dependence of bias and RMS errors on depth is shown by the dashed line in Figure 11. Several factors could contribute to errors. In shallow water, surf zone waves act as bores and the wave speed depends on wave amplitude in ways not represented by the linear dispersion relationship. In addition, wave setup can contribute to cBathy depth overestimation. Also, the jet ski data themselves have larger errors in shallow water due to bubble problems and noisy interpolation (jet ski data with interpolation errors greater than 0.35 m and cBathy data with predicted errors greater than 0.5 m have been removed from the statistics in Figure 11 but are part of the mapped data in Figure 10).

Figure 10.

Comparison of 13 July 2011 jet ski survey (left plot) with cBathy estimate from that day (middle plot). The right plot maps the error (survey-cBathy). All depths are in meters. The cBathy time stamp is in GMT. The cameras view from the bottom of the page.

Figure 11.

Depth dependence of bias (upper plot) and RMS error (lower plot). Dashed lines correspond to the 13 July cBathy estimates while solid shows the errors for deeper waters for a later 30 July cBathy estimate, after longer wave periods had contributed. Negative bias corresponds to cBathy depth overestimates.

[55] In deeper water, the dominant error is an area of underestimation for x > 1100 and y > −600 m (see also the large bias and RMS errors in the 10 m depth bin in Figure 11). In fact, data collection for the offshore region (x > 1400 m) had only been started on the morning of 13 July, so cBathy estimates were primarily based on only 1 day of data for which the wave period was only 7.11 s (depth over wavelength was greater than 1/6 for this offshore region, so closer to deep water conditions than shallow). To allow a greater influence by occasional longer period waves, a second comparison was made between the 13 July jet ski results and Kalman-filtered cBathy results from 13 to 30 July. The updated comparison was restricted to depths greater than 6 m to avoid contamination by visually apparent bathymetry changes in the more dynamic shallower parts of the profile and to focus on the deeper region that responds only slowly to summer waves. The solid line in Figure 11 shows that the bias and RMS errors are greatly reduced with the incorporation of more days of wave observations, including periods of longer period waves (56% of the days had offshore peak periods greater than 10 s). For the composite data set (13 July data for h ≤ 6 m, 30 July data for h > 6 m), the bias and RMS error were found to be −0.41 and 0.56 m, respectively, for all depths from 0 to 15 m.

4. Discussion

[56] The cBathy data collection and analysis protocols outlined above provide a good tool for quantitative nearshore monitoring and prediction. Errors are small and flagged reasonably by confidence estimates and products are delivered automatically for daylight hours. Even in the absence of new data due to fog or other problems, estimates degrade gracefully as the Kalman process error builds. Bathymetry is usually the limiting input to nearshore prediction and cBathy provides a low-cost, logistically robust solution to the bathymetry data starvation problem.

[57] The phase 3 Kalman filter step is key to the success of the algorithm. It allows integration of results of mixed quality in an optimally weighted way to create a temporally smoothed product with sensible error statistics. The least known component of the Kalman filter is the form to choose for the process error, Q. This term represents the rate of growth of unmodeled variance, in this case the rate of increase of bathymetric variability from a prior state due to the combination of sediment transport processes at all scales. The existence of a remarkable 39 day set of daily bathymetry surveys at Duck allows direct estimation of Q at that site and for that particular sand bar system. However, no comparable data exist for other beach types like the highly dissipative Agate Beach and, even for Duck, sand bars are known to change location on interannual time scales [for example, Ruessink et al., 2003]. Long and Plant [2012] explore the impact of different choices of process error on the performance of Kalman filtering.

[58] The current algorithm uses only linear dispersion for depth estimation, an approximation that should result in errors in the surf zone where finite amplitude effects increase wave celerity. Indeed, errors are smallest for small waves (Figure 6), including near-glassy seas that one of the authors assumed would provide no useful signal. Kalman filtering through different stages of the tide helps by automatically weighting better estimates, for instance at high tide when there may be no breaking over a sand bar. Similarly, filtering over the tide eliminates problems at the plunge point where linear dispersion is a bad approximation and spatial coherence is low.

[59] While depth estimates based on linear dispersion are always returned and represent a first order bathymetry solution, cBathy also returns vector wave number estimates as a function of frequency that can be used in more sophisticated data assimilation schemes that are consistent with finite amplitude dispersion effects. Thus, first order estimates can be used as a seed for more sophisticated methods that exploit other optical (or other source) variables.

[60] The dispersion relation (equation (1)) can also be written to include the effects of currents and Doppler shifting. In this case, the analysis can be expanded to solve for bathymetry as well as directional currents [see, e.g., Piotrowski and Dugan, 2002]. Doppler shifts depend on the current velocity relative to the wave celerity, so are strongest for short waves. Thus analysis to determine both bathymetry and currents should additionally examine higher frequencies than are needed just for bathymetry work. Alternately, since the Doppler effect depends on the component of the current in the direction of wave propagation (usually near zero for alongshore currents in the surf zone), differentiation between depth- and current-dependent effects is more apparent for directionally spread wave fields such as in tidal inlet environments. Velocity estimation will be added to cBathy in future years.

[61] The shoreline bias found here (Figure 5) is worst for a steep beach such as Duck, where analysis tiles can include signals from both relatively deep water and dry beach. For flatter beaches such as Agate, this should in principle be less of a problem, although the one example tested still shows a bias.

[62] Errors from the Duck case are averaged over an area of 420 by 1000 m and tend to be worst at the offshore boundaries. Performance statistics would be improved if only shallower waters (< 4 m) had been considered, but it was decided to present results from the full data set. Errors at Agate Beach were low in depths as great as 14 m and at distances from the camera as great as several kilometers, as long as data were ingested into the Kalman filter over an extended period of time. It is recognized that longer period waves are required to successfully probe these greater depths, so Kalman averaging is likely needed over longer periods of time (this happens automatically). The range to which useful results can be obtained is limited by a minimum acceptable graze angle (the arctangent of the camera height divided by the range). At low graze angles (near-horizontal looks), pixel footprints stretch excessively in the range direction and so smear the wave content within each pixel. Thus, the 128 m camera height at Agate Beach supports roughly three times the available range of the 43 m height at Duck, given the same wave conditions.
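The graze-angle limit on range can be illustrated with a small calculation (the minimum acceptable graze angle below is an arbitrary placeholder; only the ratio between sites matters here):

```python
import numpy as np

def max_useful_range(camera_height_m, min_graze_deg=6.0):
    """Range at which the graze angle, arctan(height / range), falls to the
    minimum acceptable value."""
    return camera_height_m / np.tan(np.radians(min_graze_deg))

# For a fixed graze-angle limit, range scales linearly with camera height:
print(max_useful_range(128.0) / max_useful_range(43.0))   # ~3: Agate Beach vs. Duck
```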

[63] The bulk of the testing discussed here was done on a barred East Coast (of the US) beach due to the availability of data from this site. In some ways, the steep sand bars at such a site form a challenge for bathymetry algorithms due to the large bathymetric and hydrodynamic gradients. In contrast, West Coast beaches commonly have low slopes, nonabrupt shorelines, larger tidal ranges and broader surf zones full of well-developed breaking bores. Each of these aspects may make bathymetry estimation easier (although bores may require finite amplitude dispersion correction). More ground truth comparisons should be done on flatter beaches to better understand cBathy performance on these different sites. This, of course, requires the availability of more West Coast surveys in the vicinity of Argus Stations or equivalent sampling systems.

[64] No testing has been done in semienclosed seas. Shorter period waves would require a denser pixel spacing and accommodation of higher frequencies in the choice of frequency analysis bands. The depth to which bathymetry can be reasonably estimated varies as the deep water wavelength, and hence as the square of the wave period, so depth range would clearly be lost for short-period seas. As with the tested beaches, performance is expected to be best for low-amplitude waves, including conditions so small that no useful signal would be expected (the signal processing methods are powerful).

[65] While cBathy is tested here with optical data, the methods should also apply to any sensor (radar or infrared) that can return time series data over a dense array of locations.

5. Conclusions

[66] A new algorithm to estimate submerged bathymetry based on ocean wave celerity is presented and tested. The analysis is based on a moderately dense (subwavelength) spatial array of optical time series broken up into analysis tiles whose size, unlike those for spatial Fourier methods, can be smaller than an ocean wavelength. Analysis consists of three phases. The vector wave number for each of a suite of candidate frequencies is first found from the phase slopes of the first complex EOF of the cross-spectral matrix. A single depth estimate for each location for each data collection is then found from a weighted least squares fit to the most important frequency-wave number pairs. Finally, a running Kalman filter is applied in time to yield an estimate that is robust to camera or weather problems such as fog or rain.

[67] The algorithm was tested in a standard configuration against 16 ground truth surveys at Duck, NC, collected over 2 years. The average bias and RMS error over a 420 by 1000 m region were 0.19 and 0.51 m, respectively, with the worst errors near the offshore limit of data collection and very near the shoreline where analysis tiles mix wave signals with those from the dry beach. Nearshore sand bars were surprisingly well rendered, including details of the bar trough and rip channels that dissect the sand bar. Results from a single survey at Agate Beach, a dissipative site, are similar, with a mean bias and RMS error of −0.41 and 0.56 m, respectively, over a region that extended several kilometers from the cameras and included depths as great as 14 m.

[68] The new algorithm is an advance over prior work in that it allows good two-dimensional spatial resolution, it uses EOFs to best weight coherent wave motions and it uses a Kalman filter in time to span data gaps and problem areas and to gracefully degrade prior estimates in the absence of new information. Processing is fully automated.

Acknowledgments

[69] We would like to thank John Stanley for all the work that keeps Argus alive and productive and for the production cBathy analysis discussed here. Many thanks to Gabriel Garcia for guiding me through my first experience writing in Lyx. Thanks also to Jesse McNinch and the staff of the FRF for restarting CRAB surveys and for providing the Duck survey data used here, and to Diana Di Leonardo and Peter Ruggiero for supplying the analyzed jet ski survey data for Agate Beach. We are grateful for the support of the ONR Littoral Geosciences and Optics program, grant N00014-11-10393, and the Multi University Research Initiative, grant N00014-10-1-0932. NRL was supported by the Office of Naval Research through funding of the rapid transition project "Estimation of surf zone bathymetry using Unmanned Aircraft Systems."
