High-resolution 3-D monitoring of evolving sediment beds


Corresponding author: P. Bouratsis, 111 Hancock Hall, Virginia Tech, Blacksburg, VA 24061, USA. (polyb86@vt.edu)


[1] A new photogrammetric technique has been developed for monitoring the morphology of evolving stream beds. A pair of commercial cameras is used to record the evolution of the bed, and a computational approach that consists of a set of computer-vision and image-processing algorithms is employed to analyze the videos and reconstruct the instantaneous 3-D surface of the bed. Time- and space-resolved measurements are obtained to generate accurate representations of the bed. The required setup for the implementation of the technique is relatively simple and minimally intrusive. A thorough description of the algorithms that were used and detailed instructions for the implementation of the technique are provided. High-resolution measurements of a gravel bed in a clear-water, bridge scour experiment were carried out to demonstrate the operation and validate the capabilities of the technique. The new technique offers advantages over existing methods in terms of spatial resolution, temporal resolution, simplicity, and cost.

1. Introduction

1.1. Monitoring of Bed Evolution in Hydraulic Experiments

[2] Sediment transport, bed-form development and migration, and other erosional/depositional processes in riverine environments are time-dependent phenomena whose study requires thorough spatial investigation and continuous monitoring of the morphological characteristics of the river bed. To better understand the interplay between the fluid flow and the erodible boundary, the dynamic nature of these phenomena should be adequately represented. Inspection of the bed at a single state, or even at infrequent states, provides snapshots that are usually inadequate for piecing together the full sequence of events.

[3] Although fairly sophisticated instrumentation is widely used in laboratory experiments to obtain time-resolved point and whole-field measurements of the flow over mobile beds (e.g., laser Doppler velocimetry or particle image velocimetry) [Hill and Younkin, 2006; Zhang et al., 2009; Sambrook Smith and Nicholas, 2005; Bottacin-Busolin et al., 2008], the technology currently available for monitoring the evolving 3-D topography of an erodible boundary is sorely lacking.

[4] Regarding scour around in-stream structures, several past studies have focused on the topography of the bed at the equilibrium state only [Chiew, 1984; Dongol, 1994]. This approach is inadequate because it attempts to describe a time-dependent phenomenon with the state of the bed at a single instant. Furthermore, it has been observed repeatedly that the duration of flood events in the field is typically too short for the final equilibrium state to be reached in the case of a prototype structure [Melville and Chiew, 1999; Mia and Nago, 2003]. Additionally, time-resolved, 3-D measurements of the evolving bed topography are essential for the development, calibration, and validation of accurate numerical models that examine the morphodynamics of the bed in cases of fluid-structure interaction [Escauriaza and Sotiropoulos, 2011], dam-break flows [Xia et al., 2010], flows around vegetation [Wu et al., 2005], flows at meandering streams [Zeng et al., 2008], and models that examine the dynamics of bed-form stability and evolution [Blondeaux, 2012]. Also, such data can be used for the investigation of the effectiveness of scour evolution prediction formulae [Kothyari et al., 2007; Melville and Chiew, 1999]. Finally, it has been well articulated that topographic data are essential for the development of accurate numerical models that study the flow in riverine environments [Legleiter et al., 2011].

[5] The characterization of the bed topography in past studies was usually subject to significant limitations that compromised the spatiotemporal resolution of the obtained data and involved intrusive or complicated experimental setups. Furthermore, these limitations precluded taking synchronized measurements of the flow field and the corresponding instantaneous scour topography.

[6] The most common approach to the study of an evolving scour hole is the measurement of the bed elevation at a few locations only. This has been achieved either by using gauges and probe sensors [Oliveto and Hager, 2002; Babu et al., 2003; Fael et al., 2006; Berger et al., 2010] or by employing scales that are recorded with periscopes or cameras inside the transparent walls of the flume or of the model [Mia and Nago, 2003; Yanmaz and Altinbilek, 1991; Lu et al., 2011; Adduce and La Rocca, 2006]. Another technique involves the use of probes that are mounted on traversing mechanisms [Ballio and Radice, 2003; Link et al., 2008; Dargahi, 1990]. The data obtained with this method may be able to illustrate the surface of the bed synoptically, though in an asynchronous way. This is a major problem, especially during the initial stages of scour around in-stream structures, when the rate of bed erosion is very high. Additionally, when the moving probe is located under water, it may act intrusively and alter the nature of the phenomenon, since the scour hole characteristics are susceptible to modest perturbations in the approach flow field. Laser scanners have also been used in sediment transport experiments. Gonzalez et al. [2008] and van der Werf et al. [2006] used custom-made, traversing laser-scanning systems to study the bed-form evolution of sandy beds. This method can provide reliable results, and it can be successful when the free surface is relatively tranquil; however, in the vicinity of hydraulic structures, the flow is dominated by surface waves that diffract the laser beam. Also, when applied to monitor 3-D surfaces, bed representations are obtained asynchronously.

1.2. Laboratory Photogrammetric Applications

[7] Several recent efforts involve the application of photogrammetry for monitoring the surface of sediment beds. The basic principle in this methodology is that the precise location of any point detectable in two images, which have been taken from one or multiple cameras at different positions, can be estimated. Photogrammetry is well established, and it has been widely applied in various areas such as terrestrial mapping, industrial surveying, and robotics [Remondino and El-Hakim, 2006; DeSouza and Kak, 2002; Band, 1986]. However, each application has different requirements, and the development of suitable techniques for every case is a subject of active research.

[8] Close-range photogrammetry has been applied in laboratory environments to study geomorphological processes. Stojic et al. [1998] and Lane et al. [2001] developed digital elevation models of river models to study their topographic and sediment transport characteristics. Similarly, Brasington and Smart [2003] and Rieke-Zapp and Nearing [2005] examined the evolving morphology of basin models under the influence of rainfall. Also, Umeda et al. [2008] presented scour hole topographies around a cylinder. However, a basic requirement, in all of these studies, was the interruption of the flow and the draining of the model to obtain photographs of the bed without the presence of water. This approach can be rather tedious when the bed topography is measured at multiple instants, and the disruption of the experiment may introduce disturbances that affect the erosional processes. Also, intermittent bed elevation measurements usually involve the assumption that the behavior of the bed during the intermediate periods can be obtained by interpolation. This might not be always correct because the rate of scour may vary over time at different locations.

[9] On the contrary, Butler et al. [2002] were able to carry out two-media photogrammetry over static gravel beds in a hydraulic flume and in the field, in the presence of water, to perform roughness characterization. In this study, considerations of the refraction correction that should be carried out in two-media experiments were taken into account. However, because the location of the water surface must be precisely known in this application, the free surface was covered with a transparent Perspex sheet. The transparent sheet also smooths the surface of the water, so that the quality of the images is sufficient for metric investigation. In most hydraulic models of open-channel flows, covering the free surface would be significantly intrusive and could potentially alter the nature of the phenomenon. Other studies have estimated water depth nonintrusively by applying digital-image-processing techniques [Carbonneau et al., 2006; Lane et al., 2010]. These studies used aerial photographs of rivers, and their methodologies are suitable only for large-scale field studies.

[10] The results obtained from all the aforementioned studies are either static or poorly time-resolved representations of the bed. Some recent efforts have been able to obtain time-resolved data in hydraulic experiments by using target points on the monitored surface to provide texture. A common approach is the use of structured light, where a laser or a light projector and optics are combined to create a pattern on the surface. Such an approach was followed by Foti et al. [2011] and Astruc et al. [2012], who projected a grid on the bed and then estimated the location of the grid points. Ankamuthu et al. [1999], instead of using structured light, placed a set of stones on a sand bed and estimated their location. Also, 2-D results were obtained by Zech et al. [2008] by projecting and recording a laser line on the bed of a laboratory flume in a dam-break flow study. These techniques are capable of obtaining accurate, time-resolved measurements of the bed. However, the use of structured light or targets increases the complexity and cost of the experimental setup [Foti et al., 2011]. In applications where the free surface is very rough and wavy, such as in the case of flows around hydraulic structures, the structured light cannot be projected through the free surface, and the complexity of the setup is further increased. Additionally, the spatial resolution of the measurements may be compromised when dot or grid patterns are projected, since elevations at only a few locations of the bed are reconstructed.

[11] Finally, several kinds of commercial software appropriate for close-range photogrammetric applications have been released during the last few years. Though their accuracy and efficiency are constantly improving, they exhibit various limitations that prevent their use in scour experiments. These limitations involve the use of targets on the recorded surface, restrictions on the orientation of the cameras and their working distance, special illumination requirements, or applicability only to static surfaces.

[12] The main challenges for the application of photogrammetry in hydraulic flumes are as follows: (1) the images of the bed exhibit repetitive patterns and lack distinctive features, (2) the data collection rate should be sufficiently high to capture the continuously deforming surface, (3) the experiment should not be interrupted, (4) the sediment surface cannot be physically accessed during the experiments, and (5) the position of the cameras is constrained by the geometry of the model.

[13] Here we present a new stereovision-based technique for continuous measurements of the bed morphology that overcomes the limitations of the aforementioned techniques and addresses the challenges listed in the previous paragraph. The technique is capable of reconstructing instantaneous surface representations of the evolving bed with high spatial resolution during scour experiments. Two calibrated cameras are partially submerged in the flow and record videos of the evolving bed geometry. The technique takes advantage of the natural texture of sediment beds and does not require the use of targets or structured light. A set of computer-vision and image-processing algorithms, which were developed here or adopted from other sources, is used to reconstruct the surface of the bed accurately. A code performing the steps of each algorithm was written to analyze the videos on a workstation. The highest temporal resolution of the technique is dictated by the frame rate of the cameras. The technique does not require the interruption of the experiment or the use of additional instrumentation, and it can capture the 3-D bed topography dynamically and nearly nonintrusively. Other advantages of the technique are the automation of the data processing and the flexibility for modifications, so that it can be customized for different applications.

[14] The technique was applied to a bridge scour experiment, and some of the results and details of this experiment are included here to provide a better understanding of its capabilities, advantages, and limitations. The precision that was achieved in the bridge scour experiment was less than a grain diameter, which was more than adequate for the purposes of this application. Other means, such as acoustic range finders, laser scanners, or other traditional photogrammetric applications, can achieve even higher relative precisions; however, the spatiotemporal resolution, the capability for synchronous measurements of 3-D surfaces, the low degree of intrusiveness, and the low demand on instrumentation and space render this technique one of the most appropriate methods for bed monitoring in various laboratory experiments. On the other hand, in experiments where point or line-profile measurements are adequate, or a dynamic representation of the bed is not required, the use of one of the aforementioned techniques may be more appropriate.

2. Description of the Technique

[15] The application of stereovision, as performed here, requires that a point of a surface that has been recorded by two calibrated cameras is detectable in both views. If the location of this point in the two frames (pixel coordinates) can be identified, then its location in space (world coordinates) can be estimated using the calibration parameters. The experimental procedure of this technique involves the continuous recording of an evolving sediment bed by two submerged video cameras. The computational approach that is followed, after the completion of the experiments, can be divided into four main steps, namely, the preprocessing, the image processing, the geometrical transformations, and the postprocessing step. In Figure 1, a flowchart of the computations, the inputs, and the outputs of every step of the technique is presented. Section 2.1 provides guidelines for the positioning of the cameras and a general description of the requirements of the experimental setup. In sections 2.2–2.5, the analysis of the videos is described, following the sequence of Figure 1. Some of the photogrammetric principles that are described in the latter sections are well established; however, they have been included here for completeness. Details about the experimental setup of the bridge scour experiment, considerations about the accuracy that can be achieved, and the exact parameters that were used for the application of the technique are included in section 3.

Figure 1.

Overview of the technique.

2.1. Considerations of the Experiments

[16] This is one of the few studies that involve underwater photogrammetry for fast-evolving beds in a laboratory environment. The technique has been applied only to bridge scour experiments. Although several types of cameras were tested at a preliminary level, results were obtained only with a specific type of camera. Therefore, the description of the required experimental setup can only be given based on the experience that has been gained so far.

[17] In our experiments, two Olympus Stylus Tough-6020 waterproof cameras were submerged in a hydraulic flume and recorded videos of the scour hole evolution around a cylinder. The cameras were recording at a rate of 30 frames/s, and the resolution of the videos was 720 × 1280 pixels. The dimensions of each camera were 10 cm × 6.5 cm × 2.6 cm, and their angle of view was approximately 45°. The basic criteria for the choice of the cameras were their resolution, size, shape, and angle of view.

[18] Preliminary tests determined that the camera resolution was sufficient to provide videos of high quality that satisfy the requirements of the technique. The distance of the cameras from the bed is dictated by the flow depth and the width of the flume and is expected to be very small compared to typical close-range photogrammetric studies. Although the restricted geometry may complicate the computational approach, it gives an advantage regarding the selection of the cameras: the precision of a stereo system is improved when the size of the pixels in the space of the surface is small [Lane et al., 2001], which can be achieved even with moderate pixel resolutions in small-scale measurements. In general, the precision is highly affected by the pixel resolution, and it is recommended that the videos be recorded in high definition. The camera selection should depend on the particular nature of the experiments and the spatial/temporal information that needs to be extracted from them.

[19] Regarding the distance between the cameras (i.e., the baseline), the bridge scour experiments indicated that when it was wide, the coincident views were significantly different, which has been referred to as the “look-angle problem” [Lane et al., 2001]. This problem is caused by the high relative relief of the surface compared to its distance from the cameras, and it impedes the correlation of coincident images. Ordinarily, the cameras should be as far as possible from the test section to mitigate the look-angle problem and minimize the intrusiveness of the data collection. Additionally, the angle between the plane of the cameras' lenses and the bed should be as small as possible, since it has been well articulated that vertical photogrammetry is more straightforward than oblique photogrammetry [Chandler et al., 2002; Barazzetti et al., 2010].

[20] Based on the distance of the cameras from the bed, the minimum required angle of view of the cameras should be estimated, since the size of the monitored surface depends on the common field of view of the cameras. Generally, wide-angle lenses should be avoided because they introduce severe distortion that is hard to account for. In an effort to minimize intrusiveness, smaller cameras with a shape that allowed for streamlined positioning were chosen. The technique requires that the cameras remain firmly mounted throughout the duration of the experiments.

[21] The current experiments showed that low-cost, charge-coupled device, commercial cameras can provide high-quality data. Also, it was found that the level of noise in the videos was relatively low, and it could be adequately removed with a simple filtering procedure. The noise of the videos, and consequently the efficiency of the technique, greatly depends on the lighting conditions. The results were optimized when the bed was well illuminated. Also, the temporal resolution of the results is defined by the frame rate of the cameras.

[22] With the cameras mounted at their final position and the flume filled with water, an object with known dimensions, the calibration target, is recorded. The procedure adopted here uses a checkerboard as the calibration target. The checkerboard should be recorded simultaneously by both cameras at several orientations and distances from the cameras. A minimum of 15 images of the checkerboard is recommended for calibration. The recording of the calibration target can be carried out prior to or after the completion of the experiments, and the calibration images can be processed after the experiment is complete. However, the cameras should not be relocated at any time, since the relative positions of the cameras will be estimated from the calibration images. The approach followed here is an in situ calibration, which has been suggested to be more effective than the use of precalibrated cameras [Chandler et al., 2001]. In case the cameras cannot be synchronized using an electrical signal or specialized hardware, pulsed laser signals can be projected onto areas that are visible to both cameras.

[23] The technique requires that a set of reference points, whose positions are precisely known, are recorded by the cameras to be used in the coordinate transformation step (Figure 1). Four reference points are required for the coordinate transformation step; however, more should be used, if possible, so that an overdetermined system can be solved to minimize potential errors. Following the guidelines of this section, the evolution of the sediment bed can be recorded continuously.

2.2. Preprocessing

[24] The first task of the analysis is the synchronization of the two videos so that the time lapse between them is precisely known. Then, the instants when the bed will be reconstructed are chosen, and the respective frames are extracted. Some of the selected frames should include views of the checkerboard to be used for the calibration of the cameras.

[25] Next, the camera calibration is carried out to obtain the parameters that are required to correlate the information contained in the videos with the dimensions of any recorded surface. These consist of the external parameters, which describe the relative position of the two cameras, and the internal parameters, which describe the characteristics of each camera [Hartley and Zisserman, 2003]. The "Camera Calibration Toolbox for MATLAB®" (http://www.vision.caltech.edu/bouguetj/calib_doc/, Bouguet) is used to carry out the calibration of the stereo system. The toolbox uses the corners of the squares of the checkerboard, as they appear in the images, to estimate the internal and external calibration parameters of the stereo system that will be used in a subsequent step of the technique.

[26] The estimated external calibration parameters are represented through a 3 × 3 rotation matrix (R) and a 3 × 1 translation vector (T) that describe the relative position of the two cameras. The estimated internal calibration parameters are a calibration matrix (Ci) and a vector of distortion coefficients (kci) for each camera (i). The internal parameters C and kc are defined in equations (1) and (2), respectively.

$$C_i = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \qquad (1)$$

$$kc_i = \begin{bmatrix} k_1 & k_2 & p_1 & p_2 & k_3 \end{bmatrix}^{\tau} \qquad (2)$$

where fx and fy describe the focal length in terms of horizontal and vertical pixels, respectively, and cx and cy are the pixel coordinates of the principal point. k1, k2, and k3 are the radial distortion coefficients, and p1 and p2 are the tangential distortion coefficients. A full explanation of the calibration procedure and the corresponding parameters is beyond the scope of this manuscript. The toolbox provides the values of the distortion coefficients and the uncertainty of the estimation. It is recommended that if the value of a coefficient is of the same order of magnitude as its uncertainty, the coefficient should be discarded (i.e., set to zero).

[27] The reader is referred to Hartley and Zisserman [2003] for a detailed description of the aforementioned parameters. The toolbox provides specific guidelines on how the calibration should be carried out and how the results can be optimized for a given set of images. As described in Figure 1, the output of the first step is the ensemble of the images that will be analyzed and a set of camera parameters, derived from calibration, for each sensor.
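The calibration itself was performed with the MATLAB toolbox; purely as an illustration, the sketch below shows how an equivalent set of internal and external parameters could be estimated with OpenCV in Python. The file names, checkerboard geometry, and square size are hypothetical placeholders, not values from the experiment.

```python
# Illustrative sketch (not the authors' code): estimating C_i, kc_i, R, and T
# from synchronized checkerboard images with OpenCV. File names, board size,
# and square size are hypothetical.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners of the checkerboard (hypothetical)
SQUARE = 10.0      # square size in mm (hypothetical)

# 3-D coordinates of the checkerboard corners in the board's own frame
obj = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for fl, fr in zip(sorted(glob.glob("calib_left_*.png")),
                  sorted(glob.glob("calib_right_*.png"))):
    imL = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    imR = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    okL, cL = cv2.findChessboardCorners(imL, PATTERN)
    okR, cR = cv2.findChessboardCorners(imR, PATTERN)
    if okL and okR:
        obj_pts.append(obj)
        left_pts.append(cL)
        right_pts.append(cR)

size = imL.shape[::-1]
# Per-camera calibration: C_i (3 x 3) and kc_i = [k1, k2, p1, p2, k3]
_, C1, kc1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, C2, kc2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
# Stereo calibration: rotation R and translation T between the two cameras
ret, C1, kc1, C2, kc2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, C1, kc1, C2, kc2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
np.savez("stereo_calibration.npz", C1=C1, kc1=kc1, C2=C2, kc2=kc2, R=R, T=T)
```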

2.3. Image Processing

[28] The main objective of the second step of the technique is the correlation of coincident frames and the detection of the pixel coordinates of the same set of points of the sediment surface. This procedure is called correspondence establishment, and a correlation-based algorithm was developed to accomplish it. Correlation-based algorithms are appropriate for images that lack distinctive features, such as edges, corners, or color discontinuities [Shapiro and Stockman, 2001].

[29] This algorithm divides coincident frames into small templates and statistically compares their pixel intensities to find the best match. Gray-scale images are used, since color information is not required in the computational approach. Prior to the correlation of the images, the operations of image smoothing and rectification are carried out to increase the efficiency of the correspondence establishment process.

[30] If the frames are heavily corrupted by noise, their properties are altered, and their correlation may yield erroneous results. The noise may be caused by the turbidity of the water, bubbles, or low illumination, and it depends on the quality of the cameras. For this reason, an averaging or Gaussian filter can be used to mitigate the noise in the images [Gonzalez and Woods, 2008].

[31] After the image denoising, the images are stereo-rectified. In this procedure, the calibration parameters are used to transform the images geometrically so that any two corresponding points of the left and right frames have the same vertical pixel coordinate [Hartley and Zisserman, 2003]. This property significantly improves the speed and performance of the correspondence establishment algorithm, because every template extracted from one frame has to be compared only with templates of the other frame that are located on the same horizontal line. This way, the comparison of the templates takes place in 1-D instead of 2-D space, the computational time is significantly reduced, and the possibility of establishing erroneous correspondences is reduced as well. Here, the image rectification is carried out using the "Camera Calibration Toolbox for MATLAB." Additionally, visual inspection is performed to narrow the width of the search area. The first and the last extracted pair of frames are visually examined, and a few correspondences are identified. Then the horizontal coordinates (x1 and x2) of the correspondences are compared, and values for the maximum and the minimum horizontal disparity are determined. The visual inspection for the estimation of the disparity limits further increases the speed of the calculations; however, it is not mandatory for the application of the technique.
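The smoothing and rectification were carried out with the MATLAB toolbox; the sketch below illustrates the same two operations with OpenCV, assuming the calibration results were stored in the (hypothetical) file written in the previous sketch. The frame names and filter settings are likewise assumptions.

```python
# Illustrative sketch: Gaussian smoothing and stereo rectification with OpenCV.
# The calibration file, frame names, and filter settings are assumptions.
import cv2
import numpy as np

calib = np.load("stereo_calibration.npz")   # C1, kc1, C2, kc2, R, T (hypothetical file)
frameL = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)
frameR = cv2.imread("right_frame.png", cv2.IMREAD_GRAYSCALE)
size = frameL.shape[::-1]

# Mild Gaussian filter to suppress noise without blurring the grain texture
frameL = cv2.GaussianBlur(frameL, (5, 5), 1.0)
frameR = cv2.GaussianBlur(frameR, (5, 5), 1.0)

# Rectification: after remapping, corresponding points share the same row
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(calib["C1"], calib["kc1"],
                                            calib["C2"], calib["kc2"],
                                            size, calib["R"], calib["T"])
mapLx, mapLy = cv2.initUndistortRectifyMap(calib["C1"], calib["kc1"], R1, P1,
                                           size, cv2.CV_32FC1)
mapRx, mapRy = cv2.initUndistortRectifyMap(calib["C2"], calib["kc2"], R2, P2,
                                           size, cv2.CV_32FC1)
rectL = cv2.remap(frameL, mapLx, mapLy, cv2.INTER_LINEAR)
rectR = cv2.remap(frameR, mapRx, mapRy, cv2.INTER_LINEAR)
```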

[32] An example of the aforementioned processes is illustrated in Figure 2. In Figure 2 (top), two coincident raw frames of a scour hole around a cylinder are presented. Figure 2 (bottom) shows the same frames after being filtered and rectified. Four corresponding points of the bed have been marked with dots in the left and right frames of Figure 2 (bottom). It is observed that the correspondences of the two images with pixel coordinates (xp1,yp1) in the left image and (xp2,yp2) in the right image, lie on the same horizontal line, so that yp1 = yp2. Also, the search areas in the right frame have a significantly reduced width because of the disparity limits that were imposed.

Figure 2.

(top) A pair of raw coincident frames that were extracted from the videos. (bottom) The same pair of frames after the smoothing and the rectification procedure. The dots represent a few selected correspondences between the two frames. The dotted lines demonstrate the effect of rectification. The squares of the left frame are the extracted templates that are cross-correlated with the rectangles of the right frame, which are the extracted windows, to find correspondences. Flow direction is from left to right.

[33] Subsequently, templates are extracted from the left images and are compared with windows of the right images, using the criterion of normalized cross correlation [Lewis, 1995]:

$$g(x_w, y_w) = \frac{\sum_{x_t, y_t}\left[f(x_w + x_t, y_w + y_t) - \bar{f}_{x_w, y_w}\right]\left[t(x_t, y_t) - \bar{t}\right]}{\left\{\sum_{x_t, y_t}\left[f(x_w + x_t, y_w + y_t) - \bar{f}_{x_w, y_w}\right]^2 \sum_{x_t, y_t}\left[t(x_t, y_t) - \bar{t}\right]^2\right\}^{1/2}} \qquad (3)$$

where g is the result of the normalized cross correlation, f is a matrix containing the pixels of the window from the right image with coordinates (xw, yw), t is a matrix containing the pixels of the template of the left image with coordinates (xt, yt), $\bar{t}$ is the mean value of the pixel intensities of the template, and $\bar{f}_{x_w,y_w}$ is the mean value of the pixel intensities of the window in the region under the template. The size of the templates and the windows remains constant. The position (xw, yw) where the normalized cross correlation is maximized is designated as a candidate correspondence, if the value of the normalized cross correlation is over a preselected threshold. If the result of equation (3) is always lower than the threshold, then no correspondence is found for the template. If the normalized cross correlation is maximum at more than one location in the respective window, then the sum of absolute differences is computed at those locations to select the candidate correspondence. In the example of Figure 2 (bottom), templates centered at four points of the left image are cross-correlated with the windows of the right image that are located on the same horizontal line. The location where the normalized cross correlation was maximized is marked on the right image of Figure 2 (bottom). In Figure 3, the result of the normalized cross correlation between the template and the window that are located in the lowest position in Figure 2 (bottom) is presented. The location of the maximum value in Figure 3 is marked in the respective window of Figure 2 (bottom) as a candidate correspondence. In Figure 3, it is also observed that there are only six locations where the locally maximum result of the normalized cross correlation is higher than the threshold value.

Figure 3.

The result of the normalized cross correlation of the template that is located in the lower position of Figure 2 (bottom), with the respective window. The dashed line represents the threshold value for the result of the normalized cross correlation.
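To make the search concrete, the following Python sketch implements a correlation-based correspondence search over rectified frames; cv2.matchTemplate with TM_CCOEFF_NORMED computes the mean-subtracted normalized cross correlation of equation (3). The template size, spacing, disparity limits, and threshold follow the values used in the bridge scour experiment, the sign convention of the disparity is an assumption, and the sum-of-absolute-differences tie-break is omitted for brevity.

```python
# Illustrative sketch (not the authors' code): correlation-based correspondence
# search along rectified rows with imposed disparity limits.
import cv2
import numpy as np

TEMPLATE = 35          # template size in pixels (35 x 35)
STEP = 9               # spacing between template centers in pixels
D_MIN, D_MAX = 0, 500  # disparity limits from visual inspection (assumed sign convention)
THRESH = 0.3           # normalized cross-correlation threshold

def find_candidates(rectL, rectR):
    """Return (xL, y, xR) pixel triplets of candidate correspondences."""
    h, w = rectL.shape
    half = TEMPLATE // 2
    candidates = []
    for y in range(half, h - half, STEP):
        for xL in range(half, w - half, STEP):
            tpl = rectL[y - half:y + half + 1, xL - half:xL + half + 1]
            # Search window: same rows of the right frame, with centers inside
            # the disparity limits (the match is assumed to lie at xL - d,
            # d in [D_MIN, D_MAX])
            x0 = max(half, xL - D_MAX)
            x1 = min(w - half - 1, xL - D_MIN)
            if x1 < x0:
                continue
            win = rectR[y - half:y + half + 1, x0 - half:x1 + half + 1]
            score = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
            j = int(np.argmax(score))
            if score.flat[j] > THRESH:
                candidates.append((xL, y, x0 + j))
    return candidates
```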

[34] For verification, a reverse procedure is followed; templates of the same size are extracted from the right image, centered at the locations of the candidate correspondences. These templates are cross-correlated with windows from the left image, and a second set of candidate correspondences are found in the left image. The coordinates of the candidate correspondences in the left image are compared with the coordinates of the centers of the templates that were initially extracted from the same image. The candidate correspondences in the left image that are less than three pixels away from the center of the respective templates of the same image are designated as final correspondences; all the other candidate correspondences are discarded. Figure 4 shows the location of all the final correspondences in the left and right images of Figure 2 (bottom). It can be observed that there are some areas of the bed where no correspondence could be found. Regions without successful correspondence are possible since some of these areas do not belong to the common field of view of both cameras (e.g., at the edge of the images), or the plane of the bed at these areas is almost parallel to the cameras' viewing axes, and its features appear to be distorted. In Figure 4, the number of the identified correspondences is 7517, and the area of the bed is approximately 370 cm2.

Figure 4.

The pixel location of all the correspondences that were identified between the two coincident frames at the second step of the analysis.
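A minimal sketch of the verification step follows: each candidate match is searched back from the right frame into the left frame, and the candidate is kept only when the back-match lands within three pixels of the original template center. The helper reuses the same normalized cross correlation as the previous sketch, and the search limits passed to it are assumed to be valid for the frames at hand.

```python
# Illustrative sketch: left-right consistency check of candidate correspondences.
import cv2
import numpy as np

TEMPLATE, TOL = 35, 3   # template size and consistency tolerance in pixels

def match_center_in_row(src, dst, x, y, x_lo, x_hi):
    """Center of the best match of src's (x, y) template along the same rows of
    dst, with candidate centers restricted to [x_lo, x_hi]."""
    half = TEMPLATE // 2
    tpl = src[y - half:y + half + 1, x - half:x + half + 1]
    win = dst[y - half:y + half + 1, x_lo - half:x_hi + half + 1]
    score = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
    return x_lo + int(np.argmax(score))

def consistency_check(rectL, rectR, candidates, x_lo, x_hi):
    """Keep (xL, y, xR) candidates whose reverse match is within TOL pixels of xL."""
    final = []
    for xL, y, xR in candidates:
        back = match_center_in_row(rectR, rectL, xR, y, x_lo, x_hi)
        if abs(back - xL) <= TOL:
            final.append((xL, y, xR))
    return final
```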

[35] Contrary to the first step, the parameters of the analysis in the second step vary according to the experimental setup and the recorded surface. These parameters are the size of the templates, the spacing between templates, the size of the windows, and the threshold of the normalized cross correlation.

[36] In general, the templates should be large enough to include an area of the image with adequate texture, in terms of pixel intensities; otherwise, the correlation magnitudes are expected to be very low. Since the texture of a single grain is lower than the texture of a group of grains, templates should be large enough to include more than one grain. On the other hand, the possibility for the establishment of erroneous correspondences increases with the template size, as has been reported in many photogrammetric studies [e.g., Lane et al., 2000]. Also, larger templates increase the required computational time.

[37] As far as the distance between the centers of the templates is concerned, smaller distances increase the spatial resolution of the results. The highest spatial resolution can be achieved if one template is extracted for every pixel of the image. The computational time of the correspondence establishment algorithm is proportional to the number of extracted templates.

[38] Regarding the threshold of the cross-correlation magnitude, lower values will result in more erroneous correspondences. However, high thresholds may significantly reduce the number of candidate correspondences and compromise the spatial resolution. If the desired result of the technique is 3-D surfaces, instead of point clouds, then the accuracy is significantly affected by the density of the clouds. In most photogrammetric studies the selected correlation threshold is usually 0.6–0.8. As will be discussed in the following sections, a lower threshold may be appropriate for underwater photogrammetry in hydraulic flumes. The reverse correspondence establishment and the outlier removal process that are included in the technique can remove most of the errors that may be introduced in this step without compromising the spatial resolution. Therefore, the accuracy and efficiency of this technique cannot be characterized exclusively by the results of the cross correlation. The most appropriate way of selecting the parameters of this step is a trial-and-error procedure, where the entire technique is applied and the results are compared with independent validation measurements. Also, the reconstruction of the established correspondences at different states of the bed and the visual inspection of the results can provide an estimate of the efficiency of this step. This is because most erroneous correspondences are expected to lead to gross errors that can be easily identified, especially because the reconstructed surface is continuous.

2.4. Geometrical Transformations

[39] After the completion of the first two stages of the technique, all the required inputs from the videos have been extracted. The calibration parameters and the set of correspondences are used in the third stage of the technique to reconstruct a cloud of points that represents the bed at each instant. Here, the process of stereotriangulation that is followed for the reconstruction of the world coordinates of a single point of the bed is explained. The reconstruction that is accomplished is valid only up to a similarity transformation. This means that in the cloud of reconstructed points, the ratio of any two lengths can be found, but the value of each length is not known. The algorithm that was used is based on well-known concepts that are primary components of many photogrammetric applications. The pixel coordinates of every correspondence, xd, that was found in the previous step are written in a homogeneous representation, so that $x_{d_i} = [x_{d_i}\ y_{d_i}\ 1]^{\tau}$, where xd and yd are the horizontal and vertical pixel coordinates, respectively; i = 1 and 2 denote the left and right views, respectively; and τ denotes the matrix transpose. First, the homogeneous pixel coordinates of every correspondence are normalized in equation (4) using the calibration matrices defined in equation (1) [Hartley and Zisserman, 2003].

$$x_{hn_i} = C_i^{-1}\, x_{d_i} \qquad (4)$$

where xhni = [xni yni 1], i.e., the homogeneous, normalized, pixel coordinates. Then, the inhomogeneous normalized pixel coordinates, xni = [xni yni], are corrected for distortion in equation (5) using the distortion vectors of equation (2).

$$x_{u_i} = \frac{x_{n_i} - \delta_t}{\delta_r} \qquad (5)$$

where xui = [xui yui], i.e., the undistorted pixel coordinates; and δt, δr, and r are defined in equations (6), (7), and (8), respectively.

$$\delta_t = \begin{bmatrix} 2 p_1 x_{n_i} y_{n_i} + p_2 \left(r^2 + 2 x_{n_i}^2\right) \\ p_1 \left(r^2 + 2 y_{n_i}^2\right) + 2 p_2 x_{n_i} y_{n_i} \end{bmatrix} \qquad (6)$$

$$\delta_r = 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \qquad (7)$$

$$r = \sqrt{x_{n_i}^2 + y_{n_i}^2} \qquad (8)$$
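As a concrete illustration of equations (4)-(8), the short Python sketch below normalizes a pixel coordinate with the calibration matrix and applies a single-pass (non-iterative) distortion correction; the closed-form division by the radial term is an approximation, and the ordering of the coefficients in kc follows the assumed [k1, k2, p1, p2, k3] convention.

```python
# Illustrative sketch of equations (4)-(8): normalization and distortion
# correction of one pixel coordinate. The single-pass correction and the
# coefficient ordering kc = [k1, k2, p1, p2, k3] are assumptions.
import numpy as np

def normalize_and_undistort(xd, C, kc):
    """xd: [x, y] pixel coordinates; C: 3 x 3 calibration matrix; kc: length-5 vector."""
    k1, k2, p1, p2, k3 = kc
    # Equation (4): homogeneous normalization with the calibration matrix
    xn, yn, _ = np.linalg.inv(C) @ np.array([xd[0], xd[1], 1.0])
    # Equations (8), (6), and (7): radius, tangential term, and radial term
    r2 = xn**2 + yn**2
    dt = np.array([2*p1*xn*yn + p2*(r2 + 2*xn**2),
                   p1*(r2 + 2*yn**2) + 2*p2*xn*yn])
    dr = 1 + k1*r2 + k2*r2**2 + k3*r2**3
    # Equation (5): undistorted, normalized coordinates
    return (np.array([xn, yn]) - dt) / dr
```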

[40] The external orientation matrix of each camera, oi, is defined in equation (9) using the external calibration parameters.

$$o_1 = \begin{bmatrix} I_3 & 0_{3,1} \end{bmatrix}, \qquad o_2 = \begin{bmatrix} R & T \end{bmatrix} \qquad (9)$$

where I3 and 03,1 are a 3 × 3 identity matrix and a 3 × 1 zero vector, respectively. Then the direct linear transformation is applied to the undistorted pixel coordinates as described by Hartley and Zisserman [2003]. For every point found in both views, a new matrix (A) is defined as in equation (10).

$$A = \begin{bmatrix} x_{u_1}\, o_1^{(3)} - o_1^{(1)} \\ y_{u_1}\, o_1^{(3)} - o_1^{(2)} \\ x_{u_2}\, o_2^{(3)} - o_2^{(1)} \\ y_{u_2}\, o_2^{(3)} - o_2^{(2)} \end{bmatrix} \qquad (10)$$

where $o_i^{(j)}$ is the jth row vector of the ith orientation matrix. Matrix A is factorized into three matrices (U, Σ, and V) through singular value decomposition, so that A = UΣVτ. The world coordinates of the point of the bed, x = [x y z], that is related to the pair of correspondences found in the two views are defined in equation (11).

$$x = \begin{bmatrix} \dfrac{V_{1,4}}{V_{4,4}} & \dfrac{V_{2,4}}{V_{4,4}} & \dfrac{V_{3,4}}{V_{4,4}} \end{bmatrix} \qquad (11)$$

where Vj,k is the element of the matrix V that is located on the jth row and kth column.
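The following sketch reproduces the stereotriangulation of equations (9)-(11) for a single correspondence; the inputs are the undistorted, normalized coordinates from the previous sketch, and the world point is recovered from the last column of V.

```python
# Illustrative sketch of equations (9)-(11): linear triangulation of one
# correspondence, with the first camera at the origin and the second camera
# described by the rotation R and translation T from the stereo calibration.
import numpy as np

def triangulate(xu1, xu2, R, T):
    """xu1, xu2: undistorted, normalized coordinates [x, y] in the two views."""
    o1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # equation (9), first camera
    o2 = np.hstack([R, T.reshape(3, 1)])            # equation (9), second camera
    A = np.vstack([xu1[0] * o1[2] - o1[0],          # equation (10)
                   xu1[1] * o1[2] - o1[1],
                   xu2[0] * o2[2] - o2[0],
                   xu2[1] * o2[2] - o2[1]])
    _, _, Vt = np.linalg.svd(A)                     # A = U Sigma V^T
    X = Vt[-1]                                      # last column of V
    return X[:3] / X[3]                             # equation (11)
```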

[41] The cloud of points has been reconstructed in a reference frame that is not convenient for further analysis. Hence, the points should be scaled, rotated, and translated to an appropriate coordinate system. This is achieved using the set of reference points that was positioned in the test section. The coordinate transformation is carried out using equation (12).

$$x_c = s_c\, R_c\, x + t_c \qquad (12)$$

where xc = [xc yc zc], i.e., the transformed world coordinates; sc is a scale factor; Rc is a 3 × 3 rotation matrix; and tc is a 3 × 1 translation vector. The estimation of sc, Rc, and tc is carried out using the reference points. Those points are reconstructed following the steps of this technique, and the aforementioned parameters can be estimated by comparing the physically measured coordinates with the reconstructed coordinates, rearranging equation (12), and minimizing the residuals with a least squares method. The procedure explained here is followed for all the correspondences of both views at the selected instants. The reconstructed points with coordinates equal to xc are further processed at the final step of the technique.
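A minimal sketch of this least squares step follows; it estimates the scale, rotation, and translation of equation (12) from the reconstructed and physically measured reference points with an Umeyama-type closed-form solution, which is offered here as one possible solver rather than the authors' exact procedure.

```python
# Illustrative sketch of the coordinate transformation step (equation (12)):
# least squares estimation of scale s_c, rotation R_c, and translation t_c
# from corresponding reference points (Umeyama-type solution, an assumption).
import numpy as np

def similarity_transform(recon, measured):
    """recon, measured: (N, 3) arrays of corresponding points, N >= 4."""
    mu_r, mu_m = recon.mean(axis=0), measured.mean(axis=0)
    Ar, Am = recon - mu_r, measured - mu_m
    U, S, Vt = np.linalg.svd(Am.T @ Ar)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    Rc = U @ D @ Vt
    sc = np.trace(np.diag(S) @ D) / (Ar**2).sum()
    tc = mu_m - sc * Rc @ mu_r
    return sc, Rc, tc

def apply_transform(points, sc, Rc, tc):
    """Apply equation (12) to an (N, 3) cloud of reconstructed points."""
    return sc * points @ Rc.T + tc
```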

2.5. Postprocessing

[42] Despite the filtering and the constraints that were imposed in the second stage of the technique, it is expected that some of the established correspondences are invalid. Therefore, some of the points that were reconstructed in the previous stage are erroneous and need to be removed. For this reason, an outlier removal algorithm that makes use of continuity criteria was developed here. At first, a global threshold is applied to the reconstructed points. This is imposed to remove points with elevations significantly greater than that of the gravel bed before the commencement of the experiment and points whose horizontal distance from the cylinder is greater than the extent of the expected field of view. Then, a local windowing function is applied in the vicinity of each reconstructed point. Centered on every reconstructed point, an area large enough to include approximately three grains is defined. A planar surface is fitted to the points of each area using a least squares method. Points whose distance from this surface is greater than approximately one grain diameter are then marked as outliers and are discarded. This criterion is based on the assumption that areas of this size have an almost uniform slope. The size of the areas and the threshold distance should be customized according to the application and the representative grain diameter.
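The sketch below illustrates this local windowing test: a plane is fitted by least squares to the neighbors of each point, and the point is flagged when it lies farther from that plane than a tolerance. The square window and the numerical values (a roughly 1 cm2 neighborhood and a 3 mm tolerance, as used in the bridge scour experiment) are assumptions about the exact implementation.

```python
# Illustrative sketch of the local plane-fit outlier test. Window size and
# tolerance follow the values reported for the bridge scour experiment; the
# square neighborhood is an assumption.
import numpy as np

def local_plane_outliers(points, half_window=5.0, tol=3.0):
    """points: (N, 3) array in mm; returns a boolean mask marking outliers."""
    outlier = np.zeros(len(points), dtype=bool)
    xy, z = points[:, :2], points[:, 2]
    for i, p in enumerate(points):
        near = np.all(np.abs(xy - p[:2]) < half_window, axis=1)
        if near.sum() < 4:                     # too few neighbors for a plane fit
            continue
        A = np.c_[xy[near], np.ones(near.sum())]
        coef, *_ = np.linalg.lstsq(A, z[near], rcond=None)   # z = a*x + b*y + c
        z_fit = coef[0] * p[0] + coef[1] * p[1] + coef[2]
        if abs(p[2] - z_fit) > tol:
            outlier[i] = True
    return outlier
```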

[43] The remaining points are segmented into a few larger subareas of horizontally neighboring points, based on their elevation and its gradient. Points whose elevations deviate from the mean elevation of their subarea by more than two standard deviations are marked as outliers and are discarded. In Figure 5, the reconstruction of the points that were found in Figure 4 is presented. Also, the identified outliers are highlighted in Figure 5. It can be observed that the number of erroneous points is low compared to the number of reconstructed points (approximately 5%). Obviously, the performance of the outlier removal algorithm improves when the percentage of erroneous reconstructed points is small and their location in space is not biased.

Figure 5.

The reconstruction of the correspondent points that were identified in Figure 4. The red circles are the points that were marked as outliers in the fourth stage of the analysis, and the blue circles are the valid points.

[44] Finally, a surface is fitted to the points after the outliers have been removed. In general, various approaches can be followed for the surface fitting procedure. If there is confidence that almost all the outliers have been removed, then linear interpolation can be adequate for the representation of the bed topography. However, if some outliers remain in the processed data, then a robust fitting technique should be adopted instead (e.g., lowess smoothing) [Ott and Longnecker, 2010].
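A minimal sketch of the surface fitting step is given below, gridding the cleaned point cloud with linear interpolation; the grid spacing is a hypothetical value, and a robust smoother could be substituted when residual outliers are suspected.

```python
# Illustrative sketch: fitting a surface to the cleaned point cloud by linear
# interpolation onto a regular grid. The grid spacing dx is hypothetical.
import numpy as np
from scipy.interpolate import griddata

def fit_surface(points, dx=2.0):
    """points: (N, 3) cleaned cloud in mm; returns grid arrays X, Y, Z."""
    x, y, z = points.T
    xi = np.arange(x.min(), x.max(), dx)
    yi = np.arange(y.min(), y.max(), dx)
    X, Y = np.meshgrid(xi, yi)
    Z = griddata((x, y), z, (X, Y), method="linear")  # NaN outside the convex hull
    return X, Y, Z
```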

3. Bridge Scour Experiment

3.1. Experimental Setup

[45] The bridge scour experiment, where this technique was applied, is presented here to provide further insights. The experiment was carried out in a tilting flume, 16.2 m long and 1.2 m wide, located in the Baker Environmental Hydraulics Laboratory of Virginia Tech. The slope of the bed was set to 0.0005, and the sediment was gravel with 50% and 90% of the material being finer than d50 = 3.55 mm and d90 = 4.7 mm, respectively. The model pier was a plexiglass cylinder with diameter D = 15.24 cm. The approach flow depth was 20.5 cm, and the mean velocity was 62 cm/s. The experiment was performed under clear-water scour conditions. The cameras were firmly mounted on the right sidewall (looking downstream) of the flume and were partly submerged during the experiments, as illustrated in Figure 6. A fixture that consisted of two optical posts and four knuckles was built to hold the cameras. The knuckles allowed the rotation of the cameras by 360°. Preliminary tests showed that the fixture could keep the cameras steady, without any change in orientation caused by the flow or vibrations. The cameras were 36 cm (2.4 × D) away from the pier, and their horizontal distance from the edge of the scour hole at the end of the experiment was 21.5 cm (1.4 × D). They were oriented such that their smallest dimension was perpendicular to the direction of the flow, as Figure 6 shows. The baseline was 100 mm, and the angle between their lines of sight was 32°. Prior to the experiments, a set of 30 dots was carefully drawn on the surface of the pier to serve as reference points. Two 100 W lamps were mounted on the flume to illuminate the bed.

Figure 6.

The cameras recording the scour hole, mounted on the sidewalls of the flume. Flow direction is from left to right.

3.2. Accuracy Considerations

[46] Based on the type of the cameras and their position, initial estimates of the error can be obtained prior to the application of the technique. Theoretical estimates of the expected error of stereo systems can be provided by relationships that have been used in existing photogrammetric studies [e.g., Lane et al., 2001; James and Robson, 2012; Benetazzo, 2006]. In these relationships, the error is usually a function of the distance of the cameras to the surface, the relative position and orientation of the cameras, and their internal characteristics, such as the focal length, the position of the principal point, the pixel size, and the angle of view. Benetazzo [2006], who used a similar triangulation process to carry out wave reconstruction, provided estimates of the maximum expected error that is caused by the camera operation and the triangulation system, as follows:

display math(13)

where the X axis is aligned with the baseline of the cameras, the Z axis is perpendicular to the baseline of the cameras and points toward the recorded surface, and the Y axis is defined by the right-hand rule. Also, H is the distance of the cameras to the surface, T is the length of the baseline, N is the number of pixels along one dimension of the camera sensor, θ is the angle between the cameras' lines of sight, and α is the angle of view of the cameras. Applying equations (13) for the setup of the stereo system that was used here (H = 400 mm, on average; T = 100 mm; N = 720; θ = 32°; and α = 45°), the maximum expected errors along the three camera axes are εX = 0.58 mm, εY = 0.23 mm, and εZ = 2.33 mm.

[47] Additionally, another approach was followed to obtain an estimate of the accuracy and the precision of the technique. A checkerboard and a Rubik's cube, with known dimensions, were recorded at several angles, using a setup similar to the one adopted in the bridge scour experiment. All the steps of the technique, except for the correspondence establishment and the surface fitting, were followed to reconstruct each object at 10 different locations. Instead of the automated correspondence establishment, the pixel coordinates of the points on the surfaces of the two objects were manually selected. The objects that were used and an example of the points that were reconstructed are illustrated in Figure 7.

Figure 7.

The objects that were used for the investigation of the error and an example of the results of the technique.

[48] Investigating the error of the results, for a total of 1200 reconstructed points, it was found that the accuracy, expressed as the mean value of the errors, was approximately 0.5 mm, and the precision, expressed as the root-mean-square (RMS) of the errors, was 0.7 mm. These results provide an estimate of the error that originates, cumulatively, from the cameras' operation, the calibration, the triangulation, and the coordinate transformation.

[49] However, the error that was observed at the bridge scour experiment was higher than the one observed from the reconstruction of the checkerboard and the Rubik's cube. This was expected since the error that originates from the correspondence establishment and the surface fitting procedures was not taken into account in the estimations above. Previous studies have shown that automated correspondence establishment techniques tend to deteriorate the accuracy of stereo systems [Milledge et al., 2009]. Also, the surface fitting procedure is expected to generate additional errors, especially at the areas of the bed where the cloud of reconstructed points is less dense.

3.3. Application of the Technique

[50] At first, the flume was very slowly filled with water, to ensure that scour did not take place. The depth of the water was increased to cover the cameras, using the tailgate of the flume. Then the cameras started recording. Since the cameras that were used could not be synchronized automatically, a laser pointer was used to create light pulses. A 63.5 mm × 81.6 mm × 2 mm checkerboard was submerged at the test section and was recorded from different angles by the cameras, while being handheld. Figure 8 contains the views of the checkerboard that were obtained from the right camera. After the collection of the videos with the calibration target, the flow was adjusted to the described conditions. At that point the cameras commenced recording the test section. The experiment lasted for 250 min.

Figure 8.

Images of the calibration target obtained at different angles from the same camera.

[51] After the completion of the experiment, the videos were imported in a workstation, and the steps described in Figure 1 were followed to analyze the data and reconstruct the morphology of the evolving bed. At first, the exact frame lapse between the two videos was found by observing the recorded laser light signals; the lapse was 11 frames. Then, a set of 74 frames were extracted from each camera. Eighteen pairs of frames were used as images of the calibration target and were recorded before the commencement of the experiment. The remaining 56 pairs of frames were images of the bed at selected instants throughout the experiment. One pair of coincident frames is presented in Figure 2 (top).

[52] The camera calibration was carried out using the "Camera Calibration Toolbox for MATLAB." The corners of the checkerboard squares were manually selected in the images, and the results were optimized using the guidelines of the toolbox. The error of the calibration, after optimization, was 0.1 pixels. Only the first two components of the radial distortion (k1 and k2) were recovered, and the third component (k3) was set to zero.

[53] All the extracted frames of the bed were converted to gray-scale images, and a smoothing Gaussian filter was applied to them to mitigate the noise. Since the images were not severely corrupted by noise, and to avoid distorting their features, the standard deviation and the window size of the Gaussian filter were kept relatively small [Gonzalez and Woods, 2008]. The smoothed images were then rectified using the estimated calibration parameters.

[54] For the correspondence establishment algorithm, the size of the templates was selected to be 35 × 35 pixels. Smaller templates resulted in lower correlation scores because they contained less texture. This was expected since every grain was represented by 10–40 pixels, on average. In an effort to obtain high spatial resolution, the spacing between the centers of the templates was set to nine pixels in the horizontal and vertical directions. From the manual investigation of the maximum disparity, it was concluded that the size of the search windows could be reduced to 35 × 500 pixels, instead of 35 × 1280 pixels (where 1280 pixels is the width of the images).

[55] For the selection of the cross-correlation threshold, several values were tested for three pairs of frames that depicted the bed before scour, after 120 min of scour, and at the end of the experiment. For these frames, the steps of correspondence establishment with various thresholds, triangulation, coordinate transformation, and outlier removal were carried out, and the reconstructed clouds of points were visualized. It was observed that even though lower thresholds initially increased the gross errors, the reverse correspondence establishment and the outlier removal routines were able to detect and remove the errors. On the other hand, higher values of the threshold resulted in the removal of many inliers, reducing the spatial resolution of the technique. For these reasons, the threshold was set at 0.3. The low correlation scores are attributed to the oblique angles, the high relative relief of the bed, and in some cases the noise of the images, which was caused mostly by bubbles and could not be removed with the smoothing filter. The same factors, and the fact that the templates were relatively large, resulted in low signal-to-noise ratios (SNRs) in the cross-correlation results. As will be discussed later, the last reconstruction of the bed was compared with validation measurements to make sure that the accuracy was satisfactory with the given threshold.

[56] The stereo-triangulation was carried out by using the calibration parameters in equations (4)-(11). For the coordinate transformation the reference points on the cylinder were manually selected in a pair of frames, and they were reconstructed. Equation (12) was used to calculate the rotation matrix, the translation vector, and the scale and to transform the coordinates of the ensemble of reconstructed points. Then the outlier removal step was carried out as described in section 2.5. After the imposition of the global threshold, areas of 1 cm2 were defined around every point, surfaces were fitted, and points that were more than 3 mm away from the surfaces were removed. It was found that the percentage of removed points was always less than 12%, and the bed was represented by a cloud of at least 6500 points. Finally, surfaces were fitted with linear interpolation.

3.4. Results

[57] Some of the results of the bridge scour experiment are presented in this section. The focus here is on the demonstration of the capabilities of the technique, rather than the findings of the experiment. Some of the information that can be extracted with the use of the presented technique is emphasized; however, a detailed presentation of the results will not be included here.

[58] The bed topography can be reconstructed in detail, as shown in Figure 9, where the reconstructed surface of the scour hole 120 min after the commencement of the experiment is presented. In Figure 9, which is representative of all the surfaces that were reconstructed, every 1 cm2 of the bed surface is represented, on average, by 17 independent pointwise elevation measurements. It should be mentioned here that part of the scour hole was occluded by the cylinder and could not be reconstructed. For the results here, symmetry was assumed to recover the area that was not visible. Although the scour hole is not expected to be perfectly symmetric, this assumption was made to provide a better representation of the results. Manual inspection of the final state of the bed indicated that assuming symmetry for the presentation of the results was not unreasonable. Additionally, measuring scour in only one plane of symmetry is common practice in bridge scour experiments. It is shown (Figure 9) that the detailed morphological characteristics of the bed can be found at any instant of the bridge pier scour process. In several bridge scour studies it is assumed that the slope of the scour hole is uniform and equal to the angle of repose of the sediment; however, here it is shown that the slope varies around the pier. This technique allows the determination of the local slope at any location within the scour hole. The reconstruction of the bed surface at consecutive instants can be used for the development of a video representation of the scour evolution. An example is given in Figure 10, where the bed topography at the initial stages of scour is plotted. To the best of our knowledge, this is the first time that such detailed information about the geometric characteristics of a scour hole has been obtained without the interruption of the experiments.

Figure 9.

The reconstructed topography of the bed 120 min after the initiation of scour. Flow direction is along the x axis.

Figure 10.

Instantaneous bed surface representations at (upper left) 1, (upper middle) 2, (upper right) 3, (lower left) 4, and (lower middle) 5 min from the initiation of scour. Flow direction is from left to right.

[59] In Figure 11, the elevation models of the part of the scour hole that is located 0–2.25 cm away from the surface of the pier are plotted at different instants. A comparison of the scour evolution at the front and the sides of the pier during the first 30 min of the experiment is shown. Such analysis can provide a qualitative description of the scour evolution at different parts of the scour hole. A more thorough description of the temporal and spatial evolution of the scour rates is given in Figure 12. The scour rates presented in Figure 12 are the average differences between five consecutive representations of the bed taken at 1 min intervals, as sketched below. Through this, the evolution of the spatial distribution of the rate of scour can be examined. Among other things, this analysis can be useful for investigating the impact of the horseshoe vortex activity on the development of bed scour. Finally, from Figure 9 it can be concluded that pointwise measurements at arbitrarily chosen locations cannot successfully characterize the phenomenon. Instead, the current technique can provide quantitative measurements of the temporal evolution of the shape, size, excavated volume, and scour rates of the entire scour hole as well as at specific locations.
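The scour-rate maps can be obtained from the gridded bed surfaces with a simple finite difference; the sketch below assumes co-registered elevation grids (such as those produced by the surface fitting sketch of section 2.5) taken at 1 min intervals.

```python
# Illustrative sketch: local scour rate as the mean elevation change between
# consecutive, co-registered bed surfaces taken at 1 min intervals. Z_series is
# an assumed list of elevation grids (in mm) on the same X, Y grid.
import numpy as np

def scour_rate(Z_series, dt_min=1.0):
    """Return the mean rate of elevation change (mm per minute) over the series."""
    diffs = [(Z_series[k + 1] - Z_series[k]) / dt_min
             for k in range(len(Z_series) - 1)]
    return np.nanmean(diffs, axis=0)
```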

Figure 11.

The reconstruction of the scour hole around the perimeter of the pier at six different instants. The maximum radial distance of the reconstructed surfaces from the perimeter of the pier is 0.15D. Flow direction is along the x axis.

Figure 12.

The scour rates of the scour hole at (upper left) 5, (upper right) 10, (lower left) 20, and (lower right) 30 min from the initiation of scour. Flow direction is along the x axis.

3.5. Validation Measurements

[60] A more accurate estimate of the error was obtained by a direct comparison of the results of the technique with independent validation measurements. After the completion of the experiment, and while the flow was still running, videos of the bed at a semiequilibrium state were obtained. Then the flume was carefully drained, and a point gage was used to measure the bed elevations at 35 locations in the scour hole. The entire computational approach was followed, and the reconstructed surface was compared with the point-gage measurements. It was found that the mean error was 0.86 mm, and the RMS of the error was 2.31 mm. From the histogram of the errors (Figure 13), it is observed that the residuals are well distributed around 0, and they span from −5.5 to 4.5 mm. Visual inspection of the results indicated that the highest residuals were observed in areas of the scour hole where the cloud of points was less dense.

Figure 13.

Histogram of the residuals of the validation measurements.
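
The error statistics reported above can be reproduced by sampling the reconstructed surface at the point-gage locations. A minimal sketch, assuming the reconstruction is available as an (N, 3) point cloud and using a linear interpolant (both assumptions of this illustration), is:

```python
import numpy as np
from scipy.interpolate import griddata

def validation_statistics(cloud, gages):
    # Hypothetical sketch: interpolate the reconstructed point cloud at the
    # point-gage locations and summarize the residuals; `cloud` and `gages`
    # are (N, 3) arrays of (x, y, z) coordinates.
    z_hat = griddata(cloud[:, :2], cloud[:, 2], gages[:, :2], method='linear')
    residuals = z_hat - gages[:, 2]
    return residuals.mean(), np.sqrt(np.mean(residuals ** 2)), residuals
```

The mean and RMS of the returned residuals are the quantities reported above (0.86 mm and 2.31 mm), and the residuals themselves correspond to the histogram of Figure 13.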

4. Evaluation of the Technique

[61] The aforementioned validation approach is the most stringent and appropriate method to investigate the capacity of the proposed technique for monitoring evolving sediment beds. The validation errors were significantly higher than the errors observed from the reconstruction of the objects shown in Figure 7 and similar to the maximum expected errors of equation (13). This is because the errors that originate from all the steps of the technique, as well as their propagation through the calculation process, were taken into account. In general, errors are expected to originate in most of the steps of the technique. Any calibration procedure can estimate the internal and external camera parameters only up to a specific accuracy. Some erroneous correspondences may also remain after the outlier removal step, which can additionally oversmooth parts of the bed. Finally, the positions of the reference points are only known up to a specific accuracy. Several photogrammetric studies have evaluated the quality of their results by calculating the error using only control points, without taking into account the error that originates from the correspondence establishment.

[62] The precision achieved here (2.31 mm) is smaller than the median grain diameter (3.55 mm), and although the relative errors are higher than those reported in some traditional photogrammetric studies, the technique is able to provide an accurate dynamic representation of the bed. The reduction in precision between the reconstructed objects of Figure 7 and the topography of the bed is mainly attributed to correspondence establishment issues. Previous studies have also underlined that automated correspondence establishment techniques that make use of the texture of the bed tend to deteriorate the accuracy of stereo systems [Milledge et al., 2009]. In addition, the application presented here is characterized by particularities, mainly associated with the geometry of the experimental setup, that make the correspondence establishment more challenging. Specifically, the cameras had to be positioned as far as possible from the pier to minimize intrusiveness, and consequently the bed was recorded from oblique angles. Furthermore, the relief of the scour hole was high compared with the distance of the cameras from the bed. Lane et al. [2001] identified high relative relief as one of the greatest challenges for photogrammetric investigations of fluvial geomorphology and underlined the “look-angle problem” that causes coincident views to become significantly different. Similarly, Chandler et al. [2001] reported that the correlation scores and the percentage of successful correspondences are associated with the relief of the ground and observed increased errors at the sides of gravels or clusters.

[63] Additionally, the quality of the images plays a crucial role in the accuracy of the results. Even after the image smoothing step, bubbles, which are typically abundant in flows around bridge piers, were still evident in several of the extracted frames. Finally, correspondence establishment in close-range photogrammetric techniques is known to be far more challenging than in standard aerial photogrammetry.

[64] The aforementioned parameters can also explain the low SNR of the cross correlation. The computational approach followed here was designed to tackle those challenges and correlate the images without the use of structured light. Specifically, the rectification of the images and the reduction of the correlation area, through visual inspection, imposed additional constraints on the correspondence establishment procedure. Also, the threshold of the normalized cross correlation was set low to allow a high number of candidate correspondences to be included, which in turn compensates for the look-angle and high-relief problems. Selecting a higher threshold (0.6–0.8), as reported in other studies [e.g., Lane et al., 2000; Rieke-Zapp and Nearing, 2005], would have resulted in a significantly decreased spatial resolution. The reverse correspondence establishment procedure was then used to refine the final correspondences. Finally, the outlier removal algorithm was able to detect erroneous points effectively and compensate for the effects of the low threshold value.
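
To make the role of the low threshold and the reverse check concrete, the following sketch outlines a normalized cross-correlation search along a rectified row with a left-right consistency test. It is a simplified illustration in Python with NumPy; the template size, threshold value, and one-pixel consistency tolerance are assumptions of the sketch, not the exact parameters used in this study.

```python
import numpy as np

def ncc(a, b):
    # normalized cross correlation of two equally sized image patches
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_row(left, right, row, half=10, threshold=0.3):
    # Hypothetical sketch: match pixels along one rectified row and keep only
    # correspondences that exceed a (deliberately low) threshold and survive
    # a reverse (right-to-left) consistency check.
    cols = range(half, left.shape[1] - half)
    pairs = []
    for xl in cols:
        tpl = left[row - half:row + half + 1, xl - half:xl + half + 1]
        scores = [ncc(tpl, right[row - half:row + half + 1, x - half:x + half + 1])
                  for x in cols]
        if max(scores) < threshold:
            continue
        xr = half + int(np.argmax(scores))
        # reverse check: the right-image patch must match back near xl
        tpl_r = right[row - half:row + half + 1, xr - half:xr + half + 1]
        back = [ncc(tpl_r, left[row - half:row + half + 1, x - half:x + half + 1])
                for x in cols]
        if abs(half + int(np.argmax(back)) - xl) <= 1:
            pairs.append((xl, xr))
    return pairs
```

Lowering the threshold admits many candidate matches despite the oblique viewing angles, while the reverse check and the subsequent outlier removal discard those that are inconsistent.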

[65] Overall, the purpose of this work was to develop a reliable bed monitoring technique that combines high spatial and temporal resolution with simplicity in the experimental setup, so that it can be used in a wide range of laboratory flume experiments. Furthermore, the technique can be modified to accommodate the needs of different applications. If lower temporal resolution is acceptable, then instead of recording videos the cameras can shoot photographs in a “burst mode.” In most cases, this would reduce the frame rate by an order of magnitude but would significantly increase the resolution of the images and improve the precision of the results. Conversely, high-speed cameras can be used to record at a much higher frame rate. Another possible modification would be the use of more than two cameras, if the geometry of the setup allows it. Using more than two cameras and applying appropriate algorithms can reduce the errors originating from the correspondence establishment procedure [Barazzetti et al., 2010; Chandler et al., 2002] and the triangulation procedure [Hartley and Zisserman, 2003], and can increase the area of the reconstructed surface.

[66] Additionally, in some applications, adopting the structured light approach could be beneficial. If the difficulties associated with the experimental setup and the state of the free surface, as described in section 1.2, can be overcome, then coincident frames could be correlated using the light patterns instead of the bed texture, increasing the efficiency of the correspondence establishment. If the structured light is a dot or grid pattern, the precision could be further increased by applying a subpixel precision algorithm [Psarakis and Evangelidis, 2005; Astruc et al., 2012]. The structured light approach could also be used with much finer beds, where the absence of texture would be more evident. However, some recent studies have successfully applied close-range photogrammetry using the texture of clay or fine sand beds [Rieke-Zapp and Nearing, 2005; Gessesse et al., 2010]. In the latter case, some improvement in precision would be possible, since the correlation templates, whose size is dictated by the size of the grains, could be smaller. On the other hand, the technique is expected to fail in highly turbid flows, where clear views of the bed are not possible.
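
As a concrete example of what a subpixel refinement could look like with a dot or grid pattern, a correlation peak found at an integer location can be refined by fitting a parabola through the peak and its two neighbors. This is a generic illustration, not the specific algorithms of Psarakis and Evangelidis [2005] or Astruc et al. [2012]:

```python
def subpixel_peak(scores, i):
    # Hypothetical sketch: refine an integer correlation peak at index i to
    # subpixel precision with a quadratic fit through the peak and its two
    # immediate neighbors.
    c0, c1, c2 = scores[i - 1], scores[i], scores[i + 1]
    denom = c0 - 2.0 * c1 + c2
    if denom == 0.0:
        return float(i)          # flat neighborhood; keep the integer peak
    return i + 0.5 * (c0 - c2) / denom
```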

[67] It should be noted that the technique can be customized so that the cameras are not submerged. Here, the largest part of the scour hole was occluded when viewed through the transparent sidewalls of the flume, and recording the bed evolution through the free surface was not feasible due to the presence of surface waves. In such a case, light refraction should be taken into account, as described by Butler et al. [2002].
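
If the cameras were to view the bed through the free surface, each camera ray would need to be bent at the air-water interface before triangulation. A minimal sketch of such a correction, assuming a flat, horizontal free surface and standard refractive indices (assumptions of this illustration; Butler et al. [2002] give the full treatment), is:

```python
import numpy as np

def refract_ray(ray_dir, n_air=1.0, n_water=1.333):
    # Hypothetical sketch: bend a downward-pointing camera ray at a flat,
    # horizontal free surface using Snell's law; the surface normal is
    # taken as vertical, (0, 0, 1), pointing up toward the air.
    normal = np.array([0.0, 0.0, 1.0])
    d = np.asarray(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    r = n_air / n_water
    cos_i = -np.dot(d, normal)                     # cosine of incidence angle
    cos_t = np.sqrt(1.0 - r * r * (1.0 - cos_i * cos_i))
    return r * d + (r * cos_i - cos_t) * normal    # refracted unit direction
```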

5. Conclusions

[68] A new technique for obtaining uninterrupted, instantaneous, highly detailed, 3-D representations of the evolving bed topography in scour experiments has been presented. The technique requires two relatively freely positioned cameras that record the sediment bed during the experiment. The computational approach employed for the analysis of the video frames has been elaborated. The video analysis consists of a preprocessing, an image-processing, a geometrical-transformation, and a postprocessing step. The analysis carried out at each stage is described, and detailed instructions for the application of the technique are provided, both in terms of the experimental setup and the processing of the data. Algorithms were developed for establishing correspondences between coincident frames and removing outliers. These algorithms are crucial to the overall process, since they facilitate the application of photogrammetry by tackling the challenges that originate from the restricted geometry of hydraulic flumes. Additionally, a number of well-established algorithms were incorporated to carry out the tasks of camera calibration, image rectification, and stereo-triangulation. Overall, the combined use of appropriately chosen algorithms resulted in a new methodology for generating 3-D reconstructions of the bed in hydraulic flumes at high spatial and temporal resolution.

[69] The application of the technique in a bridge scour experiment has been presented in order to provide a better understanding of its implementation and its capabilities. Based on the camera specifications employed in this experiment, the instantaneous morphology of the evolving scour hole can be obtained at a rate of 30 surfaces/s. The coordinates of more than 6500 points of the scour hole were estimated and used to characterize the bed surface at every instant. The measurement error was smaller than one median grain diameter. It is shown that the results of this technique can be further analyzed to provide a quantitative description of the shape of the scour hole, its slope, and the scour rates as they vary with time. These results provide a level of detail that is unprecedented for similar applications with currently available technology.

[70] The present technique overcomes several limitations of prior methods by allowing continuous and nonintrusive scour measurements. The technique does not require the use of targets on the bed, structured light, or advanced cameras. The relatively simple methodology and low-cost setup have the potential to make this technique applicable to a wide range of experiments and easily combined with other instrumentation for the measurement of the flow field (e.g., particle image velocimetry). The main objective of the present work was to develop a method that can be routinely used to visualize the surface of evolving gravel beds and obtain data of high quality. Several adjustments can be made to accommodate the needs of various applications.


Acknowledgments

[71] The authors would like to thank the National Science Foundation (EAR 0738759) and the Research Office of the United States Army Corps of Engineers (ARO 53512-EV) for their support for this study.