Map Building Fusing Acoustic and Visual Information using Autonomous Underwater Vehicles
Article first published online: 24 JUL 2013
© 2013 Wiley Periodicals, Inc.
Journal of Field Robotics
Volume 30, Issue 5, pages 763–783, September/October 2013
How to Cite
Kunz, C. and Singh, H. (2013), Map Building Fusing Acoustic and Visual Information using Autonomous Underwater Vehicles. J. Field Robotics, 30: 763–783. doi: 10.1002/rob.21473
- Issue published online: 6 AUG 2013
- Manuscript Accepted: 18 JUN 2013
- Manuscript Received: 25 OCT 2012
- National Science Foundation Censsis ERC. Grant Number: EEC-9986821
- National Oceanic and Atmospheric Administration. Grant Number: NA090AR4320129
We present a system for automatically building three-dimensional (3-D) maps of underwater terrain, fusing visual data from a single camera with range data from multibeam sonar. The six-degree-of-freedom location of the camera relative to the navigation frame is derived as part of the mapping process, as are the attitude offsets of the multibeam head and the onboard velocity sensor. The system uses pose graph optimization and the square root information smoothing and mapping framework to simultaneously solve for the robot's trajectory, the map, and the camera location in the robot's frame. Matched visual features are treated within the pose graph as images of 3-D landmarks, while multibeam bathymetry submap matches impose relative pose constraints linking robot poses from distinct tracklines of the dive trajectory. The navigation and mapping system works across a variety of deployment scenarios and on robots with diverse sensor suites. Results are presented for mapping the structure and appearance of a section of coral reef, using data acquired by the Seabed autonomous underwater vehicle.
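The core idea of the pose graph described above can be illustrated with a minimal sketch: poses along survey tracklines are linked by odometry constraints, and a submap match between two distinct tracklines adds a relative pose constraint that ties the trajectory together. The sketch below is not the authors' implementation; it is a hypothetical, stripped-down linear version (2-D positions only, no attitude or landmarks) that solves the resulting least-squares problem via a matrix factorization, in the spirit of square root smoothing and mapping.

```python
import numpy as np

def build_system(n_poses, constraints, dim=2):
    """Stack one linear residual row per constraint dimension.

    constraints: list of (i, j, z, w) meaning pose_j - pose_i = z
    (an odometry step or a submap-match offset), weighted by w.
    A prior anchors pose 0 at the origin so the system is well posed.
    """
    n = n_poses * dim
    rows, rhs = [], []
    for d in range(dim):                 # prior on pose 0
        r = np.zeros(n); r[d] = 1.0
        rows.append(r); rhs.append(0.0)
    for i, j, z, w in constraints:       # relative pose constraints
        for d in range(dim):
            r = np.zeros(n)
            r[j * dim + d] = w
            r[i * dim + d] = -w
            rows.append(r); rhs.append(w * z[d])
    return np.array(rows), np.array(rhs)

def solve_poses(n_poses, constraints, dim=2):
    """Least-squares solve (QR-based under the hood, akin to
    factoring the square root information matrix)."""
    A, b = build_system(n_poses, constraints, dim)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n_poses, dim)

# Hypothetical example: two parallel tracklines with three odometry
# steps, plus one cross-trackline "submap match" closing the loop.
odometry = [
    (0, 1, np.array([1.0, 0.0]), 1.0),
    (1, 2, np.array([0.0, 1.0]), 1.0),
    (2, 3, np.array([-1.0, 0.0]), 1.0),
]
submap_match = [(0, 3, np.array([0.0, 1.0]), 1.0)]
poses = solve_poses(4, odometry + submap_match)
```

In the full nonlinear problem the residuals also involve attitude, camera landmark reprojections, and sensor offsets, so the system is relinearized and refactored iteratively rather than solved in one shot.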