Mapping, navigation, and learning for off-road traversal
Article first published online: 17 DEC 2008
Copyright © 2008 Wiley Periodicals, Inc.
Journal of Field Robotics
Special Issue: Special Issue on LAGR Program, Part I
Volume 26, Issue 1, pages 88–113, January 2009
How to Cite
Konolige, K., Agrawal, M., Blas, M. R., Bolles, R. C., Gerkey, B., Solà, J. and Sundaresan, A. (2009), Mapping, navigation, and learning for off-road traversal. J. Field Robotics, 26: 88–113. doi: 10.1002/rob.20271
- Issue published online: 17 DEC 2008
- Manuscript Accepted: 11 NOV 2008
- Manuscript Received: 8 APR 2008
The challenge in the DARPA Learning Applied to Ground Robots (LAGR) project is to autonomously navigate a small robot using stereo vision as the main sensor. During this project, we demonstrated a complete autonomous system for off-road navigation in unstructured environments. The system is very robust—we can typically give it a goal position several hundred meters away and expect it to get there. In this paper we describe the main components that comprise the system, including stereo processing, obstacle and free-space interpretation, long-range perception, online terrain traversability learning, visual odometry, map registration, planning, and control. At the end of 3 years, the system we developed outperformed all nine other teams in final blind tests over previously unseen terrain. © 2008 Wiley Periodicals, Inc.