Work done while at the Autonomous Space Robotics Lab, University of Toronto.
Lighting-invariant Visual Teach and Repeat Using Appearance-based Lidar
© 2012 Wiley Periodicals, Inc.
Journal of Field Robotics
Volume 30, Issue 2, pages 254–287, March/April 2013
How to Cite
McManus, C., Furgale, P., Stenning, B. and Barfoot, T. D. (2013), Lighting-invariant Visual Teach and Repeat Using Appearance-based Lidar. J. Field Robotics, 30: 254–287. doi: 10.1002/rob.21444
- Issue online: 4 FEB 2013
- Version of Record online: 20 DEC 2012
- Manuscript Accepted: 24 OCT 2012
- Manuscript Received: 17 MAY 2012
Visual Teach and Repeat (VT&R) is an effective method for enabling a vehicle to repeat any previously driven route using only a visual sensor, without a global positioning system. However, one of the major challenges in recognizing previously visited locations is lighting change, which can drastically alter the appearance of the scene. In an effort to achieve lighting invariance, this paper details the design of a VT&R system that uses a laser scanner as the primary sensor. Unlike a traditional scan-matching approach, we apply appearance-based computer vision techniques to laser intensity images for motion estimation, providing the benefit of lighting invariance. Field tests were conducted in an outdoor, planetary analogue environment over an entire diurnal cycle, repeating a 1.1 km route more than 10 times with an autonomy rate of 99.7% by distance. We describe, in detail, our experimental setup and results, as well as how we address the various off-nominal scenarios related to feature-poor environments, hardware failures, and estimation drift. An analysis of motion distortion and a comparison with a stereo-based system are also presented. We show that even without motion compensation, our system is robust enough to repeat long-range routes accurately and reliably. © 2012 Wiley Periodicals, Inc.
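To give a flavor of what "appearance-based" matching on lidar intensity data means, the sketch below matches a teach-pass image patch against a shifted repeat-pass view using normalized cross-correlation on synthetic intensity values. This is not the paper's pipeline (which uses sparse keypoint features extracted from actual laser intensity images); all names and the synthetic data are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's method): appearance-based patch matching
# on a synthetic lidar-style intensity image via normalized cross-correlation.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_patch(img, template):
    """Brute-force search for the offset in img that best matches template."""
    th, tw = template.shape
    best_score, best_yx = -np.inf, (0, 0)
    for y in range(img.shape[0] - th + 1):
        for x in range(img.shape[1] - tw + 1):
            score = ncc(template, img[y:y + th, x:x + tw])
            if score > best_score:
                best_score, best_yx = score, (y, x)
    return best_yx, best_score

# Synthetic intensity image: low-level speckle plus one bright "landmark".
rng = np.random.default_rng(0)
img = rng.random((60, 80)) * 0.2
img[20:30, 40:50] += 1.0                      # bright landmark
template = img[18:32, 38:52].copy()           # teach-pass keyframe patch
repeat_view = np.roll(img, (3, -4), axis=(0, 1))  # repeat pass, camera shifted

(y, x), score = match_patch(repeat_view, template)
print((y, x), round(score, 3))  # → (21, 34) 1.0 — the landmark relocated
```

Because the match operates on reflectance-style intensity values returned by the sensor itself rather than on sunlight-dependent camera imagery, the same correlation works regardless of ambient lighting, which is the core idea the abstract describes.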