Using Embodied Data for Localization and Mapping
Version of Record online: 25 NOV 2013
© 2013 Wiley Periodicals, Inc.
Journal of Field Robotics
Volume 31, Issue 2, pages 263–295, March/April 2014
How to Cite
Schwendner, J., Joyeux, S. and Kirchner, F. (2014), Using Embodied Data for Localization and Mapping. J. Field Robotics, 31: 263–295. doi: 10.1002/rob.21489
- Issue online: 7 FEB 2014
- Manuscript Accepted: 4 OCT 2013
- Manuscript Received: 11 FEB 2013
- Federal Ministry for Economics and Technology (BMWI)
- German Space Agency (DLR). Grant Number: 50 RA 0907
Abstract
Mobile autonomous robots have finally emerged from the confined spaces of structured and controlled indoor environments. To fulfill the promise of ubiquitous robotics in unstructured outdoor environments, robust navigation is a key requirement. Research in the simultaneous localization and mapping (SLAM) community has largely focused on optical sensors to solve this problem, while the fact that the robot is a physical entity has mostly been ignored. In this paper, a hierarchical SLAM framework is proposed that takes the interaction of the robot with the environment into account. A sequential Monte Carlo filter is used to generate local map segments from a combination of visual and embodied data associations. Constraints between segments are then used to generate globally consistent maps with a focus on suitability for navigation tasks. The proposed method is experimentally verified on two different outdoor robots. The results show that the approach is viable and that rich modeling of the robot's interaction with its environment provides a new modality with the potential to improve existing visual methods and to extend the availability of SLAM to domains where visual processing alone is not sufficient.
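The sequential Monte Carlo (particle) filter mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic, one-dimensional predict/weight/resample cycle under assumed Gaussian motion and measurement noise, intended only to show the filtering pattern that the paper builds its local map segments on.

```python
import math
import random

def predict(particles, control, motion_noise=0.1):
    """Propagate each particle through a simple additive motion model."""
    return [p + control + random.gauss(0.0, motion_noise) for p in particles]

def update_weights(particles, measurement, meas_noise=0.2):
    """Weight particles by the Gaussian likelihood of a position measurement."""
    weights = [math.exp(-0.5 * ((p - measurement) / meas_noise) ** 2)
               for p in particles]
    total = sum(weights)
    return [w / total for w in weights]

def resample(particles, weights):
    """Draw a new, equally weighted particle set proportionally to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

# One filter iteration: start near x = 0, command a move of +1.0,
# then observe the position at 1.0 (all values are illustrative).
random.seed(0)
particles = [random.gauss(0.0, 0.5) for _ in range(500)]
particles = predict(particles, control=1.0)
weights = update_weights(particles, measurement=1.0)
particles = resample(particles, weights)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))  # posterior mean, close to the observed 1.0
```

In the paper's setting, the motion model would come from odometry informed by the robot's embodied interaction with the terrain, and the likelihood from visual and embodied data associations, rather than the scalar toy measurement used here.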