†The research leading to this article has received funding from the European Community's Seventh Framework Program (FP7/2007-2013) under grant agreements no. 231855 (www.sfly.org), no. 266470 (www.mycopter.eu), and no. 285417 (www.fp7-icarus.eu). Stephan Weiss is a technologist at NASA-JPL/CalTech (email: firstname.lastname@example.org). Markus W. Achtelik, Simon Lynen, and Laurent Kneip are currently Ph.D. students at ETH Zurich (email: email@example.com, firstname.lastname@example.org, email@example.com). Michael C. Achtelik is CEO of Ascending Technologies GmbH (email: firstname.lastname@example.org). Margarita Chli is a senior researcher at, and deputy director of, the Autonomous Systems Lab (ASL) at ETH Zurich (email: email@example.com). Roland Siegwart is a full professor at ETH Zurich and head of the ASL (email: firstname.lastname@example.org).
Monocular Vision for Long-term Micro Aerial Vehicle State Estimation: A Compendium
Article first published online: 6 AUG 2013
© 2013 Wiley Periodicals, Inc.
Journal of Field Robotics
Volume 30, Issue 5, pages 803–831, September/October 2013
How to Cite
Weiss, S., Achtelik, M. W., Lynen, S., Achtelik, M. C., Kneip, L., Chli, M. and Siegwart, R. (2013), Monocular Vision for Long-term Micro Aerial Vehicle State Estimation: A Compendium. J. Field Robotics, 30: 803–831. doi: 10.1002/rob.21466
- Issue published online: 6 AUG 2013
- Manuscript Accepted: 13 MAY 2013
- Manuscript Received: 12 SEP 2012
The recent technological advances in Micro Aerial Vehicles (MAVs) have triggered great interest in the robotics community, as their deployability in surveillance and reconnaissance missions has now become a realistic prospect. The state of the art, however, still lacks solutions that can operate for long durations in large, unknown, GPS-denied environments. Here, we present our visual pipeline and MAV state-estimation framework, which fuses feeds from a monocular camera and an Inertial Measurement Unit (IMU) to achieve real-time, onboard autonomous flight in general and realistic scenarios. The challenge lies in meeting the power and weight restrictions onboard a MAV while providing the robustness necessary for real, long-term missions. This article provides a concise summary of our work on achieving the first onboard, vision-based, power-on-and-go system for autonomous MAV flights. We discuss the lessons learned throughout the different stages of this research, from the conception of the idea to the thorough theoretical analysis of the proposed framework and, finally, its real-world implementation and deployment. Covering the onboard estimation of monocular visual odometry, the sensor fusion strategy, the state estimation and self-calibration of the system, and finally some implementation issues, the reader is guided through the different modules comprising our framework. The validity and power of this framework are illustrated via a comprehensive set of experiments in a large outdoor mission, demonstrating successful operation over flights covering a trajectory of more than 360 m and an altitude change of 70 m.