Autonomous off-road navigation of robotic ground vehicles has important applications on Earth and in space exploration. Progress in this domain has been slowed by the limited lookahead range of three-dimensional (3D) sensors and by the difficulty of heuristically programming systems to understand the traversability of the wide variety of terrain they may encounter. Enabling robots to learn from experience may alleviate both of these problems. We define two paradigms for this, learning from 3D geometry and learning from proprioception, and describe initial instantiations of them as developed under DARPA and NASA programs. Field test results show promise for learning the traversability of vegetated terrain and for learning to extend the lookahead range of the vision system. © 2007 Wiley Periodicals, Inc.