Two dogs, new tricks: A two-rover mission simulation using K9 and FIDO at Black Rock Summit, Nevada



[1] An experiment illustrating two rovers cooperatively exploring a field site was performed at Black Rock Summit, Nevada, in May 2000. The rovers FIDO and K9 are mechanically identical prototype planetary rovers designed at the Jet Propulsion Laboratory. FIDO carried high-resolution false-color infrared and low-resolution monochrome stereo cameras and an infrared point spectrometer on a mast-mounted pointable platform, a manipulator arm equipped with a color microscopic imager, and a coring drill for sample collection. K9 carried high-resolution color and low-resolution monochrome stereo cameras and a Laser Induced Breakdown Spectrometer for standoff elemental analysis on a mast-mounted pointable platform. A team located at Jet Propulsion Laboratory commanded the two rovers for 3 days. K9 obtained stereo images of targets, and three-dimensional models were constructed to determine the best locations for FIDO to obtain core samples. A drilling target was selected 1.5 m from the starting position of FIDO. Six command cycles and 2 m of traversing were required for FIDO to reach, drill into, and place an instrument on the target. K9 required 11 command cycles to traverse 60 m and obtain full-coverage stereo images of two rock targets along its route. Virtual reality-based visualization software called Viz provided situational awareness of the environment for both rovers. Commands to K9 were planned using Viz, resulting in improved rover performance. The results show that two rovers can be used synergistically to achieve science goals, but further testing is needed to completely explore the value of two-rover missions.

1. Introduction

[2] Rovers represent an important capability for Martian surface exploration. The Pathfinder mission first demonstrated the practical benefits of rovers on Mars [Rover Team, 1997; Golombek et al., 1997, 1999; Moore et al., 1999]. The Pathfinder Sojourner rover traversed about 90 m while analyzing a variety of rocks and soils as well as performing experiments to determine the material properties of the Martian surface. The Mars Surveyor program is planning a series of rover missions designed for in situ exploration culminating with the eventual return of Martian samples to Earth. This program will employ capable rovers carrying sophisticated science payloads. The next rover mission, called the Mars Exploration Rover, or MER, is currently under development. Two rovers, each carrying a suite of instruments called the Athena payload [Squyres et al., 1999, 2001], will be launched to different landing sites in 2003. The expected operational lifetime of these rovers (nominally 90 Mars sols) is comparable to the duration of the Pathfinder mission, but the rovers must each traverse 1 km or more, or 10 times farther than the Sojourner rover. Missions beyond 2003 are planned with 10–100 km traverse range, while still having similar constraints on mission duration. To achieve these increases in traverse distance, and to maximize scientific return, scientists and mission planners need adequate capability for accurately controlling the robots and easily interpreting the rover-based observations. The interaction between rover operations and science teams that must plan these activities is another crucial element. Methods must be found to involve scientists effectively in the daily planning process and to keep complex operations moving smoothly and efficiently over many months.

[3] Mission simulations using instrumented rovers in terrestrial field sites can provide the experience needed to design mission operations and science analysis methods critical to performing effective exploration of Mars [Committee on Lunar and Planetary Exploration (COMPLEX), 1999]. Mission simulations at terrestrial field sites are a cost-effective way to expose a broad segment of the science community to the experience of rover operations. They are also an important element in training science and operations teams to perform accurately and efficiently. The accuracy of science interpretations based on rover-derived data can be evaluated with ground truth from the field, thereby enhancing their training potential. The most robust approach is to closely simulate a rover mission on Mars with science teams using only observations of equivalent quality to those obtainable from actual Mars missions. Field experiments along these lines have previously been performed using the Russian-developed Marsokhod rover [Greeley et al., 1994; Stoker, 1998; Christian et al., 1997; Stoker et al., 2001], the FIDO rover developed at the Jet Propulsion Laboratory [Arvidson et al., 2000, 2002], and the Nomad rover developed by Carnegie Mellon University [Cabrol et al., 2001a, 2001b].

[4] The field experiment reported in this paper was the first to involve two rovers deployed at the same location cooperatively exploring a site. One objective of this experiment was to evaluate how two rovers might be used together to achieve the science objectives of a mission. The two-rover experiment was based, in part, on programmatic guidance from NASA, the sponsor of the activity. NASA is interested in using multiple cooperating robots for future planetary exploration and wanted to explore science applications for such missions. The upcoming MER mission plan calls for two rovers to be operated simultaneously on the Martian surface, although at different locations. It is therefore useful to explore the challenges in operating two rovers simultaneously and to evaluate whether one operations and science team can deal with both rovers or whether it is better to have separate teams for each rover.

[5] A second objective of this experiment was to use newly developed virtual reality (VR) visualization software for planning rover operations and to evaluate the performance improvement afforded by this capability. Visualizing robotic operation in a remote environment is crucial for mission success. Even with the rather crude graphic rendering capabilities of the 1970s, terrain visualization capabilities were used on the Viking lander mission [Liebes and Schwartz, 1977] to help understand the environment. Sophisticated VR-based capabilities were used for the Mars Pathfinder Mission [Stoker et al., 1999], and even greater capabilities were planned for the Mars Polar Lander mission [Stoker and Zbinden, 2000; Nguyen et al., 2000]. The sophistication of reconstruction and rapidity of information display has tracked that of computer technology. Large-scale applications of planetary mapping and synthetic reconstruction of terrain from orbital data sets are now in common use [DeJong et al., 1991; Kirk et al., 1992; Batson et al., 1994; Batson and Eliason, 1995; Li et al., 1996]. Also, new paradigms for human-computer interaction now allow scientists to visualize information with unprecedented ease and fidelity.

[6] One of the key technologies for enhancing situational awareness is that developed for virtual reality. VR can improve the capabilities of scientists to understand information from robotic exploration vehicles by giving them user interface tools that provide a sense of presence in the remote environment [McGreevy, 1992, 1993]. Throughout the last decade, operator interface tools using VR have been developed for controlling underwater exploration vehicles [Stoker et al., 1995], a walking robot exploring a volcanic caldera [Fong et al., 1995; Bares and Wettergreen, 1997], and planetary surface rovers in mission simulations [Stoker, 1998; Christian et al., 1997; Stoker et al., 2001]. In some previous field experiments [Christian et al., 1997; Stoker, 1998], VR was used to display the rover's position and state within the terrain to operators and science teams, but because VR requires fast graphical rendering, interactive rendering of image textures was not possible on workstations of that vintage. For science analysis it is important to have image information coregistered with three-dimensional (3-D) models. Three-dimensional terrain models with coregistered image texture were first used on the Mars Pathfinder mission [Stoker et al., 1999] viewed with a VR visualization interface that allowed the terrain model to be browsed interactively with scene rendering at frame rates up to 30 Hz. However, Pathfinder 3-D models were built with stereo images from the lander's IMP camera [Smith et al., 1997] and had limited utility for planning rover operations. Sojourner rover cameras were poorly suited for 3-D modeling.

[7] The experiment reported in this paper utilized Viz, a new visualization package uniquely tailored to plan, visualize, and replay rover activities. Viz was used in this experiment to plan commands to the K9 rover and to visualize the operations of both rovers.

2. Geological Setting

[8] The field experiment was conducted in northeastern Nevada in an uninhabited area along state highway Route 6, between Tonopah and Ely, northeast of the Lunar Crater volcanic field. The site and its geology are described by Arvidson et al. [2002]. The area is characterized by extensive outcrops of rhyolite flows and ash deposits that have been hydrothermally altered, together with an abundance of relatively recent basalt flows and ash deposits [Quinlivan et al., 1974]. Furthermore, lava flows overlie sedimentary deposits and may have played a role in hydrothermal alteration. Thus the materials found at the site represent the types of materials that will be explored on Mars by rovers. Another advantage of the site is that it was the focus of the 1989 NASA Geologic Remote Sensing Field Experiment, in which airborne remote sensing was acquired, including visible to near-infrared, thermal infrared, and radar data [Arvidson and Guinness, 1989] augmented by surface measurements acquired by a field team for calibration and ground truth. These represent the types of data that will be acquired by orbital observations and analyzed to provide regional context for landers and rovers on Mars. Finally, the field site represented a variety of terrains (from smooth to very rough) and hazards (block fields, slopes, ravines) that are useful for testing rover mobility.

[9] Data from the 1989 NASA Geologic Remote Sensing Field Experiment, and a set of simulated descent images, were provided to the Science Team in advance of the test to help prepare for the experiment. Arvidson et al. [2002] provides additional information about these data sets. Figure 1 shows the location of the landing site where the two-rover field experiment was performed. This site was in a small arroyo located just to the south of, and intersecting, a cliff-forming outcrop that is part of the adjacent hill.

Figure 1.

(a) Aerial image of the landing site acquired 29 May 1994. East-west length is ∼8 km. North is up in the image. (b) Descent image of landing site, acquired 21 March 2000, 10:30 AM local time at altitude of 1524 m. The highway is 10 m wide. North arrow is shown. The starting position of the rovers is designated “Landing Site” in both images.

3. Mission Approach

[10] The test was designed to examine a scenario in which two rovers cooperate to explore the same field site. The scenario design took advantage of, and was informed by, the instrument payloads already carried by the FIDO and K9 rovers. At the time the experiment was planned, FIDO was optimized for in situ analysis and sample collection, whereas K9 was only capable of remote sensing. Previous rover tests have shown that sample collection and in situ analysis are time-consuming activities and that trade-offs must be made between exploration (traversing long distances) and sampling. Thus we chose an approach that used two rovers to optimize both exploratory traversing and in situ analysis. FIDO was assigned the role of sampling rover, performing drilling and in situ analysis, while K9 was assigned the role of "scout" rover, traversing long distances and collecting stereo imaging to produce high-resolution 3-D models of targets for evaluating whether FIDO could successfully drill them. Previous experiments with the FIDO rover showed that a scout would be useful for acquiring detailed maps of drill targets so that FIDO could approach from the optimum direction and move over and drill into targets with the fewest fine-scale maneuvers.

[11] The mission plan was as follows. The two rovers were to start next to one another (1–2 m apart) as if they were landed as a single package. Each rover was to acquire a full (∼360°) panorama in order to characterize the landing site and the rover's position within it. Using these initial images, the Science Team was to define a set of potential coring targets. K9 was to be commanded to move to the first target, acquire stereo imaging, and develop a 3-D model of it and the surrounding area. K9 would then circle the target and obtain stereo images from additional vantage points for developing a full coverage 3-D model of it. This information would be used to decide on the scientific importance of the target and whether or not FIDO could obtain a core from the rock. Terrain models generated from these data would help determine an optimal approach for FIDO to drill the target. Coring requires FIDO to drive over the target and lower the Mini-Corer onto it, so only targets with acceptable size and shape can be drilled.

[12] While K9 was approaching and mapping the first target, FIDO would remain at the landing site, acquiring other data of scientific interest. Once K9 data confirmed the importance and approachability of a target, FIDO would be commanded to approach it, deploy the Mini-Corer, and drill it. The Mini-Corer force sensor would be used to determine if the bit had reached the surface, and a Bellycam image would be acquired for confirmation. During the period when FIDO approached and attempted to drill the target, K9 would move to assess the potential for drilling other targets selected from the original panoramas. In so doing, K9 would also collect additional imaging and compositional data during its traverses, hence providing a more complete characterization of the area surrounding the landing site. These data would be used to select a second drill target, which K9 would then image in stereo to determine suitability for drilling and the optimal approach vector for FIDO. The sequence would then continue with FIDO approaching the target and deploying the drill while K9 moved on to map additional targets. This sequence of operations is shown schematically in Figure 2. We expected that one well-characterized target would be mapped by K9 and drilled by FIDO each day of the test.

Figure 2.

Operational steps in the two-rover mission scenario.

[13] Mission operations for this test were conducted in a control center at the Jet Propulsion Laboratory in Pasadena, California, where both the Science Team (ST) and Rover Operations Team (ROT) were located. Other than the data provided as part of the test, the ST had no prior knowledge of the field site. A field team located at the field site maintained the rovers and satellite telemetry equipment during the test. The ST and ROT were not allowed to communicate with the field team. In this sense, this was a blind test closely approximating the operation of two rovers on Mars.

[14] The two-rover mission occurred after FIDO had already been operating for several days in a nearby area [see Arvidson et al., 2002]. Thus, by the time the two-rover scenario began, the Science Team had developed hypotheses about the field site based on the simulated orbital and descent image data provided and had refined their hypotheses using data acquired from FIDO.

[15] Both rovers were directed using command cycles that closely approximated the process of directing a rover on Mars. Each command cycle involved the following steps: (1) the operations team creates and tests a command sequence for the next requested activities and uplinks it to the rover's onboard computer; (2) the rover receives the instruction set and performs the activities requested; (3) the rover sends acquired data back to the Science Team; (4) the Science Team interprets the data, decides what to do next, and requests the next set of actions. The test did not attempt to simulate clock time on Mars or communication time delays, but the time involved in going through these steps meant that command cycles typically took 2–4 hours.
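The four-step cycle above can be sketched as a simple loop. The class and function names below are illustrative stand-ins, not the actual operations software used in the test.

```python
from dataclasses import dataclass, field

@dataclass
class Rover:
    """Minimal stand-in for a remotely commanded rover (illustrative only)."""
    name: str
    log: list = field(default_factory=list)

    def execute(self, sequence):
        # steps 2-3: run the uplinked sequence, return the acquired data
        self.log.extend(sequence)
        return [f"{cmd}:data" for cmd in sequence]

def command_cycle(rover, plan_next):
    """One cycle: plan and uplink (1), execute (2), downlink (3), replan (4)."""
    sequence = plan_next()              # step 1: team builds and tests the sequence
    downlink = rover.execute(sequence)  # steps 2 and 3
    return downlink                     # step 4: team interprets, then replans

k9 = Rover("K9")
data = command_cycle(k9, lambda: ["drive_5m", "pancam_stereo"])
```

Each iteration of this loop took 2–4 hours in the test, which bounds how many activities a rover can complete per day.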

[16] FIDO and K9 were independently commanded. Thus we assumed the mission was designed to allow independent communication with each rover. This represented one option in a wide range of possible engineering choices and constraints on a mission. Each rover performed as many command cycles as possible during the test duration. The number of command cycles and data requested/received was documented separately for each rover. All personnel (ST and ROT) were housed in one room, so all involved were aware of each rover's progress. This simplified coordination between the two rovers.

[17] No constraints were placed on data volume requested during a command cycle for either rover. In an actual mission the data volume that can be sent depends on power and communications resources. The main effect of this simplification was that large panoramas could be sent in one communication opportunity whereas this might not be possible in a real mission.

[18] K9 and FIDO were operated by separate ROTs, but science decisions and data analysis for both rovers were performed by one ST. As K9 and FIDO were developed by separate groups, it was not practical to train one group to operate both rovers in the limited time allotted to the test. Furthermore, rover operation is a demanding task requiring the full attention of a dedicated team. However, as FIDO and K9 were cooperating to achieve one set of objectives, we expected that a single ST could adequately plan for and interpret observations from both rovers.

4. Rover Capabilities

4.1. FIDO Rover

[19] FIDO is a prototype planetary surface rover built at the Jet Propulsion Laboratory. It is described by Arvidson et al. [2000] and Arvidson et al. [2002]. The FIDO payload was developed to simulate the complex surface operations expected of a sample return mission focusing on identification of rock targets, approaching the targets and conducting in situ measurements, and drilling/verifying cores. Figure 3 shows the FIDO rover and its systems. Table 4 of Arvidson et al. [2002] shows details about the FIDO payload. The payload includes a mast that extends to 1.94 m height for acquisition of stereo imaging and spectral reflectance data. The mast is stowed on the rover deck when the vehicle is moving. The mast head houses a three-band (0.65, 0.75, 0.85 μm) false-color infrared stereo imaging system capable of surveying the terrain in stereo with high spatial resolution (0.38 mrad/pixel) for scientific purposes. The characteristics of the camera system, known as the FIDO PanCam, were chosen to approximate the capabilities of the human eye in terms of the use of stereo, distance above the ground, resolving power, and pointing capability. Pointing is accomplished by mounting cameras on a pan and tilt platform. Also included is Navcam, a low spatial resolution, monochromatic, wide field of view stereo imaging system used for traverse planning. Hazard avoidance cameras are located on the front and back of the vehicle to acquire stereo images and terrain maps of the areas to be traversed. Onboard autonomous hazard-avoidance software is used to decide whether obstacles are too high to be successfully traversed. If an obstacle is judged to be a hazard, the software commands the vehicle to search for and implement a traverse to go around the obstacle, while still trying to reach a waypoint designated remotely by the Science Team.
A mast-mounted infrared point spectrometer (IPS) is bore-sighted with Navcam and PanCam and acquires spectral radiance information over wavelengths from 1.3 to 2.5 μm with a spectral resolution of ∼10 cm⁻¹. An IPS pixel covers approximately 9 by 9 PanCam pixels. The IPS can be used both in a point mode and in a raster mode to form an image cube. The IPS instrument is described by Haldemann et al. [2002].
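The figures above fix the IPS footprint: at roughly 9 × 9 PanCam pixels of 0.38 mrad each, one IPS sample subtends about 3.4 mrad, so the averaged spot grows linearly with range. A small sketch of that arithmetic (constants taken from the text; the function name is ours):

```python
PANCAM_IFOV_MRAD = 0.38   # PanCam angular resolution, mrad/pixel (from text)
IPS_SPAN_PIXELS = 9       # one IPS pixel covers ~9 x 9 PanCam pixels (from text)

ips_ifov_mrad = IPS_SPAN_PIXELS * PANCAM_IFOV_MRAD   # ~3.4 mrad

def spot_size_m(range_m, ifov_mrad=ips_ifov_mrad):
    """Approximate diameter of the patch one IPS sample averages over."""
    return range_m * ifov_mrad * 1e-3

spot_at_10m = spot_size_m(10.0)   # ~3.4 cm spot on a rock face 10 m away
```

This is why IPS spectra of distant targets, such as the cliff wall, mix material over patches of several centimeters or more.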

Figure 3.

(left) The FIDO rover showing its payload, including mast carrying a science and navigation camera and an infrared point spectrometer (a), hazard avoidance cameras (b), and a four degree-of-freedom arm equipped with a Color Microscopic Imager (c). Under the chassis is the Mini-Corer core drill (d). Belly cameras (e) help visualize drilling. (right) The K9 rover showing its payload, including mast-mounted color stereo cameras (a), navigation cameras (b), and optical element for Laser Induced Breakdown Spectrometer (c). Mounted on the front and aft body are cameras for hazard avoidance (d).

[20] A four degree-of-freedom arm is included on the front of the FIDO Rover. The end effector on the arm includes a Color Microscopic Imager (CMI) which can be placed against rock and soil targets to acquire close-up views. The Mini-Corer on FIDO is a rock drill that is mounted on the underside of the rover chassis and can be commanded to pitch down and acquire 0.5 cm diameter by up to 1.7 cm long cores. Cameras mounted on the underside, or “belly,” of FIDO monitor drill deployment. The core can be extracted from the rock and examined with the microscopic imager. Once a core's presence is confirmed, it can be either ejected or kept and placed in a caching tube.

[21] For these field trials, FIDO was commanded using the WITS rover operations program described by Backes and Norris [2000] and Backes et al. [2000].

4.2. K9 Rover

[22] The K9 rover was developed as a test bed for autonomy, navigation, instrument, and mission operations technologies for future rover missions. The K9 chassis, built at the Jet Propulsion Laboratory, is kinematically identical to the FIDO chassis. The electronics architecture was developed at NASA Ames Research Center to allow incorporation of autonomy technologies, facility instruments for high-resolution imaging and elemental analysis, wide-angle imagers for obstacle avoidance and navigation, and a modular interior and exterior layout to facilitate future integration with additional user payloads.

[23] For the May 2000 field test the instrument payload was focused on the task of performing high-resolution remote sensing of geological targets. Figure 3 shows the K9 rover and its instruments. A fixed mast carried two stereo camera sets (HawkEye and WideEye) on a pan (azimuth range −89 to +258°) and tilt (elevation range −88° to +77°) platform. Instruments were mounted 1.5 m above the ground plane. Table 1 gives detailed specifications of the K9 camera system. The HawkEye camera system, designed for scientific imaging, is a stereo pair of color cameras with a 14.2° × 17° field of view (FOV). The angular resolution, stereo, and color capability approximate the performance of the human eye, albeit with a wider stereo baseline, which accommodates building stereo models over a wider distance range from the camera. WideEye is a pair of wide-angle monochrome imagers (22.5° × 17° FOV) cosighted with HawkEye, designed to be used for navigational purposes. Additional copies of the WideEye camera set were incorporated on the front and rear of the K9 chassis and are used for hazard avoidance.

Table 1. K9 Camera Specifications

Function             Science Imaging       Navigation
Image Format         800 × 960 pixels      510 × 492 pixels
Pixel Size           10.8 × 10.8 microns   9.2 × 7.2 microns
Angular Resolution   0.31 mrad/pixel       0.77 × 0.60 mrad/pixel
Stereo Baseline      27.9 cm               10.9 cm
Color Method         Filter Wheel          Panchromatic
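The baselines and angular resolutions in Table 1 determine how depth uncertainty grows with range in the stereo models used to judge drillability. A sketch of the standard first-order estimate δz ≈ z²·δθ/B, under the assumption that the stereo matcher resolves disparity to about one pixel:

```python
def range_resolution_m(z_m, baseline_m, ifov_rad, disparity_px=1.0):
    """First-order stereo depth uncertainty dz ~ z**2 * dtheta / B,
    assuming disparity is resolved to about disparity_px pixels."""
    return (z_m ** 2) * (disparity_px * ifov_rad) / baseline_m

# HawkEye (science): 27.9 cm baseline, 0.31 mrad/pixel
hawkeye_dz = range_resolution_m(5.0, 0.279, 0.31e-3)   # ~3 cm at 5 m range
# WideEye (navigation): 10.9 cm baseline, 0.77 mrad/pixel (horizontal)
wideeye_dz = range_resolution_m(5.0, 0.109, 0.77e-3)   # ~18 cm at 5 m range
```

The quadratic growth with range is why K9 imaged scout targets from about 2 m: the wider HawkEye baseline then yields depth precision fine enough to assess a rock's shape for coring.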

[24] K9 also carried on its mast a Laser Induced Breakdown Spectrometer (LIBS), a remote-sensing instrument for measuring the elemental composition of rocks and soils from up to 20 m away [see Wiens et al., 2002]. The LIBS operates by illuminating a target site with a high-intensity laser and converting a small amount of site material to a plasma that radiates visible light which is analyzed using a spectrometer to determine elemental composition. Some of the LIBS power and control electronics were mounted inside the K9 rover chassis, and some were external. A set of optics was comounted with the WideEye and HawkEye cameras. The mast-mounted equipment consists of a high-power laser, a spectrometer foreoptic, a motor-driven focusing stage, and a small rangefinder connected into rover electronics by a fiber optic cable, a serial cable, and a motor control cable.

[25] LIBS was integrated with K9 and deployed at the field site, and data were collected prior to the start of the simulated science mission [Wiens et al., 2002]. However, a major forest fire in Los Alamos, New Mexico, caused the LIBS support team to depart the field site before the start of the two-rover mission. Thus LIBS measurements were not collected during this simulation.

4.3. Visualization of Rover Operations Using Viz

[26] Demonstrating and evaluating the Viz VR-based visualization system was a key objective of the test. Viz is a software package that allows data visualization and user interaction with 3-D objects rendered along with the 3-D scene. The architecture of Viz is based on the client-server paradigm. The core 3-D rendering module is implemented as a server to which clients connect to interact with objects in the virtual environment. Viz can read Virtual Reality Modeling Language files defining the geometry and kinematics of robotic mechanisms. This allows the articulated motion of the rover over the terrain to be visualized. Along with the 3-D terrain model, an articulated interactive model of the rover is displayed, including instrument platforms that represent those on the MER. The user can select and move the instrument platforms while receiving feedback on the corresponding joint angles. Sensor fields of view can be displayed in Viz as colored, semitransparent, sensor-centered pyramids (Figure 4b). Simultaneous display of multiple viewpoints allows a subwindow to display a simulated image sensor view along with an “outside” view (Figure 4c). This provides a very intuitive means of pointing instruments, such as cameras or robotic arms. The user can interactively aim the camera at interesting features and generate pointing for command sequences. Viz uses a simulation methodology developed by Flueckiger [1998] called “Virtual Robot” to compute the position and orientation of the rover body and all its linkages, given input commands. The algorithm allows the wheels to follow the terrain and can produce simulated traverse behavior of the rover. This simulation capability allows operators to test various driving command sets and provides an informative visualization of the rover driving on the terrain. Viz can display animated command sequences by reading a sequence file and moving the model of the rover or instrument platforms accordingly within the 3-D scene.
Intermediate positions are interpolated to give the impression of smooth movement of the objects. Events involving imaging or spectra are represented by displaying the instrument field of view for one second. Features incorporated in Viz allow the user to perform interactive measurements on the 3-D terrain. Tools are implemented to measure position, distance, area, and volume within a user-specified curve and to plot the profile of the terrain along a line defined by two points. Markers can be displayed in the scene to mark specific points, much as a pencil would be used to mark a map (Figure 4d). Objects of known size and scale bars can be placed in the scene to provide an intuitive sense of scale (Figure 4e).
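The profile tool can be pictured as sampling the terrain model's elevation along the user-drawn line. The sketch below is a minimal illustration, with a toy height function standing in for the textured 3-D model:

```python
def terrain_profile(height_fn, p0, p1, n=50):
    """Sample elevation along the line from p0 to p1, the measurement
    Viz provides interactively (height_fn stands in for the 3-D model)."""
    (x0, y0), (x1, y1) = p0, p1
    profile = []
    for i in range(n + 1):
        t = i / n
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        profile.append(height_fn(x, y))
    return profile

# toy terrain: a uniform 10% slope along x
prof = terrain_profile(lambda x, y: 0.1 * x, (0.0, 0.0), (10.0, 0.0), n=10)
```

Such profiles were useful for questions like whether a rock's top surface is flat enough for the Mini-Corer to seat on.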

Figure 4.

Image of a Viz screen illustrates features. The background is a 3-D terrain model from Mars Pathfinder. (a) Articulated rover model placed in the terrain. (b) Camera field of view. (c) A second window showing estimated camera view. (d) Virtual markers map activities. (e) Human figure provides a sense of the scale. (f) The rover is part of the background terrain model.

4.4. Field Test Use of Viz

[27] Viz was used extensively during the field test for planning commands to the K9 rover. A typical commanded operation was to first turn toward and then drive to a distant object, point the mast camera at the object (from its new position at the end of the drive), and acquire stereo images of it. Using Viz, the virtual rover was presented as a 3-D object within a terrain model built from previously acquired stereo images. The virtual rover was turned and moved along a desired course to its end position. Then the pan and tilt position of the camera needed to capture the object was determined using the simulated instrument view. Slippage of the wheels during a drive results in uncertainty in the rover's position at the end of a drive command. To ensure that the desired target was captured despite this uncertainty, a panorama of images spanning a range of pan and tilt angles, centered on the predicted position of the target, was typically acquired.
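Determining the mast pan and tilt needed to capture a target reduces to simple geometry once the rover's predicted end-of-drive pose is known. A minimal sketch, assuming a rover-centered frame and the 1.5 m instrument height quoted earlier (the function is illustrative, not the actual Viz pointing code):

```python
import math

def mast_pan_tilt(target_xyz, mast_height_m=1.5):
    """Pan and tilt (degrees) to aim the mast camera at a target given in
    a rover-centered frame (x forward, y left, z up). Illustrative
    geometry only; the real pointing used Viz's simulated camera view."""
    x, y, z = target_xyz
    dz = z - mast_height_m
    pan = math.degrees(math.atan2(y, x))
    tilt = math.degrees(math.atan2(dz, math.hypot(x, y)))
    return pan, tilt

# a rock 5 m ahead and 1 m to the left, on the ground plane
pan, tilt = mast_pan_tilt((5.0, 1.0, 0.0))   # pan ~11 deg, tilt ~ -16 deg
```

In practice the commanded panorama spanned a margin of pan and tilt angles around this nominal solution to cover the position uncertainty left after wheel slip.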

[28] Navigating a course was accomplished using a combination of point turns and straight line drives. K9 did not incorporate automated hazard avoidance for this test, so obstacle avoidance was accomplished using Viz to determine hazard locations and plan drives to avoid them. The rover used dead reckoning to attempt to execute commanded turns and traverses. However, errors occurred due to wheel slippage and other unknowns. Viz was used to understand the rover's actual position at the end point of each move to avoid cumulative errors.
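Dead reckoning of this kind integrates commanded point turns and straight drives into a pose estimate, and a systematic slip term shows why that estimate drifts. A toy model (the single slip factor is a simplification of the real error sources):

```python
import math

def dead_reckon(pose, commands, slip=0.0):
    """Integrate commanded point turns and straight drives into a pose
    (x, y, heading_rad). The slip factor crudely models the under-travel
    that made Viz-based position checks after each move necessary."""
    x, y, th = pose
    for kind, value in commands:
        if kind == "turn":            # value: radians, positive counterclockwise
            th += value * (1.0 - slip)
        elif kind == "drive":         # value: meters along the current heading
            d = value * (1.0 - slip)
            x += d * math.cos(th)
            y += d * math.sin(th)
    return x, y, th

# the same commanded plan with and without 10% slip
plan = [("turn", math.pi / 2), ("drive", 4.0)]
ideal = dead_reckon((0.0, 0.0, 0.0), plan)
real = dead_reckon((0.0, 0.0, 0.0), plan, slip=0.10)
```

Because a heading error early in a plan displaces every subsequent drive, re-localizing in Viz at the end of each move, rather than after a long sequence, keeps the error from compounding.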

[29] Future plans for Viz call for using it to construct command sequences. However, for this test, rover commands were constructed with a simpler graphical interface called the Virtual Dashboard. Figure 5 shows two of the Virtual Dashboard's panels. A command sequence is constructed by entering command events sequentially into the Virtual Dashboard. For example, a drive command followed by pointing the camera is entered by specifying the drive speed and distance and then specifying the camera pointing for image acquisition. As each command is entered, it is written to a sequence file in a language called Contingent Rover Language, or CRL [Bresina et al., 1999]. CRL is a sequencing language that allows sequences to contain conditional branches. This allows the rover operator to create more robust sequences by enabling the planner to take into account situations that may arise during execution of the sequence, such as failure of a command to execute as expected. Once the operator has completed command entry, the sequence file is uploaded to the rover where the onboard Conditional Executive interprets the sequence, starts execution, monitors its progress, and selects alternative plan branches if conditions change.
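The idea behind CRL's conditional branches can be illustrated with a toy executive in which each step carries a contingency branch taken on failure. This is a sketch of the concept only; actual CRL syntax and the onboard Conditional Executive differ.

```python
def run_sequence(sequence, execute):
    """Toy conditional executive. Each step is (command, on_fail_branch);
    if execute(command) fails, the executive abandons the remaining plan
    and switches to the contingency branch, as CRL's conditional
    branches allow. (Illustrative only; real CRL semantics differ.)"""
    log = []
    steps = list(sequence)
    while steps:
        command, on_fail = steps.pop(0)
        ok = execute(command)
        log.append((command, ok))
        if not ok:
            steps = list(on_fail)   # jump to the contingency branch
    return log

# a drive fails, so the rover falls back to imaging from where it stopped
seq = [("drive_12m", [("image_from_here", [])]), ("image_target", [])]
log = run_sequence(seq, lambda cmd: cmd != "drive_12m")
```

Branches like this let a single uplink remain useful even when a command does not execute as expected, which matters when each command cycle costs hours.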

Figure 5.

Panels of the graphical user interface called the Virtual Dashboard in which rover commands were created. Sliders and grabbers are used to select rover turn angles, drive speeds and distances, camera pointing, etc.

[30] Once a command sequence was generated, the animation feature of Viz was used to visualize it for error checking. The operator could watch an animated movie showing every action the rover was commanded to perform and thereby easily spot any errors. For example, sign errors in turn commands could be spotted instantly because the virtual rover would turn in the wrong direction and not move toward the desired object.

5. Mission Description

[31] FIDO and K9 began the test separated by 1 m at the simulated landing site. Figure 6 shows an aerial photograph of the landing site. Each rover first acquired a 360° panorama using its high-resolution science cameras (PanCam and HawkEye). The starting panoramas were acquired in panchromatic mode and, to achieve acceptable data volume, used pixel averaging to reduce image resolution to 1/4 of the maximum achievable angular resolution. Figure 7 shows the starting panorama acquired by K9. On the basis of this panorama, the Science Team identified two different rock units in the floor of the arroyo: dark-colored rocks and light-colored rocks. Near the starting position (1.5 m from FIDO) a group of rocks on the arroyo floor nicknamed "the Campfire" was identified. This grouping contained several examples of light-colored rocks that appeared to be the correct size and shape for drilling. The team decided to select a light-colored rock from this group as the first coring target. However, as FIDO was between K9 and the Campfire, it was impractical to use K9 to scout this target because it would have required driving around FIDO. FIDO was already well positioned to obtain the required stereo imaging, and the suitability of one of the targets for drilling was verified by building 3-D models from these images and visualizing the drill deployment using Viz. K9 was directed to scout dark-rock targets farther down the arroyo. The Science Team hypothesized that these dark rocks were probably derived from the basalt flow, visible in the simulated orbital images (Figure 1a), that capped the cliff adjacent to the arroyo. K9 was sent toward one of these targets, selected from the initial panorama.
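The pixel averaging used for the starting panoramas amounts to block-mean downsampling. A minimal pure-Python sketch, assuming a 4 × 4 averaging block yields the 1/4-resolution product described above:

```python
def bin_image(img, factor=4):
    """Average factor x factor pixel blocks, reducing resolution to
    1/factor of full resolution. Sketch of the pixel averaging used to
    keep panorama data volume acceptable (block size is our assumption)."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h - h % factor, factor):
        row = []
        for c in range(0, w - w % factor, factor):
            block = [img[r + i][c + j] for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# an 8 x 8 horizontal ramp binned 4x becomes a 2 x 2 image
img = [[float(c) for c in range(8)] for _ in range(8)]
small = bin_image(img, 4)
```

Binning by 4 in each axis cuts data volume by a factor of 16 while preserving the broad scene context needed to select targets, which full-resolution follow-up imaging then characterized.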

Figure 6.

Aerial photograph of the test site showing the path of K9 down the arroyo (dotted line) and where FIDO operations occurred.

Figure 7.

Monochrome panorama acquired by K9 from the starting position of the test. The FIDO rover is at the far right. Structure in the bottom center of the image is the front of the K9 chassis. The shadow cast by the K9 mast is on the ground at the lower left of the image.

[32] A summary of each rover's actions during each command cycle follows. The starting panorama was counted as the first command cycle (CC1) for both rovers.

5.1. FIDO Operation

[33] FIDO's first task was to acquire a drill core of a light-colored rock initially 1.5 m from its starting position. On CC2, stereoscopic PanCam images and IPS spectra were acquired of the Campfire targets, followed by a 1 m drive in the direction of the rocks. Three-dimensional terrain models were built from the PanCam stereo images using the Ames Stereo Pipeline [Stoker et al., 1999] and displayed in Viz to help determine if the rock could be drilled. A rover sequencing error prevented FIDO from reaching the targets on the first drive. On CC3, FIDO completed its drive to the rock designated for coring, and the drill was deployed on the target but slipped off the rock. In addition, IPS data were acquired of the nearby cliff wall. On CC4, CMI data were acquired of the target, and FIDO was repositioned to again attempt to drill. Terrain models were built using Bellycam images and displayed in Viz to help visualize the drilling operation. On CC5 the drill successfully penetrated the rock but achieved only 3 mm depth; no core was acquired. On CC6 the rover was repositioned for arm deployment on another nearby target, but arm placement did not occur as bad weather forced the end of the test on the morning of the third day.

5.2. K9 Operation

[34] The first task for K9 was to acquire complete, occlusion-free stereo image coverage of a dark rock target initially 7 m from its starting position. K9 drove 5 m toward the target, acquired the desired imaging, and then drove 2 m past the target and imaged it from the other side. Viz was used to plan the drive and imaging commands. Three-dimensional models of the object were built from each set of images and merged together to determine the shape of the rock, which was irregular and unsuitable for coring. Another potential coring target was identified 12 m farther down the arroyo. K9 was commanded to drive toward this rock, image it from a 2 m distance, and then drive past it and image its other side. However, this command sequence failed after 6 m of driving, with telemetry indicating that K9 was dangerously tilted. K9 had initially headed off course because its wheels slipped on a rock during a turn, and it had driven into a large creosote bush at the edge of the arroyo. K9 was next commanded to acquire a stereo panorama as well as front and rear bumper camera images to assess the problem. FIDO also acquired images in the direction of K9. Figure 8 shows a visualization of the scene from Viz that helped in understanding K9's position. The image from FIDO also showed the problem (Figure 9). K9 backed away from the obstacle and proceeded with imaging the front side of the second scout target. It then drove past the target and imaged it from behind. Three-dimensional models of this target also showed it to be irregular in shape and unsuitable for drilling. K9 was then directed to drive to the bend in the arroyo and to image the contact between the outcrop and the arroyo. This was accomplished in two more long drives. Table 2 shows K9's actions in each command cycle.

Figure 8.

View from Viz created with data taken after K9 drove into a bush. The visualization of the rover in the scene helped to understand the problem.

Figure 9.

FIDO image of K9 with its front wheels on a bush.

Table 2. K9 Operations Summary
CC1: Acquire 360° panorama. Purpose: assess initial state, characterize landing site, select targets to map prior to coring.
CC2: Drive 5 m, acquire stereo panorama (90° × 60°). Purpose: image target 1 for 3-D mapping; determine if target is suitable for coring.
CC3: Turn, drive 3.8 m, acquire stereo images. Purpose: image target 1 on the side occluded from the previous position.
CC4: Image cliff wall, acquire 180° stereo panorama. Purpose: imaging for science and preparation for the drive to the next scout target.
CC5: Turn, drive 10 m, acquire stereo images, drive 2 m, acquire stereo images. Purpose: drive to target 2, image it, then drive past it and turn back to image its far side.
CC6: Acquire 360° stereo panorama and images from bumper hazard cameras. Purpose: assess rover state after CC5 ended with the rover in a fault condition.
CC7: Back up 2 m, turn, acquire images. Purpose: attempt to recover from fault mode, reassess position.
CC8: Acquire stereo images. Purpose: image target 2 after fault recovery.
CC9: Turn, drive forward, acquire stereo images, turn, acquire 180° navigation panorama. Purpose: drive past target 2, image its other side, turn in the expected direction of the next drive, and acquire a navigation panorama to plan it.
CC10: Drive 8.5 m, slight turn, acquire 180° navigation panorama. Purpose: drive toward the bend in the arroyo.
CC11: Turn, drive 7 m, acquire color images. Purpose: drive to the bend in the arroyo and image the contact between the arroyo and the adjacent canyon wall.

6. Results and Recommendations

[35] Viz produced a major improvement in K9's ability to achieve desired objectives on the first try as compared to previous experiments that did not use the Viz planning tools [Stoker, 1998; Stoker et al., 2001]. Predicted pointing of cameras following a move invariably captured the desired target. To ensure that the target was captured by images obtained after a drive, we acquired a panorama centered on its predicted position; in most cases the predicted and actual image pointing were quite close. Using Viz to estimate turn direction and drive distance led to significant improvement in the accuracy of placing the rover in the desired location as compared to previous tests. The total drive distance (60 m in 11 command cycles) was greater than had been accomplished in previous experiments, although controlled experiments using Viz are needed to prove an improvement in traverse distance. We recommend that Viz or an equivalent visualization system be used in future rover missions.

[36] The test showed that two rovers could cooperatively achieve objectives that would compete for mission time in a single-rover mission (exploring versus in situ analysis and sample collection). K9 was able to scout potential drill targets and obtain data of sufficient quality to evaluate them. Furthermore, this could be done considerably more quickly than the targets could be drilled, so several targets could be evaluated to select the best one before FIDO would have been ready to approach another target. To achieve its objectives, K9 was required to perform relatively long traverses between targets, but even these could be accomplished more rapidly than sampling. Field tests have repeatedly demonstrated that investigating science targets, particularly when in situ measurements are performed, takes more mission time than driving between targets. This was illustrated by the fact that FIDO performed in situ observations on one science target in 6 command cycles, while K9 had driven 15 m and obtained remote-sensing observations of one target in 5 command cycles.

[37] An advantage of having two rovers working together was illustrated by the use of FIDO cameras to help diagnose K9's condition when it drove into the bush. The Pathfinder IMP camera, located on the stationary Pathfinder lander, was similarly used to image the Sojourner rover [Smith et al., 1997]. In addition to diagnosing problems, the IMP images were used to determine the rover's position after each move, preventing navigational errors from accumulating over time. Similarly, two rovers could image each other after each move for positional updates, enabling better navigational accuracy for both rovers. Recommendation: The advantage of using the “buddy system” to improve rover safety and efficiency should be further evaluated.
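The navigational benefit of such mutual imaging can be sketched with a toy error model (an assumption of this sketch, not a measurement from the test): with dead reckoning alone, per-move position error accumulates over a traverse, whereas a positional fix after each move bounds the error at the accuracy of the fix.

```python
# Toy model of how cross-rover ("buddy system") imaging limits navigation
# error. The numeric values are illustrative assumptions, not test data.

MOVES = 20            # assumed number of moves in a traverse
ODOMETRY_ERR = 0.10   # assumed worst-case position error per move, meters
FIX_ERR = 0.05        # assumed accuracy of a cross-rover visual fix, meters

# Worst-case linear accumulation without any external position fixes.
error_dead_reckoning = MOVES * ODOMETRY_ERR   # grows with every move

# With a fix after each move, error never exceeds the fix accuracy.
error_with_buddy_fixes = FIX_ERR              # independent of MOVES

print(error_dead_reckoning, "m unaided vs", error_with_buddy_fixes, "m with fixes")
```

Under these assumptions the unaided error grows without bound as the traverse lengthens, while the buddy-fixed error stays constant, which is the essence of the Pathfinder IMP/Sojourner arrangement generalized to two mobile platforms.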

[38] The test convincingly demonstrated that a scout rover could survey a large area to select appropriate drill targets. However, the test did not prove that data from one rover could be used to speed the performance of another in drilling. The test ended before K9 scouted an appropriate target which FIDO then drilled. Recommendation: A future test should be set up with controlled conditions to determine if scouting by one rover can substantially improve sample acquisition performance of another.

[39] Our initial estimate was that K9 could scout and FIDO could sample a rock target during each day of the test, or in approximately three command cycles. In reality, rover activities took much longer. It is possible that with more practice, rover performance would improve, so longer tests are desirable for estimating mission accomplishment. Still, extrapolating the results from this test provides a basis for estimating the capabilities of current generation rovers such as the upcoming MER missions. K9, performing only remote-sensing observations, traversed 60 m in 11 command cycles. Assuming that a command cycle is performed each day, a mission focused on traversing and using only remote sensing might traverse a distance of 500 m in 90 sols of operation on Mars. FIDO required 3–5 command cycles to accurately place instruments on specific rocks. Taking this as a guide, a mission focusing primarily on measurements requiring instrument placement may measure 20–40 targets in a 90 sol period, with the range depending on the accuracy required for the instrument placement. A mission with an equal balance of these objectives may traverse roughly 250 m while measuring 10–20 targets.
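The extrapolation above is a simple linear scaling of the test results; a short worked version follows. The input numbers are from the test, and the one-cycle-per-sol scaling is the same assumption made in the text.

```python
# Linear extrapolation of the test results to a 90-sol mission,
# assuming one command cycle per sol.

TEST_TRAVERSE_M = 60    # K9's total traverse during the test, meters
TEST_CYCLES = 11        # command cycles K9 used
MISSION_SOLS = 90       # nominal mission lifetime, one cycle per sol

# Traverse-focused mission: ~5.5 m per cycle scales to roughly 500 m.
rate = TEST_TRAVERSE_M / TEST_CYCLES
traverse_m = rate * MISSION_SOLS          # ~490 m

# Instrument-placement mission: FIDO needed 3-5 cycles per target, so a
# naive division gives 18-30 targets; the 20-40 range quoted in the text
# additionally folds in the placement accuracy required.
targets_min = MISSION_SOLS // 5           # 5 cycles per target
targets_max = MISSION_SOLS // 3           # 3 cycles per target

print(round(traverse_m), "m;", targets_min, "to", targets_max, "targets")
```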

[40] In this test, separate ROTs controlled the two rovers, but one Science Team analyzed the results. Operating one rover fully occupied each ROT, and operating two rovers simultaneously with current commanding systems seems impractical. In this scenario, with both rovers operating in the same area and pursuing convergent science objectives, the Science Team was able to keep track of and interpret observations from both rovers. However, the upcoming MER missions will have two rovers operating simultaneously at different locations on Mars. A future test should evaluate whether one Science Team can effectively analyze data and give appropriate guidance to two rovers at different sites. Along these lines, it was evident that the Science Team paid greater attention to data from the FIDO rover than to data from the K9 rover. This probably resulted from the team's greater familiarity with FIDO (it had already been in use for 1 week at the start of the two-rover mission) and from the more sophisticated instrumentation it carried. Because rover data are a scarce resource, and success depends on a rapid cycle of analyzing and interpreting information and requesting new actions, adequate science staffing of a rover mission is essential. Unlike other types of missions, where the data acquired can be analyzed over a period of years without affecting the outcome, rovers require a high degree of interaction and decision making on the part of the Science Team. Recommendation: Each rover should have a dedicated science team that is trained in rover operations by participating in mission simulations.

[41] The geological interpretations of the site, based on the test data sets, were generally correct (see discussion by Arvidson et al. [2002]). However, as has occurred in previous tests [Stoker, 1998; Stoker et al., 2001], a number of things were not noticed by the Science Team, or their importance was not recognized. For example, K9 traversed past a large bone in the arroyo (Figure 10) that was not recognized in any of the image products. Ultimately, enabling rovers to notice unexpected things (like the bone) will require a much higher degree of rover autonomy than is currently available or envisioned for any platform. Both target rocks for the K9 scout were selected as examples of “dark rocks, probably basalts.” Closer inspection of the first target showed that this rock was lighter in several areas where fresher surfaces were exposed, giving it a mottled appearance at a distance. Ground truth from the field showed that this rock was a weathered basalt with lighter, freshly exposed surfaces. As K9 moved closer to the second “dark rock” target, images revealed a patchy coloration and irregular shape. One member of the Science Team guessed that it might be the stump of a tree. The closest images showed that the rock was most likely a conglomerate, which was confirmed by the ground truth. These initial misinterpretations could probably have been corrected if IPS spectra or the highest-resolution imaging had been obtained on the scout targets, but the targets were selected on the basis of the initial low-resolution panorama, and neither data product was requested before K9 was sent off to scout them. Recommendation: All available remote-sensing instrumentation should be used to select targets for rover investigation prior to traversing.

Figure 10.

K9 passed within 2 m of the leg bone of a large animal during its traverse. This object was not noticed by the remote science team.

[42] To save time and to conserve data volume, much of the imaging data were taken at lower spatial resolution than was possible. Spatial resolution has been shown to be an important factor in accurate interpretation of geologic imaging [Stoker et al., 2001; Newsom et al., 2001; Cabrol et al., 2001a, 2001b]. Because the PanCam and HawkEye imagers are capable of very high spatial resolution, the associated data volume can be overwhelming, particularly considering the data limitations of space missions. No limitation was imposed on data volume allowed per command cycle for this test, and yet the Science Team never requested the highest spatial resolution imaging. Imaging small areas of targets of interest with the highest available spatial resolution, and perhaps even using superresolved imaging [Cheeseman, 1996; Stoker et al., 1999] of small areas of targets, should be considered as a part of any analysis of the geology. Recommendation: Science teams for rover missions need to understand how spatial resolution and image compression methods can affect the appearance and correct interpretation of geologic targets.

[43] Obtaining accurate color information is another important factor in achieving a correct interpretation. Scientists accustomed to examining materials with their own eyes can be fooled by inaccurate color presentation from an imaging device. Previous tests have shown that displaying color inaccurately can lead to gross misinterpretations [Newsom et al., 2001]. During this test a color panorama of the cliff was taken by K9 under conditions of variable cloudiness. Because the HawkEye imager uses a filter wheel to produce color images, the color of the resulting mosaic varied markedly from image to image. The ROT and field team spent several hours trying to diagnose a perceived problem with the camera's shutter. The Science Team, convinced of a problem, requested that the data be reacquired, even though this would have required a command cycle and represented a large data allotment on a Mars mission. Recommendation: An image-processing specialist, familiar with the limitations of imaging systems, should be part of the science team.

[44] The Science Team made considerable use of descent imaging of the site to plan the mission. Use of similar data has also been an important component of previous rover field tests [Greeley et al., 1994; Stoker, 1998; Arvidson et al., 1998, 2000; Stoker et al., 2001; Cabrol et al., 2001a]. However, the upcoming MER mission will not have descent imaging due to the difficulties of accommodating it on an airbag-based landing system. The image resolution available from orbit will provide geologic context for rover observations but provides insufficient spatial resolution to plan traverses. It is important to understand how the use of lower-resolution planning data affects mission strategy. For example, it could influence the Science Team to attempt much longer traverses to get to science targets visible on the lower-resolution orbital imaging. Recommendation: Future training tests for the MER rover should use aerial context information comparable to that available on the actual mission.

[45] Improving a science team's ability to correctly understand rover observations is an important product of mission simulation. Science teams need to be trained to use rover observations to their maximum effectiveness. Recommendation: The science teams should be shown the ground truth and, where it differs from their interpretations, asked to understand why things were misinterpreted. Ideally, the science team should visit the field site and compare their rover-based interpretations with what they identify in the field, paying particular attention to what they missed and why.


Acknowledgments

[46] We thank the Mars Exploration Technology Office for suggesting and supporting this test. We gratefully acknowledge the cooperation of the Jet Propulsion Laboratory FIDO rover team in performing the test. We thank Nathalie Cabrol for a helpful review of the manuscript.