Journal of Geophysical Research: Planets

Athena Microscopic Imager investigation

Abstract

[1] The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400–700 nm). The MI uses the same electronics design as the other MER cameras but has optics that yield a field of view of 31 × 31 mm across a 1024 × 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (∼2 mm precision) is achieved by moving the IDD away from a rock target after the contact sensor has been activated. The MI optics are protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500–700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. MI data will be used to place other MER instrument data in context and to aid in petrologic and geologic interpretations of rocks and soils on Mars.

1. Introduction

[2] The successful imaging experiments on previous Mars landers returned thousands of valuable images of the Martian surface [e.g., Arvidson et al., 1989; Golombek et al., 1999]. The capabilities of these imagers have steadily improved, while mass and power requirements have decreased. The 1976 Viking Lander cameras were designed for panoramic imaging, with a minimum detectable object size at the foot of the lander of 1.5 mm [Huck and Wall, 1976; Patterson et al., 1977]. The 1997 Imager for Mars Pathfinder (IMP) took slightly lower spatial resolution images of Mars from the lander, but with higher signal/noise and greater spectral resolution [Smith et al., 1997a, 1997b]. While IMP had a close-up lens that viewed a tip-plate magnet at 0.13 mm/pixel, sand-size particles could not be resolved on the Martian surface by either the Viking or IMP cameras. The Mars Pathfinder Sojourner rover cameras yielded images with resolution as good as 0.7 mm/pixel, which is insufficient to resolve fine sand grains but useful in characterizing soil and rock properties in several locations [Moore et al., 1999; Bridges et al., 1999]. Acquisition of higher-resolution images would have aided the interpretation of Alpha-Proton-X-ray Spectrometer data from Sojourner if they had resolved petrographic texture or constrained the extent of dust or other coatings.

[3] The Mars Exploration Rover (MER) missions [Crisp et al., 2003] provide opportunities to conduct field geologic investigations on Mars in early 2004. The Athena science payload [Squyres et al., 2003] on MER was designed for field operations and includes a Microscopic Imager (MI) that is intended to provide images of natural surfaces similar to the view through a geologist's hand lens. Technically, the “microscopic” imager is not a microscope: it has a fixed magnification of 0.4. In photographers' parlance, the system makes use of a “macro” lens. The instruments on the Athena payload were chosen for their ability to work together synergistically to maximize the scientific information extracted from a single Martian rock or soil target. An important driver for the design of the MI therefore was that it must provide close-up image documentation of the targets viewed by the other in situ instruments on the payload. Of the two in situ spectrometers (APXS and Mössbauer), the APXS has the larger field of view, with a diameter of about 38 mm. The field of view of the MI was chosen to be 31 mm across (42 mm on the diagonal), providing a good compromise between high spatial resolution (given the fixed size of the MI CCD) and good coverage of the APXS field of view. The Athena MI has a significantly larger field of view and lower resolution than the microscope on ESA's Beagle 2 lander, which has a resolution of 4.1 microns/pixel across a 4.2 mm square field of view (N. Thomas et al., The microscope for Beagle 2, submitted to Planetary and Space Science, 2002). The MI resolution is similar to that of the close-up lens configuration of the Beagle 2 stereo camera, which is 62 microns/pixel at the center of the field of view (A. Griffiths, personal communication, 2003).

[4] Other aspects of the MI design came about from practical considerations. Because of the desire to have a single electrical interface for all the MER cameras, the MI was designed with the same CCD as the Pancams, Navcams, and Hazcams [Bell et al., 2003; Maki et al., 2003]. Its format of 1024 × 1024 pixels and the 12-μm size of the pixels established the physical dimensions of the optics.

[5] Maintaining adequate focus is a special concern for close-up imaging of rough surfaces, and some consideration was given early in the design of the MI to an active focus mechanism. However, it was clear that a proper choice of f/number could provide a significant depth of field; we settled on f/15, providing an effective depth of field of ±3 mm. This value is larger than the fine positioning accuracy of the Instrument Deployment Device (IDD) on which the MI is mounted [Squyres et al., 2003]. We therefore opted for the mechanically simple approach of maintaining a fixed focus in the MI optics and moving the focus position by moving the IDD along a line perpendicular to the target surface. In a typical MI imaging sequence, the IDD is used to move the MI away from the target in steps of a few mm, acquiring an image at each step so that every part of a rough surface is in good focus in at least one frame. By combining a set of images acquired in this way, a completely focused image can be assembled.
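As a concrete illustration of how such a focus stack could be merged on the ground, the sketch below selects, for each pixel, the frame with the greatest local sharpness. This is a minimal example of the general focus-stacking technique, not the MER project's processing pipeline; the Laplacian-energy metric and 15-pixel window are arbitrary illustrative choices, and the frames are assumed to be already co-registered.

```python
import numpy as np
from scipy import ndimage

def merge_focus_stack(stack, window=15):
    """Combine MI frames taken at different IDD standoff distances.

    stack  : float array of shape (n_frames, rows, cols), co-registered frames
    window : size (pixels) of the neighborhood used to score local sharpness

    Returns the per-pixel best-focus composite and an index map showing
    which frame supplied each pixel.
    """
    stack = np.asarray(stack, dtype=float)
    # Score local sharpness with the local energy of the Laplacian;
    # high-frequency content is strongest where the surface is in focus.
    sharpness = np.empty_like(stack)
    for i, frame in enumerate(stack):
        lap = ndimage.laplace(frame)
        sharpness[i] = ndimage.uniform_filter(lap * lap, size=window)
    best = np.argmax(sharpness, axis=0)      # index of sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    composite = stack[best, rows, cols]
    return composite, best
```

In practice the frames would first need to be registered and rescaled, because the magnification changes slightly with standoff distance.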

[6] It would have been extremely desirable for scientific purposes to have the MI be a color camera. We designed a compact filter wheel for the Pancam cameras [Bell et al., 2003], and we devoted considerable effort early in the payload development to accommodate this filter wheel in the design of the MI as well. Ultimately, this proved impossible: the tight volume constraints at the front of the MER rover did not allow the filter wheel to be included. We also considered color illumination sources, but these could not be accommodated either.

[7] With no filter wheel or color lamps available, we use two other techniques to obtain color information for MI images. One involves the transparent dust cover. This cover is tinted orange, and it effectively provides a single color filter for MI imaging. The other involves Pancam: MI images typically are obtained of targets that are about 2 meters away from Pancam. At a range of 2 meters, there are about 60 Pancam pixels across an MI field of view (31 mm × 31 mm at best focus). While there is some minor defocus blur at this close range in Pancam images, they still provide substantial color information across an MI image. We are devoting significant effort to developing software tools that will allow the low-resolution color information from Pancam images to be combined with the high-resolution textural information of MI images.
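A minimal sketch of one way such a combination could work is a pan-sharpening-style substitution, in which upsampled Pancam color is modulated by the high-resolution MI brightness. The function below is only an illustration under simplifying assumptions (the two data sets are already co-registered and photometrically consistent, and the zoom factor is chosen so the upsampled color cube covers the MI frame); it is not the project's actual merging tool.

```python
import numpy as np
from scipy import ndimage

def colorize_mi(mi, pancam_rgb, zoom_factor):
    """Drape low-resolution Pancam color over a high-resolution MI frame.

    mi          : 2-D MI image, registered to the Pancam view of the target
    pancam_rgb  : low-resolution color cube, shape (h, w, 3), same scene
    zoom_factor : ratio of MI to Pancam pixel scale (roughly 20 at a 2 m range)
    """
    # Bilinearly upsample each Pancam band to the MI pixel grid.
    rgb = np.dstack([ndimage.zoom(pancam_rgb[..., b], zoom_factor, order=1)
                     for b in range(3)])
    rgb = rgb[:mi.shape[0], :mi.shape[1], :]   # trim to the MI frame (assumed covered)
    # Replace the low-resolution brightness with the MI brightness while
    # preserving the Pancam band-to-band ratios (i.e., the color).
    luminance = rgb.mean(axis=2)
    scale = mi / np.clip(luminance, 1e-6, None)
    return rgb * scale[..., np.newaxis]
```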

[8] This paper provides a broad overview of the Athena MI investigation, with some overlap with the other camera papers in this issue [Bell et al., 2003; Maki et al., 2003]. Details of the MI design, calibration and plans for operation are given below, following a summary of the MI science objectives. Lastly, plans for data processing, product generation and archiving are discussed.

2. Science Objectives

[9] To contribute to the achievement of the science objectives of the MER missions [Crisp et al., 2003], the Athena Microscopic Imager will: (1) image fine-scale morphology and reflectance of natural rock and soil surfaces, (2) image fine-scale texture and reflectance of abraded rock surfaces, (3) aid in the interpretation of data gathered by other Athena instruments by imaging areas examined by them at high resolution, and (4) monitor the accumulation of dust on the capture and filter magnets.

[10] A wealth of geologic information can be obtained through studying rocks and soils with microscopes that have resolutions sufficient to enable detailed characterization of coatings, weathering rinds, individual mineral grains, or clasts. Such characterization is particularly important for analyses of aqueous sedimentary rocks. The size, angularity, shape, and sorting of grains reveal much about conditions of transport and deposition. Such information, which can be provided by the Microscopic Imager for sand-size and larger grains, will be extremely useful for understanding past aqueous environments on Mars. A variety of structures may be imaged that could provide diagnostic information about sedimentary environments, both in sedimentary rocks and unconsolidated soils. Across the size range from about 100 μm to 10 cm there are many well-documented sedimentary structures, formed within siliciclastic, carbonate and evaporitic environments, that reveal much about sedimentary processes and sedimentary environment [Collinson and Thompson, 1989; Ricci Lucchi, 1995]. Examples include stratification (e.g., cross laminations), bedforms (e.g., ripples), chemical precipitation (e.g., crystal fabrics) and dissolution (e.g., stylolites), desiccation features, and sediment fabric. The MI will also be used to study textures and layering in recent sediments such as duneforms and aeolian lag deposits. Observations of the size, shape, color, and sorting of aeolian sediments will be compared with previous theoretical, remote sensing and laboratory studies of windblown material on Mars [e.g., Iversen et al., 1976; Sagan et al., 1977; Greeley et al., 1980, 1992; Edgett and Christensen, 1991, 1994; Thomas et al., 1999] in order to better understand the origin and evolution of these materials. An example of the type of soil images expected from the MI is shown in Figure 1. A library of images of various terrestrial soil types is being assembled and will be used to aid the interpretation of MI data from Mars.

Figure 1.

Monochrome version of FIDO Color Microscopic Imager [Haldemann et al., 2002] data from May 2001 field test, showing natural soil illuminated by skylight (target in shadow). View is about 13 mm across, 20 microns/pixel, taken at f/10.

[11] Microscopic imaging also provides useful information on volcanic rocks and impact breccias. Vesicularity patterns give an indication of lava volatile content and distribution. Grain size and texture provide information on crystallinity of the magma when emplaced, its depth of origin, and how quickly it cooled. Microscopic imaging can be used to identify small veins of precipitated minerals like the carbonates in the Martian meteorite ALH84001. An example of the type of rock images expected from the MI is shown in Figure 2. In addition to images of natural surfaces, the MI will be used to image surfaces prepared using the Rock Abrasion Tool (RAT) [Gorevan et al., 2003; Squyres et al., 2003]. Comparison of microscopic images taken of a rock target before and after abrasion will allow mineralogy and potential weathering processes to be studied.

Figure 2.

Image of rough side of rock target AREF146, taken by engineering model MI under room lighting. Field of view 31 mm square, 30 microns/pixel.

[12] The MI will also be used to image the filter and capture magnets mounted on the front of the rover [Madsen et al., 2003]. These permanent magnets will be imaged frequently by Pancam and occasionally by MI during the landed mission, as airborne dust slowly accumulates on them. In order to monitor the thickness of the dust layer over time, the glass-bead blasted aluminum surface of the magnets has been marked by three types of tiny impressions. A stainless steel sphere (2 mm in diameter) was mounted on a high-precision drilling machine and pressed 5, 10 or 20 μm into the surface. The resulting holes are shallow craters with diameters of 200, 280 and 400 μm. Their actual depths were checked by optical microscopy and were found to be within specification to better than 2 μm. These holes are concentrated in clusters of 2, 3 or 5 holes (Figure 3). Within each cluster they are horizontally spaced 1000 ± 3 μm from each other. The surface markings have been designed for MI imaging and are not expected to be visible in Pancam images. The interpretation of the MI images in terms of volume/mass of accumulated dust is subject to significant uncertainty. However, this experiment should provide much more precise constraints on the dust layer thickness than previous methods, which were based on the optical contrast between dust-covered and dust-free areas of the magnet surface [Madsen et al., 1999].

Figure 3.

Filter and capture magnet surface markings. Largest impression (bottom image) is 20 μm deep and 400 μm in diameter. Resolution about 40 μm/pixel, similar to MI resolution.

[13] Stereoscopic MI data can be obtained by moving the camera laterally using the IDD, allowing the detailed topography of the target to be derived. Such high-resolution topography may help constrain the mineralogy of grains that show cleavage faces. For rocks and soils that exhibit interesting spatial heterogeneity, the IDD can also be used to acquire MI mosaics. The combination of MI and other Athena observations will provide strong constraints on the mineralogy, genesis, and modification of Martian surface materials [Squyres et al., 2003]. Finally, because imaging observations of Mars have not yet been made at the scale expected from the MI, new discoveries and insights are likely.
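For a rough sense of the topographic precision such lateral-translation stereo could provide, the standard two-view relation ΔZ ≈ Z²·δd/(fB) can be evaluated with the MI's nominal geometry. The baseline and disparity-matching precision below are hypothetical values chosen only for illustration; the achievable precision will also depend on IDD positioning knowledge and on image texture.

```python
# Rough stereo depth-resolution estimate for MI pairs taken by translating the IDD.
f = 20e-3       # focal length, m (from text)
Z = 0.069       # approximate object distance at best focus, m (assumed from the
                # ~66 mm working distance; principal-plane offsets ignored)
B = 0.005       # hypothetical lateral IDD baseline, m
pixel = 12e-6   # detector pixel pitch, m
match = 0.3     # assumed subpixel disparity-matching precision, pixels

delta_d = match * pixel
depth_resolution = Z**2 * delta_d / (f * B)   # ~0.17 mm for these assumptions
print(f"approximate depth resolution: {depth_resolution*1e3:.2f} mm")
```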

3. Instrument Description

[14] The MI was designed by camera specialists on the Athena science team and at Caltech's Jet Propulsion Laboratory (JPL). The JPL camera team completed the detailed design and fabrication of the MI, with components supplied by several vendors. The characteristics of the major components of the camera and its interface to the MER flight system are described below.

3.1. CCD and Electronics

[15] To reduce complexity and cost, all MER cameras share the same electronics design. Some aspects of the MER camera design were inherited from the cameras built for the Athena Precursor Experiment (APEX) [Squyres et al., 1999]. The MER cameras include a Mitel front-side illuminated, frame-transfer charge-coupled device (CCD) with 1024 × 2048 pixels. Half of the array is covered by aluminum and is used for image storage during readout. Immediately following image integration of 0 to 335.5 seconds, the image is transferred into the storage area in 5.12 msec. Readout of a full image then requires 5.2 seconds, after which another integration may begin. The serial register has 16 extra “reference” pixels on each end that are read out along with each line of data [see Bell et al., 2003, Figure 3]. The reference pixels are not exposed to light and therefore measure the bias level as each line of data is read out. The value of the last reference pixel is always replaced with the camera serial number. Within the operating temperature range of −55°C to +5°C, the MI has a full well depth in excess of 150,000 electrons and read noise of about 30 electrons. The gain of the MER science cameras (∼50 e-/DN) was designed to optimize the 12-bit digitization over the expected full well of the CCDs. The video offset can be set by command to bias the dynamic range of the CCD analog output relative to the range of the analog-to-digital converter. After conversion, 12-bit digital image data are sent to the rover computer. Further details of the MER camera electronics design are reported by Bell et al. [2003]. The non-operating (survival) temperature range of the cameras is −110°C to +55°C. The temperature of the MI CCD and electronics will not be controlled during flight, so variations in performance with temperature were carefully measured. Temperature sensors on the MI CCDs and electronics will return data for each image obtained, allowing temperature calibration to be applied.
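As an illustration of how the reported gain, read noise, and full well translate into single-exposure signal-to-noise, the short calculation below converts a signal level to electrons and combines shot and read noise. It is a textbook estimate rather than the MER calibration pipeline, and it neglects dark current and quantization noise.

```python
import math

GAIN_E_PER_DN = 50.0      # ~50 e-/DN system gain (from text)
READ_NOISE_E = 30.0       # ~30 e- read noise (from text)
FULL_WELL_E = 150_000.0   # >150,000 e- full well (from text)

def snr_for_fraction_of_full_well(fraction):
    """Shot-plus-read-noise SNR estimate for a signal at a fraction of full well."""
    signal_e = fraction * FULL_WELL_E
    noise_e = math.sqrt(signal_e + READ_NOISE_E**2)
    return signal_e / noise_e

# The Table 1 requirement of SNR >= 100 at 20% full well is comfortably met in this model:
print(snr_for_fraction_of_full_well(0.2))   # ~170
print(snr_for_fraction_of_full_well(0.5))   # ~270, consistent with SNR > 200 at half well
```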

3.2. Optics

[16] The MI optics employ a fixed focus, f/15 Cooke triplet design that provides ±3 mm depth-of-field at 30 μm/pixel sampling (Figure 4). The field of view is therefore 31 × 31 mm at the working distance. The focal length is 20 mm, and the working distance is 66 mm from the front of the lens barrel to the object plane. The first element in the optics assembly is a durable sapphire window, included to protect the rest of the MI optics; sapphire is relatively unlikely to be damaged by windblown debris or by inadvertent contact with objects on Mars. The object-to-image distance of 100 mm was selected with instrument accommodation as the primary constraint. This design places the MI best focus position at approximately the same distance from the IDD turret axis as the target position for the other IDD instruments. Because the MI has a relatively small depth of field (±3 mm), a single MI image of a rough surface will contain both focused and unfocused areas.

Figure 4.

Cutaway diagram of MI optics barrel, showing sapphire window, lenses, and filter.

[17] As described above, the instantaneous field of view (IFOV) of the MI (30 μm/pixel) was chosen to yield, given the fixed format of the MER CCDs, an overall field of view that is compatible with the fields of view of the other Athena instruments. Another consideration was the difference in resolution between the MI and the cameras that provide context images for MI observations. The ratio of MI to Pancam spatial resolution will typically be about 20, depending on the distance from Pancam to the target. Larger resolution differences make it more difficult to place MI data into the context of Pancam and other images. Our experience with images taken by the Field Integrated Design and Operations (FIDO) [Haldemann et al., 2002] rover showed that a resolution ratio exceeding 20 would not be desirable.

[18] Once the IFOV of the MI and the Mitel CCD were chosen, significant effort was devoted to selecting the focal ratio of the MI optics. To simplify operations and to minimize the number of MI images required to image a rough surface in good focus, we wanted to maximize the depth of field:

depth of field ≈ ±2Fγp(1 + m)/m²

where F is the focal ratio, γp is the pixel pitch (12 microns), and m is the magnification (0.4). Note that depth of field increases with increasing focal ratio. However, blurring of the image due to diffraction also increases with increasing focal ratio: The diameter of the first dark ring in the classical Airy diffraction pattern is 2.44λF, where λ is wavelength and F is defined above. We therefore decided to limit the spectral response of the MI to visible wavelengths (400–700 nm) to reduce diffraction blur and simplify the optical design. Addition of a Schott BG-40 (light blue) filter yields a spectral response that is similar to that of the human eye (Figure 5). This restriction of the MI bandpass also increases the exposure time needed to image typical scenes on Mars and therefore reduces transfer smear (described below). The effective wavelength of the MI, for typical Mars spectral radiance, is 570 nm.
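To make the tradeoff concrete, the short script below evaluates both quantities as a function of focal ratio, using the depth-of-field expression above and the 2.44λF Airy diameter. The values are nominal and the script is purely illustrative.

```python
# Depth of field versus diffraction blur as a function of focal ratio.
wavelength = 0.57e-6   # effective wavelength, m (from text)
pixel = 12e-6          # pixel pitch gamma_p, m
m = 0.4                # magnification (from text)

for F in (5, 10, 15, 20):
    dof_half = 2 * F * pixel * (1 + m) / m**2   # one-sided depth of field, m
    airy = 2.44 * wavelength * F                # Airy first-dark-ring diameter at the CCD, m
    print(f"f/{F:2d}: depth of field ±{dof_half*1e3:.1f} mm, "
          f"Airy diameter {airy*1e6:.1f} μm ({airy/pixel:.1f} pixels)")
```

At f/15 the Airy first-dark-ring diameter is about 21 μm, or roughly 1.7 pixels, consistent with the slight undersampling discussed below.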

Figure 5.

Spectral response of MI S/N 105 at various temperatures. Note that the temperature dependence of the spectral response is minor.

[19] In addition to the tradeoff between diffraction blurring and depth of field, the tradeoff between blurring and undersampling was considered in detail. Blurring is a source of degradation that permits the retrieval of information about all spatial frequency components for which the signal-to-noise ratio (SNR) is sufficiently high, but undersampling is a source of noise that causes irretrievable loss of information [Fales et al., 1984]. McCormick et al. [1989] showed that the informationally optimized tradeoff between undersampling and blurring, and hence the relationship between the image-gathering response and the sampling passband, depends on the SNR. Huck et al. [1997] defined the optical design index as Optical Design Index ≡ λF/γp, where symbols are defined as above. They show that the end-to-end system is informationally optimized when the optical design index is between 0.35 (for SNR = 200) and 0.8 (for SNR = 16). The SNR of MI images will be affected by image compression to various degrees depending on the selected level of compression. We could not accurately predict the SNR of compressed MI images of Mars, so we decided to optimize the optical design for low SNR. The f/15 design we chose yields an optical design index of 0.7 and results in slight undersampling but beautifully crisp images (Figure 2). Modeling of the Cooke triplet design indicated that the depth of field would be at least 3 mm on each side of best focus [Smith et al., 2001].
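With the definition quoted above, the index for the chosen configuration can be checked directly; the snippet below is simply a numerical restatement of the values already given in the text.

```python
wavelength_um = 0.57   # effective wavelength for typical Mars spectral radiance, μm
pixel_um = 12.0        # pixel pitch γp, μm
F = 15                 # focal ratio

design_index = wavelength_um * F / pixel_um
print(round(design_index, 2))          # ~0.71, within the 0.35–0.8 optimal range
print(0.35 <= design_index <= 0.8)     # True
```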

3.3. Dust Cover

[20] Protection of the MI optics from contamination in the dusty Martian environment is obviously important. Despite every precaution, there is a significant chance that the camera could be accidentally brought into contact with soil at some point during the MER surface mission. In addition, atmospheric dust is likely to fall onto all exposed surfaces of the MER system. We have therefore provided a motor-operated dust cover (Figure 6) that includes an O-ring and labyrinth to keep dust from getting on the optics. The cover has a transparent window so that images can still be acquired in the event of a motor failure. The cover is fully opened (by rotating it 180°) immediately before each MI imaging sequence and closed upon sequence completion. It was designed to operate at temperatures between −55°C and +40°C and was tested across this range. The MI dust cover was developed by the IDD team rather than the camera team and was not integrated with the flight cameras during stand-alone camera testing.

Figure 6.

Schematic diagram of MI, dust cover, and contact sensor. Dust cover is rotated open by stepper motor. Ball at end of contact sensor was removed in final design.

[21] The dust cover window is made of Kapton polyimide film, which has an orange tint. Color information can be obtained by taking images of a target with the cover closed and open. During preflight camera testing, a similar film was used to take images of various rock targets. An example of the type of color image products that we will generate using such data is shown in Figure 7. In this example, the MI images were enhanced and therefore do not show true color. However, the images taken with and without the dust cover show color variations that are also visible in the true color (scanned) image. We therefore expect that such basic color information will be helpful in interpreting MI images of rocks and soils on Mars. The spectral transmittance of the Kapton film that is used in the flight units was measured at NASA Johnson Space Center; the spectral response of the system (with the dust cover opened and closed) to rocks observed at the Mars Pathfinder landing site [Maki et al., 1999] is shown in Figure 8.
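The kind of two-band composite shown in Figure 7 can be generated with simple band arithmetic. The sketch below follows the channel assignment described in the Figure 7 caption (closed-cover image in red, open-cover image in green, difference in blue); the percentile display stretch is only a visualization choice, not part of any calibrated product.

```python
import numpy as np

def dust_cover_composite(open_frame, closed_frame):
    """Build a false-color composite from co-registered MI frames taken with the
    dust cover open and closed (channel assignment as in Figure 7)."""
    open_frame = open_frame.astype(float)
    closed_frame = closed_frame.astype(float)
    diff = open_frame - closed_frame

    def stretch(band):
        lo, hi = np.percentile(band, (1, 99))   # simple linear display stretch
        return np.clip((band - lo) / max(hi - lo, 1e-9), 0.0, 1.0)

    # red = with cover, green = without cover, blue = difference
    return np.dstack([stretch(closed_frame), stretch(open_frame), stretch(diff)])
```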

Figure 7.

Digital flatbed scanner images (true color at 42 μm/pixel) in left column and digital MI S/N 110 (30 μm/pixel) color composite images in right column for rock AREF222 (flat surface polished with 60 grit paper). MI color composites have image taken without dust cover window in green channel, image taken with dust cover in red channel, and difference in blue channel. Bluish shading at upper left in (b) is due to reflection of room light off dust cover window sample. Images (c) and (d) show the circled region in (a) and (b) at higher magnification. Images (e) and (f) are higher magnification views of the crystal corner in (c) and (d). Images (e) and (f) show individual pixels, with the scale bar in (e) two pixels (84 μm) wide by 500 μm long.

Figure 8.

Response of MI 105 (at −10°C) to spectral radiance of rocks measured by Mars Pathfinder, with and without dust cover. Effective wavelength without dust cover = 570 nm; with dust cover = 582 nm.

3.4. Contact Sensor

[22] Rocks and outcrops are key targets for investigation by the in situ instruments on the IDD. Because the depth of field of the MI is limited, accurate positioning relative to targets is needed. Shadowing of the target by a contact sensor array, as seen in FIDO Color Microscopic images [Haldemann et al., 2002], was a concern. Therefore a single contact sensor is included for the MI, in a location that is well outside of the MI field of view (Figure 6). The contact sensor shaft extends 42 mm in front of the MI optics and includes a spring near its base to reduce the risk of bending or breaking the shaft. The contact sensor is shorter than the distance from the MI optics to the best focus position so that all areas of very rough targets (such as vesicular volcanic rocks) can be imaged in good focus. The contact sensor is intended for use on rock targets, not soils. The contact sensor was designed to operate at temperatures between −120°C and +55°C, and was tested across this range. Within this temperature range, the actuation force of the contact sensor was measured on the engineering model, before and after 400 actuations. The actuation force was between 0.86 and 1.31 N in all cases. One of the flight contact sensors was also measured under ambient conditions and required an actuation force of 0.78 N. Operational scenarios that make use of the contact sensor are described in section 5.2 below. The contact sensor was developed by the IDD team rather than the camera team and was therefore not integrated with the MI during stand-alone camera calibration.

3.5. Integration and Interfaces

[23] The MI is mounted on the instrument turret at the end of the IDD, along with the three other Athena in situ instruments (APXS, Mössbauer, and RAT). For launch and landing, the IDD and its instruments are stowed next to the front of the rover. After deployment, the IDD will be used to place any of the in situ instruments against selected targets on Mars [Squyres et al., 2003]. An image of the MER 2 instrument turret taken during system assembly and testing is shown in Figure 9. The MI contact sensor and dust cover were integrated onto the IDD at the same time as the camera and electronics, so testing of these components together was not possible at subsystem levels. MI serial number (S/N) 105 was mounted on Spirit (MER-A) and MI S/N 110 was mounted on Opportunity (MER-B).

Figure 9.

IDD instrument turret during MER 2 testing. RAT at left, MI at center, APXS at right (Mössbauer spectrometer not visible). MI dust cover is shown closed, with contact sensor to lower left.

3.6. Mass, Power, Volume, and Data

[24] The mass of the MI is 210 g without the dust cover and contact sensor; total system mass is ∼290 g. The camera requires +7V and −10V from the rover, and the dust cover motor requires 5V. The camera consumes as much as 4.3 W of power during CCD flush; during integration and readout the power drops to 2.5 W. The dust cover motor requires 0.5 W of power to open or close the dust cover. The MI optics and CCD assembly is 48.6 mm high (along the boresight), 51 mm long and 41 mm wide. The electronics box is 78 mm by 75.1 mm by 34 mm, including the connector and mounting hardware (Figure 10). A raw, 12-bit image with reference pixels represents almost 13 Mbits of data, or 2.1 Mbytes when stored as 16-bit integers. Subsets of the full image array can be selected and/or pixels can be binned to reduce data volume. Image compression will be used to maximize the information contained in the data returned to Earth.
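The raw data volume quoted above follows directly from the frame format (1024 lines of 1024 active plus 32 reference samples); the arithmetic below reproduces those numbers and illustrates the effect of 2 × 2 binning of the active area (reference-pixel handling during binning is ignored here).

```python
ROWS, ACTIVE_COLS, REF_COLS = 1024, 1024, 32   # 16 reference pixels on each end of each line

samples = ROWS * (ACTIVE_COLS + REF_COLS)
print(samples * 12 / 1e6)     # ≈13.0 Mbit of raw 12-bit samples
print(samples * 2 / 2**20)    # ≈2.1 Mbyte when stored as 16-bit integers

# 2 × 2 binning of the active area reduces the image portion by a factor of four:
print((ROWS // 2) * (ACTIVE_COLS // 2) * 2 / 2**20)   # ≈0.5 Mbyte
```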

Figure 10.

MI S/N 110 with protective cover over optics. Optics and CCD assembly (top) connected to electronics box (bottom) by flex cable.

4. Calibration and Testing

[25] Both MI flight units were assembled, tested, and calibrated at Caltech's Jet Propulsion Laboratory (JPL) in 2002. After each MI was integrated onto its IDD and MER flight system, it underwent testing at the MER system level. The calibration and test procedures, preliminary results, and plans for in-flight calibration are summarized below. Details of the MI calibration plan are given in MER project document 420-1-437 (JPL D-19695). The Pancam and MI calibration plans were reviewed by an independent panel of imaging scientists, and their recommendations were incorporated into the plans and implemented. The MI performance requirements relevant to instrument calibration are summarized in Table 1. The performance of the two flight MIs is essentially identical; examples of data from both cameras are shown below.

Table 1. MI Performance Requirements
Parameter | Value
Instantaneous Field of View (IFOV) on-axis | 30 ± 1.5 micrometers/pixel
Field of View (FOV) | 1024 × 1024 square pixels
Spectral bandpass | 400–680 nanometers
Effective depth of field | ≥±3 millimeters
Optics MTF over spectral bandpass at best focus | ≥0.35 at 30 lp/mm
Absolute radiometric calibration accuracy | ≤20%
Relative (pixel-to-pixel) radiometric calibration accuracy | ≤5%
Signal to Noise Ratio (SNR) for exposures of ≥20% full well over the spectral bandpass within the calibrated operating temperature range | ≥100
Accuracy of temperature sensor on the CCD package | ±2°C
Working f/# | 15 ± 0.75
Operating temperature range within calibrated specifications | −55 ± 2°C to +5 ± 2°C

4.1. Test Procedures and Equipment

[26] Many of the MI components were tested before they were built into the cameras, primarily to verify performance. Many component-level tests are important to overall camera calibration, including spectral transmission of the optics, filters, and dust cover windows, calibration of temperature sensors, and performance of the CCDs. The spectral transmission of the optical barrel assemblies was tested by the optics vendor, Kaiser Electro-Optics. The spectral transmission of the MI filters was measured at JPL, and the dust cover window spectral transmission was measured at the NASA Johnson Space Center. The temperature sensors were calibrated at the vendor, Rosemount Aerospace. The CCDs used in the MER cameras were thoroughly tested at JPL; the results of these tests (including photon transfer/linearity, dark current, flat field, residual bulk image, and spectral quantum efficiency) were used to select the best CCDs for the flight cameras. An example of the spectral quantum efficiency results is shown in Figure 11. Residual bulk image is most prevalent at low temperatures and long (near-IR) wavelengths and is therefore not expected to be significant for the MI. The details of the CCD tests are given in MER project document 420-1-485 (JPL D-20247), and more examples of CCD test results are given by Bell et al. [2003].

Figure 11.

Spectral quantum efficiency of CCD S/N 409 (used in MI S/N 105).

[27] The MER science cameras were assembled, tested, and calibrated in D. Thiessen's clean laboratory environment at JPL. The laboratory configuration and equipment were customized for MER testing and calibration. Most of the science camera testing and calibration was done in two labs, one for ambient testing and another for thermal/vacuum testing. The geometric and other tests that were not significantly affected by temperature were performed at room temperature and pressure on optical benches with electrostatic discharge protection. The lab configuration for MI ambient calibration is shown in Figure 12. Three science cameras (2 Pancams, 1 MI) were tested together in the thermal/vacuum chamber, all three viewing external targets and sources through an optical-grade quartz window. The thermal tests and calibration were performed under high vacuum (<10⁻⁶ torr) at a variety of temperatures spanning the expected temperature range on the surface of Mars. Flight-acceptance thermal cycling was performed before camera calibration, and some calibration data were acquired during the acceptance tests. At very low temperature (−110°C), the optimum video offset for each camera was determined by measuring the dark current in zero-exposure images and avoiding clipping the signal to zero DN (see Table 3). Most of the MI calibration was done at the extremes of the operating temperature range (−55°C and +5°C) and at one intermediate temperature (−10°C). The full suite of MI tests, the required accuracy of measurements, and environments are summarized in Table 2. All tests were successfully performed during the period July–September, 2002; 18.4 Gbytes of MI calibration data were generated and copied to the USGS for reduction and analysis. The preliminary results of MI calibration are summarized in the next section.

Figure 12.

MI ambient test equipment. Camera (not shown) mounted on camera mounting bracket (CMB) at left, to view targets in holder mounted on 3-axis stage at center. Camera and targets aligned using alignment telescope (AT) at top right, targets illuminated from behind by sliding small black integrating sphere to right.

Table 2. MI Stand-Alone Calibration and Testing
Test Name | Subtest | Accuracy | Environmental Conditions
1. Light Transfer | | |
 | system linearity | ±1%, from 10 to 90% full well | −55°C and +5°C; pressure ≤ 10⁻⁶ torr
 | read noise | ±2 e- | −55°C and +5°C; pressure ≤ 10⁻⁶ torr
 | full well | ±5% e- | −55°C and +5°C; pressure ≤ 10⁻⁶ torr
 | gain | ±2% e-/DN | −55°C and +5°C; pressure ≤ 10⁻⁶ torr
 | bias (offset) | ±5% DN | −55°C and +5°C; pressure ≤ 10⁻⁶ torr
 | dark current and noise | ±0.1 e-, RMS noise | −55°C and +5°C; pressure ≤ 10⁻⁶ torr
2. Absolute and Relative Radiometry | | ≤20% absolute; ≤5% relative | −55°C and +5°C; pressure ≤ 10⁻⁶ torr
3. System Spectral Response | | wavelength, ±0.2 nm; flux, ±7% | −55°C and +5°C; pressure ≤ 10⁻⁶ torr
4. CCD Blooming Behavior | | ±5%, adjacent pixels | −55°C and +5°C; pressure ≤ 10⁻⁶ torr
5. Observation of Rock Target | | ±1 mm focus control | Ambient
6. CCD Transfer Smear | | ±1% pixel response | Ambient
7. Grid Target Imaging | | | Ambient
 | Effective Focal Length | ±2% of EFL |
 | Field of View | ±0.2° |
 | Geometric Distortion | ±0.3% |
8. Bar Target Imaging | | | Ambient
 | Depth of Field | ±1 mm |
 | MTF | ±10% at 30 lp/mm |
9. Scattered and Stray Light | | Factor of 2 to 10 | Ambient

4.2. Preflight Calibration Results

[28] Reduction and analysis of the MI preflight calibration data is ongoing, so many of the results presented in this section are preliminary. A complete calibration report for each MI will be delivered to the MER project in the fall of 2003. The preflight calibration data were gathered using ground support equipment (GSE) in various laboratory settings. Typically, full frames were acquired along with reference pixels and stored as 16-bit integers (no compression). The GSE generated image files in PDS format, with the PDS label composed of a subset of the keywords to be used for flight data. These data will be validated for compliance with PDS standards and archived in the PDS.

4.2.1. Light Transfer and Noise

[29] Light transfer calibration was performed at ambient conditions and in the thermal/vacuum chamber. Light transfer sequences were designed to make use of the photon transfer technique [Janesick et al., 1987] to derive read noise, full well, and gain. Light transfer and dark current data were acquired at the CCD component level and at the camera level. During ambient tests, the dark current rate was high enough that “light transfer” sequences were obtained by taking dark frames at various integration times. During thermal/vacuum tests, an integrating sphere was adjusted to yield Mars-like radiance levels, and light transfer sequences were obtained by varying integration time. Typically, 21 integration times were used to produce light transfer data. These data were also used to measure the linearity of the camera response with respect to input radiance, and show that the response is linear to better than 1% within the operating temperature range. The read noise, full well, and gain measured for each camera are summarized in Table 3. Coherent noise was not observed in any of the calibration images.
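For reference, the quantities in Table 3 can be derived from a pair of flat-field frames and a pair of bias (zero-exposure) frames using the standard mean-variance relations of the photon transfer technique [Janesick et al., 1987]. The sketch below is the textbook form of that reduction, not the exact procedure used for MER calibration.

```python
import numpy as np

def photon_transfer(flat_a, flat_b, bias_a, bias_b):
    """Estimate system gain (e-/DN) and read noise (e-) from two matched flat
    fields and two bias frames, using difference frames to remove fixed-pattern noise."""
    flat_a, flat_b = flat_a.astype(float), flat_b.astype(float)
    bias_a, bias_b = bias_a.astype(float), bias_b.astype(float)

    signal_dn = 0.5 * (flat_a.mean() + flat_b.mean()) - 0.5 * (bias_a.mean() + bias_b.mean())
    var_flat_dn = np.var(flat_a - flat_b) / 2.0   # temporal variance of a flat, DN^2
    var_bias_dn = np.var(bias_a - bias_b) / 2.0   # temporal variance of the bias, DN^2

    shot_var_dn = var_flat_dn - var_bias_dn       # shot-noise variance, DN^2
    gain = signal_dn / shot_var_dn                # e-/DN
    read_noise = gain * np.sqrt(var_bias_dn)      # electrons
    return gain, read_noise
```

Repeating such a reduction over the ~21 exposure levels of a light transfer sequence also yields the linearity and full-well estimates quoted above.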

Table 3. MI Performance Summary (a)
 | MI S/N 105 | MI S/N 110
Spacecraft | MER-A (“Spirit”) | MER-B (“Opportunity”)
Read Noise | 32.0 ± 4.0 electrons | 29.7 ± 4.0 electrons
Full Well | 169304 ± 6099 electrons | 160019 ± 13933 electrons
Gain | 47.9 ± 1.6 electrons/DN | 47.1 ± 1.7 electrons/DN
Default video offset | 4090 | 4080

(a) Over the operating temperature range.

[30] Dark current generation leads to one of the largest uncertainties in generating accurate radiometrically calibrated images. During surface operations the CCD temperature could range up to ∼10° C. A dark target (low albedo and in shadow) might require an exposure of up to 10 seconds. Under these (worst-case) conditions the dark current would be 300 to 400 DN. Because the radiant flux of the scene is continually incident on the MI detectors (i.e., the cameras are electronically shuttered and have no filters or other mechanical devices to block the incoming radiance), the dark current cannot be measured directly during Mars surface operations. (It could be measured at night but the temperatures would be much lower and not representative of the daytime conditions.) Hence it is important to carefully model the dark current and to understand the physical causes of its variance.

[31] Dark current images (no light source) were acquired in the thermal/vacuum chamber at CCD and electronics temperatures spanning the flight acceptance range. We used these data to model three separable components of dark current (thermally generated electrons) using a modular approach. These elements are referred to as: 1) the reference-pixel component, 2) zero-exposure component, and 3) active-area component. In the event that the dark current behavior changes after launch, this approach allows flexibility to adjust components of the model individually depending on the physics of the situation. The reference-pixel component (so named because it is measured directly by dummy pixels outside the active area of the detector [see Bell et al., 2003]) is essentially a function of camera electronics temperature and ranges up to ∼50 DN. It is primarily due to thermal noise in the electronics although it has small dependencies (≤5 DN) on exposure time and CCD temperature as well. Optionally reference-pixel data may be returned only occasionally to save downlink resources; in this case we can predict this component with an accuracy of about ±2 DN and validate the models with occasional flight data. The zero-exposure component displays a common, fixed spatial modulation across the detector, with the left and right edges of the frame up to ∼3× brighter than the center. It is probably caused by a fixed thermal gradient of ∼5° across the detector, or diffusion of thermal electrons into the array. All frames contain this common spatial pattern and its amplitude grows exponentially with increasing CCD temperature, ranging up to about 100 DN at the highest expected operating temperatures. This component can easily be monitored directly during cruise and surface operations and is our best check of camera behavior.

[32] The last component is the most important and the most difficult to predict and verify; we refer to it as the active-area dark current component. Once the reference-pixel and zero-exposure components have been subtracted, this component is manifested as a nearly constant brightness (within ∼2%) across the detector. It is primarily a strong function of CCD temperature (see Figure 13) although it exhibits a significant dependence on exposure time. This dark current component would saturate the image for an exposure time >100 seconds at a CCD temperature of >10°C. Fortunately this is well beyond the expected operating conditions. The observed active area dark current (+'s in Figure 13) shows two departures from the expected simple exponential dependence on CCD temperature. The first is a slight curvature that we can ignore because it only affects model estimates for very cold temperatures for which the dark current generation is <0.1 DN/sec (Figure 13). The second is a spread in the data away from the simple exponential fit of Figure 13. We have concluded that this is due to heating of the CCD detector during an exposure. After the camera is powered on, the temperature of the detector is recorded near the beginning of the exposure. Power dissipation during integration causes the CCD temperature to rise about 4°C with an exponential time constant (1/e) of ∼10 seconds. We used a simple model to predict the effective CCD temperature as a function of exposure time; the black dots in Figure 13 show the application of this model and agreement with the simple exponential model. Because of the exponential growth, this principal dark current component is difficult to accurately model at high CCD temperatures (>0°C) and is therefore a significant source of uncertainty. However, the MI is not expected to operate at these high temperatures often, so uncertainties in dark current correction will be acceptably low for most MI data.
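A sketch of the active-area component of such a model is given below. The 4°C heating amplitude and ∼10 s time constant come from the text; the doubling temperature and the rate at the reference temperature are placeholder values chosen only to be roughly consistent with the DN levels quoted above, not the calibrated MER coefficients.

```python
import math

T_RISE_C = 4.0       # CCD warms ~4 C during an exposure (from text)
TAU_S = 10.0         # 1/e time constant of that warming, s (from text)
DOUBLING_C = 7.0     # assumed dark-current doubling temperature, C (placeholder)
RATE_0C_DN_S = 10.0  # assumed active-area dark rate at 0 C, DN/s (placeholder)

def effective_ccd_temp(t_start_c, t_exp_s):
    """Exposure-averaged CCD temperature, accounting for heating during integration."""
    if t_exp_s <= 0.0:
        return t_start_c
    mean_rise = T_RISE_C * (1.0 - (TAU_S / t_exp_s) * (1.0 - math.exp(-t_exp_s / TAU_S)))
    return t_start_c + mean_rise

def active_area_dark_dn(t_start_c, t_exp_s):
    """Predicted active-area dark signal (DN) for a start temperature and exposure time."""
    t_eff = effective_ccd_temp(t_start_c, t_exp_s)
    return RATE_0C_DN_S * 2.0 ** (t_eff / DOUBLING_C) * t_exp_s

# With these placeholder constants, a 10 s exposure starting near +10 C accumulates
# roughly 300 DN, comparable to the worst case discussed above.
print(active_area_dark_dn(10.0, 10.0))
```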

Figure 13.

MI S/N 110 active-area dark current data and model. The CCD temperatures measured at the exposure start were adjusted with a model for heating of the CCD detector as a function of the exposure duration (see text).

4.2.2. Absolute and Relative Radiometry

[33] Absolute radiometric calibration was done under thermal/vacuum conditions by viewing an illuminated integrating sphere through the chamber window. The sphere output was adjusted to Mars-like radiance and monitored by a calibrated photodiode; the spectral response of the photodiode was periodically measured throughout the radiometric tests. These data are being used to derive the absolute response of the MI cameras; the results will be included in the MI Calibration Report (MER 420-6-704, JPL D-19830). Preliminary analysis shows the SNR > 100 at 20% of full well, and SNR > 200 at half well. Depending on scene illumination (direct sunlight or deep shadow), MI exposure times are expected to vary from 100 msec to a few seconds.

[34] Relative (pixel-to-pixel) radiometric calibration was done under both ambient and thermal/vacuum conditions by viewing integrating spheres. The “flat field” images initially taken in the thermal/vacuum chamber were compromised by reflections off the chamber window (the MI best focus position was inside the thick window). Therefore a black shield was designed and implemented on the second MI flight unit that reduced reflections off the front of the optics barrel. In addition, a diffusing plate was custom-made by bead-blasting flat optical glass and the plate was inserted into the chamber between the cameras and the window. This diffusing plate eliminated reflections, but comparisons of flat fields taken in the chamber with others taken under ambient conditions without the diffusing plate indicate that the plate is not perfectly diffusing. These various flat field images are being analyzed and processed to derive good relative radiometric calibration; initial results show that the 5% relative radiometric accuracy requirement can be met using only the ambient flat fields (Figure 14).

Figure 14.

Average of MI S/N 110 flat field images taken under ambient conditions.

4.2.3. System Spectral Response

[35] In addition to the component-level spectral transmission and quantum efficiency tests described above, the spectral response of the cameras was measured at the three thermal/vacuum calibration temperatures. An Acton monochromator was positioned in front of the chamber window, allowing a monochromatic beam to be fed into each camera. The monochromator was spectrally calibrated before and after the tests, and its output was monitored during the tests using a photodiode fed by a pick-off mirror. This diode was cross-calibrated to another photodiode that was periodically mounted over the exit slit of the monochromator. The spectral transmission of the chamber window was also measured. Images of the (out of focus) monochromator beam were taken at 10 nm intervals within the MI bandpass and at 25 nm intervals at wavelengths greater than 800 nm. The MI 105 spectral response from 350 to 750 nm at various temperatures is shown in Figure 5. The response outside of this wavelength band is not measurable above the noise.

4.2.4. Blooming Behavior

[36] Anti-blooming gates are not included in the MER CCDs [Bell et al., 2003], so excess charge will bloom into adjacent pixels. The blooming behavior of the MI was tested in ambient conditions by placing an optical fiber at the nominal best focus position and adjusting the integration time to produce various amounts of blooming. The CCDs bloomed as expected; in some cases the excess charge bloomed all the way to the edge of the array in the transfer direction. Images taken immediately after these bloomed images showed no residual charge in the saturated areas, indicating that the excess charge was effectively removed during flush. While blooming may be difficult to avoid in images of targets illuminated directly by the sun, the results of these tests show that it will not degrade subsequent images.

4.2.5. Rock Target Imaging

[37] Images of rock targets were successfully acquired using both MI flight units under ambient conditions. Examples of these images are shown in Figures 2 and 7; many other images were taken at various distances from the best focus position (see Appendix A). The rock targets had rough, natural surfaces on one side and were polished to simulate a surface that had been prepared by the RAT on the other. Images of both sides of the targets were acquired using the MI flight units. These images are useful for testing MI software (described below) and will be used in conjunction with data from other Athena instruments to characterize the rock samples, as described by Squyres et al. [2003].

4.2.6. Transfer Smear

[38] The MER camera design does not include a mechanical shutter. The CCD is flushed immediately before integration begins, and at the end of integration the image is transferred to the storage region in 5.12 msec [Bell et al., 2003]. During the flush and transfer of the charge from the image region to the storage region, light continues to fall on the image region and generate photoelectrons. Each pixel is exposed to incident light for 5 μsec times its row number, where row 1 is shifted into the shielded region of the CCD first. This results in a smeared image of the scene, and is also known as the “shutter effect.” A common method to correct for transfer smear is to obtain a zero-second exposure of the same scene immediately before or after the image to be corrected. These “zeros” were often taken during the MI calibration and testing to allow this correction to be made. In addition, the same fiber setup described in the previous subsection was used to evaluate transfer smear for each of the MI flight units. Analysis of these data has confirmed that transfer smear is generated during both the CCD flush cycle and during image transfer. Using these results, we have developed a model that can be used to remove the effects of transfer smear in full-frame, unsaturated images for which no “zero” is available.

[39] For the MER cameras, the minimum SNR due to transfer smear (for the last pixels transferred) is therefore t_ex/10.24, where the exposure time t_ex is in msec and 10.24 msec is the sum of the flush duration and the transfer time. For example, SNR > 100 for exposures longer than 1 second (considering noise contributions only from transfer smear). Correction of the shutter effect, either by subtracting a zero-exposure frame of the same scene, or using the model described above, is therefore required for images taken with exposure times less than 1 second in order to meet the 100:1 SNR requirement (Table 1). Correction of transfer smear during landed operations using flight software is discussed below.
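A simplified version of such a smear model is sketched below. It treats only the readout-transfer contribution (each charge packet spends 5 μs under every scene row it crosses on its way to the storage region, consistent with the 5 μs × row number exposure noted above) and ignores the flush contribution and saturation, so it illustrates the geometry of the effect rather than reproducing the flight or calibration algorithm.

```python
import numpy as np

ROW_TRANSFER_S = 5e-6   # 5 us of extra exposure per row shift during frame transfer

def add_transfer_smear(scene_rate_dn_s, t_exp_s):
    """Forward model: smeared image (DN) given a per-pixel scene rate (DN/s).

    Row 1 is shifted into the shielded storage region first, so the packet that
    ends up holding row r has picked up 5 us of signal from each of rows 1..r."""
    cumulative = np.cumsum(scene_rate_dn_s, axis=0)   # sum of rates over rows 1..r, per column
    return scene_rate_dn_s * t_exp_s + ROW_TRANSFER_S * cumulative

def remove_transfer_smear(measured_dn, t_exp_s):
    """Invert the forward model row by row (all columns at once);
    valid only for unsaturated, full-frame images."""
    rate = np.empty_like(measured_dn, dtype=float)
    running = np.zeros(measured_dn.shape[1])          # sum of rates already solved
    for r in range(measured_dn.shape[0]):
        rate[r] = (measured_dn[r] - ROW_TRANSFER_S * running) / (t_exp_s + ROW_TRANSFER_S)
        running += rate[r]
    return rate * t_exp_s                             # smear-free image in DN
```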

4.2.7. Geometry

[40] The effective focal length, field of view, and geometric distortion of the MI flight units were measured under ambient conditions in the test configuration shown in Figure 12. An accurately characterized grid target was imaged at various distances relative to nominal best focus to evaluate changes in geometry with distance. The position of the grid target was controlled to sub-micron accuracy using the stage and controller shown on the right side of Figure 12. Preliminary analysis of the geometric data shows that the effective focal length of both Microscopic Imagers is 20.2 mm, their working f/# is 14.4, and their field of view at the best focus position is 31.5 mm. The IFOV at best focus for MI 105 is 30.8 microns/pixel, and the IFOV for MI 110 is 30.7 microns/pixel. Geometric distortion is less than the measurement error of 0.3 pixel. All of these values are within the design specifications (Table 1).

4.2.8. Resolution

[41] The depth of field and modulation transfer function (MTF) of the MI flight units were measured under ambient conditions in the test configuration shown in Figure 12. A well known bar target was imaged at various distances relative to nominal best focus to evaluate changes in MTF with distance. The position of the bar target was controlled to sub-micron accuracy using the stage and controller shown on the right side of Figure 12. Preliminary reduction of the bar target data indicates that the MTF and depth of field requirements are met for both flight units (Figure 15). Within 2 mm of the best focus distance, changes in MTF are not significant.

Figure 15.

Modulation transfer function from bar target images taken at various distances from MI optics first principal plane. (top) MI S/N 105 MTF at target distances separated by 1 mm. (bottom) MI S/N 110 MTF at target distances separated by 3 mm.

4.2.9. Scattered and Stray Light

[42] Scattered and stray light were evaluated under ambient conditions using an optical fiber mounted to permit rotation relative to the camera optics, within and beyond the field of view. The fiber was also moved vertically in and out of the MI field of view. These tests show ghost images that are likely due to reflections between the first powered optical element (lens) and the sapphire window (Figure 16). Images of the optical fiber were taken at exposure times that resulted in saturation when the fiber was in the field of view. The fiber was moved in 5° angular increments within and beyond the edge of the field of view, with shutter frames taken at each location. The fiber was moved in both yaw and pitch relative to the MI optical axis, in both positive and negative yaw directions but only positive pitch. The fiber was positioned near best focus for MI 105, and at a distance of ∼300 mm for MI 110 testing.

Figure 16.

Typical ghost images in MI S/N 105 images, corrected for transfer smear and contrast enhanced. Zero-exposure frame used to correct for transfer smear was also saturated, causing 0 DN columns at azimuth of light source. Maximum ghost intensity ∼0.01% of source intensity. (left) Image 020718083140, yaw +10.1°. (right) Image 020718085123, yaw −11.4°.

[43] The maximum brightness of ghost images was calculated relative to the fiber brightness measured in unsaturated images, using calibration data for the fiber illuminator. There is very little difference in the intensity of ghosts between the yaw and pitch directions. When the yaw or pitch exceeds 12.3 degrees, the source is outside of the field of view of the MI. The decrease in ghost intensities as the fiber is moved away from the center of the field of view does not appear to be affected by the source being outside of the field of view. The images taken with the source closest to the center of the field of view show the greatest ghost intensity, up to ∼0.1%. As shown in Figure 16, the pattern of ghost images is symmetrical about the center of the field of view. When the fiber was moved outside of the MI field of view, the pattern of ghost images remained similar. Images taken with the fiber pitched upward and outside of the field of view show similar ghost patterns, indicating little dependence on the azimuth of the source. Scattered light is more intense than the ghosts near the light source, but is limited in extent and not a concern.

4.3. System Tests

[44] MI S/N 105 was integrated onto the MER-2 flight system (“Spirit”), which was launched on June 10, 2003 and is scheduled to land on Mars on January 4, 2004. MI S/N 110 was integrated onto the MER-1 flight system (“Opportunity”), which was launched on July 7, 2003 and is scheduled to land on January 25, 2004. After integration, system tests were performed to verify proper camera operation and measure the accuracy of MI placement using the IDD. One of the first images taken by a MER camera using flight software and hardware is shown in Figure 17. Overall, MI performance during system testing was excellent. However, as expected, radiation from the 57Co Mössbauer reference source [Klingelhöfer et al., 2003] resulted in short bright tracks in MI dark frames (Figure 18). The radiation-induced tracks are expected to decrease in frequency during the MER mission as the Mössbauer sources decay. In addition, some new bad pixels are apparent in MI images taken during system testing, as shown in Figure 18. No evidence for coherent noise has been found in any of the MI test images.

Figure 17.

Full MI S/N 105 frame taken under ambient conditions using flight rover hardware and software.

Figure 18.

MI 105 zero-second dark frame taken during system testing, contrast enhanced to show radiation tracks. Note also vertical lines caused by transfer smear of bad pixels.

[45] During electromagnetic interference testing, interference between the UHF receiver and the instruments on the IDD was observed. Therefore the MI cannot be operated during UHF passes, but UHF passes are expected to last no more than 6 minutes, typically once a day [Squyres et al., 2003]. During system thermal tests, the IDD was deployed and commanded to move the MI to a test target. The MI contact sensor was successfully used to detect the target, then the IDD was moved away from the target to the nominal best focus position (Figure 19).

Figure 19.

MI S/N 105 image of test target, taken during MER-2 system thermal testing.

[46] After the MER launches, operational readiness tests are planned that will make use of rover system testbeds at JPL. It is expected that many useful lessons will be learned about MI and IDD operations during these tests and that these lessons will be applied to surface operations.

4.4. In-Flight Calibration

[47] In order to verify the accuracy of preflight calibration and to identify changes in camera performance, acquisition of a limited amount of in-flight calibration data is planned. Analysis of these data will enable updating of calibration parameters if necessary, perhaps improving MI calibration. Anticipated in-flight calibration activities are described below.

4.4.1. Dark Fields

[48] During cruise to Mars, MI dark current images and reference pixel data will be acquired and returned to Earth. These dark data will be acquired at different temperatures if possible and losslessly compressed. This will serve as a functional test and permit the dark current model to be verified and updated. Reference pixel data will also be acquired and optionally returned during the landed mission. It will not be possible to obtain MI dark images on the Martian surface at temperatures that produce significant dark current because there is no mechanical shutter in the camera.

4.4.2. Target Imaging

[49] During surface operations, in particular during the “calibration campaign” soon after landing, images of the Compositional Calibration Target (CCT) and magnet array will serve to verify IDD positioning accuracy and MI focal distance. The black dots on the edge of the CCT (shown in Figure 20) are easily resolvable by the MI. This test will utilize the experience and sequences derived from the system level tests described above. Any changes with respect to preflight calibration data will be analyzed and may be used to modify MI/IDD command sequences.

Figure 20.

Isometric drawing of CCT. Diameter of gray disk is 45 mm; diameter of black dots around edge of target is 1 mm.

4.4.3. Sky Flat Fields

[50] MI images of the Martian sky, taken with the dust cover open and closed, will be used to verify and, if necessary, update the flat field calibration. Sky images could be acquired while the Mössbauer spectrometer or APXS is placed against a surface target, for example. Any clouds in the sky will be very far out of focus, but Navcam or Pancam images of the same area of sky will be taken to verify that cloud-free flat fields are acquired.

5. Operation

[51] This section describes how the IDD and MI will be commanded to acquire MI data and the flight software that can be optionally used to process the images onboard the rovers. The IDD will not be deployed until after rover egress from the lander, so no useful MI data will be obtained until after egress. The other MER remote sensing instruments will be used to select targets for in situ investigation [Squyres et al., 2003]. After egress, the rover will be commanded to approach targets so that they are within the IDD workspace and accessible to the MI and other in situ instruments. In addition, the IDD will be used to place the MI and other instruments against the compositional calibration target and the filter/capture magnets mounted on the front of the rovers [Madsen et al., 2003].

[52] Once the rover is deployed onto the Martian surface, the MI will be used primarily in two different ways. One will be in combination with all of the other Athena payload elements on high-priority rock and soil targets. When a target has been selected for in-depth investigation, all five Athena instruments can be brought to bear to study it. The IDD will be used to place the MI on the same spot investigated by the APXS and the Mössbauer spectrometer, and this spot will also be imaged in all colors by Pancam and investigated in the infrared by Mini-TES. For targets that appear to have interesting spatial heterogeneity, multiple IDD placements may be used to acquire MI mosaics. And if the surface topography of a target appears to be especially significant, the IDD can be used to acquire MI images in stereo.

[53] The second mode of MI use is for “target of opportunity” imaging. At the end of each sol on which the rover has moved, the front Hazcams will be used to document the scene in front of the rover. This Hazcam stereo pair provides the geometric information necessary to assess whether it is safe to deploy the IDD into its work volume and to identify targets within the work volume. Whenever the work volume is found to be safe, it will be normal practice to begin the next sol by deploying the IDD and acquiring MI images of the most scientifically interesting target within that volume. These targets of opportunity are distinct from the high-priority targets observed by all instruments; they are simply the most interesting features found in front of the rover after each move it makes. Imaging of targets of opportunity will substantially increase the volume of data from the MI and should provide good documentation of the diversity of fine-scale surface textures seen at the two MER landing sites. Such MI images will also provide serendipitous but invaluable information on lithologic diversity in the vicinity of the rovers.

5.1. IDD Positioning

[54] The accuracy and repeatability of IDD positioning is clearly important to MI and other MER in situ instrument operations. The IDD is part of the Instrument Positioning System (IPS), which includes the front Hazcams and the software needed to use stereo imagery to build 3D models of objects within the IDD workspace [Squyres et al., 2003; Maki et al., 2003]. The terrain maps generated using stereo Hazcam images are used to determine the position and shape of potential IDD targets. These terrain maps are used to determine which parts of the surface can be accessed by the various in situ instruments, including the MI. Accessibility is determined by both the position and orientation of the target surface in the terrain map, and is limited by joint motion restrictions and possible collisions of the IDD and in situ instruments with the rover or the Martian surface. An MI target is therefore selected based upon its 3D position and surface normal orientation. Once a target is selected for MI observations, this information is used to command the IDD to position the MI at a suitable distance from the target with its boresight parallel to the surface normal. The accuracy of the MI positioning by the IPS determines the quality of the image focus.

[55] The requirements for IDD/IPS positioning relevant to the MI are summarized in Table 4. The IPS tests performed during MER system testing indicate that the relative positioning requirements can be met. Front Hazcam stereo images of the IDD workspace will be used to select MI targets and determine their 3-dimensional position and orientation. This information will then be used to command the IDD to position the MI and other in situ instruments. Estimates of the position and orientation of the MI for each acquired image will be stored in the rover computer and returned to Earth with the image data.

Table 4. IDD/IPS Positioning Requirements
Parameter | Value(s)
Angular accuracy of instrument positioning in free space within the dexterous workspace of the IPS | 5 degrees
Instrument positioning accuracy in free space within the dexterous workspace of the IPS | 5 mm
Instrument positioning repeatability | ±4 mm in position, ±3 degrees in orientation
Minimum controllable motion along a science target's surface normal vector | 2 mm ± 1 mm RMS
Positioning accuracy of each in situ payload element to a science target that has not been previously contacted by another in situ instrument | ≤10 mm
Orientation accuracy of each in situ payload element with respect to the normal of a science target's local surface that has not been previously contacted by another in situ instrument | ≤10 degrees
IDD damping time after placing the MI in position for imaging (vibration amplitude less than 30 microns) | ≤15 seconds

5.2. Contact Sensing

[56] The MI contact sensor can be used to terminate a “guarded move” by the IDD, as follows. The IDD will be commanded to move the MI along its boresight axis toward a hard object, such as a rock. Motion along this vector will continue until contact is sensed, at which time IDD motion is immediately stopped. The contact sensor can also be used to place the MI near targets on the rover, except for the magnet arrays, to avoid disturbing material collected on them.

5.3. Imaging Sequences

[57] Various types of MI imaging sequences are planned and will be tested extensively on the MER system testbeds before landing. The results of these tests are likely to refine the details of MI sequences, and experience during landed operations will also be useful in updating command sequences. The MI Payload Uplink Leads (PULs) have primary responsibility for the generation, validation, and updating of MI command sequences. As described above, images taken with the dust cover opened and closed in the same position can be used to derive color information at the 30-micron scale. Such a cover open/close image pair can be inserted into any of the sequences described below.

[58] For hard targets such as rocks, the MI contact sensor will be used to position the camera close to the target using the “guarded move” described above. Depending on the roughness of the target surface, the MI will then be moved away from the target to a position that is less than the best focus distance from the optics. If the surface is extremely rough, an image could be acquired with the sensor still in contact with the target. In either case, the dust cover will be opened and MI images will be taken in positions separated by a few millimeters as the IDD moves the camera away from the target along the MI boresight axis. Each image in such a sequence is a “focal section.” The number of positions at which the MI will stop and take a focal section will depend on the roughness of the target and the accuracy of IPS positioning; no more than 8 images are expected to be required per sequence. The dust cover will be closed at the end of the imaging sequence. Because the IDD has only 5 degrees of freedom, the MI will generally rotate between images in the sequence. The amount of MI rotation can vary widely depending on the position and orientation of the target in the IDD workspace. This sequence can be repeated at different locations on a target to produce overlapping image coverage for stereo or to build a mosaic of MI images of a large target. The software being developed to merge focal sections and mosaic images that are rotated relative to each other is described in section 6 below.
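To make the geometry of such a sequence concrete, the following Python sketch generates a set of camera-to-target distances for a hard-target focal section series. The nominal best focus distance used here (about 66 mm), the starting offset, the step size, and the stopping rule are illustrative values chosen for this sketch, not flight sequence parameters.

```python
# Illustrative sketch only: plan camera-to-target distances for a hard-target
# focal section sequence that starts closer than best focus and steps away
# from the target along the MI boresight.

def plan_focal_sections(best_focus_mm=66.0, start_offset_mm=-6.0,
                        step_mm=3.0, max_images=8):
    """Return a list of camera-to-target distances (mm) for a focal-section
    sequence. Values are assumptions for illustration, not flight parameters."""
    distances = []
    for i in range(max_images):
        d = best_focus_mm + start_offset_mm + i * step_mm
        distances.append(d)
        if d > best_focus_mm + abs(start_offset_mm):
            break  # stop once the series has stepped well past best focus
    return distances

# Example: plan_focal_sections() -> [60.0, 63.0, 66.0, 69.0, 72.0, 75.0]
```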

[59] For targets that are too soft to allow contact sensing, the MI will initially be positioned far enough from the target to ensure that it is beyond the MI's best focus distance. The location and shape of the target will be determined using stereo Hazcam images [Maki et al., 2003; Squyres et al., 2003]. The standoff distance will depend on the uncertainties in target geometry and IPS positioning accuracy. The dust cover will then be opened and the MI will acquire images in positions separated by a few millimeters as the IDD moves the camera toward the target along the MI boresight axis. For this sequence the IDD performs a “guarded move” so that, if contact with the target is sensed, IDD motion will be halted. The number of imaging positions will be chosen to minimize the likelihood of contacting the surface in the event that the target is too soft to activate the contact sensor. To avoid possible damage to the dust cover, the MI will be backed away from the surface before closing the dust cover. Again, the focal sections in the sequence will generally be rotated relative to each other. This sequence can be repeated at different locations on a target to produce overlapping image coverage for stereo or to build a mosaic of MI images of a large target. A soil surface that has previously been contacted by the Mössbauer spectrometer will be easier to image because the Mössbauer position information will reduce the uncertainties in MI positioning relative to the target. However, the soil surface is likely to be disturbed by Mössbauer contact, so MI observations of soils are desired before other in situ instruments contact them. Physical properties of the soil, such as particle compressibility and porosity, may be inferred by comparing MI observations taken before and after Mössbauer contact.

[60] A rock surface that has been abraded by the RAT is likely to be flat enough that only one or two MI images of the surface should be required to ensure that it is imaged in good focus. Previous contact of the surface by the RAT and other instruments will reduce uncertainties in MI positioning relative to the target. If the MI contact sensor is used on an abraded surface, the flatness of the surface will similarly allow more accurate determination of the surface location relative to the MI depth of field.

[61] Images of targets on the rover taken early in the landed mission will be used to optimize later MI observations of the rover targets. Initially, a guarded move and the MI contact sensor will be used to position the camera for an imaging sequence similar to that described above for rocks. The data from this sequence will be analyzed and used to determine the position and orientation that yields the best MI images of each rover target.

5.4. Flight Software Services

[62] MI data acquisition and onboard processing are performed by MER flight software. All MER cameras are commanded using a single command structure [Maki et al., 2003]. A separate command opens or closes the MI dust cover. Each MI imaging command includes parameters that specify downlink priority, image ID, exposure time or auto-exposure, image compression parameters, and various types of optional processing and products. MI-specific uses of flight software services are described below. MER imaging flight software services are described in more detail by Maki et al. [2003]. After onboard processing and compression, MI data will be packetized for downlink to Earth.

5.4.1. Autoexposure

[63] The scenes viewed by the MI on Mars are likely to contain a wide variety of brightness levels, depending on illumination conditions and target properties. Specular reflections from crystal faces are possible, especially in scenes that are directly illuminated by the sun. Partial shadowing of the MI target by the rover, IDD, or the MI itself can also produce a large brightness range. The best exposure times for MI images are therefore very difficult to predict, and we expect to use the autoexposure capability often. The number of test exposures required to determine the proper exposure time is expected to range from 2 to 6. For each autoexposure iteration, the histogram of the test image is calculated and compared against the commanded DN threshold and pixel fraction parameters. If the fraction of pixel values above the threshold exceeds the commanded fraction, the exposure time is reduced accordingly. This process is repeated as necessary, up to the specified number of iterations. The initial exposure time, threshold, and pixel fraction parameters will be selected on the basis of the expected illumination conditions and experience gained in ground testing. Fewer autoexposure iterations are expected to be required for the second and later images in a focal section sequence, because the initial exposure time will be set to that used for the previous focal section.
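As an illustration of this loop, the sketch below implements a simplified autoexposure iteration in Python. The halving update rule, the function names, and the handling of only the overexposed case are assumptions made for brevity; they do not reproduce the flight algorithm.

```python
import numpy as np

def autoexpose(acquire, t_init, dn_threshold, max_fraction, max_iters=6):
    """Iteratively adjust exposure time until the fraction of pixels brighter
    than dn_threshold is no greater than max_fraction.

    `acquire(t)` is a stand-in for taking a test image with exposure time t
    and returning it as a 2-D array of DN values (an assumed interface).
    """
    t = t_init
    for _ in range(max_iters):
        img = acquire(t)
        frac_bright = np.count_nonzero(img > dn_threshold) / img.size
        if frac_bright <= max_fraction:
            return t  # exposure acceptable; use this time for the final image
        # Too many bright/saturated pixels: shorten the exposure.
        # (The assumed update rule here simply halves the exposure time;
        # increasing underexposed frames is omitted for brevity.)
        t *= 0.5
    return t
```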

5.4.2. Onboard Image Processing

[64] Simple image processing tasks can be performed onboard the rovers to correct for transfer smear, bad pixels, and flat field variations. These processing options can be applied in sequence or one at a time. The correction for frame transfer smear, or “shutter effect,” can be applied if the exposure time is less than a given threshold. This conditional shutter correction will be very useful in conjunction with autoexposure, when the exposure time will not be known in advance. If the shutter correction is applied, a zero-second exposure is acquired immediately after the image to be corrected and subtracted from the original image.
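The conditional smear correction can be sketched as follows; the exposure-time threshold value and function names are illustrative placeholders, not flight parameters.

```python
import numpy as np

SHUTTER_CORRECTION_THRESHOLD_S = 1.0  # assumed placeholder, not a flight value

def correct_transfer_smear(image, exposure_s, acquire_zero_exposure):
    """Subtract a zero-second (readout-only) frame from `image` when the
    exposure time is below the commanded threshold; otherwise return the
    image unchanged."""
    if exposure_s >= SHUTTER_CORRECTION_THRESHOLD_S:
        return image
    smear_frame = acquire_zero_exposure()  # zero-second exposure taken immediately after
    corrected = image.astype(np.int32) - smear_frame.astype(np.int32)
    return np.clip(corrected, 0, 4095).astype(np.uint16)  # keep 12-bit DN range
```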

[65] An updateable table of bad pixel locations for each camera is stored onboard the rovers, and can be used to correct images for bad pixels before downlink. Each bad pixel is replaced by a mean of nearby pixels, weighted by the distance of the nearby pixel from the bad pixel. Correction of bad pixels may increase the compressibility of the images and is therefore likely to be used often on MI data.
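A minimal Python sketch of this distance-weighted replacement is shown below; the neighborhood radius and the inverse-distance weighting are assumptions chosen for illustration.

```python
import numpy as np

def replace_bad_pixels(image, bad_pixel_list, radius=2):
    """Return a copy of `image` in which each (row, col) in bad_pixel_list is
    replaced by an inverse-distance-weighted mean of nearby good pixels."""
    out = image.astype(np.float64)
    bad = set(bad_pixel_list)
    nrows, ncols = image.shape
    for r, c in bad_pixel_list:
        weights, values = [], []
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                rr, cc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= rr < nrows and 0 <= cc < ncols):
                    continue
                if (rr, cc) in bad:
                    continue  # do not use other bad pixels in the mean
                dist = np.hypot(dr, dc)
                weights.append(1.0 / dist)
                values.append(image[rr, cc])
        if weights:
            out[r, c] = np.average(values, weights=weights)
    return out
```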

[66] Similarly, a model of the normalized flat field response of each camera is stored onboard the rovers and used to flatten images. If such a correction is commanded, the input image is divided by a normalized image generated from the stored model parameters. Such onboard corrections can increase the compressibility of MI images, but preliminary analysis of the flat field calibration data indicates that variations in response have low spatial frequency, so the effect on compression is likely to be minimal.
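Conceptually, the onboard correction reduces to a division by a normalized responsivity image, as in the following sketch; representing the stored model as a full-frame array is an assumption made for illustration.

```python
import numpy as np

def apply_flat_field(image, flat_model):
    """Divide `image` by a normalized flat-field image (mean near 1.0)
    generated from stored model parameters (assumed already evaluated here)."""
    flat = np.where(flat_model > 0, flat_model, 1.0)  # guard against zeros
    return image.astype(np.float64) / flat
```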

5.4.3. Data Products

[67] Several types of imaging data products can be created onboard the rover [Maki et al., 2003]. Image data volume can be reduced by summing rows or columns, subframing (or windowing), or downsampling. Because the goal of MI observations is to resolve small features on Mars, row or column summing is not likely to be performed on MI images. However, subframing (selecting a part of the image for downlink) and/or downsampling (calculating a mean or median of pixels in specified blocks) can be used to reduce MI data volume for downlink. Subframe products are defined by starting row and column and by number of rows and columns. Downsampling can be used to create a “thumbnail” version of an image for rapid downlink and assessment on the ground. If the thumbnail indicates that the image is of scientific interest, the full-resolution image can later be returned to Earth. This capability may prove very useful in selecting well-focused MI data for downlink. A histogram of the image data can also be generated and returned to Earth as a separate product. Reference pixels are returned as a separate product if requested.
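The sketch below illustrates subframe and block-downsampling products of the kind described above; the parameter names and the 8 x 8 thumbnail block size are illustrative assumptions.

```python
import numpy as np

def subframe(image, start_row, start_col, n_rows, n_cols):
    """Select a rectangular region of the image for downlink."""
    return image[start_row:start_row + n_rows, start_col:start_col + n_cols]

def downsample(image, block=8, method="mean"):
    """Reduce an image by computing the mean (or median) of each block x block
    tile, e.g. to produce a thumbnail of a full 1024 x 1024 frame."""
    n_rows, n_cols = image.shape
    trimmed = image[:n_rows - n_rows % block, :n_cols - n_cols % block]
    tiles = trimmed.reshape(trimmed.shape[0] // block, block,
                            trimmed.shape[1] // block, block)
    if method == "median":
        return np.median(tiles, axis=(1, 3))
    return tiles.mean(axis=(1, 3))
```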

5.4.4. Data Compression

[68] After completion of the optional image processing described above, any image data products can be scaled from 12 to 8 bits by truncating the 4 least significant bits, shifting bits, or using one of 5 uploadable look-up tables (LUTs). In most cases, it is expected that MI images will be scaled to 8 bits using a LUT. The design of the LUTs will make use of experience gained in previous Mars lander imaging experiments, MER camera test and calibration, and MER/FIDO field tests. Ideally, one of the 5 LUTs will be optimized for MI images. Experience gained during landed operations can be used to modify the LUTs as necessary.
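The following sketch illustrates the two simpler scaling options and one plausible, but entirely hypothetical, LUT design; the actual uploadable flight LUTs are not reproduced here.

```python
import numpy as np

def scale_by_shift(image12, shift=4):
    """Drop the `shift` least significant bits of 12-bit integer data."""
    return (image12 >> shift).astype(np.uint8)

def build_sqrt_lut():
    """Example (assumed) LUT mapping 12-bit DN (0-4095) to 8 bits with a
    square-root response, preserving more resolution at low signal levels."""
    dn = np.arange(4096)
    return np.round(255.0 * np.sqrt(dn / 4095.0)).astype(np.uint8)

def scale_by_lut(image12, lut):
    """Apply a 4096-entry look-up table to a 12-bit integer image."""
    return lut[image12]
```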

[69] To further reduce MI data volume, images can also be compressed using either the ICER or LOCO algorithms [Maki et al., 2003]. ICER provides lossy or lossless compression using a wavelet transform and includes error containment features. The number of bits per pixel or a quality goal can be specified, as can the wavelet filter, number of stages of wavelet decomposition, and number of image segments for error containment. These parameters will be selected on the basis of the results of compression tests using MI test images. Preliminary tests show that ICER lossy compression performs very well, and it is expected that most MI images will be compressed to about 1 bit per pixel. Because ICER lossless compression requires more processing time, LOCO will be used for lossless compression of MI data.

6. Data Processing and Products

[70] After MI data are received on the ground, they will be depacketized and formatted into Experiment Data Records (EDRs) at JPL. The EDRs will be in Planetary Data System (PDS) format, with the label populated using telemetry data headers, meta-data and SPICE kernels. Following the label, the EDR will contain raw (uncalibrated) binary image data. Each EDR will be generated within 1 minute of the receipt of the last data packet and placed on the MER Operations Storage Server (OSS). EDRs will also be automatically sent to the USGS in Flagstaff via a secure file exchange service. MI EDRs will be processed to various data products at JPL and in Flagstaff, using the USGS Integrated Software for Imagers and Spectrometers (ISIS) [Eliason, 1997; Gaddis et al., 1997; Torson and Becker, 1997] and other software including the Interactive Data Language (IDL). These products and the methods used to create them are described below, followed by an overview of the plan for data release and archiving of MI data. A flowchart of MI ground data processing is shown in Figure 21.

Figure 21.

MI ground data processing flowchart.

6.1. Standard MI Processing

[71] Standard products will be generated within a few minutes of receipt of EDRs in order to support mission operations. These products will be made available to the MER operations team quickly to allow assessment of the quality of the data and to aid scientific interpretation. The MI Payload Downlink Leads (PDLs) have primary responsibility for timely generation of standard products at JPL during landed operations. The software required to generate standard MI data products is described below.

6.1.1. EDR/Ancillary Data Ingestion

[72] ISIS must ingest compressed PDS-formatted EDRs and convert PDS keywords into ISIS labels. The label information will be used by other ISIS programs as described below, so accurate label data must be generated. Within ISIS, portions of the required geometric information associated with each image may be stored in the image labels, in SPICE kernels external to the image, and/or in an external camera-definition file, depending on the final software design. Ingestion software must have the capability to select the geometric information to be used from alternate sources, such as data embedded in the EDR labels and/or external SPICE kernels. This software also ingests Pancam and other camera EDRs calibrated to radiance or reflectance in PDS format into the ISIS system with appropriately formatted labels. The product of this software is raw image data with an ISIS label, to be used by the programs described below.

6.1.2. Radiometric Calibration

[73] The MI calibration software locates and reads MI EDRs and produces radiometrically calibrated images in ISIS format. This software set utilizes reduced MI calibration data to perform the following functions: correction of transfer smear, bad pixels, and light transfer nonlinearity; modeling and subtraction of bias (offset) and dark current; flat-fielding; and conversion from raw DN to I/F (irradiance relative to a white Lambert surface, illuminated normally). Updates to this software may be made on the basis of in-flight MI calibration data as they are acquired and analyzed. The calibrated image products generated by this software are referred to as “Level 1” products, which have not been geometrically processed [e.g., Gaddis et al., 1999].
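The overall calibration chain can be summarized by the following simplified Python sketch, which assumes a purely linear bias/dark/flat/responsivity model; the function name and parameters are assumptions, and the actual ISIS software uses reduced MI calibration data and more detailed, temperature-dependent models.

```python
import numpy as np

def calibrate_to_iof(raw_dn, exposure_s, bias_dn, dark_rate_dn_per_s,
                     flat_field, resp_dn_per_s_per_radiance, solar_radiance):
    """Convert a raw MI frame to I/F under a simple linear model (sketch only)."""
    dn = raw_dn.astype(np.float64)
    dn -= bias_dn                                   # remove offset (bias)
    dn -= dark_rate_dn_per_s * exposure_s           # subtract modeled dark current
    dn /= np.where(flat_field > 0, flat_field, 1.0) # flat-field correction
    radiance = dn / (resp_dn_per_s_per_radiance * exposure_s)
    return radiance / solar_radiance                # I/F
```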

6.1.3. Focal Section Merges

[74] Each image, or focal section, in a multiimage MI sequence generally shows only part of the target in good focus, because the boresight of the camera may not be exactly normal to the target surface and/or the target may be rough compared to the depth of field of the MI. The focal sections will therefore be merged to produce a single image showing the entire target in good focus, including correction of any geometric distortion. This technique has been demonstrated, and software developed, at the Ames Research Center using test images from the Robotic Arm Camera on Mars Polar Lander [Keller et al., 2001]; similar methods are used in the biology community. However, the rotation of the MI focal sections around the boresight and uncertainties in the position and orientation of the MI significantly complicate the application of existing software. A variety of alternative approaches to this problem are being explored. The geometric transformations needed to align the images may be incorporated into the focal section merging software, but the capabilities in ISIS and SOCET SET to project the images onto an oblique plane for mosaicking, as described below, will also serve to align them for merging. Software developed at Ames is being modified for use with MI focal section sequences. Furthermore, the ability to select, align, and mosaic the in-focus portions of images interactively in various commercial software packages (e.g., SOCET SET, Photoshop) provides a robust backup to the more sophisticated merging software.
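For focal sections that have already been aligned, the merging step itself can be sketched as a per-pixel selection of the sharpest section, as below. The Laplacian-energy focus measure and the window size are assumptions, and the registration and rotation compensation discussed above are not shown.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def merge_focal_sections(stack):
    """stack: array of shape (n_sections, rows, cols) of aligned focal sections.
    Returns a single image taking each pixel from the sharpest section."""
    stack = np.asarray(stack, dtype=np.float64)
    # Local focus measure: smoothed squared Laplacian response per section
    # (window size of 9 pixels is an illustrative choice).
    sharpness = np.stack([uniform_filter(laplace(img) ** 2, size=9)
                          for img in stack])
    best = np.argmax(sharpness, axis=0)   # index of sharpest section per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```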

6.1.4. Conversion to PDS Format

[75] The products described above will be useful in supporting operations planning and data analysis and must be validated and archived to the PDS. Operations planning and analysis software will be able to ingest PDS-formatted data, so ISIS products will be converted to PDS format and placed on the OSS. Products to be converted include calibrated images, high-level products, calibration and other ancillary data files, index tables, and documentation. This software will make use of existing ISIS code, modified for MER as needed.

6.2. High-Level MI Processing

[76] High-level products are not needed for daily operational decisions and are therefore generated days to weeks after the EDRs are available. All of these products will be provided to the science team for analysis.

6.2.1. Mosaics

[77] Multiimage MI sequences can be repeated to produce overlapping image coverage of an extended target. These images will be controlled, projected, and mosaicked into a single image that shows the entire area in good focus. The software will define and generate parameters for an arbitrary oblique projection surface approximating the target and project the images onto it. Uncertainties in the position and orientation of the MI will likely be large enough that each image will have to be geometrically controlled on the basis of measurements of features in the areas of image overlap. A photogrammetric bundle-block adjustment program is being written in ISIS that will be able to calculate improved control for sets containing a mix of images from MI and the other MER cameras. The tools for obtaining the manual and automatic image measurements on which the adjustment is based already exist.

6.2.2. Stereogrammetry, DTMs

[78] The multiimage MI sequences described above can be repeated to produce stereo image coverage of a target by repositioning the MI using the IDD. These images will be used to generate a digital terrain model (DTM) of the target surface with sub-pixel registration and stereo matching errors. This step will be performed in SOCET SET, a commercial photogrammetric package already in use in Flagstaff [Kirk et al., 1999]. Stereo image pairs will be imported in local coordinates and DTM segments will be collected in such coordinates. No software modifications are required, but uncertainties in the position and orientation of the MI will likely be large enough that each image will have to be geometrically controlled either in ISIS as just described or with the feature-measurement and bundle-adjustment tools in SOCET SET. Areas imaged in stereo and in good focus will be used to measure tie points.

[79] The software developed to export IMP images from ISIS to SOCET SET will be generalized to work with MI and other MER camera data. SOCET SET needs a correctly formatted image file and an ASCII “support file” (detached label) containing geometric information. This information includes camera position and orientation relative to some coordinate system; SOCET SET does not maintain any information about the correlation of camera motion and pointing or shared parameters between images. The translation software will be able to produce SOCET support files with camera position and pointing relative to any of the coordinate systems of the chain defined for that image, e.g., relative to the camera head, rover-centered level, site-centered, etc. DTM segments produced in SOCET SET will be exported to ISIS. Surface normal vectors will also be calculated from the DTM segments and exported. It may also be desirable to transform and export image data as well. The geometric transformations will be carried out on a fully three-dimensional basis, using DTM data rather than a simplified model of the site. As a result, mosaics and focal section merges made from images that have been transformed in SOCET SET will be free of the geometric distortions such as parallax and scale errors that affect products made by “two-dimensional” reprojection in ISIS.

6.2.3. Pancam Color Merge

[80] Multispectral Pancam images will be merged with MI images of the same targets to determine the multispectral properties of features in MI images. The Pancam images will typically be lower in resolution by a factor of about 20 relative to MI images. Because the Pancam cannot view the entire IDD workspace, the rover will often have to move to acquire Pancam images of MI targets. This will likely result in significant uncertainty in the location of MI targets in Pancam images based only on dead reckoning. Human intervention will therefore be required to locate MI targets in Pancam images and pick control points; this process is likely to be complicated by differences in illumination between MI and Pancam images. The software will allow the user to update Pancam and MI pointing and position data in order to improve the merge as follows:

[81] a. Pick tiepoints between images using interactive display software

[82] b. Do bundle-adjustment in batch mode

[83] c. Examine statistics from (b), go to (a) and revise tiepoints as needed

[84] d. Project images in batch mode

[85] e. Examine alignment in projected space, go to (a) and revise tiepoints as needed

The software will define and generate parameters for an arbitrary oblique projection surface and either project the images onto it or project Pancam images onto MI Level 1 products. It may be necessary to update the position and orientation of the rover as well. Alternatively, Pancam and MI data can be merged less accurately by warping Pancam data based on predicted camera positions and manually registering the images.

6.2.4. Image Restoration

[86] The feasibility of image restoration/deblurring (Wiener-type filtering) of compressed MI data is being evaluated. The purpose of such processing is to recover spatial information that is lost because of blurring from diffraction or defocusing. The MTF data needed to apply such techniques have been gathered for the MI flight units at various distances relative to best focus. Because different areas of MI images will be blurred by varying amounts due to defocus, image restoration is most likely to be applied to merged focal sections.
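A frequency-domain Wiener restoration of the kind being evaluated can be sketched as follows; the noise-to-signal regularization constant is an assumed placeholder, and the point-spread function is presumed to be derived from the measured MTF data.

```python
import numpy as np

def wiener_restore(image, psf, nsr=0.01):
    """Deconvolve `image` with `psf` using a Wiener filter (sketch only).
    `psf` must be the same shape as `image`, zero-padded and centered;
    `nsr` is an assumed noise-to-signal regularization constant."""
    G = np.fft.fft2(image)
    H = np.fft.fft2(np.fft.ifftshift(psf))          # system transfer function
    wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)    # regularized inverse filter
    return np.fft.ifft2(wiener * G).real
```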

6.3. Data Release and Archiving

[87] MI data will be validated and released to the science community in a timely manner, following the MER Program Data Management Plan. This plan provides for timely release to the science community of all validated data from the Athena investigation. It also provides for rapid release to the public of a subset of the data considered to be of particular interest [Crisp et al., 2003; Squyres et al., 2003]. In addition to these data releases, there will also be releases of non-commercial/non-proprietary data analysis software and algorithms that were used to produce the data products. In particular, much of the MI software will be released as part of normal distributions of ISIS.

[88] A major advantage in having EDRs generated in PDS format is the resulting simplification of the archiving process. Similarly, MI data products that are converted to PDS format and placed on the OSS will be easy to prepare for archival. The MI EDRs and higher-level data products (also known as reduced data records or RDRs) will be validated for scientific integrity and conformity with PDS standards and transferred to the PDS according to the project-mandated schedule. In addition, event data, MI engineering data, calibration files, and software will be validated and archived to the PDS. MI pointing information (C kernels) will be derived from IDD SPICE kernels and updated whenever possible using available imaging data sets and archived to the PDS. The details of the data release schedule and archival process are reported in the MER Archive Generation, Validation, and Transfer Plan (MER 420-1-200, JPL D-19658).

Appendix A: Images of Rock Targets Taken Using MI Flight Units

[89] Table A1 summarizes images taken of rock targets during ambient calibration of the MI flight units. MI S/N 110 (on MER-B) took images of rock targets with AREF IDs 174 and 222 [Squyres et al., 2003]. MI S/N 105 (on MER-A) took images of rock targets with AREF IDs 107, 178, 182, 183, 184, and 198. Target distances are accurate to ±1 mm.

Table A1. MI Observations of Rock Targets^a

Target Distance | AREF 107 | AREF 174 | AREF 178 | AREF 182 | AREF 183 | AREF 184 | AREF 198 | AREF 222
49 mm | | | | | | R | R |
52 mm | | | | | R | R | R |
55 mm | R | R | | R, S | R | R | R |
58 mm | R | R | R | R, S | R | R | R | R, Rw
61 mm | R | R | R | R, S | R | R, S | R, S | R, Rw
64 mm | R, S | R, S | R, S | | Rw | R | R, Sw | R
67 mm | | R, S | | R, S | R, Rw | | R | R, S, Sw
70 mm | | R, S | | R, S | R, Rw | | R | R
73 mm | | R, S | | R, S | R | | R | R

^a R, rough side of sample; S, smooth side of sample; “w” added if dust cover window material placed in front of target.

Acknowledgments

[90] The Microscopic Imager investigation owes much to the Athena Science team and the JPL team that designed and built the MER cameras. The authors appreciate the contributions made by Jim Aragon, Ali Bakhshi, John Bousman, Paul Cate, B. J. Chippindale, Nancy Cowardin, Roberta Davis, Darryl Day, Tom Dea, Bob Deering, Don Dunn, Perry Fatehi, Carolina Flores-Helizon, Virginia Ford, Wayne Hartford, Pete Kobzeff, John Koehler, Greg Lievense, Tim McCann, Ali Pourangi, Walt Proniewicz, Don Schatzel, Alejandro Soto, Beverly Stange, Robby Stephenson, Mike Sucy, Dave Thiessen, Charles Thompson, Rudy Vargas, Enrique Villegas, Marc Walch, Len Wayne, Mary White, Reg Willson, and Bobbie Woo. Many of these individuals assisted with the testing and calibration of the cameras, as did Deborah Bass, Charles Budney, John Callas, Wendy Calvin, Emily Dean, Bill Farrand, Lisa Gaddis, John Grant, Ed Guinness, Jeff Johnson, Jonathan Joseph, Kjartan Kinch, Mark Lemmon, Zoe Learner, Morten Madsen, Elaina McCartney, Scott McLennan, Doug Ming, Mary Mulvanerton, Tim Parker, Jon Proton, Frank Seelos, Jason Soderblom, Rob Sullivan, Roger Tanner, Jim Torson, Cathy Weitz, and Michael Wolff. The camera development and testing was well supported by the MER management staff and the IDD and ATLO teams at JPL. The MI contact sensor and dust cover were designed and built at Alliance Spacesystems, Inc. We also thank the rest of the USGS MER team: Jeff Anderson, Brent Archinal, Janet Barrett, Kris Becker, Debbie Cook, Eric Eliason, Lisa Gaddis, Annie Howington-Kraus, Chris Isbell, Jeff Johnson, Mark Rosiek, Bob and Tracie Sucharski, and Jim Torson. Detailed reviews by Deborah Bass, Stubbe Hviid, Lisa Gaddis and Jeff Johnson improved the quality of this paper and are much appreciated. The use of trade, product, or firm names in this paper does not imply endorsement by the U.S. Government.
