Presented at the 9th AIAA Conference on Computers in Aerospace, October 1993. Copyright (c) 1993 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

The Mars Observer Camera Ground Data System

Michael Caplinger
Malin Space Science Systems, Inc.
PO Box 910148
San Diego CA 92191-0148, USA

Abstract

The Mars Observer Camera (MOC), launched on board the Mars Observer spacecraft in September 1992, is to be operated by a small team using large amounts of computer assistance. Here, we describe the architecture of the MOC Ground Data System, which is capable of taking imaging requests, generating imaging command sequences, receiving the acquired images, archiving them, and delivering them in several processed forms to the MOC user community, all with only a limited amount of human interaction.

Introduction

The Mars Observer Camera (MOC), one of the instruments on NASA's Mars Observer spacecraft launched in September 1992, is an ambitious imaging system that will return over 350 gigabits of data during the course of one martian year (687 days). The MOC consists of a Narrow Angle (NA) system that can acquire images at 8 resolutions from 12 m/pixel to 1.5 m/pixel, with a maximum crosstrack dimension of 3 km. Complementing the NA is a two-color Wide Angle (WA) system that can acquire both global images at low resolution (7.5 km/pixel) and regional images at variable resolutions up to 250 m/pixel. The MOC contains a 32-bit microprocessor and a 12 megabyte RAM image buffer. Two forms of data compression are provided: a 2x lossless compressor implemented in hardware, and a lossy software-based transform compressor providing compression factors from 4x to over 20x. Depending on Earth-Mars distance, the MOC transmits data at rates from 700 bits/sec to nearly 40 kilobits/sec, and can support rates up to 128 kilobits/sec [1].

Previous planetary missions have been entirely controlled from a central location. In contrast, Mars Observer operations are distributed, with each instrument team controlling their instrument from their home institution [2]. NASA's Jet Propulsion Laboratory (JPL) operates the spacecraft, and serves as a central clearinghouse for commanding and receipt of telemetry. Mars Observer is intended to be a routine mapping mission with body-fixed nadir-pointed instruments and continuous data gathering, so operations are conceptually simple. Complex conflict resolution between instruments sharing limited resources, a major aspect of earlier missions such as Voyager, is not expected to be needed for MO operations, as all instruments are acquiring data continuously and returning it at rates programmed into the spacecraft's telemetry system.

However, the MOC cannot be operated in a routine and continuous manner. Since no downlink data rate is fast enough for continuous data gathering (the NA produces image data at a rate of 40 megabits/sec; the WA, at over 350 kilobits/sec), one is forced to "pick and choose" targets. Also, the NA has an extremely narrow field of view, and uncertainty in navigational predictions limits precise targeting of the NA to only a few days before image acquisition. This requires that command sequences be generated no more than a few days before execution, and that operations be extremely adaptive to changes in spacecraft position knowledge. One way to provide this flexibility would be to have modest automation and a large staff, but budgetary constraints make this impossible. Therefore, we have instead designed a system with an unprecedented amount of automation that requires only a small staff to supervise operations, make decisions that cannot be automated, and serve as liaisons to the community of science users. The hardware and software system to perform MOC operations is collectively called the MOC Ground Data System (GDS), and its architecture is the subject of this paper.

Interfaces

The design of the MOC GDS has been considerably simplified by the straightforward interface through which it communicates with the MOC itself. The MOC facility is connected to JPL via a dedicated 56 kbit/s data line; a router restricts communications over this line to a single secure machine at the MOC facility provided by JPL. Telemetry packets as sent by the MOC can be retrieved directly from JPL's Project Data Base (PDB). This allows the MOC team to treat as transparent the complex mechanism by which the spacecraft transmits data to the Deep Space Network and by which those data are then relayed to JPL.

Uplink commands are also sent via the PDB. The uplink volume is modest (on the order of 5000 bytes per day) and a load can be sent daily by placing it on the PDB and notifying staff at JPL. JPL performs minimal syntax checking, verifies that the commands are in fact destined for the MOC and not some other instrument or spacecraft system, and then radiates the commands, ordinarily within 24 hours. It is important to note that since no MOC command can damage the MOC, or even affect any other spacecraft system, the consequences of miscommanding the MOC are restricted to disruptions of operations, so all checking is the responsibility of the MOC GDS.

Files describing the position and orientation of the spacecraft, both predictive for planning purposes and reconstructed from tracking data for data processing, are made available for retrieval on the PDB. Other files describing the times and durations of tracking passes and other spacecraft and ground activities are also available.

The images taken by the MOC will be made available in two ways. A low-resolution daily global map will be placed on the PDB for immediate access by the Mars Observer community. The rest of the images will be released on CDROMs roughly six months after acquisition.

Uplink

All MOC commanding is done from the MOC facility. Input on the overall scientific objectives to be addressed by operations, and the specific areas to be imaged, comes from the MOC science team both in residence at the MOC facility and at their home institutions, other Mars Observer instrument teams, other Mars Observer scientists, and the general scientific community. An important role of the operations staff is to serve as "translators" between science users and the GDS.

Uplink command generation has been split into a sequence of steps, shown schematically in Figure 1. The following sections describe each step in detail.

Figure 1: uplink processing steps

Observing plans

Because most MOC target assignment must be done on a short timescale, two different approaches to mission planning are possible. In the first, more traditional approach, planning is performed a few days at a time based on the targets available in that period. The major difficulty with this mode is that it requires a significant daily investment of time; over the course of a mission lasting nearly two years, many members of the science community may be unwilling or unable to make that investment. To minimize this problem, the MOC GDS also provides a second approach, called time-independent planning. In this mode, users define "observing plans" that consist of a specification of an area or feature on the surface of Mars, the type of acquisitions to be made there, any geometric and timing constraints (such as lighting angles, repeat interval, season, etc.), the image size, resolution, allowable compression types, and a single number indicating the priority of the observation. These plans can be defined at any time before or during operations.
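
To make the contents of an observing plan concrete, the sketch below shows one plausible way such a record could be laid out in C. The field names, types, and enumerations are illustrative assumptions, not the actual plan format used by the GDS.

    /* Illustrative sketch of an observing plan record; field names and
     * types are assumptions, not the actual MOC GDS plan format. */

    typedef enum { CAM_NA, CAM_WA_RED, CAM_WA_BLUE } camera_t;
    typedef enum { COMP_NONE, COMP_LOSSLESS_2X, COMP_TRANSFORM } comp_t;

    typedef struct {
        char     name[64];           /* target feature or area name            */
        char     rationale[256];     /* text describing the scientific purpose */
        double   lat_deg, lon_deg;   /* center of the target area (USGS frame) */
        double   radius_km;          /* extent of the area of interest         */
        double   min_incidence_deg;  /* lighting (incidence angle) constraints */
        double   max_incidence_deg;
        double   repeat_days;        /* desired repeat interval; 0 = once      */
        int      season;             /* index of an acceptable season; 0 = any */
        camera_t camera;             /* NA or one of the WA channels           */
        int      width_pix;          /* crosstrack image size                  */
        int      height_pix;         /* downtrack image size                   */
        int      summing;            /* pixel summing factor (sets resolution) */
        unsigned comp_allowed;       /* bitmask: bit i set if comp_t i allowed */
        int      priority;           /* single number; larger = more important */
    } observing_plan;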

Observing plans can be input to the system in a number of formats; the principal means of definition is a graphical interface, called plan, that can display maps of the planet at several different resolutions. Maps can be imported in many forms, including image mosaics, digital terrain models, geologic maps, Viking images, or, during operations, previously acquired MOC images. plan is a standalone program and can be distributed to science team members for use on their home institution computers. Once defined, plans are transmitted to the MOC facility (typically by electronic mail) and stored in a central database, called the Operations Database, that records the status and history of each plan.

Another component of each plan is the scientific rationale for making the observations. This is described in text form, so that future users of the imaging data can discover the question a particular image was intended to address. plan can also automatically tag each plan with the target area's name. Figure 2 shows the plan interface.

Figure 2: plan interface screen

The GDS also manages geodetic problems in the available map data. Since the MOC Narrow Angle field of view is only 3 km wide and typical position errors in the available map products for Mars are on the order of 5 km at best, targeting errors can be expected even assuming perfect knowledge of spacecraft position. While we expect to generate more accurate map products using MOC data, the desire to preplan many observations means that the GDS must be able to use existing pre-MOC maps. Rather than have to support the use of multiple coordinate systems simultaneously, we have adopted the system implicit in the U.S. Geological Survey (USGS) Digital Image Mosaic as our standard reference system. As the reference system is refined using MOC data, a translation from the old to new systems will be performed on each plan. The same mechanism can be used to translate from other existing non-USGS coordinate systems.
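
To first order, such a translation can be viewed as a position correction applied to each plan's coordinates when the reference system changes. The sketch below assumes a simple constant-offset model per source coordinate system; the actual correction could vary with position on the planet.

    /* First-order sketch of re-registering a plan from an older map
     * coordinate system to the adopted USGS reference frame.  A constant
     * latitude/longitude offset per source system is an assumption; the
     * real correction may vary with location. */

    typedef struct {
        double dlat_deg;   /* latitude correction to add  */
        double dlon_deg;   /* longitude correction to add */
    } frame_offset;

    static void translate_plan_coords(double *lat_deg, double *lon_deg,
                                      const frame_offset *off)
    {
        *lat_deg += off->dlat_deg;
        *lon_deg += off->dlon_deg;
        /* keep longitude within the 0..360 degree convention */
        while (*lon_deg < 0.0)    *lon_deg += 360.0;
        while (*lon_deg >= 360.0) *lon_deg -= 360.0;
    }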

Initial target list generation

Daily during operations, the list of all active plans is retrieved from the Operations Database and examined by a program that compares the plans against the predicted spacecraft position. The result is a "strawman sequence" of potential image commands for that day; each element of the sequence consists of the time a specific optical system is to be activated, and a set of parameters (image size, resolution) to be associated with that particular image. Other needed parameters are left unspecified at this stage of planning, to be defined later in the process.
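
The core of this comparison can be sketched as follows, assuming the predicted groundtrack is available as a densely sampled, time-ordered list of sub-spacecraft points: a plan becomes a candidate command whenever a sub-spacecraft point falls within the camera's crosstrack half-width of the target. The data layout, the 1.5 km half-swath value, and the function names are assumptions for illustration.

    /* Sketch of strawman-sequence generation: scan a predicted
     * groundtrack for times when a target falls within reach of the
     * camera.  Groundtrack representation and the half-swath figure
     * are illustrative assumptions. */

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define MARS_RADIUS_KM   3396.0
    #define NA_HALF_SWATH_KM 1.5     /* half of the 3 km NA crosstrack width */

    typedef struct { double et; double lat_deg, lon_deg; } track_point;

    /* great-circle distance between two surface points, in km */
    static double surface_dist_km(double lat1, double lon1,
                                  double lat2, double lon2)
    {
        double p1 = lat1 * M_PI / 180.0, p2 = lat2 * M_PI / 180.0;
        double dl = (lon2 - lon1) * M_PI / 180.0;
        double c = sin(p1) * sin(p2) + cos(p1) * cos(p2) * cos(dl);
        if (c >  1.0) c =  1.0;
        if (c < -1.0) c = -1.0;
        return MARS_RADIUS_KM * acos(c);
    }

    /* print a candidate image time for every groundtrack point that
     * passes within half_swath_km of the target */
    static void find_opportunities(const track_point *track, int npts,
                                   double tlat, double tlon,
                                   double half_swath_km)
    {
        int i;
        for (i = 0; i < npts; i++)
            if (surface_dist_km(track[i].lat_deg, track[i].lon_deg,
                                tlat, tlon) <= half_swath_km)
                printf("candidate image at et %.1f\n", track[i].et);
    }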

In addition, users may examine the spacecraft's position using a second graphical interface, target, and designate "targets of opportunity" not anticipated in pre-planning; these targets are integrated into the strawman sequence. target uses the same graphical interface as plan, overlaying the spacecraft groundtrack, MOC fields of view, and other geometric and lighting information on the maps. Simple "point-and-click" interactions are used to define and modify imaging commands. target also has an integrated instrument simulator that can display the changing usage of instrument resources as the sequence is modified and new commands are added. Figure 3 shows the target interface.

Figure 3: target interface screen

Conflict resolution

Unfortunately, not all of the commands in a typical strawman sequence can be executed because of limited instrument resources. These resources include buffer space, CPU processing time, downlink rate, and power. Although target allows a human planner to attempt to repair the sequence by hand, this is a challenging task when many hundreds of potential commands compete for the same resources. Therefore, an automatic conflict resolution program, called autofix, is typically used to generate the final sequence. It uses a series of heuristics to alter the compression type and downlink channel assignment for each image, then simulates command execution to locate conflicts in the modified sequence. Images that cannot be acquired within the constraints are deleted; lower-priority images are deleted first. In some cases, images could be "saved" by reducing their size or resolution, but autofix does not attempt such changes. Instead, the results of conflict resolution are made available for manual review and modification.
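
The final deletion step of such a scheme can be sketched as a greedy pass over the commands in priority order against a resource budget. The single-resource (daily downlink volume) model and the field names below are simplifications for illustration; the real autofix also adjusts compression and channel assignments and simulates buffer, CPU, and power usage.

    /* Simplified sketch of priority-based conflict resolution: keep the
     * highest-priority images that fit within a single resource budget.
     * Sorting reorders the array; a real implementation would restore
     * time order before command generation. */

    #include <stdlib.h>

    typedef struct {
        int    priority;      /* larger = more important               */
        double volume_bits;   /* downlink volume after compression     */
        int    keep;          /* set to 0 if the image must be dropped */
    } image_cmd;

    static int by_priority_desc(const void *a, const void *b)
    {
        const image_cmd *x = a, *y = b;
        return y->priority - x->priority;
    }

    static void resolve_conflicts(image_cmd *cmds, int n, double budget_bits)
    {
        double used = 0.0;
        int i;
        qsort(cmds, n, sizeof(cmds[0]), by_priority_desc);
        for (i = 0; i < n; i++) {
            if (used + cmds[i].volume_bits <= budget_bits) {
                used += cmds[i].volume_bits;
                cmds[i].keep = 1;
            } else {
                cmds[i].keep = 0;    /* lower-priority images dropped first */
            }
        }
    }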

Command generation

Once reviewed, the command sequence is encapsulated in the MOC command protocol and sent to JPL over a dedicated communications link, where it is integrated into a spacecraft command load for transmission. The Operations Database contains an entry for every command so that the status of instrument operations can be tracked; this database is also used to update the timing of unexecuted commands already stored in the instrument in case of last-minute improvements to the knowledge of the spacecraft's position.

Mars Observer uplink packets are variable-length with a maximum length of 254 bytes. The packet header contains an identifier of the destination instrument, a sequence number, an eight-bit command opcode, and a few user-defined bits. A sixteen-bit checksum is appended to the end of the packet.

Within each MOC uplink packet, an additional level of protocol information is used to chain multiple packets into a single logical command group of a maximum of 2048 bytes. If any element of this group is lost, the entire group is ignored. The group contains a single time value which specifies when the group should be executed; the instrument stores groups awaiting execution in a large error-corrected DRAM buffer that can store up to a month of pending commands. Individual commands within the group can be either untimed, in which case they are executed shortly after the group execution time, or timed, in which case they are executed at an exact specified time after the group execution time. The majority of commands are timed and are used for image acquisition; untimed commands are used to perform operations such as table updates and diagnostic functions.
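
The framing described above might be represented as in the sketch below. Only the sizes quoted in the text (254-byte packets, 2048-byte groups, a sixteen-bit checksum) come from the interface description; the field widths, their ordering, and the placeholder checksum algorithm are assumptions.

    /* Sketch of MOC uplink framing; layout and checksum algorithm are
     * illustrative assumptions, not the flight protocol definition. */

    #include <stdint.h>
    #include <stddef.h>

    #define UPLINK_PKT_MAX  254      /* total packet length limit         */
    #define CMD_GROUP_MAX  2048      /* total logical command group limit */

    typedef struct {
        uint8_t  dest_id;            /* destination instrument identifier */
        uint8_t  sequence;           /* packet sequence number            */
        uint8_t  opcode;             /* eight-bit command opcode          */
        uint8_t  user_bits;          /* a few user-defined bits           */
        uint8_t  data[UPLINK_PKT_MAX];
        size_t   data_len;
        uint16_t checksum;           /* 16-bit checksum over the packet   */
    } uplink_packet;

    typedef struct {
        uint32_t group_exec_time;    /* spacecraft clock time for the group */
        /* time_offset == 0 means "untimed" (executed shortly after the
         * group time); a nonzero offset gives an exact time relative to
         * the group execution time */
        struct { uint32_t time_offset; uint8_t opcode; } cmd[64];
        int ncmds;
    } command_group;

    /* simple additive 16-bit checksum -- a placeholder, not necessarily
     * the actual Mars Observer algorithm */
    static uint16_t checksum16(const uint8_t *buf, size_t len)
    {
        uint32_t sum = 0;
        size_t i;
        for (i = 0; i < len; i++)
            sum += buf[i];
        return (uint16_t)(sum & 0xffff);
    }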

Part of the MOC health and welfare telemetry stream, to be discussed later, contains the status of all commands pending in the instrument; this can be used on the ground to verify receipt of commands.

Downlink data processing

All telemetry transmitted from the instrument is received at the MOC facility for analysis. The processing steps required to go from raw instrument packets to images ready for science analysis are shown schematically in Figure 4, and described in detail in the following sections.

Figure 4: downlink processing steps

Telemetry retrieval

After each spacecraft tracking pass, the GDS retrieves newly-received MOC data packets from JPL. Mars Observer instrument packets are fixed-length and contain about 1000 bytes of data. The packet header contains an identifier of the instrument that produced the packet, a time tag (which in the case of the MOC is the time of internal packet generation), and a sequence number. A sixteen-bit checksum is appended to the end of the packet. Packets are retrieved from the PDB on the basis of the time tag and are delivered in order of increasing sequence number and time.

Within each MOC packet, a two-level protocol is used to transmit image data and engineering health information. Each MOC image is broken into a series of 240-kilobyte fragments that are independently decompressible; this allows error recovery in the event of dropped packets, since otherwise entire images might be lost. Each fragment has a header that identifies the command that generated it, which fragment of that image it is, and all the attributes of the image needed for decompression. When a packet is generated in the instrument, no more than a fixed percentage (typically 5%) of it is filled with any engineering data values that are awaiting transmission. The remainder of the packet is filled with image data from the next fragment in the downlink queue. This scheme allows high-priority engineering data, such as error messages, to be transmitted promptly, while devoting most of the telemetry stream to image data.
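
The instrument-side fill rule might look like the sketch below, which assumes simple byte queues for pending engineering values and for the image fragment currently being downlinked. The roughly 1000-byte data field and the 5% cap come from the text; everything else is an illustrative assumption.

    /* Sketch of the downlink packet fill rule: at most a fixed fraction
     * of each fixed-length packet carries pending engineering data; the
     * remainder is filled from the image-fragment queue. */

    #include <stddef.h>
    #include <string.h>

    #define PKT_DATA_BYTES 1000
    #define ENG_FRACTION   0.05          /* "typically 5%" */

    typedef struct {
        unsigned char *buf;
        size_t len, pos;                 /* bytes queued, bytes consumed */
    } byte_queue;

    static size_t dequeue(byte_queue *q, unsigned char *dst, size_t want)
    {
        size_t n = q->len - q->pos;
        if (n > want) n = want;
        memcpy(dst, q->buf + q->pos, n);
        q->pos += n;
        return n;
    }

    /* fill one packet's data field; returns the number of bytes used */
    static size_t fill_packet(unsigned char pkt[PKT_DATA_BYTES],
                              byte_queue *engineering, byte_queue *image)
    {
        size_t eng_cap = (size_t)(PKT_DATA_BYTES * ENG_FRACTION);
        size_t used = dequeue(engineering, pkt, eng_cap);
        used += dequeue(image, pkt + used, PKT_DATA_BYTES - used);
        return used;
    }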

Once retrieved, the MOC GDS processes the packet stream and reassembles the image fragments and engineering telemetry in separate files. Extensive error checking is performed to detect corrupted packets, dropped packets, and corrupt or missing image fragments. The status of each image is recorded in the Operations Database, and the state of the packet stream (including any partial image fragments still being received) is saved for the retrieval of the next pass.

Data archiving

The complete image fragments, still in compressed form, are stored in a local image archive. Experience with previous planetary data suggests that it is preferable to defer processing as long as possible, since much processing is not reversible but must be performed on the basis of ancillary information that may be changed or refined later. An example of this is map projection, which requires the spacecraft position and orientation to be known; final reconstructed position information is often not available until weeks or months after the image is taken. In addition, storing only compressed data reduces total space requirements for the archive by a factor of about five averaged over the mission.

To avoid "freezing" imprecise information into image products, the GDS processes images from their compressed forms on demand, using the latest information available. This also makes it possible to use different processing depending on different users' requirements. For example, each image is manually examined by the MOC operations staff to assess its quality; the systematic processing applied for this purpose is simply decompression. Science users typically view the images via a browsing program that decompresses, applies radiometric and geometric corrections, cosmetically enhances, and optionally map-projects each image before display.

In order to minimize the time a user must wait to see an image, a variety of techniques for pre-fetching are employed. Raw images are permanently stored on optical or magneto-optical disk jukeboxes; images are grouped on disks spatially to increase the chance that multiple image requests can be serviced with only one disk mount. When an image is requested, processing is scheduled on all images "near" that image spatially. (Applications can also give hints to the archive system about what images are likely to be requested soon, in case access patterns other than spatial ones are used.) This processing can proceed in the background on the user's own workstation, as well as in parallel on a pool of multiprocessor compute servers. The processed images are stored in a large magnetic disk cache so that future or repeated access to the same images can be satisfied immediately; space in the cache is managed in a least-recently-used fashion. All requests are routed through a central server that allocates resources to each operation. Requests can be cancelled if a user switches attention to another area.
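
The replacement policy of the processed-image cache can be illustrated in miniature as below: entries are stamped with an access counter, and the stalest entry is evicted when a byte budget would be exceeded. The in-memory table, the budget value, and the entry fields are assumptions; the operational cache is an on-disk store managed by the archive server.

    /* Minimal sketch of least-recently-used management of the processed-
     * image cache.  Only the replacement policy is shown; deleting the
     * corresponding processed file from disk is implied by "evict". */

    #include <stdint.h>

    #define MAX_ENTRIES 1024

    typedef struct {
        uint32_t image_id;
        uint64_t size_bytes;
        uint64_t last_used;        /* access-counter value at last hit */
        int      valid;
    } cache_entry;

    static cache_entry cache[MAX_ENTRIES];
    static uint64_t access_clock, bytes_cached;
    static uint64_t bytes_budget = 1ULL << 34;   /* assumed 16 GB budget */

    static void touch(int i) { cache[i].last_used = ++access_clock; }

    static void evict_lru(void)
    {
        int i, victim = -1;
        for (i = 0; i < MAX_ENTRIES; i++)
            if (cache[i].valid &&
                (victim < 0 || cache[i].last_used < cache[victim].last_used))
                victim = i;
        if (victim >= 0) {
            bytes_cached -= cache[victim].size_bytes;
            cache[victim].valid = 0;
        }
    }

    static void cache_insert(uint32_t id, uint64_t size)
    {
        int i;
        while (bytes_cached + size > bytes_budget && bytes_cached > 0)
            evict_lru();
        for (i = 0; i < MAX_ENTRIES && cache[i].valid; i++)
            ;
        if (i < MAX_ENTRIES) {
            cache[i].image_id   = id;
            cache[i].size_bytes = size;
            cache[i].valid      = 1;
            bytes_cached += size;
            touch(i);
        }
    }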

Even before the start of MOC image acquisition, the archive system has been used to store 30,000 raw Viking images for access by the planning and targeting software; the images are stored in their raw form, and then cosmetic processing and map projection are applied by the archive system.

Data logging and optical navigation

As already noted, inaccuracies in the predictions of spacecraft position will make hitting specific NA targets very unlikely. One way to compensate for this is to use MOC images themselves to determine where the spacecraft was when the image was taken, and then use this information to refine the position prediction for future images. This process is called optical navigation.

Shortly after each image is received and stored in the local archive, it will be viewed by the operations staff to assess its quality. (Although we have developed algorithms that attempt to do this automatically, they are not yet trustworthy enough to eliminate manual examination; also, humans are much better at assessing the "value" of an image.) At this time, correspondences between features seen in the images and features seen on the maps of the approximate target area will be recorded; this will allow the image to be located relative to the reference system with relatively high precision.

The first-order use of these correspondences will be to compute the timing error between predicted and actual feature locations. These errors will be examined for systematic behavior and fit to a model that can be used to offset future commands in time.
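
This first-order correction amounts to converting an along-track position error into a time offset using the groundtrack speed. The sketch below assumes a single correspondence and a constant ground speed; the operational fit would combine many correspondences across many orbits.

    /* First-order optical-navigation correction: convert the along-track
     * position error derived from one feature correspondence into a
     * timing offset.  Constant ground speed is a simplifying assumption. */

    /* predicted_km and actual_km are the along-track coordinates of the
     * spacecraft at the same spacecraft clock time, from the prediction
     * and from the feature correspondence respectively.  A positive
     * result means the spacecraft is running behind prediction, so
     * future commands should be shifted later by this many seconds. */
    static double timing_error_sec(double predicted_km, double actual_km,
                                   double ground_speed_km_per_sec)
    {
        return (predicted_km - actual_km) / ground_speed_km_per_sec;
    }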

More sophisticated use of the correspondences will be to merge data from multiple images, the raw radiometric tracking information used by JPL to predict the spacecraft location, and physical models of the contributing error sources such as atmospheric drag and the martian gravity field, in a least-squares solution that will simultaneously refine the spacecraft position knowledge and the knowledge of feature locations on the planet. These results can be fed back into the GDS to minimize the position errors of both the targets and the spacecraft, and to increase the chance of hitting precise targets.

Data product generation

Within the MOC facility, users view and analyze images from the local archive. Outside the facility, only very limited volumes of image data can be requested by MOC science team members over the NASA Science Internet. The remainder of the Mars Observer community accesses MOC data through the PDB, and the wider scientific community accesses it via CDROM volumes obtained from NASA's Planetary Data System.

Original planning called for the return of compressed data products to the PDB via the dedicated communications link, but overhead and increased throughput requirements forced that plan to be abandoned. Instead, the CDROM volumes are produced in write-once form at the MOC facility and sent directly to a CD mastering and production company. Copies of the volumes are delivered to the PDB for ingestion and later become available to the general community.

There are two types of CDROM volumes, compressed and decompressed. The compressed product is the most basic archival form, identical to the compressed images stored in the MOC local archive. Since these products are in their original form, substantial processing is required to make use of them; descriptions of the processing algorithms, source code, and executables for common computer platforms are included on the volumes.

The second type of volume is a decompressed and processed version intended for maximum utility for visual analysis. These volumes will be produced in limited numbers and sent to JPL for transfer to hardcopy, since a significant fraction of the Mars Observer community has indicated a need for imaging data in hardcopy form. It is not yet known if the decompressed volumes will be made available more widely.

The compressed volumes are simply ordered dumps of the contents of the local archive. In fact, depending on the maturity of CDROM jukebox hardware, we may use CDROM versions of the data as the local archive, requiring only a few months' worth of images to be staged through the system on write-many storage devices like magnetic or magneto-optical drives. Since the images are not released for at least six months after receipt, it is possible to stage six months of data and then produce all the volumes in both spatial and roughly time-ordered fashion, with most of the advantages of both orderings.

The decompressed volumes are produced by a program that is simply a client of the archive server. The images are requested in their decompressed, processed form, and the archive server allocates all the resources needed to perform that processing. The program then copies the processed images to a write-once CD volume for transfer to the mastering facility.

Health monitoring

Instrument health information consists of temperature, current, and voltage values from an array of sensors in the MOC, and a description of the instrument's software state, including the amount of free buffer space and the number of commands awaiting execution. These values are automatically compared against expected values, and the GDS staff are alerted to any anomalies; a daily report is generated to synopsize the instrument's ongoing health.
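
The comparison against expected values can be sketched as a simple limit check per engineering channel, as below. The channel names and limit values are illustrative assumptions, not the actual MOC telemetry dictionary.

    /* Sketch of the automatic health check: each engineering channel is
     * compared against an expected range and an alert is raised on any
     * excursion.  Names and limits are illustrative. */

    #include <stdio.h>

    typedef struct {
        const char *name;
        double low, high;          /* expected range */
    } limit_t;

    static const limit_t limits[] = {
        { "focal_plane_temp_C", -20.0, 30.0 },
        { "bus_current_A",        0.0,  2.5 },
        { "free_buffer_MB",       0.5, 12.0 },
    };

    /* returns nonzero (and prints an alert) if the value is out of range */
    static int check_channel(int idx, double value)
    {
        if (value < limits[idx].low || value > limits[idx].high) {
            printf("ALERT: %s = %g outside [%g, %g]\n",
                   limits[idx].name, value,
                   limits[idx].low, limits[idx].high);
            return 1;
        }
        return 0;
    }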

Implementation and performance

The MOC GDS consists of about 50 separate programs, all written in C under the Unix operating system. Total code size is about 120,000 lines. Although the GDS was developed on Sun Microsystems SPARC workstations, most of the programs are relatively platform-independent. Interactive programs use the X Window System and Sun's XView toolkit.

On the 40 SPECmark processors of the development system, instrument packets can be processed at a rate of about 400 packets/sec, so a day's worth of high-rate downlink can be processed in a few minutes. This task is largely I/O bound and would benefit from faster disk subsystems. A single MOC image 2048 pixels square can be transform-decompressed in about 20 seconds.

(A SPECmark is a unit of processing determined by averaging the execution times of a series of benchmark programs and normalizing them such that 1 SPECmark unit represents the processing power of a VAX 11/780.)

Uplink planning is more time-consuming. A typical uplink planning cycle with about 5000 active observing plans takes about 1.5 hours to process to completion; the vast majority of this time is spent in the program which compares plans against spacecraft location to find available targets.

We do not expect significantly faster uniprocessors to be available before operations begin; the fastest uniprocessor available from Sun Microsystems as of this writing has a SPECmark rating of about 60. However, since most of the tasks in the GDS can be easily partitioned into independent runs, significant speedups can be attained by using multiprocessor systems. For example, target location can be accomplished by allocating one orbit to each processor, speeding up daily planning by a factor of over 10, assuming enough processors are available. We expect to be able to plan a day's worth of operations in less than 15 minutes with such hardware.
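
On a Unix multiprocessor, the orbit-per-processor partitioning can be sketched with one process per orbit, as below. The per-orbit search function is a stand-in for the real comparison against the plan list, and its name is assumed.

    /* Sketch of partitioning the daily target search by orbit: one child
     * process per orbit, collected with wait().  The per-orbit search is
     * a printf stand-in for the real comparison against the plan list. */

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>

    static void find_targets_for_orbit(int orbit)
    {
        printf("searching orbit %d for available targets\n", orbit);
    }

    static void plan_day(int first_orbit, int norbits)
    {
        int i;
        for (i = 0; i < norbits; i++) {
            pid_t pid = fork();
            if (pid == 0) {                /* child: search one orbit */
                find_targets_for_orbit(first_orbit + i);
                _exit(0);
            } else if (pid < 0) {
                perror("fork");
            }
        }
        while (wait(NULL) > 0)             /* collect all children */
            ;
    }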

In the operational system, typical workstations used by the planning staff will be small systems providing about 20 SPECmarks of processing power, 32 MB of main memory, local disk space of about 500 MB, and a single 8-bit color display. Science analysis workstations will be somewhat larger, including one or more 60 SPECmark processors, 64 MB of main memory, 2 GB of local disk space, and two 8- or 24-bit displays. Compute servers will have several 60 SPECmark processors, 64 MB of main memory, and 4 GB of disk space. Relatively small 20-SPECmark processors are used as file servers for system files and to control the optical and magneto-optical jukeboxes. TCP/IP and the Network File System (NFS) running over 10 Mbit/s Ethernet are used for communication between machines. Because most processing is performed local to each machine, with only the raw and final processed images sent over the network, network loading will be fairly low.

Applicability

The MOC GDS has been tailored to a specific instrument and mission profile, but the techniques used are quite general. The largest difference between the MOC GDS and other missions is the lack of a need to schedule instrument pointing and handle the inter-instrument conflicts that such pointing implies; this need has been a main driver of JPL research [3], and is the central concern of the mission operations system for the Hubble Space Telescope [4].

Although the GDS need not deal with pointing explicitly, the WA system views an 1800-km swath of the planet on each orbit and must be implicitly "pointed" by selecting subsections of the WA field of view to be acquired. This is the typical situation with instruments on any low-orbiting platform, such as the satellites planned for the Earth Observing System, and the same software could be applied to such missions.

Conclusion

The chief design goals of the MOC GDS are to minimize the need for operations staff to perform repetitive tasks, while maximizing the science return of the MOC. Operations staff should be free to deal with the inevitable problems that will occur, and to assist scientists in planning their specific observations, without becoming bogged down in the bookkeeping details of day-to-day commanding and data processing. We await the start of MOC mapping operations in November 1993 to see how well these goals will be met.

Acknowledgements

I especially thank Mike Malin, the MOC Principal Investigator, without whom the MOC GDS would have never existed. Jeff Warren has been of invaluable help in the implementation of interactive tools and the Operations Database. To the other members of the MOC team, past and present, go my heartfelt thanks. The work described here was supported by the Jet Propulsion Laboratory, Contract 959060 to Malin Space Science Systems, as part of JPL Contract NAS-7-918 with the National Aeronautics and Space Administration.

References

[1] "Design and Development of the Mars Observer Camera", M.C. Malin , International Journal of Imaging Systems and Technology, Vol. 3, 76-91 (1991).

[2] "Mars Observer Mission", A.L. Albee , Journal of Geophysical Research, Vol. 97, No. E5, 7665-7680 (1992).

[3] "PLAN-IT: Scheduling Assistant for Solar System Exploration", W.C. Dias , Telematics and Informatics, Vol. 4, No. 4, 275-287 (1987).

[4] "Expert Systems Tools for Hubble Space Telescope Observation Scheduling", G. Miller , Telematics and Informatics, Vol. 4, No. 4, 301-311 (1987).
