Autonomous Exploration of Small Bodies: Toward Greater Autonomy for Deep Space Missions

Autonomy is becoming increasingly important for the robotic exploration of unpredictable environments. One such example is the approach, proximity operations, and surface exploration of small bodies. In this article, we present an overview of an estimation framework to approach and land on small bodies as a key functional capability for an autonomous small-body explorer. We use a multi-phase perception/estimation pipeline with interconnected and overlapping measurements and algorithms to characterize and reach the body, from millions of kilometers down to its surface. We consider a notional spacecraft design that operates across all phases, from approach to landing to maneuvering on the surface of the microgravity body. This SmallSat design makes accommodations to simplify autonomous surface operations. The estimation pipeline combines state-of-the-art techniques with new approaches to estimating the target’s unknown properties across all phases. Centroid and light-curve algorithms estimate the body–spacecraft relative trajectory and rotation, respectively, using a priori knowledge of the initial relative orbit. A new shape-from-silhouette algorithm estimates the pole (i.e., rotation axis) and the initial visual hull that seeds subsequent feature tracking as the body becomes more resolved in the narrow field-of-view imager. Feature tracking refines the pole orientation and the shape of the body to estimate initial gravity and enable safe close approach. A coarse-shape reconstruction algorithm identifies initial landable regions, whose hazardous nature would subsequently be assessed by dense 3D reconstruction. Slope-stability, thermal, occlusion, and terramechanical hazards would be assessed on densely reconstructed regions and continually refined prior to landing. We simulated a mission scenario for approaching a hypothetical small body whose motion and shape were unknown a priori, starting from thousands of kilometers down to 20 km.
Results indicate the feasibility of recovering the relative body motion and shape by relying solely on onboard measurements and estimates, with their associated uncertainties, and without human input. Current work continues to mature and characterize the algorithms for the final phases of the estimation framework, through landing on the surface.

• Limited scope of autonomy use: capabilities have only been used for relatively short durations of the mission, with pre- and sometimes post-monitoring from the ground.
• Use of a priori maps: missions with proximity operations required extensive ground processing to generate maps that were used in subsequent autonomous maneuvers.

Figure S1. State of the practice in spacecraft navigation.

Figure S1 shows a high-level overview of the state-of-the-practice process for determining the orbit of the spacecraft relative to the body, which combines radiometric and optical data to plan approach and maneuvers for proximity operations, including landing on small bodies. The current and most widely used method to identify and track surface features is Stereophotoclinometry (SPC) (Gaskell et al., 2008), a ground-based, semi-manual process that simultaneously refines the body landmarks and updates the relative orbit.

Figure S2. Image patches surrounding two surface features of a procedurally generated small body, as seen under different lighting and perspective conditions.

As the small body rotates, lighting causes dramatic changes in visual appearance, which is accentuated by the absence of an atmosphere to diffuse light (see Figure S2).

Figure S3. Top: image rendering of Comet 67P at a 0° sun-phase angle (left) and a 60° sun-phase angle (right). Bottom: pole error from the pole-from-silhouette (PfS) estimation algorithm at a 0° sun-phase angle (left) and a 60° sun-phase angle (right). "Cast on" indicates the use of the shadow-casting variant of the pole-from-silhouette algorithm.
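Silhouette-based pole estimation starts from a binary silhouette of the body against the dark sky. As a rough illustration only (not the article's implementation; the function name and threshold rule are assumptions), a silhouette can be extracted by thresholding a frame against its background level:

```python
import numpy as np

def extract_silhouette(image, k=3.0):
    """Binary silhouette of a bright body against dark sky.

    A minimal sketch: pixels more than `k` standard deviations above
    the background (sky) level are flagged as body. Real pipelines
    would also reject stars, hot pixels, and blur; this is
    illustrative only.
    """
    background = np.median(image)  # sky dominates the frame
    noise = image.std()
    return image > background + k * noise

# Example: synthetic frame with a bright disk on a dark background
frame = 0.01 * np.ones((64, 64))
yy, xx = np.indices(frame.shape)
frame[(yy - 32) ** 2 + (xx - 32) ** 2 < 100] = 1.0
mask = extract_silhouette(frame)
```

The silhouette mask (or its intensity-weighted centroid) is the kind of low-level product that seeds the centroid and pole-from-silhouette stages described in the article.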

Figures
The performance of the pole-from-silhouette algorithm is affected by the sun-phase angle as well as the observing latitude. Figure S3 (top) shows rendered images of Comet 67P/CG at two different sun-phase angles, 0° and 60°, with the corresponding pole-estimate errors (Figure S3, bottom). The errors are shown for two variants of the pole-from-silhouette algorithm: the ray-casting algorithm ("Cast off" in the figure) and a shadow-casting variant of the ray-casting algorithm ("Cast on") that extends the performance of the algorithm and relaxes its assumption of a sun-phase angle below 20°. A more detailed description of these algorithms and an assessment of their performance can be found in (Bandyopadhyay et al., 2021).

Figure S4 shows the normalized mean Hausdorff distance for the coarse reconstruction of three bodies from a 1000-point dataset at different sun-phase angles using the Spherical Conformal Mapping approach with a body-symmetry assumption to recover information from shadowed regions. The results generally show a low percentage error in the coarse reconstruction (Jarvis et al., 2021).

A Convolutional Neural Network (CNN) image matcher is trained on synthetically generated bodies. Figure S2 shows an example of patches under the different lighting and perspective changes used to train the network. Figure S5 compares the matching performance of classical feature descriptors, SURF and BRIEF, against the CNN-trained matcher.
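The mean Hausdorff distance used to score the coarse reconstructions can be sketched as a symmetric nearest-neighbor average between the reconstructed and reference point sets. A minimal brute-force version (illustrative names; the article's exact normalization convention is not reproduced here):

```python
import numpy as np

def mean_hausdorff(points_a, points_b):
    """Symmetric mean Hausdorff distance between two 3D point sets.

    For every point in one set, take the distance to its nearest
    neighbor in the other set; average both directions. Brute force,
    O(N*M) memory, adequate for ~1000-point coarse shapes.
    """
    # Pairwise Euclidean distance matrix of shape (N, M)
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Example: a point set shifted by 0.5 along one axis
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = a + np.array([0.0, 0.5, 0.0])
print(mean_hausdorff(a, b))  # -> 0.5
```

To report a normalized (percentage) error as in Figure S4, one would divide by a characteristic body scale, e.g. the reference model's bounding-sphere diameter; that choice of scale is an assumption here.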
Our best estimates of the six Trajectory Correction Maneuvers (TCMs) are shown in Table S2. All TCMs are expressed in the EME2000 frame.

Figure S6. External geometry of the notional 8U CubeSat architecture, consisting of solar arrays on three orthogonal faces, a phased-array medium-gain antenna on one face, a radiator on another, and a suite of imaging sensors on the final (nominally nadir-pointing) face.

Figure S7. The cubic chassis is protected by eight corner-mounted "legs" for landing on any side. Optical frustums of the spacecraft cameras and GNC sensors (including the star trackers), as well as the thruster plumes (red), are shown.

Figure S8. Synthetic image showcasing exaggerated camera artifacts, with spacecraft components in the foreground and comet 67P in the background. The Lommel-Seeliger model is used to render the comet in this frame.

Landmark angles relative to the body's coordinate frame for two pole hypotheses (black and red) from image sets 9-14. The body's coordinate frame has its +z-axis pointing from the body's center of mass toward the spacecraft. Landmarks with positive declination angles (black) are on the observable side of the body (between the body center and the camera), while landmarks with negative declination angles (red) are on the opposite side, which would be obstructed from view by the body (−z-axis of the body-camera line). Positive declination angles disambiguate different pole hypotheses that result from the "ballerina effect."

Figure S11. Example of postfit residuals for centroid and feature tracking.
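The Lommel-Seeliger model used to render the comet in Figure S8 is a standard photometric model for dark, airless surfaces. A minimal sketch of its reflectance term follows (constant factors and phase-function conventions vary; the default albedo value is illustrative, not from the article):

```python
import numpy as np

def lommel_seeliger(cos_incidence, cos_emission, albedo=0.08):
    """Lommel-Seeliger reflectance for a dark, airless surface.

    cos_incidence: cosine of the sun-normal angle (mu0).
    cos_emission:  cosine of the camera-normal angle (mu).
    Radiance scales as mu0 / (mu0 + mu); facets facing away from
    the sun or the camera return zero.
    """
    mu0 = np.clip(cos_incidence, 0.0, 1.0)
    mu = np.clip(cos_emission, 0.0, 1.0)
    lit = (mu0 > 0) & (mu > 0)
    denom = np.where(lit, mu0 + mu, 1.0)  # avoid divide-by-zero
    return np.where(lit, (albedo / 4.0) * mu0 / denom, 0.0)

# Example: sun and camera both along the surface normal
print(lommel_seeliger(np.array([1.0]), np.array([1.0])))  # -> [0.01]
```

At zero phase angle mu0 = mu, so the ratio is a constant 1/2 and the disk renders with uniform brightness, a known property of this model that contributes to the stark, shadow-dominated appearance changes noted for Figure S2.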