Article

Neural Network-Aided Optical Navigation for Precise Lunar Descent Operations

1 Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2 INAF—Osservatorio Astronomico di Padova, Vicolo Osservatorio 5, 35122 Padova, Italy
* Author to whom correspondence should be addressed.
Aerospace 2025, 12(3), 195; https://doi.org/10.3390/aerospace12030195
Submission received: 10 January 2025 / Revised: 15 February 2025 / Accepted: 25 February 2025 / Published: 27 February 2025
(This article belongs to the Special Issue Planetary Exploration)

Abstract: Advanced navigation capabilities are essential for precise landing operations, enabling access to critical lunar sites and supporting future lunar infrastructure. To achieve accurate positioning, innovative navigation methods leveraging neural network frameworks are being developed to detect distinctive lunar surface features, such as craters, from imaging data. By matching detected features with known landmarks stored in an onboard reference database, key navigation measurements are retrieved to refine the spacecraft trajectory, enabling real-time planning for hazard avoidance. This work presents a crater-based navigation system for planetary descent operations, which leverages a robust machine learning approach for crater detection in optical images. A thorough analysis of the attainable detection accuracies was performed by evaluating the network performance on diverse sets of synthetic images rendered at different illumination conditions through a custom Blender-based pipeline. Simulation campaigns, based on the JAXA Smart Lander for Investigating Moon mission, were then carried out to demonstrate the system’s performance, achieving final position errors consistent with 3-σ uncertainties lower than 100 m on the horizontal plane at altitudes as low as 10 km. This level of accuracy is key to achieving enhanced control during the approach and vertical descent phases, thereby ensuring operational safety and facilitating precise landing.

1. Introduction

Optical measurements have supported spacecraft operations in deep space since the Voyager era, complementing Earth-based navigation techniques that rely on processing radio tracking data during the approach phase to planets, moons, and asteroids [1]. Images captured by spacecraft cameras are transmitted to Earth, where the navigation team analyzes them to identify known celestial objects. The image coordinates (sample and line) of these identified landmarks are then used to refine the spacecraft orbit determination, allowing for the precise computation of trajectory correction maneuvers.
Technological advancements over the last decades enabled one of the first successful implementations of an autonomous onboard optical navigation (OpNav) system with the Deep Space-1 mission [2]. This achievement paved the way for advanced image-based navigation technologies, which have subsequently supported real-time operations during crucial mission phases, including close proximity operations at small bodies [3,4], touch-and-go maneuvers for terrain sampling [5], and pinpoint landing [6,7].
Depending on the distance between the target body and the spacecraft, different techniques are used to extract navigation information from imaging data [8,9]. Center-finding and limb-scanning methods, for instance, are employed during the cruise and approach phases. As the spacecraft approaches the surface of the celestial body, the onboard systems carry out landmark-based navigation. This approach is based on the detection and tracking of surface features, such as specific morphological elements (e.g., craters) or distinct terrain patches (e.g., maplets [10]).
In the context of terrain sampling maneuvers, the OSIRIS-REx mission utilized an autonomous maplet-based navigation system to achieve precise targeting of the asteroid Bennu’s surface [5,11]. This system correlated camera images with the expected appearance of landmark patches, enabling accurate corrections of the spacecraft positioning. Similarly, the Perseverance rover employed a feature-based navigation system for precise landing at the Jezero crater [6]. By matching point-like features (e.g., Harris interest points) between descent images and onboard reference maps of the landing area, the onboard flight system obtained real-time, accurate estimations of its state during descent, leading to unprecedented landing precision. Cross-correlation techniques were used to align keypoints between the descent images and the onboard reference map, ensuring a close match in appearance [12]. The process of generating and validating these reference maps is both complex and time-consuming. As a result, while this navigation approach holds promise for lunar missions, alternative navigation methods that leverage known surface features, such as craters, are being explored.
Craters are indeed utilized as fiducials to enhance the spacecraft localization, as demonstrated by the JAXA Smart Lander for Investigating Moon (SLIM) mission. A crater-based navigation approach was adopted to achieve landing accuracies better than 100 m [7]. By matching craters detected in images from the navigation camera with those on onboard crater maps, the system provided accurate estimates of the spacecraft position during descent [13]. Crater detection was carried out through computer vision techniques (i.e., Haar-like features), which identify craters based on their characteristic appearance, including illuminated interiors and shaded neighborhoods [14]. To address challenges posed by non-nominal illumination conditions, neural network-based approaches are gaining traction for their ability to handle environmental uncertainties effectively. A careful training dataset augmentation can enhance the network’s adaptability to diverse operational scenarios, thereby improving navigation accuracies [15].
Although the literature extensively covers machine learning techniques for crater detection, including various neural network architectures designed to identify crater locations from diverse input data (e.g., optical images, radar images, topographic maps), the practical challenges associated with integrating artificial intelligence (AI) frameworks into onboard navigation systems remain largely unaddressed. A critical limitation in space applications is the deployment of computationally demanding AI-driven algorithms on resource-constrained, space-qualified electronic hardware. However, future missions are expected to leverage commercial-off-the-shelf (COTS) computing platforms [16,17], facilitating the onboard execution of AI-enabled methods. The performance and reliability of these algorithms must be rigorously assessed to ensure their suitability for autonomous navigation and decision-making in space environments. Evaluations of AI-based crater detectors typically focus on metrics such as precision and recall [18], but a crucial aspect that often remains unexplored is the accuracy of the estimated crater centroid coordinates, which serve as critical measurements for OpNav. This accuracy directly influences the performance of multi-sensor navigation systems [19], as the precision of the data provided by each instrument significantly impacts the overall accuracy of the spacecraft’s reconstructed trajectory. These key performance factors are investigated in this study through experiments carried out in a high-fidelity synthetic lunar environment. A quantitative estimate of the attainable crater detection accuracies is retrieved by analyzing the crater detection errors under different observation conditions. Data augmentation is incorporated to enhance the network’s robustness to the illumination conditions, ensuring unbiased crater detections independent of solar direction.
In this paper, we present a crater-based navigation pipeline that leverages a state-of-the-art object detection model to enhance crater extraction accuracy. The performance of the navigation system is rigorously evaluated through extensive open-loop numerical simulations conducted within a custom high-fidelity digital planetary environment. The paper is structured as follows. Section 2 provides a detailed description of the software tools used to generate synthetic images of planetary surfaces and offers an in-depth discussion of the crater detection and matching process. In Section 3, we analyze the robustness of the crater detection and matching system under varying operational conditions. Section 4 presents the simulation setup used to assess the system performance, which is modeled after the JAXA SLIM mission. In Section 5, the attainable navigation accuracies are analyzed, including a Monte Carlo simulation, to assess the system’s robustness. Conclusions and future work are presented in Section 6.

2. Data and Methods

A high-level picture of the feature-based optical navigation pipeline is shown in Figure 1. At each new image acquisition (Section 2.1), craters are extracted by using an AI-enabled crater detection algorithm (Section 2.3) and then matched with pre-mapped craters from an onboard database (Section 2.2). If a sufficient number of crater matches is established (Section 2.4), the information is processed through a sequential estimation filter and the spacecraft’s trajectory is adjusted accordingly (Section 4).

2.1. Generation of Synthetic Images

To support the design of novel vision-based navigation techniques, testing these systems in a realistic synthetic environment is essential. A range of software tools, both commercial (e.g., PANGU [20], SurRender [21]) and open-source (e.g., SISPO [22], CORTO [23]), has been developed to create high-fidelity synthetic scenarios. Open-source tools often utilize rendering platforms, including Blender [24] and Unreal Engine [25]. Advanced techniques for generating photorealistic images of synthetic scenes employ physically based path tracing, which models light as rays and applies physical principles to simulate its interaction with both the camera and scene objects. Blender, with its state-of-the-art path-tracing engine (i.e., Blender Cycles), has become widely adopted in the scientific community for producing realistic images of planetary surfaces [22,23,26,27]. Furthermore, Blender’s scripting environment allows for the implementation of custom reflectance functions, enabling precise modeling of the complex photometric behavior of the planetary surface regolith [28]. Leveraging these capabilities, we developed a custom Blender-based pipeline to generate high-fidelity images of planetary surfaces.
The image rendering process entails an accurate definition of the scene, including cameras, light sources, and a 3D model of the celestial body. The first step involves deriving a mesh representing the observed portion of the body’s surface from a Digital Terrain Model (DTM), which is a 2D array of elevation values $h$ referenced to a planetary shape, such as a sphere with radius $R_{ref}$. Each cell in the DTM corresponds to a specific point on the planet’s surface. After defining the mesh, it is imported into Blender for rendering. Although Blender plugins facilitate DTM-to-mesh conversion (e.g., Blender-Hirise-DTM-Importer (https://github.com/phaseIV/Blender-Hirise-DTM-Importer, accessed on 1 January 2025) and Blender-GIS (https://github.com/domlysz/BlenderGIS, accessed on 1 January 2025)), they often encounter limitations such as non-georeferenced points (i.e., the DTM is imported as a displacement texture) and a time-consuming mesh creation process (e.g., accessing elements of the 2D array using nested loops). To address these issues, we implemented a computationally efficient pipeline for generating georeferenced meshes from planetary DTMs.
First, a 3D point cloud is generated from the DTM by computing the 3D coordinates $(\rho, lon, lat)$ for each surface point corresponding to a DTM cell (see Appendix C). These coordinates are obtained from the map coordinates of the corresponding cell $(x, y)$ and the associated elevation value $h$ (i.e., $\rho = R_{ref} + h$). The computed 3D coordinates $(\rho, lon, lat)$ are then transformed into Cartesian coordinates $(X, Y, Z)$ with respect to the body-fixed frame. Each point, representing a vertex of the final mesh, is assigned a unique identifier (ID) to streamline the process of defining the mesh faces.
As an example, a triangular face in the mesh is defined by a 3D vector, where each element represents the ID of a vertex. To efficiently assign these vertex IDs, a numbering scheme that leverages the DTM’s rasterized structure was adopted. First, an index matrix M is created, matching the size of the DTM array, which assigns a unique ID to each cell. The cells are numbered sequentially within each column, starting from the top and moving downwards (with an index of 1 assigned to the top-left cell). Upon reaching the bottom of a column, the numbering continues, proceeding until the entire grid is covered, resulting in the following matrix:
$$M = \begin{bmatrix} 1 & \cdots & n_R (n_C - 1) + 1 \\ \vdots & \ddots & \vdots \\ n_R & \cdots & n_R n_C \end{bmatrix}. \qquad (1)$$
Alternatively, the matrix can be expressed in compact form as
$$M_{ij} = i + (j - 1) n_R, \qquad (2)$$
where $i = 1, \ldots, n_R$ and $j = 1, \ldots, n_C$ are the row and column indices of the DTM cells, respectively, with $n_R$ representing the number of rows and $n_C$ the number of columns in the DTM array.
The computation of the index matrix enables the straightforward definition of the face matrix $F$, which has dimensions $(n_F \times 3)$, where $n_F$ represents the number of faces in the mesh. Each row of the face matrix corresponds to a face in the final mesh and contains the IDs of the three vertices defining that face, listed in counter-clockwise order as seen from the outward normal vector. From each set of four neighboring vertices, two triangular faces are extracted, i.e., an upper triangular face and a lower triangular face. First, two auxiliary matrices of dimension $((n_F/2) \times 3)$ are computed, which collect the vertex IDs of all the upper triangular faces ($F_{UT}$) and the lower triangular faces ($F_{LT}$). The final face matrix ($F$) is then defined by accounting for the contribution of these two matrices as:
$$F = \begin{bmatrix} F_{UT} \\ F_{LT} \end{bmatrix}, \qquad (3)$$
where
$$F_{UT} = \begin{bmatrix} F_2 & F_3 & F_4 \end{bmatrix}, \qquad (4)$$
$$F_{LT} = \begin{bmatrix} F_1 & F_2 & F_4 \end{bmatrix}. \qquad (5)$$
Here, $F_1$ is a column vector derived from the matrix $\bar{M}$ by traversing it in row-major order, where $\bar{M}$ is a reduced-size matrix obtained from $M$ by removing its first row and column. The remaining face matrix elements are computed as follows: $F_2 = F_1 - 1$; $F_3 = F_2 - n_R = F_1 - (n_R + 1)$; and $F_4 = F_1 - n_R$. After computing the mesh vertices and faces, the 3D model can be generated and imported into Blender. Since the coordinates of each vertex are defined in the body-fixed frame, Blender’s coordinate system will align with the body-fixed frame’s axes. Furthermore, because the pipeline relies solely on matrix operations, it ensures a significant reduction in the time required for mesh generation.
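As an illustration of this matrix-based procedure, the following minimal Python sketch builds the vertex and face arrays from a DTM using only vectorized NumPy operations. The function name and the assumption that longitude/latitude grids are supplied alongside the elevation array are ours, and the triangle winding may need to be flipped depending on the DTM row ordering.

```python
import numpy as np

def dtm_to_mesh(dtm, lon, lat, R_ref):
    """Vectorized DTM-to-mesh conversion (sketch of the matrix-based
    pipeline described above; names are illustrative).

    dtm : (nR, nC) elevations h referenced to a sphere of radius R_ref
    lon, lat : (nR, nC) longitude/latitude [rad] of each DTM cell
    """
    nR, nC = dtm.shape
    rho = R_ref + dtm
    # Vertices in the body-fixed Cartesian frame, flattened column-major
    # so that the 0-based vertex index equals M_ij - 1 = i + (j-1)*nR - 1
    X = rho * np.cos(lat) * np.cos(lon)
    Y = rho * np.cos(lat) * np.sin(lon)
    Z = rho * np.sin(lat)
    verts = np.column_stack([a.flatten(order='F') for a in (X, Y, Z)])

    # Index matrix M (1-based IDs, column-major numbering, Eq. (2))
    M = np.arange(1, nR * nC + 1).reshape(nR, nC, order='F')
    # F1: bottom-right vertex of each quad, from M with its first row and
    # column removed, traversed in row-major order
    F1 = M[1:, 1:].flatten(order='C')
    F2 = F1 - 1            # top-right vertex
    F3 = F1 - (nR + 1)     # top-left vertex
    F4 = F1 - nR           # bottom-left vertex
    F_UT = np.column_stack((F2, F3, F4))   # upper triangles, Eq. (4)
    F_LT = np.column_stack((F1, F2, F4))   # lower triangles, Eq. (5)
    faces = np.vstack((F_UT, F_LT)) - 1    # stack as in Eq. (3), 0-based
    return verts, faces
```

The resulting arrays can then be handed to Blender’s `mesh.from_pydata(verts.tolist(), [], faces.tolist())` to instantiate the 3D model in the scene.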
Because the time required to convert a DTM into a mesh increases with its size, processing a full-size DTM, such as a global map, becomes impractical. Smaller DTMs are then extracted from the full-size model based on the camera footprint, which defines the visible surface area in the image. The extraction procedure involves several key steps. First, intersection points are computed between a set of 3D vectors sampled from the boundaries of the camera’s field-of-view (i.e., viewing frustum) and the surface of the celestial body, which is assumed to be spherical with radius $R$. This step yields a set of 3D points $(X_i, Y_i, Z_i)$, whose coordinates are referred to the body-fixed frame. These points are converted into spherical coordinates $(\rho_i, lon_i, lat_i)$, and then projected onto a 2D map to obtain map-projected points $(x_i, y_i)$ [29]. The boundaries $(x_{min}, x_{max}, y_{min}, y_{max})$ of the smaller-sized DTM are computed from these map-projected points. Subsequently, the full-size DTM is cropped according to these boundaries and the resulting smaller DTM is converted into a mesh, which is then imported into Blender.
The choice of full-size DTMs for the generation of local DTMs depends on the latitude of the observed area. To simulate images collected at equatorial and mid-latitude regions, we used full-size DTMs in equirectangular map projection, whereas to render images at high-latitude and polar regions we employed full-size DTMs in a stereographic projection. This ensures that the local maps accurately reflect the characteristics of the terrain based on its latitude.
Figure 2 shows some examples of meshes representing a (5° × 5°) area of lunar surface centered at $(lon_c, lat_c) = (48°, 13°)$. The meshes are derived from smaller-sized DTMs extracted from global Moon DTMs at resolutions of 128 px/deg (left), 16 px/deg (center), and 4 px/deg (right), respectively (https://pds-geosciences.wustl.edu/lro/lro-l-lola-3-rdr-v1/lrolol_1xxx/data/lola_gdr/cylindrical/img/, accessed on 1 January 2025). A wireframe view of the mesh is also provided on the right panel, highlighting the vertices and the faces that define the mesh.
A preliminary validation campaign was carried out to assess the Blender-based pipeline for generating synthetic images. The mission cases accounted for in these tests are based on observation campaigns in the context of the exploration of the Moon, planets, and small bodies. Synthetic images were produced to match real spacecraft imagery in terms of camera properties, acquisition geometries, and illumination conditions. The test cases included images from Kaguya’s High-Definition Television (HDTV) camera and the framing cameras on the Rosetta and MESSENGER spacecraft.
To accurately position the camera in the scene, the camera attitude matrix with respect to the planet’s body-fixed frame was extracted from the mission attitude kernels. The quaternion associated with the pointing matrix $R_B^C$ is computed and used to define the camera orientation within the Blender scene. (Blender’s camera frame convention differs from that used in the computer vision community, with the z-axis pointing opposite to the boresight direction, and the x- and y-axes pointing to the right side and to the top of the image, respectively. The camera attitude matrix retrieved from the mission attitude kernels should be adjusted accordingly.) The camera position is determined from the mission position kernels at epoch $t = t_0 + t_{exp}/2$, where $t_0$ is the initial image acquisition time and $t_{exp}$ is the image exposure time, both of which are provided in the image metadata.
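A minimal sketch of this set-up step is shown below, assuming SPICE kernels are handled through the spiceypy package; the kernel, frame names, and epoch ('slim_meta.tm', 'MOON_ME', 'SC_CAMERA') are placeholders rather than the mission’s actual identifiers.

```python
import numpy as np
import spiceypy as spice
from scipy.spatial.transform import Rotation

spice.furnsh('slim_meta.tm')                        # hypothetical meta-kernel
et = spice.str2et('2024-01-19T15:00:00')            # epoch t0 + t_exp/2
R_BC = spice.pxform('MOON_ME', 'SC_CAMERA', et)     # body-fixed -> camera (CV convention)

# Blender's camera looks along -z with y up, while the computer-vision
# convention has z along the boresight and y down: flip the y and z axes.
R_B_blender = np.diag([1.0, -1.0, -1.0]) @ R_BC

# Blender expects the object orientation (camera frame expressed in world,
# i.e., body-fixed, axes) as a (w, x, y, z) quaternion.
q_xyzw = Rotation.from_matrix(R_B_blender.T).as_quat()
cam_quaternion = np.roll(q_xyzw, 1)                 # reorder (x,y,z,w) -> (w,x,y,z)
```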
Figure 3 and Figure 4 show a real (left) and a rendered image (right) of the Moon, as seen from Kaguya’s HDTV telephotographic (TELE) and wide-angle (WIDE) cameras, respectively [30]. The real and rendered images are consistent, with surface features aligning closely and appearing similar in size and in terms of illuminated and shadowed regions. Nevertheless, the HDTV-TELE rendered image appears to be slightly “pitched up”. This discrepancy may be attributed to minor errors in the reconstructed camera attitude, as derived from the mission SPICE kernels. The impact of this mismodeling is amplified by the spacecraft observation geometry, with the camera oriented in the forward direction. Such inconsistencies carry information on orbital corrections that may be derived through skyline-based image residuals.
In addition to lunar missions, the image generation pipeline is also adaptable to other mission scenarios, accommodating cameras with differing fields of view and image resolutions. Additionally, the pipeline enables the seamless extraction and integration of camera position and attitude data from the mission SPICE kernels.

2.2. Optical Data Modeling

The optical measurements supporting spacecraft navigation are based on a landmark-based approach. In this method, the spacecraft’s state is refined by minimizing the 2D discrepancies between observed crater centroids and the projected centroids of pre-cataloged craters. This comparison allows for precise adjustments to the spacecraft position and velocity, ensuring accurate navigation relative to the observed celestial body.
A perspective distortion-free camera model is used to project 3D points onto a 2D image plane (Figure 5). For a given 3D point in the camera frame C, $P^C = [X, Y, Z]^T$, its 2D projection onto the image plane, $p = [x, y]^T$, is computed as follows [8]:
$$\begin{bmatrix} p \\ 1 \end{bmatrix} = \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \frac{1}{Z} \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}, \qquad (6)$$
where $f_x$ and $f_y$ are the focal length scaled by the pixel size along the horizontal and vertical directions, respectively (for square pixels $f_x = f_y = f$), and $(c_x, c_y)$ is the principal point (i.e., the image point where the optical axis intersects the image plane, assumed to be orthogonal to the image plane). The 3D point $P^C$ is derived from
$$P^C = R_B^C \left( r_C^B - r_{SC}^B \right), \qquad (7)$$
where $R_B^C$ is the rotation matrix from the Moon’s body-fixed frame to the camera frame, while $r_C^B$ and $r_{SC}^B$ are the position vectors of the crater centroid and the spacecraft, relative to the Moon’s center, both expressed in the Moon’s body-fixed frame B.
The Cartesian coordinates of the i-th cataloged crater in the Moon’s body-fixed frame are derived from its spherical coordinates $(\rho_i, lat_i, lon_i)$. Accurate modeling of the crater’s distance from the Moon’s center ($\rho_i$) is key to correctly projecting its location onto the image plane. This distance is expressed as
$$\rho = \bar{\rho} + \Delta h, \qquad (8)$$
where $\bar{\rho}$ represents the radial distance from the Moon’s center to the crater floor, and $\Delta h$ is the crater’s depth.
An estimate of the crater’s depth is obtained from Wang’s crater catalog [31], which provides each crater’s center and diameter, together with auxiliary morphometric data, including depth. The value of $\bar{\rho}$ for each crater is estimated using a global LOLA-based DTM of the Moon with a resolution of 128 px/deg [32], corresponding to ∼237 m/px at the equator, applying a nearest-neighbor interpolation at the coordinates $(lon_i, lat_i)$. Once the spherical coordinates of the crater are computed, its Cartesian coordinates are determined as
$$^B r_{C,i} = \rho_i \begin{bmatrix} \cos(lat) \cos(lon) \\ \cos(lat) \sin(lon) \\ \sin(lat) \end{bmatrix}_i. \qquad (9)$$
Accounting for the crater’s depth is crucial for obtaining an accurate projection of the crater centroid from the catalog, as the crater rim is offset from the Moon’s surface (Figure 6).
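Equations (6)-(9) translate directly into a short routine. The sketch below, with illustrative function and variable names, projects a cataloged crater centroid onto the image plane, including the depth correction.

```python
import numpy as np

def project_crater(lon, lat, rho_bar, dh, r_sc_B, R_BC, fx, fy, cx, cy):
    """Project a cataloged crater centroid onto the image plane
    (Eqs. (6)-(9); angles in radians, distances in consistent units).

    rho_bar : radial distance from the Moon's center to the crater floor
    dh      : crater depth, so that rho = rho_bar + dh places the rim
    r_sc_B  : spacecraft position in the body-fixed frame
    R_BC    : rotation matrix, body-fixed frame -> camera frame
    """
    rho = rho_bar + dh                                    # Eq. (8)
    r_C_B = rho * np.array([np.cos(lat) * np.cos(lon),    # Eq. (9)
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])
    P_C = R_BC @ (r_C_B - r_sc_B)                         # Eq. (7)
    X, Y, Z = P_C
    if Z <= 0:
        return None                                       # behind the camera
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])   # Eq. (6)
```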
The effect of incorporating a crater’s depth into the projection process is illustrated in Figure 7. The left panel shows the reprojected crater rims onto a synthetic image of the lunar surface, captured by a wide-angle nadir-looking camera from an altitude of ∼100 km, without accounting for the crater’s depth. This results in a misalignment between the projected and observed crater rims. Conversely, when the depth is included (right panel of Figure 7), the discrepancies are mitigated, leading to projected crater rims that are fully consistent with the observed craters.

2.3. Crater Detection

Craters are key features for spacecraft navigation across challenging mission scenarios, including pinpoint landing [13,33,34], long-range surface exploration [35], and precise orbit determination of orbiting platforms [36,37]. A robust crater detection algorithm, capable of handling varying illumination conditions and viewing geometries, is essential for a feature-based navigation system. Several techniques have been proposed to perform crater detection on optical images, ranging from well-established computer vision methods (e.g., edge detection [38], image segmentation [39,40,41], Haar-like features [14]) to advanced artificial intelligence approaches (e.g., [34,42,43,44,45]). Among existing Convolutional Neural Network (CNN) architectures, single-stage detectors, such as SSD [46] and YOLO [47], are emerging as state-of-the-art methods for real-time object detection applications. Compared to single-stage detectors, two-stage models, such as Faster R-CNN [48], achieve more accurate bounding box regression by refining the Region-of-Interest (ROI) proposals and incorporating a feature enhancement step in the second stage. However, the improved detection performance comes at the cost of significantly higher computational complexity, resulting in slower inference times than one-stage architectures. A trade-off analysis was, thus, conducted between multiple techniques in terms of detection accuracy, computational efficiency, and memory usage.
During the preliminary stages of this study, in addition to neural network-based crater detectors, detection methods based on established computer vision techniques, such as image segmentation with k-means clustering [39], were considered. After segmenting the input image, candidate craters are retrieved by fitting an ellipse to pairs of dark and bright patches of similar size, which are aligned with the Sun’s direction (Figure S1). Preliminary results indicate that the image segmentation-based approach enables crater detection across varying solar illumination angles while achieving faster processing times compared to network-based approaches. However, this approach yields fewer detections with higher noise levels, leading to increased uncertainties in crater centroid estimation (Figure S2). Even with optimized parameter tuning (e.g., the number of image clusters), AI-enabled approaches are expected to achieve superior detection accuracy. The ongoing development of space-graded high-performance computing platforms [49] and the implementation of strategies to mitigate radiation-induced errors on COTS instruments [17] further support the adoption of AI-aided crater detection, which has been successfully applied in previous studies for geological and morphological feature mapping [43,50].
To better assess the performance of the proposed navigation toolchain, testing and validation campaigns were conducted to evaluate alternative object detection frameworks, including SSD (with a VGG16 backbone), Faster R-CNN (with a ResNet50 backbone), YOLOv5-extra-large, and YOLOv5-nano (https://github.com/ultralytics/yolov5, accessed on 1 January 2025) networks. Compared with the smaller YOLOv5-nano model, the other networks demonstrated comparable or superior precision and recall, aligning with the findings of Parracino et al. [51], who evaluated SSD (with a MobileNetV2 backbone) and YOLOv7. However, on average, the YOLOv5-nano model exhibited the shortest inference times. Given these trade-offs, YOLOv5-nano was selected for this study and extensively analyzed for detection accuracy and robustness under different illumination conditions (Section 3).

Training

A supervised transfer learning approach was employed to fine-tune a pre-trained YOLOv5-nano model for crater detection. By leveraging the low-level features learned by the pre-trained network, transfer learning enables high detection accuracies even with a limited dataset while also accelerating the training process. Given an input image, the fine-tuned network extracts multiple crater instances as bounding boxes centered on candidate locations, each assigned a confidence score ranging between 0 and 1. Higher confidence scores indicate a greater likelihood that the detected instance corresponds to a crater.
In a supervised framework, the network is trained using a ground truth (GT) dataset of true positive detections. We, thus, created a GT dataset consisting of images and corresponding crater labels. This dataset includes over 1500 images, obtained by extracting small-sized image patches (i.e., 512 × 512 px) from a global mosaic of the lunar surface (i.e., 128 px/deg, equivalent to ∼237 m/px at the equator) [32]. Due to the distortion effects of the mosaic’s map projection, which are more pronounced at higher latitudes, we restricted the extraction of image patches to mid-latitudes.
For each image in the dataset, we defined GT labels corresponding to the craters within the image patch (Figure 8). Each label is represented by a rectangle defined by a four-dimensional vector $(x_c, y_c, w, h)$, where $(x_c, y_c)$ is the rectangle’s center, and $w$ and $h$ are its width and height, respectively. The GT crater labels were derived from state-of-the-art lunar crater databases (e.g., Robbins’ [52] and Wang’s [31] catalogs). By extracting the crater’s diameter and center location $(lon_c, lat_c)$ from these catalogs, the edges of the craters are represented as 2D ellipses in the map-projected image. Bounding boxes were then computed for each crater as the smallest rectangle that circumscribes the ellipse (see Appendix A); note that the sides of each rectangle are parallel to the image boundaries. The box parameters were normalized to a range between zero and one by dividing $x_c$ and $w$ by the image width (in pixels), and $y_c$ and $h$ by the image height (in pixels).
A key requirement for the crater detection subsystem is its robustness to varying observation geometries encountered during spacecraft operations. Since the patches extracted from the mosaic are characterized by nearly uniform illumination conditions (e.g., Sun-from-west; see Figure 8 and Figure 9A), we augmented the dataset by applying rotations to the image tiles to simulate different solar azimuth angles (Figure 9D). While open-source packages offer a range of data-augmentation techniques, including rotation transformations, directly applying these methods can lead to labeling issues; because the sides of the rectangles should be parallel to the image boundaries, bounding boxes in rotated images are defined as rectangles that pass through the vertices of the original bounding boxes, rather than being tangent to the crater rims (Figure 9F). This misalignment becomes critical when rotation angles approach ±45°. To address this issue, we implemented a custom data rotation algorithm designed to determine bounding boxes that are tangent to the crater edges (Figure 9E). This algorithm operates by first rotating the ellipses (and the image) rigidly and then computing the bounding boxes as rectangles that are tangent to the rotated elliptical contours, as illustrated in Figure 9.
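A compact way to realize this step, sketched below in our notation (the helper names are illustrative), is to rotate each ellipse rigidly and recompute the axis-aligned box from the closed-form extents of the rotated ellipse, rather than from the corners of the old box.

```python
import numpy as np

def rotate_center(x, y, theta, img_w, img_h):
    """Rigidly rotate a crater center by theta [rad] about the image center.
    Note: in image coordinates (y down) the apparent rotation sense flips.
    """
    cx, cy = img_w / 2.0, img_h / 2.0
    c, s = np.cos(theta), np.sin(theta)
    return (cx + c * (x - cx) - s * (y - cy),
            cy + s * (x - cx) + c * (y - cy))

def tangent_bbox(xc, yc, a, b, phi):
    """Axis-aligned box tangent to an ellipse with semi-axes (a, b) rotated
    by phi [rad] about its center (xc, yc): the half-extents follow from
    the extrema of the parametric ellipse along x and y.
    """
    half_w = np.sqrt((a * np.cos(phi)) ** 2 + (b * np.sin(phi)) ** 2)
    half_h = np.sqrt((a * np.sin(phi)) ** 2 + (b * np.cos(phi)) ** 2)
    return xc, yc, 2.0 * half_w, 2.0 * half_h   # (x_c, y_c, w, h)
```

For the rotated labels, the recomputed $(x_c, y_c, w, h)$ values are then normalized by the image dimensions as before.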
To train the model, the augmented crater dataset was partitioned into training and validation sets by adopting a 75/25 split ratio, and the stochastic gradient descent (SGD) optimizer was used, with a batch size of 64 and a learning rate of $1 \times 10^{-2}$. After training, the fine-tuned YOLO-nano network achieved a precision of 0.64 and a recall of 0.54 on the validation set. Although enhanced detection performance can be achieved by adopting models with more parameters, our results indicate that the fine-tuned network is well suited to fully support navigation operations, demonstrating high robustness to the illumination conditions (see Section 3.1). Furthermore, its limited memory requirements could make it suited for deployment on future COTS computers for space applications [16]. Nevertheless, in cases where improved detection metrics are required, as when a more comprehensive global crater database has to be created, larger models can be adopted, such as the YOLO-extra-large model (for comparison purposes, the YOLO-extra-large and YOLO-nano models have more than 85 million and ∼2 million parameters, respectively) [43]. The enhanced detection performance of these models, however, comes at the cost of increased resource and memory consumption, which is not compatible with the constrained computational capabilities of onboard systems. The YOLO-nano model was therefore selected to minimize the computational burden associated with crater detection, facilitating its deployment on future robotic platforms expected to feature expanded processing capabilities. Moreover, this network exhibits high efficiency and fast inference even on CPU-based platforms with limited computational resources [53].
Given an image as input to the trained network, bounding boxes are generated for each detected crater, with the center of the box assumed to correspond to the crater’s center. The crater diameter (in pixels) is then estimated as the average of the bounding box’s width and height, calculated as $d = (w + h)/2$. This parameter plays a critical role in the crater matching process (Section 2.4). Despite being trained on real lunar images only, the network is also capable of detecting craters in synthetic lunar images generated through the Blender-based pipeline (Section 3).
Improving the image contrast enhances the network’s crater detection performance, especially in poorly textured images. In our experiments, the CLAHE (Contrast Limited Adaptive Histogram Equalization) algorithm was therefore applied to pre-process the images before feeding them into the YOLO-based network for crater extraction.
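For reference, a minimal pre-processing step of this kind can be written with OpenCV as follows; the clip limit and tile size shown are common defaults, not the values tuned for this work.

```python
import cv2

def preprocess(gray_u8, clip_limit=2.0, tile_grid=(8, 8)):
    """Contrast enhancement before crater detection (illustrative defaults).
    Expects a single-channel 8-bit image.
    """
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray_u8)
```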

2.4. Crater Matching

Crater matching (or crater identification) is a fundamental task in defining crater-based navigation measurements. This process involves establishing correspondences between two distinct sets of craters: those detected in an image (i.e., “observed” craters) and those listed in an onboard database (i.e., “cataloged” craters). Crater matching is extremely challenging because the two sets may differ significantly. For instance, the detector might identify craters not present in the catalog or fail to detect known craters. Different approaches have been proposed in the literature to address this challenge, including methods based on mathematical invariants of multi-crater patterns (e.g., [41,54,55]), stochastic and RANSAC-based approaches (e.g., [39,40]), and machine learning-based techniques (e.g., [44]).
To address the crater matching task, an initial approach based on nearest-neighbor matching was considered. However, this method is prone to generating erroneous associations in dense crater fields, where a high number of detected/catalog-projected craters increases the likelihood of a noisy match, reducing the effectiveness of statistical filtering techniques. To improve robustness against incorrect data associations, a method based on the Joint Compatibility Branch and Bound (JCBB) technique was explored, as it iteratively identifies a mutually consistent set of correspondences [56]. However, its computational complexity limited its applicability to large datasets.
Given these challenges, the crater matching strategy was then implemented by using a triad-based approach (Figure 10), building upon previous work on feature-based OpNav techniques [55]. Originally conceived for spacecraft state initialization in a lost-in-low-lunar-orbit scenario, the crater matching cost function was adapted to incorporate a priori spacecraft state information. A statistical filtering step was introduced to perform a preliminary rejection of outlier matches. The complete crater matching pipeline follows a structured sequence of steps, which are outlined in Algorithm 1 and detailed in the following sections.
Algorithm 1 Crater Matching Algorithm
1: Input: Centroids and diameters of observed craters ($z$, $D_{obs}$), centroids and diameters of projected catalog craters ($h$, $D_{cat}$), positive values $\epsilon_1$, $\epsilon_2$, $\epsilon_{px}$, $\epsilon_{perc}$, $\bar{\chi}^2_{\alpha,2}$
2: Output: Optical residuals, $\epsilon_C$
3: $d^{obs}$ ← DefineTriads($z$, $D_{obs}$)    ▹ $N_{obs}$ triads
4: ($d^{cat}_{all}$, $k_{vec}$) ← DefineTriads($h$, $D_{cat}$)    ▹ $N_{cat}$ triads and k-vector
5: Create empty triad correspondence matrix $C_T$
6: for $i = 1$ to $N_{obs}$ do    ▹ for each observed triad
7:   $\alpha_{S,i}$ ← $d_i^{obs}$    ▹ triad’s smallest internal angle
8:   $d^{cat}_{sub}$ ← SelectTriads($d^{cat}_{all}$, $k_{vec}$, $\alpha_{S,i}$, $\epsilon_1$, $\epsilon_2$)    ▹ downselect catalog triads
9:   if $d^{cat}_{sub}$ is not empty then
10:     ($E_i$, $\bar{j}$) ← FindBestTriadMatch($d_i^{obs}$, $d^{cat}_{sub}$)    ▹ $E_i = E(i, \bar{j})$, with $\bar{j} \le N_{cat}$
11:     Append ($i$, $\bar{j}$, $E_i$) to $C_T$
12:   end if
13: end for
14: if $C_T$ is not empty then
15:   Initialize crater correspondence matrix $C \leftarrow C_T$
16:   $C$ ← SolveOneToMany($C$, $C_T$)
17:   $C$ ← DiameterFiltering($C$, $D_{obs}$, $D_{cat}$, $\epsilon_{px}$, $\epsilon_{perc}$)
18:   $C$ ← ChiSquareFiltering($C$, $z$, $h$, $\bar{\chi}^2_{\alpha,2}$)
19:   $\epsilon_C$ ← ComputeResiduals($C$, $z$, $h$)
20:   return $\epsilon_C$
21: end if
Correspondences between different triads are established by comparing their invariants, a set of scalar quantities that uniquely characterize a crater triad and can be used to evaluate the similarity between two triads. First, two sets of crater triads are defined: observed triads, based on craters detected by the network, and database triads, based on the projected craters from the catalog (derived from the estimated spacecraft’s pose). A preliminary estimate of the spacecraft state is assumed to be available from the navigation filter, enabling a downselection of the onboard crater catalog. This filtering step excludes craters outside the field of view (FOV), reducing the computational load associated with generating database triads. However, due to the combinatorial nature of the problem, processing large sets of detected/catalog-projected craters can still require significant computational time.
For each observed and database triad, a 6-dimensional descriptor is computed by accounting for geometrical properties (Lines 3 and 4). This descriptor includes the cosines of the smallest and largest internal angles of the triangle, the normalized diameters of the three craters (i.e., normalized by the longest side of the triangle), and a parameter $\delta$ that represents the triad’s orientation [55]. The orientation of the triad is determined by moving across its vertices in order from the smallest angle to the intermediate angle, and then to the largest angle (i.e., $\delta = 1$ if the triad has a counterclockwise orientation; $\delta = -1$ if it has a clockwise orientation). The invariant parameters related to the l-th triad are collected in the invariant array $d_l$, defined as
$$d_l = [\cos(\alpha_S), \cos(\alpha_L), \bar{D}_i, \bar{D}_j, \bar{D}_k, \delta], \qquad (10)$$
where $i$, $j$, and $k$ refer to the indices of the craters in the triad (Figure 10), $\bar{D}_i$, $\bar{D}_j$, and $\bar{D}_k$ are their normalized diameters, and $\alpha_S$ and $\alpha_L$ are the smallest and largest internal angles, respectively. A detailed discussion on the steps required to compute the descriptor for a given triad is provided in Appendix B.
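While Appendix B details the exact construction, the sketch below illustrates one consistent realization of the descriptor in Eq. (10); the ordering of the normalized diameters (smallest-to-largest angle) and the sign convention in image coordinates are our assumptions.

```python
import numpy as np

def triad_descriptor(centers, diameters):
    """Six-element invariant array d_l (Eq. (10)) for a crater triad.
    centers : (3, 2) pixel coordinates; diameters : (3,) pixel diameters.
    """
    # Side lengths opposite to each vertex
    sides = np.array([np.linalg.norm(centers[(k + 1) % 3] - centers[(k + 2) % 3])
                      for k in range(3)])
    # Internal angle at vertex k from the law of cosines
    cos_ang = np.array([
        (sides[(k + 1) % 3]**2 + sides[(k + 2) % 3]**2 - sides[k]**2)
        / (2.0 * sides[(k + 1) % 3] * sides[(k + 2) % 3]) for k in range(3)])
    order = np.argsort(cos_ang)[::-1]        # smallest -> largest angle
    D_bar = diameters / sides.max()          # normalize by the longest side
    # Orientation delta: +1 if smallest -> intermediate -> largest angle is
    # counterclockwise (sign flips in image coordinates, where y points down)
    v1 = centers[order[1]] - centers[order[0]]
    v2 = centers[order[2]] - centers[order[0]]
    cross_z = v1[0] * v2[1] - v1[1] * v2[0]
    delta = 1.0 if cross_z > 0 else -1.0
    return np.array([cos_ang[order[0]], cos_ang[order[2]],
                     D_bar[order[0]], D_bar[order[1]], D_bar[order[2]], delta])
```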
For each observed triad, we identify the corresponding database triad that minimizes the cost function $E$, which is defined as
$$E(m, \bar{n}) = \mathrm{SAD}(d_m^{obs}, d_{\bar{n}}^{cat}) + \varepsilon \sum_{i=1}^{3} \left[ (x_{m,i} - x_{\bar{n},i})^2 + (y_{m,i} - y_{\bar{n},i})^2 \right]^{1/2}, \qquad (11)$$
where SAD is the sum of absolute differences; $d_m^{obs}$ and $d_{\bar{n}}^{cat}$ are the descriptors of the m-th observed triad and the $\bar{n}$-th database triad, respectively; and $(x_{m,i}, y_{m,i})$ and $(x_{\bar{n},i}, y_{\bar{n},i})$ are the centroids of the i-th crater (i = 1, 2, 3) in the observed and candidate database triads, respectively. The second term in the cost function accounts for minimizing the relative distance between corresponding craters. However, this term is scaled by a small positive coefficient $\varepsilon$ (e.g., $\varepsilon = 0.003$), giving greater weight to the similarity of the triad invariants in the overall matching metric.
Calculating the cost for each database triad for a given observed triad is computationally expensive. To speed up this process, a downselection of the database triads is performed based on the smallest internal angle of the observed triad ($\alpha_S$), retaining only triads whose smallest internal angle ($\tilde{\alpha}_S$) is sufficiently close to $\alpha_S$ using the following criterion:
$$c_{min} < \cos(\tilde{\alpha}_S) < c_{max}, \qquad (12)$$
where $c_{min} = \cos(\alpha_S) - \epsilon_1$ and $c_{max} = \cos(\alpha_S) + \epsilon_2$, with $\epsilon_1, \epsilon_2 > 0$. The downselection is optimized using the k-vector range searching technique [57] (Line 8), which sorts the database triads by their smallest internal angle for faster processing.
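The effect of this window search can be sketched as follows with a plain binary search over the pre-sorted cosines; this is a simplification, as the k-vector method replaces the search with a near-constant-time table lookup.

```python
import numpy as np

def select_triads(cos_sorted, idx_sorted, cos_aS, eps1, eps2):
    """Down-select catalog triads satisfying Eq. (12).

    cos_sorted : ascending array of cos(alpha_S) for the catalog triads
    idx_sorted : corresponding triad indices
    cos_aS     : cos(alpha_S) of the observed triad
    """
    lo = np.searchsorted(cos_sorted, cos_aS - eps1, side='left')
    hi = np.searchsorted(cos_sorted, cos_aS + eps2, side='right')
    return idx_sorted[lo:hi]
```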
The output of the crater matching step is a set of candidate correspondences between craters detected by the network and those projected from the catalog onto the image plane. Ambiguous matches are filtered out based on their triad matching score (Line 16). For example, if a catalog crater in the $\bar{i}$-th database triad is paired with three observed craters from different observed triads $l, m, n$ with matching costs $E(l, \bar{i}) > E(m, \bar{i}) > E(n, \bar{i})$, the catalog crater will be paired with the crater from triad $n$.
After completing this step, a set of one-to-one crater correspondences is obtained, but mismatches may still occur. A two-step filtering approach is then applied to remove them. First, crater pairs with diameter differences greater than $\epsilon_{perc}$ = 25% (or an absolute difference exceeding $\epsilon_{px}$ = 5 px) are eliminated (Line 17). Next, statistical filtering is carried out based on the distribution of 2D optical residuals (Line 18). Given a crater match $i$, the corresponding residual $\epsilon_i$ is computed as the difference between the image location of the i-th detected crater $z_i = [x_i, y_i]^T$ and the corresponding catalog-projected crater $h_i = p_i = [\bar{x}_i, \bar{y}_i]^T$ (see Equation (6)). The Mahalanobis distance is computed between the residual point ($\epsilon_{C,i} = z_i - h_i$) and the median of the distribution ($\mu$), as
$$(\epsilon_{C,i} - \mu)^T \Sigma^{-1} (\epsilon_{C,i} - \mu), \qquad (13)$$
where $\Sigma$ is the covariance matrix of the 2D residuals. If the computed distance exceeds a preset threshold value $\bar{\chi}^2_{\alpha,2}$ based on the chi-square distribution with two degrees of freedom (DOF) (e.g., $\bar{\chi}^2_{0.9,2} \approx 4.605$, corresponding to a 90% confidence interval for a chi-square distribution with two DOF; note that two is the number of DOF of the measurement model), the crater pair is eliminated as an outlier. The median is used instead of the mean to reduce the influence of outliers. The filtering process may be repeated until no more outliers are identified.
Figure 11 shows the statistical filtering process applied to crater matches in an example image captured by the navigation camera. Matched crater pairs are reported in the left panel, with catalog-projected craters represented as ellipses and detected craters as boxes. The right panel presents the 2D distribution of optical residuals, where crater pairs failing the Mahalanobis-distance test are marked in red and rejected, while those passing the test (in blue) are incorporated into the navigation filter (see Section 4).

3. Software Pipeline Testing Campaign

The computation of the optical data is affected by different error sources, primarily stemming from the detection performance of the neural network. Additional errors arise from the crater catalog itself, including inaccuracies in crater depth estimation. In a multi-sensor navigation system, each data type must be weighted according to its expected measurement accuracy [19]. Thus, a thorough characterization of the crater detection accuracy is key for fine-tuning the onboard navigation filter.

3.1. Robustness Analysis of Machine Vision Techniques

To evaluate the robustness of the crater detection and matching algorithms under varying conditions (e.g., solar azimuth and elevation angles), a series of thorough testing campaigns was conducted using synthetic datasets generated through our custom Blender-based toolbox.
Multiple sets of rendered images were created by using consistent camera poses simulating a spacecraft in a low-altitude lunar orbit equipped with a nadir-looking wide-angle camera (WAC). Each dataset featured different solar illumination conditions, adjusted by modifying the direction of solar rays in the simulated environment (Figure 12). The crater detection and matching algorithms were then independently applied to each dataset, allowing for the calculation of the optical residuals (i.e., pixel discrepancies between the centroids of matched craters). Ideally, a robust crater detector should remain unaffected by changes in image illumination, leading to optical residuals following a zero-mean Gaussian distribution.
An initial analysis was carried out using a preliminary version of the crater detector, trained on a ground truth dataset including only horizontally flipped tiles (i.e., Sun-from-west and Sun-from-east illumination conditions) and tiles rotated by 90° clockwise or counterclockwise (i.e., Sun-from-south and Sun-from-north illumination conditions). Craters were first extracted from images whose illumination matched the training dataset (e.g., left panel in Figure 12). A zero-transfer experiment was conducted to evaluate the network’s generalization capability under “unknown” illumination conditions not included in the training dataset. This involved extracting craters from synthetic datasets with solar azimuth angles outside the training set (i.e., Sun-from-southwest, Sun-from-southeast, Sun-from-northeast, and Sun-from-northwest azimuth angles).
Figure 13 presents the statistical distribution of optical residuals for synthetic datasets that account for a fixed solar elevation angle of 15° (top row of Figure 12) under various assumptions regarding the solar azimuth. As expected, the optical residuals for datasets with known illumination conditions (e.g., Sun-from-west, Figure 13, left) exhibited an unbiased distribution. However, for images with “unknown” illumination conditions (center and right panels in Figure 13), crater matching performance significantly decreased. The optical residual distributions revealed biases of approximately half a pixel, with a strong correlation between horizontal and vertical directions. These results underscore the preliminary detector’s limitations in accurately determining crater bounding boxes due to variations in crater shadow orientation and shape that were not accounted for during training.
To compensate for the crater detector’s limitations under previously unseen illumination conditions, the network was enhanced through fine-tuning of the pre-trained YOLO model. This improvement was achieved by training on an augmented ground truth dataset that included a finer sampling of solar azimuth and elevation angles. Specifically, the dataset incorporated rotated images with a more detailed discretization of rotation angles (see Section 2.3). By expanding the range of training data, the re-trained network demonstrated significantly improved performance, particularly in its robustness to varying illumination conditions. For instance, Figure 14 and Figure 15 show the 2D distribution of the optical residuals for three example datasets with different solar azimuth angles (i.e., from left to right, Sun-from-west, Sun-from-southwest, and Sun-from-southeast) and a 15° (Figure 14) and 22.5° (Figure 15) elevation angle, respectively (see Figure 12). The updated network achieved smaller biases and reduced correlations between residuals in the horizontal and vertical directions for each dataset, with 1-σ formal uncertainties of about two pixels in both dimensions. This improvement highlights the detector’s enhanced ability to accurately extract crater locations from images regardless of illumination conditions.
This refined training process enabled more accurate detection of crater locations, reducing biases and correlations in the optical residuals. Consequently, the updated detector was used in numerical simulations of the landing mission scenario (Section 4).

3.2. Impact of Surface Landmark Mismodeling on Crater Matching Performance

An accurate estimation of the crater depth is key to ensure that the projected crater centroids align with those detected by the network. To better understand the effects of mismodeled crater coordinates on the landmark matching algorithm, we conducted experiments using the crater matching pipeline without accounting for the crater depth information (see Equation (8) and Figure 6). Figure 16 shows the 2D distribution of the optical residuals for the same datasets as in Figure 15, but without considering crater depth in the onboard database definition. A direct comparison of the plots between the two figures reveals that, although the distribution median and the number of matched craters remain largely unchanged, the residuals exhibit a significantly greater spread. This suggests that accounting for the crater depth is essential to avoid projection errors of crater-based landmarks onto the image plane (e.g., Figure 7). Accurate depth modeling enables precise computation of measurement residuals and improves the spacecraft trajectory estimation through the navigation filter.
These analyses provided a quantitative assessment of the effects of crater mismodeling on the matching process. However, specific values for the optical discrepancies vary depending on several factors, including camera characteristics (e.g., field-of-view, spatial resolution, calibration) and spacecraft pose (e.g., higher altitudes with respect to the surface reduce the impact of crater mismodeling). Additional error sources include the methods used for determining crater center locations in the catalog (e.g., manual labeling or AI-based techniques) and the quality of input data sources (e.g., surface mosaic, 3D topography models).
An accurate and comprehensive onboard crater database is crucial for supporting navigation operations. However, existing catalogs of craters smaller than 1 km are limited to specific regions only, such as the lunar South Pole [58]. Since onboard cameras during landing observe small-scale craters not included in global catalogs, the SLIM mission navigation team created and stored onboard a comprehensive database of craters expected to be visible to the spacecraft. Similarly, we created a custom dataset of small-scale craters to support optical navigation in simulated mission scenarios, leveraging open-source data and tools.
The first step involved importing the LRO WAC global lunar mosaic [32] into QGIS [59]. Using the OpenCraterTool plugin [60], we visualized the crater footprints based on Robbins’ data (Figure 17, light blue) and manually labeled small-scale craters not included in this catalog (Figure 17, orange). Our analysis focused on the predicted ground track of the SLIM spacecraft (i.e., 24° ≤ lon ≤ 36°, −70° ≤ lat ≤ −11°), resulting in an auxiliary dataset of more than 500 craters.
The creation of a custom catalog of small craters from the LRO WAC global mosaic introduced a significant challenge in ensuring consistency with the high-resolution DTM used to generate high-quality synthetic images for a landing scenario. For these synthetic images, we employed the SLDEM2013 model, derived from Kaguya’s Terrain Camera (TC) data, which are archived as 1 ° × 1 ° tiles in the SELENE Data Archive (https://data.darts.isas.jaxa.jp/pub/pds3/sln-l-tc-5-sldem2013-v1.0, accessed on 1 January 2025). When projecting these newly cataloged craters onto synthetic images generated based on the simulated trajectory of the SLIM spacecraft, substantial discrepancies emerged between the manually labeled craters and the network’s detections.
The SLDEM2013 model was georeferenced using Lunar Orbiter Laser Altimeter (LOLA) data processed with LRO trajectories reconstructed prior to the significant improvements in lunar gravity modeling provided by the Gravity Recovery and Interior Laboratory (GRAIL) mission [61]. In contrast, the LRO WAC global mosaic was produced using more accurate LRO orbits, determined with a higher-resolution lunar gravity field.
These georeferencing inconsistencies between the LRO WAC mosaic and the SLDEM2013 model were quantified through the analysis of multiple synthetic images (Figure 18): the average discrepancies between crater centers from the LRO WAC mosaic and the SLDEM2013 model amount to approximately 0.004° in longitude and 0.003° in latitude (Figure 19). These offsets necessitated adjustments to the coordinates of the manually labeled craters in our catalog to align them with the SLDEM2013 model and improve the accuracy of the crater matching process in the simulated landing scenario.

3.3. Inference Times

A primary challenge constraining the widespread adoption of AI-enabled techniques in space applications is the limited computational capacity of radiation-certified processors (e.g., BAE RAD750). Accurately estimating the network’s inference times requires hardware-in-the-loop (HITL) testing on state-of-the-art resource-constrained computing platforms. However, preliminary assessments can be performed on higher-performance computers by artificially restricting computational resources through software-based limitations.
To facilitate these campaigns, the Ubuntu scheduler was configured to enforce execution of the inference task on a single 2.5 GHz core of a modern laptop CPU. Additionally, CPU usage was restricted to 8% of the total processing time, effectively simulating a 200 MHz clock. Under these conditions, approximately 350 images were processed, yielding an average inference time of 360 ms per image. This test provides a preliminary but representative estimate of the inference time, suggesting that the algorithm could support real-time navigation operations. In-depth analyses are planned for the development and implementation of a semi-autonomous Guidance, Navigation, and Control (GNC) subsystem for orbiting and surface space probes.
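A minimal timing sketch of this kind of measurement is shown below; the weights file name is a placeholder, a synthetic tile stands in for the pre-processed images, and the single-core, 8% duty-cycle cap used above was enforced externally by the OS scheduler rather than in the script itself.

```python
import time
import numpy as np
import torch

torch.set_num_threads(1)  # pin inference to one CPU thread (not a duty-cycle cap)
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='crater_yolov5n.pt')     # hypothetical fine-tuned weights
model.cpu().eval()

dummy = np.zeros((512, 512, 3), dtype=np.uint8)      # stand-in for a pre-processed tile
n = 50
t0 = time.perf_counter()
with torch.no_grad():
    for _ in range(n):
        _ = model(dummy)
print(f"average inference: {(time.perf_counter() - t0) / n * 1e3:.0f} ms/image")
```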

4. Numerical Simulations on a Landing Test Case

To validate and test the optical navigation pipeline, a landing test case based on the JAXA SLIM mission was considered. Having landed in January 2024 close to the Shioli crater, the mission demonstrated the successful use of a vision-based pipeline to perform a pinpoint landing on the Moon, achieving an unprecedented accuracy of 100 m with respect to the targeted landing site [13]. A multi-crater pattern matching technique was used to establish crater correspondences between a pre-labeled reference crater map and the craters detected in the camera images by a dedicated crater extraction pipeline [14], enabling accurate localization of the lander throughout the descent phase.
A similar approach was used in this work to retrieve sequential adjustments to the spacecraft’s state by minimizing the optical residuals, which are based on correspondences between the craters detected in the image by a neural network (Section 2.3) and those projected from an onboard catalog on the image plane according to the estimated spacecraft’s pose (Section 2.2).
As a first step, synthetic images were generated by using a reference spacecraft trajectory to simulate the images taken by the onboard nadir-looking navigation camera during the descent phase. The characteristics of our navigation camera (NavCam) are based on SLIM’s NavCam (i.e., the detector size, shutter type, and image acquisition rate) [62], but with a wider FOV of 45° × 45° compared to SLIM’s NavCam (30° × 30°). While initial tests were conducted with a narrower FOV, a wider FOV was ultimately selected to increase the number of cataloged craters visible at lower altitudes, thereby enhancing the navigation system’s performance.
A reference trajectory for the SLIM spacecraft was retrieved from the NASA Horizons web application (https://ssd.jpl.nasa.gov/horizons/, accessed on 1 January 2025) in the form of tabulated spacecraft’s position and velocity in the Inertial Celestial Reference Frame (ICRF). In our numerical simulations, a mission time span of about 25 min was considered, during which the spacecraft’s altitude decreased from ∼60 to ∼10 km (Table 1).
The analysis of the reference trajectory for the SLIM spacecraft revealed inconsistencies between the tabulated spacecraft velocity $v$ and the velocity computed by numerically differentiating the tabulated spacecraft position ($\tilde{v} = \Delta r / \Delta t$). An updated position profile was then retrieved by integrating the spacecraft kinematic equations backward in time according to the tabulated velocity profile.
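A minimal sketch of this consistency fix is shown below; the choice of anchoring the integration at the final tabulated position is our assumption, and the function name is illustrative.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def rebuild_position(t, v, r_final):
    """Recover a position profile consistent with the tabulated velocity by
    integrating r_dot = v backward in time from the final tabulated position.

    t : (N,) epochs [s]; v : (N, 3) velocities; r_final : (3,) anchor at t[-1]
    """
    # Cumulative integral of v from t[0] to each epoch (zero at t[0])
    I = cumulative_trapezoid(v, t, axis=0, initial=0.0)
    # r(t_k) = r(t_N) - [I(t_N) - I(t_k)], i.e., backward from the anchor
    return r_final - (I[-1] - I)
```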
The updated trajectory provides the ground truth for simulating data collected by the onboard sensors. Synthetic images were generated based on the integrated trajectory by assuming a nadir-pointed camera with the z-axis pointing towards the lunar surface. The orientation of the camera frame with respect to the inertial frame was retrieved according to:
$$R_{IC}(t) = \begin{bmatrix} \hat{n} & \hat{t} & \hat{r} \end{bmatrix}$$
where $\hat{r} = r/\|r\|$, $\hat{n} = (r \times v)/\|r \times v\|$, and $\hat{t} = \hat{n} \times \hat{r}$ are the radial, normal, and transverse unit vectors, respectively, computed from the inertial spacecraft state.
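For illustration, the matrix can be assembled directly from the orbital state, assuming the unit vectors are stacked as columns (the row/column convention depends on whether the matrix maps camera-frame to inertial coordinates or vice versa):

```python
import numpy as np

def camera_dcm(r, v):
    """Build the camera-frame rotation matrix from the inertial position r
    and velocity v, stacking the normal, transverse, and radial unit
    vectors as columns, as in the expression above."""
    r_hat = r / np.linalg.norm(r)
    h = np.cross(r, v)              # orbit normal direction
    n_hat = h / np.linalg.norm(h)
    t_hat = np.cross(n_hat, r_hat)  # completes the right-handed triad
    return np.column_stack((n_hat, t_hat, r_hat))
```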
An accurate modeling of the spacecraft dynamics is crucial to propagate the spacecraft's state between successive image acquisitions. Because the reference SLIM trajectory is based on a time-varying thrust acceleration profile, its contribution must be modeled in the dynamical equations. The thrust acceleration is included by simulating the readings of an onboard accelerometer that is part of an inertial measurement unit (IMU). A true acceleration profile (in the inertial frame) was first computed by numerically differentiating the spacecraft's velocity. After removing the contribution of conservative forces from the computed acceleration profile, the instrument readings were simulated by corrupting the nominal acceleration data with zero-mean Gaussian noise ($n_{VRW}$) and a time-varying bias ($b_a$) modeled as a random-walk process (Table 2) [63]. As a final step, a rotation was applied to express the computed acceleration measurements in the IMU frame, which was assumed to coincide with the camera frame.
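A possible discrete-time implementation of this accelerometer error model is sketched below; the noise scalings follow the standard conversion of continuous-time densities to discrete samples and should be checked against the adopted IMU specification (Table 2).

```python
import numpy as np

def simulate_accelerometer(a_true, dt, sigma_vrw, sigma_accrw, seed=0):
    """Corrupt a true acceleration history (N, 3) with discrete white noise
    and a random-walk bias, following the IMU error model in the text.

    sigma_vrw:   velocity random walk density [m/s^2/sqrt(Hz)]
    sigma_accrw: acceleration random walk density [m/s^3/sqrt(Hz)]
    """
    rng = np.random.default_rng(seed)
    n = len(a_true)
    # Random-walk bias: increments scale with sqrt(dt)
    steps = sigma_accrw * np.sqrt(dt) * rng.standard_normal((n, 3))
    bias = np.cumsum(steps, axis=0)
    # Discrete white noise: continuous density divided by sqrt(dt)
    noise = sigma_vrw / np.sqrt(dt) * rng.standard_normal((n, 3))
    return a_true + bias + noise
```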
The propagation of the state equations was carried out according to:
$$\dot{x} = \begin{bmatrix} \dot{r} \\ \dot{v} \\ \dot{b}_a \end{bmatrix} = \begin{bmatrix} v \\ a_G + \left(R_I^{SB}\right)^T a_{IMU} \\ 0_{3\times1} \end{bmatrix}$$
where $a_G$ is the acceleration due to gravitational forces (in the inertial frame); $a_{IMU}$ is the thrust acceleration measured by the accelerometer, under the assumption that other non-conservative forces, such as solar radiation pressure, are negligible; and $R_I^{SB}$ is the spacecraft attitude matrix (i.e., the rotation matrix from the inertial frame to the spacecraft body frame, which is assumed to coincide with the IMU and camera frames). The state equations are integrated with a constant time step of $\Delta t = 0.01$ s by using a first-order numerical model, which propagates the estimated state $\hat{x}$ from time step $k$ to time step $k+1$. In this work, the spacecraft's attitude is not adjusted in the navigation filter, which only estimates the translational state (position, velocity, and accelerometer bias). Attitude determination errors are accounted for in the integration of the state equations by perturbing the true spacecraft attitude based on the expected performance of the onboard attitude determination system (ADS). A perturbed attitude matrix $\bar{R}_I^{SB}$ is then computed as:
$$\bar{R}_I^{SB} = R_{err}\, R_I^{SB}$$
where $R_{err} = R_x(\delta\theta_x)\, R_y(\delta\theta_y)\, R_z(\delta\theta_z)$ is a sequence of three elementary rotations, with $\delta\theta_x$, $\delta\theta_y$, and $\delta\theta_z$ representing angular errors about the pitch, roll, and yaw axes, respectively, which are sampled as zero-mean Gaussian variables with standard deviation $\sigma_\theta = 0.05°$.
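The perturbation can be implemented as in the following sketch, where the elementary rotation matrices adopt one common sign convention; the axis ordering and signs are assumptions to be matched to the adopted attitude conventions.

```python
import numpy as np

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def perturb_attitude(R_true, sigma_deg=0.05, seed=None):
    """Left-multiply the true attitude matrix by a small random rotation
    to emulate attitude determination errors."""
    rng = np.random.default_rng(seed)
    dx, dy, dz = np.deg2rad(sigma_deg) * rng.standard_normal(3)
    return rx(dx) @ ry(dy) @ rz(dz) @ R_true
```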
The state covariance matrix P ^ k is propagated according to:
$$\bar{P}_{k+1} = \Phi(t_{k+1}, t_k)\, \hat{P}_k\, \Phi^T(t_{k+1}, t_k) + Q_d$$
where $\Phi(t_{k+1}, t_k)$ is the state transition matrix (STM) and $Q_d$ is the discrete-time process noise covariance matrix. A first-order Taylor series expansion is used to compute the STM from the $(9 \times 9)$ system matrix $F_k = \partial\dot{x}/\partial x\,|_k$, i.e., $\Phi(t_{k+1}, t_k) \approx I_{9\times9} + F_k\,\Delta t$ [64]. A first-order model is also used to compute the matrix $Q_d$ as
$$Q_d \approx G\,(Q_c\,\Delta t)\,G^T$$
where $G = \partial\dot{x}/\partial n_w\,|_k$ is the $(9 \times 6)$ noise mapping matrix [64], and $Q_c$ is the $(6 \times 6)$ continuous-time system noise covariance matrix defined as
$$Q_c = \begin{bmatrix} \sigma_{VRW}^2\, I_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & \sigma_{AccRW}^2\, I_{3\times3} \end{bmatrix}$$
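Putting these first-order models together, a single covariance propagation step can be written as in the sketch below, where F, G, and Q_c are assumed to be available from the filter linearization:

```python
import numpy as np

def propagate_covariance(P, F, G, Qc, dt):
    """Single covariance propagation step with the first-order models
    Phi ~ I + F*dt and Qd ~ G (Qc*dt) G^T discussed in the text."""
    Phi = np.eye(F.shape[0]) + F * dt
    Qd = G @ (Qc * dt) @ G.T
    return Phi @ P @ Phi.T + Qd
```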
At each new image acquisition, an attempt is made to adjust the spacecraft's state based on the crater matches. The statistical filtering process based on the 2D distribution of the optical residuals enables a preliminary rejection of outlier crater pairs (Section 2.3). However, the effectiveness of this process depends on the number of available crater correspondences: the greater the number of crater matches, the more robust the estimation of the distribution's covariance, and hence the more effective the outlier rejection. When crater correspondences are limited and include outliers, the filtering process may fail to reject any crater pairs, potentially allowing some outliers to remain unfiltered. An additional Mahalanobis distance-based test (i.e., the Normalized Innovation Squared (NIS) test) is thus conducted for each crater pair, rejecting matches that do not satisfy the following condition [6,64,65]:
$$(z_i - h_i)^T \left( H_i\, \bar{P}_{k+1}\, H_i^T + R_i \right)^{-1} (z_i - h_i) < \bar{\chi}^2_{\alpha,2}$$
where $H_i$ is the Jacobian matrix of the $i$-th optical data, which collects the partial derivatives of the observable $h_i$ with respect to the state variables; $R_i = \sigma_{px}^2\, I_{2\times2}$ is the measurement noise matrix (note that, because crater mismodeling effects tend to increase at lower altitudes, an inflated uncertainty $\sigma_{px} = 3$ px was considered in the implementation of the navigation filter); and $\bar{\chi}^2_{\alpha,2}$ is a preset threshold based on the probability density function of the chi-square distribution with two DOF (e.g., $\alpha = 0.9$).
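A compact implementation of this gating test is sketched below, with scipy.stats.chi2 used to evaluate the threshold; the function signature and variable names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def nis_inlier(z, h, H, P_pred, sigma_px=3.0, alpha=0.9):
    """Return True if the crater pair passes the NIS gating test with
    two degrees of freedom (Mahalanobis distance in the image plane)."""
    R = sigma_px**2 * np.eye(2)
    S = H @ P_pred @ H.T + R     # innovation covariance
    nu = z - h                   # (2,) optical residual for this pair
    return float(nu @ np.linalg.solve(S, nu)) < chi2.ppf(alpha, df=2)
```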
The filtering process leads to a subset of $L$ crater correspondences that are accounted for in the state estimation filter. The $(2L \times 1)$ optical residual vector ($\epsilon_C$), the $(2L \times 9)$ Jacobian matrix ($H$), and the $(2L \times 2L)$ block-diagonal measurement noise matrix ($R$) are computed by incorporating the contribution of each inlier crater pair. The propagated state vector ($\bar{x}_{k+1}$) and covariance matrix ($\bar{P}_{k+1}$) are then updated according to the Extended Kalman Filter (EKF) equations, as:
$$\hat{x}_{k+1} = \bar{x}_{k+1} + K\,\epsilon_C$$
$$\hat{P}_{k+1} = \left( I_{9\times9} - KH \right) \bar{P}_{k+1} \left( I_{9\times9} - KH \right)^T + K R K^T$$
where $K = \bar{P}_{k+1} H^T \left( H \bar{P}_{k+1} H^T + R \right)^{-1}$ is the Kalman gain matrix.
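The update step follows directly from these equations; the sketch below uses the Joseph-form covariance update shown above, which helps preserve the symmetry and positive semi-definiteness of the covariance matrix.

```python
import numpy as np

def ekf_update(x_pred, P_pred, eps, H, R):
    """EKF measurement update with the Joseph-form covariance equation."""
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_upd = x_pred + K @ eps
    A = np.eye(len(x_pred)) - K @ H
    P_upd = A @ P_pred @ A.T + K @ R @ K.T
    return x_upd, P_upd
```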
An a priori estimate of the spacecraft's state, based on a ground-based orbit determination (OD) solution, is assumed to be available. To simulate errors in the OD process, the initial spacecraft state is perturbed by adding zero-mean Gaussian noise to each component of the position ($\sigma_P = 300$ m) and velocity ($\sigma_V = 3$ m/s) vectors. The initial covariance matrix $P_0$ is defined accordingly.

5. Discussion

In this section, we discuss the performance of the crater-based navigation algorithm through numerical simulations based on the landing profile of the SLIM mission. A Monte Carlo analysis was also conducted to investigate the robustness of the navigation system to the initial state estimation errors.
Figure 20 and Figure 21 show the position and velocity errors and uncertainties along the cross-track (left), along-track (center), and radial (right) directions for one of the Monte Carlo runs. The dots represent image acquisitions by the onboard camera. Red dots indicate trajectory adjustments based on crater matches, which require at least three pairs of matched craters. Blue dots indicate cases where fewer than three crater correspondences were established; in these instances, the information was discarded to avoid large and erroneous position adjustments. The position and velocity estimation errors are fully consistent with the 3 − σ formal uncertainties (orange). As expected, a refined reconstruction of the spacecraft trajectory was retrieved along the cross- and along-track directions, which define the horizontal plane, compared to the radial direction that is aligned with the camera boresight.
A more uniform distribution of matched craters across the image plane would improve constraints on the spacecraft’s position along the radial direction. At lower altitudes, however, crater matches tended to cluster in specific areas of the image, resulting in optical residuals that were primarily minimized by adjusting the spacecraft’s position in the horizontal plane. The numerical simulations indicate that the implemented crater-based navigation pipeline supports the accurate reconstruction of the spacecraft’s state at altitudes as low as 10 km. Below this altitude, no crater matches were detected, resulting in state propagation relying solely on inertial data. The system achieved final position errors consistent with 3 − σ uncertainties, with ∼85 m on the horizontal plane and 250 m along the radial direction. Consistent results were obtained for the spacecraft’s velocity, with final errors aligning with the 3 − σ formal uncertainties, ∼0.8 m/s on the horizontal plane and 1.0 m/s in the radial direction.
The prolonged lack of matched craters towards the end of the simulated descent phase meant that no optical-based corrections were applied by the navigation filter. As a result, the spacecraft's state and covariance were propagated based only on inertial measurements, leading to increasing state uncertainties and errors over time. When new crater correspondences were identified, state adjustments were computed, allowing the estimated trajectory to converge toward the true trajectory. The lack of crater matches occurs when the spacecraft flies over areas that are nearly crater-free or contain cataloged craters that do not align with the network's detections. For instance, Figure 22 shows the predicted craters from the Robbins' catalog for an image in which no craters were detected by the network. Although multiple craters were expected, most were either very shallow or partially buried (e.g., Figure 8b in [60]), preventing robust feature detection and identification for navigation. The CNN in the proposed framework was trained on labeled craters with diameters above a certain threshold, meaning the small craters present in the image fell below the detection capability of the network. This limitation underscores the need for an enhanced database of surface features, including accurate custom crater catalogs, to support reliable navigation throughout the descent and landing phases. A dedicated crater mapping survey using high-resolution orbital images would allow for detailed mapping of small-scale craters [66], supporting continuous crater-based trajectory updates down to altitudes of a few hundred meters. An alternative optical navigation approach could then be adopted until touchdown, based on mapped point-like features (e.g., keypoints [6,64]) or visual odometry techniques [67].
To assess the robustness of the crater-based navigation system against initial state estimation errors, a 100-run Monte Carlo analysis was performed, with position and velocity errors initialized using zero-mean Gaussian noise. Figure 23 and Figure 24 show the evolution of position and velocity estimation errors across the three axes, along with the average 3 − σ formal uncertainties (in red). The results demonstrate that the vision-based navigation system remained robust to initial state estimation errors, achieving, at an altitude of ∼10 km, final RMS dispersions below 65 m in horizontal position, 110 m in radial position, and 0.6 m/s in velocity.
Though the final approach and landing stages are not directly addressed in this work, our results demonstrate that the crater-based navigation system achieved trajectory accuracies well suited to support critical navigation operations throughout the descent phase, providing a reliable foundation for precise landing.

6. Conclusions

A crater-based navigation pipeline was presented in this work that leverages a joint processing of inertial and imaging data through a sequential filter to retrieve an accurate reconstruction of the descent trajectory of a planetary landing platform. Adjustments to the spacecraft’s state are computed by establishing matches between the craters listed in an onboard reference database and those detected in the camera images through a neural network devoted to the crater detection task.
A detailed description of a custom image generation tool used to retrieve high-fidelity images of planetary surfaces was provided, including an efficient approach to convert planetary DTMs into 3D meshes that can be imported into open-source rendering software. A thorough discussion of the training of the crater detector was then provided, including the data-augmentation and labeling techniques used to retrieve an extensive ground truth dataset. Comprehensive analyses were conducted to assess the robustness of the crater detector under varying illumination conditions, in terms of metrics such as the number of detected craters and the bounding box locations, and to investigate the impact of catalog errors on the crater matching pipeline. By analyzing the distribution of the optical residuals, we retrieved a quantitative estimate of the optical data accuracies, which is a key parameter in the framework of a multi-sensor navigation system.
To assess the performances of the proposed crater-based optical navigation (OpNav) system in a representative mission scenario, we conducted numerical simulations by accounting for a lunar lander based on the JAXA SLIM mission. Simulated camera acquisitions from an onboard down-looking camera were retrieved by using the Blender-based image generation tool, and then processed through the implemented navigation pipeline to estimate the spacecraft’s trajectory during the descent operations. A Monte Carlo analysis was conducted to thoroughly assess the performances of the navigation system, revealing its robustness to the initial state estimation errors. As expected, a refined reconstruction of the spacecraft’s orbital state was achieved in the cross- and along-track directions, which define the image plane parallel to the horizontal plane, compared to the radial direction aligned with the camera boresight. At an altitude of approximately 10 km, the 3 − σ uncertainties were around 85 m and 250 m for horizontal and radial positions, respectively, and approximately 0.8 m/s and 1.0 m/s for horizontal and radial velocities, respectively.
The achieved state estimation accuracies indicate that the proposed navigation scheme, based on existing crater catalogs, is well suited to support navigation operations during the descent phase down to altitudes of approximately 10 km. At lower altitudes, the absence of cataloged craters caused the crater matching step to fail. By adopting a more comprehensive crater catalog, which also accounts for sub-kilometer-scale craters, robust navigation performances could also be achieved during the final approach and landing phases.
Future developments will focus on incorporating attitude estimation into the navigation filter by processing gyroscope and star tracker data, as well as exploring alternative network architectures, such as instance segmentation networks, to enable crater detection under off-nadir observation geometries. Quantization, pruning, and knowledge distillation techniques will also be explored to optimize the crater detection networks. Auxiliary frame-to-frame optical measurements will be incorporated into the navigation filter to improve trajectory propagation when no image-to-database crater correspondences are available. To reduce computational costs when detecting a large number of craters, various matching techniques will be explored, including RANSAC-like methods, multi-crater patterns such as crater polygons, and graph-based techniques. Additionally, a simulation environment will be developed using the Robot Operating System (ROS) to prepare for the real-time implementation of the navigation filter on space-qualified hardware.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/aerospace12030195/s1, Section S1: Crater detection by using the k-mean image clustering technique.

Author Contributions

Conceptualization, S.A., A.G., F.V.B., A.M.G., M.E.A., R.L.G., C.R. and G.C.; methodology, S.A., A.G., F.V.B., A.M.G., M.E.A., P.F. and R.T.; software, S.A., F.V.B., A.M.G., M.E.A., P.F. and R.T.; validation, S.A., A.G., F.V.B., A.M.G., M.E.A., R.L.G., C.R. and G.C.; formal analysis, S.A., A.G., F.V.B., A.M.G., M.E.A., P.F. and R.T.; investigation, S.A., A.G., F.V.B., A.M.G. and M.E.A.; resources, A.G., C.R. and G.C.; data curation, S.A., A.G., A.M.G., M.E.A., P.F. and R.T.; writing—original draft preparation, S.A. and A.G.; writing—review and editing, S.A., A.G., F.V.B., A.M.G., M.E.A., P.F., R.T., R.L.G., C.R. and G.C.; visualization, S.A., A.G., F.V.B. and A.M.G.; supervision, A.G., C.R. and G.C.; project administration, A.G.; funding acquisition, A.G. All authors have read and agreed to the published version of the manuscript.

Funding

S.A., A.G., F.V.B., A.M.G., M.E.A., P.F. and R.T. acknowledge funding from the Italian Space Agency (ASI) grant n. 2023-60-HH.0.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article and Supplementary Material. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In this section, we describe the pipeline used to retrieve ground truth (GT) labels that are consistent with the crater locations in the training images. As detailed in Section 2.3, GT labels are defined as rectangular bounding boxes that circumscribe the elliptical rim of the craters located in the image tile. To retrieve 2D ellipses in the map-projected image domain, the following pipeline was adopted for each crater.
As a first step, a 3D ellipse is defined in the Moon's body-fixed frame. The ellipse, whose shape is consistent with the crater parameters reported in the catalog, is orthogonal to the X-axis of the Moon's body-fixed frame, and its center is located at $(\rho, lon, lat) = (R_M, 0, 0)$. A rigid transformation (consisting of two elementary rotations) is then applied to the 3D ellipse's points so that, after the transformation, the center of the transformed ellipse is located at $(\rho, lon, lat) = (R_M, lon, lat)$. Given a 3D point $P$ on the initial ellipse, the corresponding point in the transformed ellipse ($P'$) is retrieved as:
$$P' = R_y(lat)\, R_z(lon)^T\, P$$
As a final step, the transformed 3D points are projected on the image tiles according to the map projection parameters of the mosaic (Appendix C). This operation yields a set of 2D points that are consistent with the rim of the crater in the tile extracted from the mosaic.

Appendix B

In this section, we outline how to compute the geometric invariants of a crater triad. Given a crater triad defined by craters $(i, j, k)$ whose centroids are located at pixels $c_i = (x, y)_i$, $c_j = (x, y)_j$, and $c_k = (x, y)_k$, the relative position vectors are computed as:
$$\rho_{ij} = c_i - c_j, \qquad \rho_{jk} = c_j - c_k, \qquad \rho_{ki} = c_k - c_i$$
The cosines of the internal angles are then retrieved as:
$$\cos(\theta_i) = \hat{\rho}_{ij} \cdot \hat{\rho}_{ki}, \qquad \cos(\theta_j) = \hat{\rho}_{jk} \cdot \hat{\rho}_{ij}, \qquad \cos(\theta_k) = \hat{\rho}_{ki} \cdot \hat{\rho}_{jk}$$
Without loss of generality, it can be assumed that $\cos(\theta_i) > \cos(\theta_j) > \cos(\theta_k)$. Hence, the cosines of the smallest and the largest internal angles are $\cos(\alpha_S) = \cos(\theta_i)$ and $\cos(\alpha_L) = \cos(\theta_k)$, respectively. The normalized crater diameters are then computed as:
$$\bar{D}_i = D_i/\rho_{MAX}, \qquad \bar{D}_j = D_j/\rho_{MAX}, \qquad \bar{D}_k = D_k/\rho_{MAX}$$
where $\rho_{MAX} = \max\{\|\rho_{ij}\|, \|\rho_{jk}\|, \|\rho_{ki}\|\}$. To assess the sense of orientation of the triad, the cross product between the position vector from crater $i$ to crater $j$ and that from crater $j$ to crater $k$ is computed. If the resulting vector points along the out-of-plane direction (i.e., $(\rho_{ij} \times \rho_{jk}) \cdot e_3 > 0$), a counterclockwise orientation is assigned to the triad ($\delta = 1$); otherwise, a clockwise orientation is assigned ($\delta = -1$). If two internal angles are similar, the sense of orientation of the triad cannot be reliably determined. Therefore, if the minimum difference between any pair of internal angles is smaller than a threshold $\Delta\theta$, the triad is discarded (in this work, a conservative value of $\Delta\theta = 5°$ was used). The computed invariant parameters are then collected in the triad descriptor $d_l$ (see Equation (10)).
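For illustration, the descriptor computation can be condensed as in the following sketch, which assumes 2D NumPy arrays for the centroids and omits the internal-angle similarity check described above:

```python
import numpy as np

def triad_descriptor(c_i, c_j, c_k, D_i, D_j, D_k):
    """Geometric invariants of a crater triad: internal-angle cosines,
    normalized diameters, and the orientation flag delta."""
    rho_ij, rho_jk, rho_ki = c_i - c_j, c_j - c_k, c_k - c_i
    unit = lambda v: v / np.linalg.norm(v)
    cos_i = float(unit(rho_ij) @ unit(rho_ki))
    cos_j = float(unit(rho_jk) @ unit(rho_ij))
    cos_k = float(unit(rho_ki) @ unit(rho_jk))
    rho_max = max(map(np.linalg.norm, (rho_ij, rho_jk, rho_ki)))
    # z-component of the 2D cross product sets the sense of orientation
    cross_z = rho_ij[0] * rho_jk[1] - rho_ij[1] * rho_jk[0]
    delta = 1.0 if cross_z > 0 else -1.0
    return (cos_i, cos_j, cos_k,
            D_i / rho_max, D_j / rho_max, D_k / rho_max, delta)
```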

Appendix C

In this section, the relationships to convert map coordinates into Cartesian coordinates (and vice versa) are presented for the equirectangular (Appendix C.1), and the north (Appendix C.2) and south (Appendix C.3) stereographic map projections.

Appendix C.1

The equirectangular projection maps meridians and parallels to straight vertical and horizontal lines, respectively. The projection is centered at latitude $lat_0 = 0°$ and longitude $lon_0$ (e.g., $lon_0 = \pi$). Given the planetocentric coordinates $(lon, lat)$ of a point, the corresponding map coordinates $(x, y)$ are retrieved through a linear relationship as [29]:
$$x = (lon - lon_0)\,\bar{R}, \qquad y = lat\,\bar{R}$$
where $\bar{R}$ is the reference planetary radius associated with the projection, and $lon$ and $lat$ are expressed in radians. Given a point on the map $(x, y)$, the planetocentric coordinates of the corresponding surface point are retrieved by inverting the previous equations as:
$$lon = x/\bar{R} + lon_0, \qquad lat = y/\bar{R}$$
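These linear relationships translate directly into code; in the sketch below, angles are in radians and the reference radius (here set to an assumed mean lunar radius) fixes the map units:

```python
import numpy as np

R_BAR = 1_737_400.0  # reference lunar radius [m], an assumed value

def equirect_forward(lon, lat, lon0=np.pi, R=R_BAR):
    """Planetocentric coordinates (rad) -> equirectangular map coordinates."""
    return (lon - lon0) * R, lat * R

def equirect_inverse(x, y, lon0=np.pi, R=R_BAR):
    """Equirectangular map coordinates -> planetocentric coordinates (rad)."""
    return x / R + lon0, y / R
```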

Appendix C.2

The north polar stereographic projection is centered on the north pole of the celestial body. Meridians extend radially from the center and parallels are concentric circles around the center. Longitude $lon = 0°$ extends straight down from the center and longitude $lon = 90°$ extends to the right.
The spherical form of the north stereographic projection is defined as:
$$r = \sqrt{x^2 + y^2} = 2\bar{R}\,\left|\tan\!\left(\frac{lat - \pi/2}{2}\right)\right|, \qquad \theta = \arctan2(y, x) = lon - \pi/2$$
where $\bar{R}$ is the reference planetary radius associated with the projection, and $lon$ and $lat$ are expressed in radians.
Given the map coordinates $(x, y)$ of a point, the planetocentric coordinates $(lon, lat)$ of the corresponding point are retrieved from Equation (A13) as:
$$lon = \arctan2(y, x) + \frac{\pi}{2}, \qquad lat = \frac{\pi}{2} - 2\arctan\!\left(\frac{\sqrt{x^2+y^2}}{2\bar{R}}\right)$$
Given the planetocentric coordinates $(lon, lat)$ of a point on the planet's surface, the coordinates of the corresponding map-projected point can be retrieved by inverting Equation (A14) (Table A1).
Table A1. Planetocentric-to-map-coordinates conversion for the north polar stereographic projection ($lon \in (-\pi, \pi]$).

- $lat = \pi/2$: $x = 0$, $y = 0$
- $lat < \pi/2$, $lon = 0$ or $lon = \pi$: $x = 0$, $y = 2\bar{R}\,\mathrm{sign}(lon - \pi/2)\tan(lat/2 - \pi/4)$
- $lat < \pi/2$, $lon = \pm\pi/2$: $x = 2\bar{R}\,\mathrm{sign}(lon)\,|\tan(lat/2 - \pi/4)|$, $y = 0$
- $lat < \pi/2$, $lon \neq 0$, $lon \neq \pi$, and $lon \neq \pm\pi/2$: $x = 2\bar{R}\,\mathrm{sign}(lon)\,|\tan(lat/2 - \pi/4)| \,/\, \sqrt{1 + \tan^2(lon - \pi/2)}$, $y = x\tan(lon - \pi/2)$

Appendix C.3

The south polar stereographic projection is centered on the south pole of the celestial body. Meridians extend radially from the center and parallels are concentric circles around the center. Longitude $lon = 0°$ extends straight up from the center, and longitude $lon = 90°$ extends to the right.
The spherical form of the south stereographic projection is defined as:
$$r = \sqrt{x^2 + y^2} = 2\bar{R}\,\tan\!\left(\frac{lat + \pi/2}{2}\right), \qquad \theta = \arctan2(x, y) = lon$$
where $\bar{R}$ is the reference planetary radius associated with the projection, and $lon$ and $lat$ are expressed in radians.
Given the coordinates of a point on the map $(x, y)$, the planetocentric coordinates $(lon, lat)$ of the corresponding point can be straightforwardly computed as:
$$lon = \arctan2(x, y), \qquad lat = 2\arctan\!\left(\frac{\sqrt{x^2+y^2}}{2\bar{R}}\right) - \frac{\pi}{2}$$
Given the planetocentric coordinates $(lon, lat)$ of a point, the associated map coordinates $(x, y)$ can be retrieved by inverting Equation (A16) (see Table A2).
Table A2. Planetocentric-to-map-coordinates conversion for the south polar stereographic projection ($lon \in (-\pi, \pi]$).

- $lat = -\pi/2$: $x = 0$, $y = 0$
- $lat > -\pi/2$, $lon = 0$ or $lon = \pi$: $x = 0$, $y = 2\bar{R}\,\mathrm{sign}(\pi/2 - lon)\tan(\pi/4 + lat/2)$
- $lat > -\pi/2$, $lon = \pm\pi/2$: $x = 2\bar{R}\,\mathrm{sign}(lon)\tan(\pi/4 + lat/2)$, $y = 0$
- $lat > -\pi/2$, $lon \neq 0$ and $lon \neq \pi$: $x = 2\bar{R}\,\mathrm{sign}(lon)\tan(\pi/4 + lat/2) \,/\, \sqrt{1 + 1/\tan^2(lon)}$, $y = x/\tan(lon)$
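As a closing example, the inverse mappings of Equations (A14) and (A16) can be implemented as follows (a sketch assuming map coordinates and the reference radius in meters; the lunar radius value is illustrative):

```python
import numpy as np

R_BAR = 1_737_400.0  # reference lunar radius [m], an assumed value

def north_stereo_inverse(x, y, R=R_BAR):
    """North polar stereographic map coordinates -> planetocentric (rad).
    The returned longitude may exceed pi; wrap to (-pi, pi] if needed."""
    lon = np.arctan2(y, x) + np.pi / 2.0
    lat = np.pi / 2.0 - 2.0 * np.arctan(np.hypot(x, y) / (2.0 * R))
    return lon, lat

def south_stereo_inverse(x, y, R=R_BAR):
    """South polar stereographic map coordinates -> planetocentric (rad)."""
    lon = np.arctan2(x, y)
    lat = 2.0 * np.arctan(np.hypot(x, y) / (2.0 * R)) - np.pi / 2.0
    return lon, lat
```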

References

1. Campbell, J.; Synnott, S.; Bierman, G. Voyager orbit determination at Jupiter. IEEE Trans. Autom. Control 1983, 28, 256–268.
2. Riedel, J.; Bhaskaran, S.; Synnott, S.; Bollman, W.; Null, G. An Autonomous Optical Navigation and Control System for Interplanetary Exploration Missions. In Proceedings of the Second IAA International Conference on Low-Cost Planetary Missions, Laurel, MD, USA, 16–19 April 1996; International Academy of Astronautics: Stockholm, Sweden, 1996.
3. De Santayana, R.P.; Lauer, M. Optical measurements for ROSETTA navigation near the comet. In Proceedings of the 25th International Symposium on Space Flight Dynamics (ISSFD), Munich, Germany, 19–23 October 2015.
4. Uo, M.; Shirakawa, K.; Hasimoto, T.; Kubota, T.; Kawaguchi, J. Hayabusa touching-down to Itokawa-Autonomous guidance and navigation. J. Space Technol. Sci. 2006, 22, 32–41.
5. McCarthy, L.K.; Adam, C.D.; Leonard, J.M.; Antresian, P.G.; Nelson, D.; Sahr, E.; Pelgrift, J.; Lessac-Chenen, E.J.; Geeraert, J.; Lauretta, D. OSIRIS-REx landmark optical navigation performance during orbital and close proximity operations at asteroid Bennu. In Proceedings of the AIAA SciTech 2022 Forum, San Diego, CA, USA, 3–7 January 2022; p. 2520.
6. Johnson, A.E.; Cheng, Y.; Trawny, N.; Montgomery, J.F.; Schroeder, S.; Chang, J.; Clouse, D.; Aaron, S.; Mohan, S. Implementation of a Map Relative Localization System for Planetary Landing. J. Guid. Control Dyn. 2023, 46, 618–637.
7. Ishida, T.; Fukuda, S.; Kariya, K.; Kamata, H.; Takadama, K.; Kojima, H.; Sawai, S.; Sakai, S. Vision-based navigation and obstacle detection flight results in SLIM lunar landing. Acta Astronaut. 2025, 226, 772–781.
8. Owen, W.M.J. Methods of optical navigation. In Proceedings of the AAS Spaceflight Mechanics Conference, AAS 11-215, New Orleans, LA, USA, 13–17 February 2011.
9. Christian, J.A.; Derksen, H.; Watkins, R. Lunar crater identification in digital images. J. Astronaut. Sci. 2021, 68, 1056–1144.
10. Gaskell, R.W.; Barnouin-Jha, O.S.; Scheeres, D.J.; Konopliv, A.S.; Mukai, T.; Abe, S.; Saito, J.; Ishiguro, M.; Kubota, T.; Hashimoto, T.; et al. Characterizing and navigating small bodies with imaging data. Meteorit. Planet. Sci. 2008, 43, 1049–1061.
11. Gaskell, R.W.; Barnouin, O.S.; Daly, M.G.; Palmer, E.E.; Weirich, J.R.; Ernst, C.M.; Daly, R.T.; Lauretta, D.S. Stereophotoclinometry on the OSIRIS-REx Mission: Mathematics and Methods. Planet. Sci. J. 2023, 4, 63.
12. Cheng, Y.; Ansar, A.; Johnson, A. Making an Onboard Reference Map From MRO/CTX Imagery for Mars 2020 Lander Vision System. Earth Space Sci. 2021, 8, e2020EA001560.
13. Takadama, K.; Harada, T.; Kamata, H.; Ozawa, S.; Fukuda, S.; Sawai, S. Evaluating an Integration of Spacecraft Location Estimation with Crater Detection Toward Smart Lander for Investigating Moon. In Proceedings of the International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), Montreal, QC, Canada, 17–19 June 2014.
14. Tanaami, T.; Takeda, Y.; Aoyama, N.; Mizumi, S.; Kamata, H.; Takadama, K.; Ozawa, S.; Fukuda, S.; Sawai, S. Crater detection using Haar-like feature for Moon landing system based on the surface image. Trans. Jpn. Soc. Aeronaut. Space Sci. Aerosp. Technol. Jpn. 2012, 10, 39–44.
15. Downes, L.M.; Steiner, T.J.; How, J.P. Lunar terrain relative navigation using a convolutional neural network for visual crater detection. In Proceedings of the 2020 American Control Conference (ACC), Denver, CO, USA, 1–3 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 4448–4453.
16. Powell, W.; Campola, M.; Sheets, T.; Davidson, A.; Welsh, S. Commercial Off-the-Shelf GPU Qualification for Space Applications; NASA: Washington, DC, USA, 2018.
17. Wang, H.; Myint, S.; Verma, V.; Winetraub, Y.; Yang, J.; Cidon, A. Mars Attacks!: Software Protection Against Space Radiation. In Proceedings of the 22nd ACM Workshop on Hot Topics in Networks, HotNets '23, Cambridge, MA, USA, 28–29 November 2023; ACM: New York, NY, USA, 2023; pp. 245–253.
18. Mu, L.; Xian, L.; Li, L.; Liu, G.; Chen, M.; Zhang, W. YOLO-Crater Model for Small Crater Detection. Remote Sens. 2023, 15, 5040.
19. Genova, A.; Andolfo, S.; Ciambellini, M.; Federici, P.; Teodori, R.; Torrini, T.; Zavoli, A.; Del Vecchio, E.; Maria Gargiulo, A.; Petricca, F.; et al. Sensor data fusion for precise orbit determination of interplanetary spacecraft. In Proceedings of the 2024 International Conference on Space Robotics (iSpaRo), Luxembourg, 24–27 June 2024; pp. 22–27.
20. Martin, I.; Dunstan, M.; Gestido, M.S. Planetary surface image generation for testing future space missions with PANGU. In Proceedings of the 2nd RPI Space Imaging Workshop, Saratoga Springs, NY, USA, 28–30 October 2019; Sensing, Estimation, and Automation Laboratory (SEAL), Department of Mechanical, Aerospace, and Nuclear Engineering, Rensselaer Polytechnic Institute: Troy, NY, USA, 2019.
21. Brochard, R.; Lebreton, J.; Robin, C.; Kanani, K.; Jonniaux, G.; Masson, A.; Despré, N.; Berjaoui, A. Scientific image rendering for space scenes with the SurRender software. arXiv 2018, arXiv:1810.01423.
22. Pajusalu, M.; Iakubivskyi, I.; Schwarzkopf, G.J.; Knuuttila, O.; Väisänen, T.; Bührer, M.; Palos, M.F.; Teras, H.; Le Bonhomme, G.; Praks, J.; et al. SISPO: Space imaging simulator for proximity operations. PLoS ONE 2022, 17, e0263882.
23. Pugliatti, M.; Buonagura, C.; Topputo, F. CORTO: The Celestial Object Rendering TOol at DART Lab. Sensors 2023, 23, 9595.
24. Blender Development Team. Blender 3.6 Reference Manual. Available online: https://docs.blender.org/manual/en/3.6/ (accessed on 2 January 2025).
25. Epic Games. Unreal Engine 4.27 Documentation. Available online: https://dev.epicgames.com/documentation/en-us/unreal-engine/unreal-engine-5-5-documentation (accessed on 2 January 2025).
26. Penttilä, A.; Palos, M.F.; Näsilä, A.; Kohout, T. Blender modeling and simulation testbed for solar system object imaging and camera performance. In Proceedings of the Copernicus Meetings, Vienna, Austria, 23–27 May 2022; No. EPSC2022-788.
27. Smith, K.W.; Anastas, N.; Olguin, A.; Fritz, M.; Sostaric, R.R.; Pedrotty, S.; Tse, T. Building Maps for Terrain Relative Navigation Using Blender: An Open Source Approach. In Proceedings of the AIAA SCITECH 2022 Forum, San Diego, CA, USA, 3–7 January 2022.
28. Villa, J.; Mcmahon, J.; Nesnas, I. Image Rendering and Terrain Generation of Planetary Surfaces Using Source-Available Tools. In Proceedings of the 46th Annual AAS Guidance, Navigation & Control Conference, Breckenridge, CO, USA, 31 January–5 February 2023; pp. 1–24.
29. Snyder, J.P. Map Projections—A Working Manual; US Government Printing Office: Washington, DC, USA, 1987; Volume 1395.
30. Yamazaki, J.; Mitsuhashi, S.; Yamauchi, M.; Tachino, J.; Honda, R.; Shirao, M.; Tanimoto, K.; Tanaka, H.; Harajima, N.; Omori, A.; et al. High-Definition Television System Onboard Lunar Explorer Kaguya (SELENE) and Imaging of the Moon and the Earth. Space Sci. Rev. 2010, 154, 21–56.
31. Wang, Y.; Wu, B.; Xue, H.; Li, X.; Ma, J. An Improved Global Catalog of Lunar Impact Craters (>1 km) with 3D Morphometric Information and Updates on Global Crater Analysis. J. Geophys. Res. Planets 2021, 126, e2020JE006728.
32. Speyerer, E.; Robinson, M.; Denevi, B.; Team, L.S. Lunar Reconnaissance Orbiter Camera global morphological map of the Moon. In Proceedings of the 42nd Lunar and Planetary Science Conference, The Woodlands, TX, USA, 7–11 March 2011; p. 2387.
33. Cheng, Y.; Ansar, A. Landmark Based Position Estimation for Pinpoint Landing on Mars. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 1573–1578.
34. Silvestrini, S.; Piccinin, M.; Zanotti, G.; Brandonisio, A.; Bloise, I.; Feruglio, L.; Lunghi, P.; Lavagna, M.; Varile, M. Optical navigation for Lunar landing based on Convolutional Neural Network crater detector. Aerosp. Sci. Technol. 2022, 123, 107503.
35. Cauligi, A.; Swan, R.M.; Ono, H.; Daftry, S.; Elliott, J.; Matthies, L.; Atha, D. ShadowNav: Crater-Based Localization for Nighttime and Permanently Shadowed Region Lunar Navigation. In Proceedings of the 2023 IEEE Aerospace Conference, Big Sky, MT, USA, 4–11 March 2023; pp. 1–12.
36. Cheng, Y.; Miller, J.K. Autonomous Landmark Based Spacecraft Navigation System. In Proceedings of the 13th AAS/AIAA Space Flight Mechanics Conference, Ponce, Puerto Rico, 9–13 February 2003.
37. Andolfo, S.; Genova, A.; Federici, P.; Teodori, R.; Cottini, V. Precise orbit determination through a joint analysis of optical and radiometric data. In Proceedings of the 2024 IEEE 1st International Conference on Space Robotics (iSpaRo), Luxembourg, 24–27 June 2024.
38. Cheng, Y.; Johnson, A.E.; Matthies, L.H.; Olson, C.F. Optical Landmark Detection for Spacecraft Navigation. In Proceedings of the 13th AAS/AIAA Astrodynamics Specialist Conference, Monterey, CA, USA, 13–17 August 2003.
39. Clerc, S.; Spigai, M.; Simard-Bilodeau, V. A crater detection and identification algorithm for autonomous lunar landing. IFAC Proc. Vol. 2010, 43, 527–532.
40. Bilodeau, V.S.; Neveu, D.; Bruneau-Dbuc, S.; Alger, M.; de LaFontaine, J.; Clerc, S.; Drai, R. Pinpoint Lunar Landing Navigation using Crater Detection and Matching: Design and Laboratory Validation. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Minneapolis, MN, USA, 13–16 August 2012.
41. Maass, B.; Woicke, S.; Oliveira, W.M.; Razgus, B.; Krüger, H. Crater Navigation System for Autonomous Precision Landing on the Moon. J. Guid. Control Dyn. 2020, 43, 1414–1431.
42. Stepinski, T.F.; Ding, W.; Vilalta, R. Detecting impact craters in planetary images using machine learning. In Intelligent Data Analysis for Real-Life Applications: Theory and Practice; IGI Global: Hershey, PA, USA, 2012; pp. 146–159.
43. La Grassa, R.; Cremonese, G.; Gallo, I.; Re, C.; Martellato, E. YOLOLens: A Deep Learning Model Based on Super-Resolution to Enhance the Crater Detection of the Planetary Surfaces. Remote Sens. 2023, 15, 1171.
44. Wang, H.; Jiang, J.; Zhang, G. CraterIDNet: An End-to-End Fully Convolutional Neural Network for Crater Detection and Identification in Remotely Sensed Planetary Images. Remote Sens. 2018, 10, 1067.
45. Downes, L.; Steiner, T.J.; How, J.P. Deep Learning Crater Detection for Lunar Terrain Relative Navigation. In Proceedings of the AIAA Scitech 2020 Forum, Orlando, FL, USA, 6–10 January 2020.
46. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision–ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer: Cham, Switzerland, 2016; pp. 21–37.
47. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
48. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
49. Bodmann, P.R.; Rech, P.; Saveriano, M. Evaluating the Reliability of Vision Transformers for Space Robotics Applications. In Proceedings of the 2024 International Conference on Space Robotics (iSpaRo), Luxembourg, 24–27 June 2024; pp. 278–283.
50. La Grassa, R.; Martellato, E.; Cremonese, G.; Re, C.; Tullo, A.; Bertoli, S. LU5M812TGT: An AI-Powered global database of impact craters ≥0.4 km on the Moon. ISPRS J. Photogramm. Remote Sens. 2025, 220, 75–84.
51. Parracino, G.P.; Ceresoli, M.; Silvestrini, S.; Lavagna, M. Integrated Optical Terrain Relative Navigation for Autonomous Lunar Landing. In Proceedings of the 75th International Astronautical Congress (IAC-24), Milan, Italy, 14–18 October 2024; IAF: Paris, France, 2024.
52. Robbins, S.J. A New Global Database of Lunar Impact Craters >1–2 km: 1. Crater Locations and Sizes, Comparisons With Published Databases, and Global Analysis. J. Geophys. Res. Planets 2019, 124, 871–892.
53. Du, L. Object Detectors in Autonomous Vehicles: Analysis of Deep Learning Techniques. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 217–224.
54. Park, W.; Jung, Y.; Bang, H.; Ahn, J. Robust Crater Triangle Matching Algorithm for Planetary Landing Navigation. J. Guid. Control Dyn. 2019, 42, 402–410.
55. Hanak, C.; Crain, T.; Bishop, R. Crater identification algorithm for the lost in low lunar orbit scenario. In Proceedings of the AAS Guidance & Control Conference, AAS 10-052, Breckenridge, CO, USA, 1–6 February 2008.
56. Neira, J.; Tardós, J.D. Data association in stochastic mapping using the joint compatibility test. IEEE Trans. Robot. Autom. 2001, 17, 890–897.
57. Mortari, D.; Neta, B. K-vector range searching techniques. Adv. Astronaut. Sci. 2000, 105, 449–464.
58. Marco Figuera, R.; Riedel, C.; Rossi, A.P.; Unnithan, V. Depth to Diameter Analysis on Small Simple Craters at the Lunar South Pole—Possible Implications for Ice Harboring. Remote Sens. 2022, 14, 450.
59. QGIS Development Team. QGIS Geographic Information System; QGIS Association: Zurich, Switzerland, 2024.
60. Heyer, T.; Iqbal, W.; Oetting, A.; Hiesinger, H.; van der Bogert, C.; Schmedemann, N. A comparative analysis of global lunar crater catalogs using OpenCraterTool—An open source tool to determine and compare crater size-frequency measurements. Planet. Space Sci. 2023, 231, 105687.
61. Barker, M.; Mazarico, E.; Neumann, G.; Zuber, M.; Haruyama, J.; Smith, D. A new lunar digital elevation model from the Lunar Orbiter Laser Altimeter and SELENE Terrain Camera. Icarus 2016, 273, 346–355.
62. Fukuda, S.; Ishida, T.; Arakawa, T.; Nakagawa, T.; Murao, H. Thermal Cycle Tests of CLCC Solder Joints: Influence of Substrate, Solder, and Pad Patterns. Trans. Jpn. Soc. Aeronaut. Space Sci. Aerosp. Technol. Jpn. 2020, 18, 51–56.
63. Markley, F.L.; Crassidis, J.L. Fundamentals of Spacecraft Attitude Determination and Control; Springer: New York, NY, USA, 2014.
64. Trawny, N.; Mourikis, A.I.; Roumeliotis, S.I.; Johnson, A.E.; Montgomery, J.F. Vision-aided inertial navigation for pin-point landing using observations of mapped landmarks. J. Field Robot. 2007, 24, 357–378.
65. Houts, S.E.; Dektor, S.G.; Rock, S.M. A robust framework for failure detection and recovery for terrain-relative navigation. In Proceedings of the 18th International Symposium on Unmanned Untethered Submersible Technology (UUST 2013), Portsmouth, NH, USA, 11–14 August 2013.
66. Ishii, H.; Usui, K.; Takadama, K.; Kamata, H.; Fukuda, S.; Sawai, S.; Sakai, S.i. Robust self-position estimation algorithm against displacement of crater detection in the SLIM spacecraft. In Proceedings of the 13th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), Beijing, China, 20–22 June 2016.
67. Christian, J.A.; Hong, L.; McKee, P.; Christensen, R.; Crain, T.P. Image-Based Lunar Terrain Relative Navigation Without a Map: Measurements. J. Spacecr. Rocket. 2021, 58, 164–181.
Figure 1. Schematic representation of the feature-based optical navigation pipeline.
Figure 2. Examples of meshes of the same surface area retrieved from DTMs at different resolutions, i.e., 128 px/deg (A), 16 px/deg (B), and 4 px/deg (C). A wireframe view of the mesh is also shown in the last panel, highlighting the vertices and the faces of the mesh.
Figure 3. A real image of the lunar surface taken by Kaguya's HDTV TELE camera (left) and its rendered version retrieved through the Blender-based tool (right).
Figure 4. A real image of the lunar surface taken by Kaguya's HDTV WIDE camera (left) and its rendered version retrieved through the Blender-based tool (right).
Figure 5. Crater centroid projection onto the image plane based on a pinhole camera model.
Figure 6. Graphic representation of the topographic correction Δh that is applied to account for the crater's depth.
Figure 7. Cataloged craters projected onto a synthetic lunar image. On the left, the projection of crater rims does not account for crater depth, resulting in inconsistencies with the observed rims in the image. On the right, when crater depth is included, the projected rims align closely with the imaged craters. Red points indicate the projected crater floors.
Figure 8. A few examples of training images and ground truth labels (i.e., red bounding boxes).
Figure 9. Definition of ground truth (GT) crater labels for rotated tiles: (A) Tile extracted from the global lunar mosaic; (B) elliptical crater rims (green); (C) GT crater labels (orange), defined as bounding boxes tangent to the elliptical crater contours shown in panel (B); (D) rotated tile extracted from the mosaic, centered at the same location as tile (A), and rotated by +45°; (E) GT crater labels (red) computed by determining bounding boxes tangent to the rotated elliptical crater contours; (F) GT crater labels (white) derived from the rotated rectangular labels (orange) of the original tile; (G) comparison of GT crater labels based on rotated crater ellipses (red) and those based on rotated bounding boxes (white).
Figure 10. Example of a crater triad, where the blue rectangles indicate the bounding boxes of craters detected by the network.
Figure 11. Crater match filtering on a synthetic image captured by the onboard camera. Catalog-projected craters (white) are shown in the left panel along with the corresponding craters detected by the network (boxes). The right panel presents the 2D distribution of optical residuals, where inliers and outliers are represented by blue and red crosses, respectively. The 90% confidence ellipse is also shown.
Figure 12. Example images from the testing datasets retrieved by using the Blender-based pipeline. The panels refer to the same area on the lunar surface that is rendered at different solar azimuth and elevation angles (from left to right: Sun-from-west, Sun-from-southwest, Sun-from-southeast).
Figure 13. Optical residuals for datasets with Sun-from-west (left), Sun-from-southwest (center), and Sun-from-southeast (right) illumination conditions, analyzed using a preliminary version of the crater detector. Each dataset has a fixed solar elevation angle of 15°. The 1 − σ confidence ellipses are shown in red.
Figure 14. Optical residuals for the Sun-from-west (left), Sun-from-southwest (center), and Sun-from-southeast (right) datasets obtained using an updated crater detector (all with a fixed 15° solar elevation angle). The red ellipses represent the 1 − σ confidence intervals.
Figure 15. Optical residuals for the Sun-from-west (left), Sun-from-southwest (center), and Sun-from-southeast (right) datasets obtained using an updated detector (all with a fixed 22.5° solar elevation angle). The red ellipses represent the 1 − σ confidence intervals.
Figure 16. Optical residuals for three example datasets computed without accounting for the crater's depth in the definition of the 3D crater coordinates: Sun-from-west (left), Sun-from-southwest (center), and Sun-from-southeast (right), all with a fixed solar elevation angle of 22.5°. The 1 − σ confidence ellipses are shown in red.
Figure 17. QGIS user interface displaying craters from our database (orange) and the Robbins' catalog (light blue) within a selected lunar surface region (white boundaries). The basemap is the global LRO WAC morphologic mosaic [32].
Figure 18. Craters from our auxiliary catalog overlaid on the LRO WAC mosaic (A) and a slope map derived from the SLDEM2013 terrain model (B). A height profile, calculated along the red line in panel (B), passes through one of the craters, and is shown in panel (C), with a dot marking the crater's floor. Although the crater rim contours are fully consistent with the LRO WAC mosaic, an offset is noted between these contours and the SLDEM2013 model.
Figure 19. Discrepancies in crater center coordinates: longitude (red) and latitude (blue) between the LRO WAC mosaic and the SLDEM2013 terrain model.
Figure 20. Reconstructed spacecraft position errors (black) and formal uncertainties in the cross-track (left), along-track (center), and radial (right) directions. The dots represent image acquisitions by the camera, with red dots indicating images that prompted trajectory adjustments.
Figure 21. Reconstructed spacecraft velocity errors (black) and uncertainties in the cross-track (left), along-track (center), and radial (right) directions. The dots represent image acquisitions by the camera, with red dots indicating images that led to adjustments to the spacecraft's trajectory.
Figure 22. Projected craters from the reference database on a synthetic image acquired at an altitude of approximately 25 km. Although multiple craters were expected based on the catalog, none were detected by the network, as most were very shallow and unsuitable for navigation purposes.
Figure 23. Position estimation error over 100 Monte Carlo runs (stacked visualization), with each run representing a different realization of the initial state estimation error. The red curve represents the mean 3 − σ formal uncertainties across all runs.
Figure 24. Velocity estimation error over 100 Monte Carlo runs (stacked visualization), with each run representing a different realization of the initial state estimation error. The red curve represents the mean 3 − σ formal uncertainties across all runs.
Table 1. Initial and final mission epochs, with corresponding spacecraft coordinates and altitude above the lunar surface.

Parameter | Initial Epoch | Final Epoch
Epoch (UTC) | 19 January 2024 14:50:00 | 19 January 2024 15:15:00
Longitude | 25.4° | 25.3°
Latitude | −74.4° | −13.4°
Altitude | 61.0 km | 9.5 km
Table 2. Onboard sensor characteristics.

Instrument | Parameter | Value
Camera | Field of view | 45° × 45°
Camera | Detector size | 512 × 512 px
Camera | Ground sampling distance | ∼16 m (at 10 km altitude)
Camera | Acquisition rate | 0.1 Hz (10 s)
Accelerometer | Velocity random walk (σ_VRW) | 4.9 × 10⁻⁴ m/s²/√Hz
Accelerometer | Acceleration random walk (σ_AccRW) | 4.9 × 10⁻⁵ m/s³/√Hz
Accelerometer | Acquisition rate | 100 Hz (0.01 s)