Article

SpaceLight: A Framework for Enhanced On-Orbit Navigation Imagery

1 Innovation Academy for Microsatellites of CAS, Shanghai 201203, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Shanghai Engineering Center for Microsatellites, Shanghai 201203, China
* Author to whom correspondence should be addressed.
Aerospace 2024, 11(7), 503; https://doi.org/10.3390/aerospace11070503
Submission received: 30 May 2024 / Revised: 20 June 2024 / Accepted: 21 June 2024 / Published: 23 June 2024

Abstract
In the domain of space rendezvous and docking, visual navigation plays a crucial role. However, practical applications frequently encounter issues with poor image quality. Factors such as lighting uncertainties, spacecraft motion, uneven illumination, and excessively dark environments collectively pose significant challenges, rendering recognition and measurement tasks during visual navigation nearly infeasible. The existing image enhancement methods, while visually appealing, compromise the authenticity of the original images. In the specific context of visual navigation, the primary aim of space image enhancement is the faithful, high-quality restoration of the spacecraft’s mechanical structure. To address these issues, our study introduces, for the first time, a dedicated unsupervised framework named SpaceLight for enhancing on-orbit navigation images. The framework integrates a spacecraft semantic parsing network, utilizing it to generate attention maps that pinpoint structural elements of spacecraft in poorly illuminated regions for subsequent enhancement. To more effectively recover fine structural details within these dark areas, we propose a global structure loss and incorporate a pre-enhancement module. The proposed SpaceLight framework adeptly restores structural details in extremely dark areas while distinguishing spacecraft structures from the deep-space background, demonstrating practical viability when applied to visual navigation. This paper is grounded in space on-orbit servicing engineering projects and aims to address practical issues in visual navigation. It pioneers the use of authentic on-orbit navigation images in this line of research, yielding highly promising and unprecedented outcomes. Comprehensive experiments demonstrate SpaceLight’s superiority over state-of-the-art low-light enhancement algorithms, facilitating enhanced on-orbit navigation image quality. This advancement offers robust support for subsequent visual navigation.

1. Introduction

Space rendezvous and docking stands as a critical focal point of on-orbit services, enabling tasks such as spacecraft assembly and maintenance. These activities significantly contribute to prolonging the operational lifespan of space vehicles and enhancing their mission capabilities [1]. Visual navigation plays a pivotal role in on-orbit services, facilitating a range of tasks including attitude measurement [2], component identification [3], and visual multi-sensor fusion [4]. It is also critical in non-cooperative target detection [5], shape reconstruction of spacecraft [6], and formation flying [7,8], among various other domains. However, throughout the data acquisition process, on-orbit navigation images are unavoidably influenced by varying shooting conditions, giving rise to challenges such as extreme darkness, uneven illumination, and blurriness. These conditions pose obstacles for subsequent visual navigation tasks. Therefore, the imperative need for the development of an image enhancement framework for visual navigation is evident.
In the domain of image enhancement, numerous methods leveraging deep convolutional neural networks have been proposed and have demonstrated efficacy. However, these methods predominantly find application in medical image enhancement [9,10], underwater image enhancement [11,12], remote sensing image enhancement [13,14], natural image enhancement [15], and other domains [16,17]. There is currently no dedicated method specifically tailored for enhancing on-orbit navigation images. Despite the visually appealing outcomes produced by the current methods, they compromise fidelity to the original image and struggle with structural recovery, a noteworthy drawback. In the context of visual navigation image enhancement, the primary objective is to fully restore the structural integrity of the spacecraft. Additionally, a majority of deep convolutional neural network algorithms necessitate paired image data. However, on-orbit imaging cannot simultaneously capture dark and bright images of the same visual scene. Consequently, we adopted non-paired learning. Under high-contrast conditions, restoring structure and texture details in extremely dark visual navigation images, especially in the absence of paired datasets and prior information, poses a formidable challenge. Furthermore, the predominantly black deep-space background in visual navigation images exacerbates the difficulty of recovering image details of spacecraft structures within extremely dark areas. Given that spacecraft share similar components, it is necessary to utilize image semantic information as a strong, key prior [18,19,20]. To this end, we employ a spacecraft semantic parsing network to generate prior attention maps. Upon further investigation, we observed that the Laplacian pyramid of gamma-corrected pre-enhanced images [21] contains numerous intricate structural features that are otherwise challenging to restore in dark areas. This inspired us to define the global structural loss of the network.
The engineering project on which this study is based was launched into orbit in August 2022. This project comprises two satellites, named Innovation-16. The slave satellite captured images of the master satellite for visual navigation validation. Moreover, the project originated from the Shenzhou VII accompanying small satellite mission, which was designed to capture close-range and multi-angle images of the Shenzhou VII spacecraft. Importantly, it marked a significant achievement by capturing the inaugural real-life images of China’s orbiting spacecraft. In this study, we present a groundbreaking visual navigation image enhancement framework that incorporates a spacecraft semantic parsing attention network and a structural recovery enhancement network. This framework demonstrates the capability to effectively enhance navigation images, robustly restoring spacecraft structures. It addresses the inherent challenges of space navigation image degradation. Our contributions in this endeavor encompass the following key aspects:
  • Proposing the first-ever framework for enhancing space navigation images. This paper pioneers the integration of on-orbit navigation images into image enhancement research and addresses important issues in visual navigation engineering. It introduces an innovative framework that combines a spacecraft semantic parsing attention network with a structural recovery enhancement network to effectively restore detailed, well-structured navigation images.
  • Pioneering application of a spacecraft semantic parsing network. We introduce a novel approach that uses a spacecraft semantic parsing network to generate attention-guided maps. These maps serve as inputs to the enhancement network, improving the utilization of prior information for more effective enhancement.
  • Defining a global structural loss for navigation image enhancement. The original image is first pre-enhanced using gamma correction, and its Laplacian pyramid is then extracted to construct the global structure loss function. This approach enables the network to effectively recover the structure of extremely dark images.
Extensive experimentation substantiates the advantages of our approach in image quality generation, highlighting the practical value of the framework. Notably, our algorithm has been successfully applied to real visual navigation images. The remainder of this paper is organized as follows: Section 2 provides an exhaustive review of the pertinent literature in the realm of image enhancement. Section 3 first introduces the proposed spacecraft semantic segmentation network, then describes the visual navigation image enhancement network, and concludes with an overview of the entire SpaceLight pipeline. Section 4 is dedicated to the presentation of experiments and the analysis of results. In Section 5, we present a comprehensive study of component ablation and application analysis. Finally, Section 6 summarizes the research presented in this paper and outlines future directions for visual navigation.

2. Related Work

Image enhancement is a key direction in image processing, and significant achievements have been made to date [22]. The current research on image enhancement primarily focuses on applications in underwater, remote sensing, medical imaging, and autonomous driving. The range of research methods includes both traditional approaches and methods based on deep learning [17].
The traditional methods for image enhancement predominantly rely on image and signal processing techniques, including histogram equalization [16], genetic algorithms, wavelet transforms, dark channel prior, and the Retinex model [23]. For example, histogram equalization enhances contrast and brightness by redistributing pixel values but may amplify noise and cause overexposure. Contrast-limited adaptive histogram equalization (CLAHE), introduced by SM Pizer [24], is widely adopted for its ability to distribute pixels beyond a threshold evenly, achieving gradual enhancement. LIME [25], an algorithm estimating illumination maps, stands as a classical method in image enhancement. Wavelet transforms [26] decompose images into different scale frequency bands, capturing fine structural details effectively. The Retinex model [27], inspired by human visual perception, enhances images by separating illumination and reflection components, frequently employed in underwater images to mitigate color distortion. Traditional algorithms, despite their straightforward implementation and process clarity, often necessitate manual parameter tuning, lack robustness, and involve complex processing steps. In uncertain imaging environments characterized by variable lighting and diverse spacecraft surface materials in visual navigation images, these methods frequently fail to deliver satisfactory results.
In recent years, learning-based methods have made significant strides in image enhancement, propelled by increased computational power and expanded datasets [28]. Learning-based techniques have demonstrated success in a range of image enhancement tasks including deblurring, deraining, dehazing, super-resolution, and beautification [29]. Notably, generative adversarial networks (GANs), introduced by Ian Goodfellow in 2014 [30], represent one of the most promising methodologies in this area. GANs effectively address the challenge of data generation through an adversarial training process between a generator and a discriminator. This method enables the production of highly realistic images that conform to predefined conditions [31]. Convolutional neural networks (CNNs) are extensively applied in image enhancement, where a single convolutional layer captures basic image features like textures, edges, and local characteristics. Multiple layers collaborate to learn high-dimensional abstract features, facilitating a hierarchical learning process that significantly enhances the understanding of image content. Learning-based image enhancement methods can be categorized into five types based on their learning strategies: supervised learning, semi-supervised learning, unsupervised learning, zero-shot learning, and reinforcement learning. Since 2017, advancements in computational power and algorithmic innovations have driven substantial growth in these methods. LLNet, proposed by KG Lore, represents a pioneering effort in adapting convolutional neural networks (CNNs) specifically for low-light image enhancement [32], utilizing a deep autoencoder for enhancement and denoising. KinD [33] decomposes images into illumination and reflectance maps based on the Retinex theory. It requires training on paired images captured under different exposure conditions for weak light image enhancement. Alongside LLNet [34], LightenNet [35], and Retinex-Net [36], KinD operates under a supervised learning framework, requiring extensive paired datasets. In contrast, DRBN [37] is a semi-supervised method that initially focuses on fidelity using paired datasets. In its subsequent stage, DRBN prioritizes perceptual quality, employing non-paired datasets to refine low-light image details. While supervised learning requires a significant amount of paired training data, unsupervised learning extracts features from unlabeled data. EnlightenGAN [38], a pioneering method in low-light image enhancement, utilizes non-paired images for feature learning. This approach incorporates self-regularization techniques to guide the training process while preserving the texture and structure of the images. ZeroDCE [39] and RUAS [40] exemplify zero-shot learning methods in image enhancement. ZeroDCE enhances images by training a non-linear curve mapping across multiple exposures, whereas RUAS combines a neural network structure search with Retinex theory-based prior constraints for low-light enhancement. It further employs an optimization process to solve for the overall network architecture and introduces a reference-free cooperative learning strategy. DeepExposure [41] leverages reinforcement learning to adjust local exposure variations in sub-images while designing global learning rewards. This method effectively balances local exposures, achieving a comprehensive exposure equilibrium across the image.

3. The Proposed Method

Unlike the traditional scenario of enhancing low-light images, visual navigation images predominantly consist of exceedingly dark, high-contrast scenes set against a deep black space background. In this challenging imaging context, restoring spacecraft structures within these dark regions poses a significant challenge. To address this, we introduce the SpaceLight model, specifically tailored for this unique application. The model seamlessly integrates two distinct sub-networks: the spacecraft semantic parsing network and the visual navigation image enhancement network. The functionality of each sub-network and their integration are described in detail in the following sections.

3.1. Spacecraft Semantic Parsing Network

We believe that integrating semantic priors can produce more realistic structures. Thus, for the first time, we introduce a semantic segmentation network into the visual navigation enhancement domain to segment on-orbit images. Unlike conventional application scenarios, visual navigation images typically feature a predominantly black background. The use of a segmentation network facilitates rapid identification of target areas and precise enhancement. Given the characteristic of spacecraft having similar components [42,43], we propose a spacecraft semantic parsing network. This network processes images to generate attention maps, thereby enhancing target areas more effectively.
The spacecraft semantic parsing network employs the U-Net architecture, originally introduced by Olaf Ronneberger [44]. Initially designed for cell segmentation in medical images, U-Net has proven successful across various fields such as medical imaging, industrial segmentation, and autonomous driving. Recognized for its efficiency and adaptability, U-Net is particularly beneficial for scenarios requiring model training with limited datasets. Given the challenges of obtaining large on-orbit navigation datasets, we chose U-Net for its lightweight and straightforward structure. The architecture of U-Net, depicted in Figure 1, is U-shaped and includes both a contracting path and an expansive path. The contracting path performs convolution and downsampling to extract features effectively, while the expansive path uses upsampling and skip connections to merge detailed low-level information with high-level semantic information from earlier layers. This integration is crucial for enhancing the resolution restoration of the original image.
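To make the contracting and expansive paths concrete, the following is a minimal PyTorch sketch of a U-Net-style segmentation backbone. It is an illustrative simplification rather than the exact network used in this work: the channel widths, depth, and the four output classes (background plus three spacecraft components) are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the original U-Net
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Simplified U-Net: contracting path, bottleneck, expansive path with skip connections."""
    def __init__(self, in_ch=3, num_classes=4, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 4, base * 8)
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, kernel_size=2, stride=2)
        self.dec3 = conv_block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, num_classes, kernel_size=1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        # Expansive path: upsample and concatenate the skip features from the contracting path
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # raw logits; softmax is applied in the loss

# Example: a 512 x 512 navigation frame produces per-pixel logits for 4 classes
logits = MiniUNet()(torch.randn(1, 3, 512, 512))  # shape: (1, 4, 512, 512)
```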
During training, we use stochastic gradient descent to optimize the model parameters. For the computation of the energy function, a pixel-wise softmax operation is applied to the final feature map, converting the raw activations into class probability distributions. The deviation between these distributions and the ground-truth labels is quantified using the cross-entropy loss, which guides the gradual optimization of the model parameters so that they better align with the training data. The softmax function is defined as follows:
q_m(x) = \frac{\exp\left(a_m(x)\right)}{\sum_{m'=1}^{M} \exp\left(a_{m'}(x)\right)}
a_m(x) represents the activation value of x in feature channel m, M stands for the number of classes, and q_m(x) indicates the confidence level associated with the class predicted for x. The cross-entropy loss is defined as follows:
E = -\sum_{x \in Q} w(x) \log\left(q_{r(x)}(x)\right)
w(x) denotes the weight assigned to a specific position x, r: Q \rightarrow \{1, \dots, M\} represents the true label function that assigns a label to each position, and q_{r(x)}(x) represents the predicted probability of the true class at position x.
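As a concrete illustration, the weighted pixel-wise softmax cross-entropy described above can be written in a few lines of PyTorch. This is a minimal sketch; the tensor shapes and the batch-averaging convention are assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_pixel_cross_entropy(logits, labels, weight_map):
    """
    logits:     (B, M, H, W) raw scores a_m(x) from the final feature map
    labels:     (B, H, W)    integer class index r(x) per pixel
    weight_map: (B, H, W)    per-pixel weight w(x)
    """
    # log q_m(x): pixel-wise log-softmax over the class channel
    log_q = F.log_softmax(logits, dim=1)
    # pick log q_{r(x)}(x), the log-probability of the true class at each pixel
    log_q_true = log_q.gather(1, labels.unsqueeze(1)).squeeze(1)
    # E = -sum_x w(x) log q_{r(x)}(x), averaged over the batch here
    return -(weight_map * log_q_true).sum() / labels.shape[0]

# Example with random data (4 classes, 64 x 64 maps)
logits = torch.randn(2, 4, 64, 64)
labels = torch.randint(0, 4, (2, 64, 64))
weights = torch.ones(2, 64, 64)
print(weighted_pixel_cross_entropy(logits, labels, weights))
```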
In our methodology, it is crucial to not only focus on the semantic parsing outcomes of navigation images but also to emphasize the dark regions within areas adjacent to the identified spacecraft components. We achieve this by assigning higher attention weights to these areas in our computational model, thereby improving the accuracy and detail of our analysis. This approach highlights significant but typically underrepresented features, ensuring enhanced image quality and reducing the risk of overexposure. Given the inherent darkness of the input images, the results produced by the spacecraft semantic parsing network are likely to capture only a portion of the actual semantic information. To address this limitation, we introduce a 'distance factors module' that performs secondary adjustments based on the outputs from the spacecraft semantic parsing. This module first calculates the Euclidean distance from each point to the nearest target point, then applies a nonlinear adjustment, and finally adds this to the inverted grayscale image to generate the attention map, as depicted in Figure 2. These maps utilize cooler hues to denote lower attention weights and warmer hues for higher weights. We tailor the size of the attention map to match each feature map, subsequently multiplying it with all intermediate feature maps and the output image.
Calculation of distance factors:
D(x_i, y_j) = |x_i - x_b| + |y_j - y_b|
Nonlinear mapping adjustment:
D_{i,j} = u \cdot \left( D(x_i, y_j) \right)^{v}
The coordinates of the nearest target point in the image coordinate system are denoted as (x_b, y_b), where 'target points' are the pixels that the semantic segmentation assigns to the spacecraft class rather than the background; the nearest one is found via the Euclidean distance calculation. The coordinates of the point under evaluation are (x_i, y_j). The coefficients u = 1 and v = 1.3 parameterize the mapping function and can be manually adjusted based on the dataset.
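The sketch below shows how such an attention map could be assembled from a segmentation mask: a distance transform to the nearest spacecraft pixel, the nonlinear mapping D_{i,j} = u·D^v, and the inverted grayscale term. The use of SciPy's Euclidean distance transform and the normalization steps are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_attention_map(gray_image, spacecraft_mask, u=1.0, v=1.3):
    """
    gray_image:      (H, W) grayscale input in [0, 255]
    spacecraft_mask: (H, W) boolean mask of pixels labeled as spacecraft (the 'target points')
    Returns an (H, W) attention map with higher weights near spacecraft structures and in dark pixels.
    """
    # Distance from every pixel to the nearest spacecraft pixel
    # (distance_transform_edt measures distance to the nearest zero entry, so pass the inverted mask)
    dist = distance_transform_edt(~spacecraft_mask)

    # Nonlinear mapping D' = u * D^v, then flipped so that regions near the spacecraft get more weight
    dist_factor = u * np.power(dist, v)
    dist_factor = 1.0 - dist_factor / (dist_factor.max() + 1e-8)

    # Inverted grayscale term: darker pixels receive higher weight
    inv_gray = 1.0 - gray_image.astype(np.float32) / 255.0

    attention = dist_factor + inv_gray
    return attention / (attention.max() + 1e-8)  # normalize to [0, 1]

# Usage: resize the map to each feature map and multiply it with the intermediate features and the output.
```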

3.2. On-Orbit Navigation Image Enhancement Network

As depicted in Figure 3, our enhancement network is inspired by the EnlightenGAN architecture [38], featuring a lightweight unidirectional GAN structure with a U-Net acting as the generator. We introduce a global–local–global triple discriminator that strategically emphasizes both global and local information. As previously discussed, preserving structural integrity is critical when enhancing on-orbit navigation images. Prior studies have employed perceptual loss in pixel space to maintain the structure and color of input images. However, for the extremely dark and highly unevenly lit images typical of navigation scenarios, these methods often fail to restore structural details adequately. To overcome this challenge and ensure our model retains the structural integrity of the input while generating a more natural visual effect, we integrate a pre-enhancement module and devise a structure preservation loss. This loss is designed to maintain the structure of the spacecraft in areas of extreme darkness during the training phase.
In order to enhance both the overall image and adaptively improve local regions, we utilize a global–local–global discriminator structure within the navigation image enhancement network. The global discriminator incorporates a relativistic approach, employing the least squares GAN (LSGAN) model instead of the traditional Sigmoid activation. This modification is designed to improve the discriminator’s ability to assess the relative realism between real and fake samples more accurately.
The loss functions for the global discriminator (D) and the generator (G) are as follows:
L_D^{Global} = \mathbb{E}_{z_r \sim P_{\mathrm{real}}}\left[ \left( D_{Ra}(z_r, z_f) - 1 \right)^2 \right] + \mathbb{E}_{z_f \sim P_{\mathrm{fake}}}\left[ D_{Ra}(z_f, z_r)^2 \right]
L_G^{Global} = \mathbb{E}_{z_f \sim P_{\mathrm{fake}}}\left[ \left( D_{Ra}(z_f, z_r) - 1 \right)^2 \right] + \mathbb{E}_{z_r \sim P_{\mathrm{real}}}\left[ D_{Ra}(z_r, z_f)^2 \right]
z_r and z_f represent samples drawn from the real and fake distributions, respectively. Real samples refer to high-quality images of spacecraft obtained in laboratory and virtual environments with clear and evenly distributed lighting, while fake samples are those generated by the generative network during training. The operations D_Ra(z_f, z_r) and D_Ra(z_r, z_f) denote the standard relativistic discriminator functions, which compare a fake sample z_f against a real sample z_r, and vice versa.
D_{Ra}(z_f, z_r) = \sigma\left( C(z_f) - \mathbb{E}[C(z_r)] \right)
D_{Ra}(z_r, z_f) = \sigma\left( C(z_r) - \mathbb{E}[C(z_f)] \right)
C(z_f) and C(z_r) are the preliminary, raw outputs from the discriminator. E[C(z_r)] and E[C(z_f)] indicate mean values calculated over batches or datasets of real and fake samples, respectively. D_Ra(z_f) and D_Ra(z_r) are the final processed outputs, which are used for calculating the probability of a sample being real. The adversarial losses for the local discriminator are expressed as follows:
L_D^{Local} = \mathbb{E}_{z_r \sim P_{\mathrm{real\text{-}patches}}}\left[ \left( D(z_r) - 1 \right)^2 \right] + \mathbb{E}_{z_f \sim P_{\mathrm{fake\text{-}patches}}}\left[ \left( D(z_f) \right)^2 \right]
L_G^{Local} = \mathbb{E}_{z_f \sim P_{\mathrm{fake\text{-}patches}}}\left[ \left( D(z_f) - 1 \right)^2 \right]
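A minimal sketch of these adversarial losses is given below, assuming the common relativistic-average formulation in which the critic score of one sample is compared against the batch mean of the opposite class; the sigmoid is omitted, as is usual when the least-squares objective is applied, and c_real/c_fake stand in for raw critic outputs.

```python
import torch

def d_ra(c_a, c_b):
    # Relativistic comparison: how much more "real" sample a looks than the average of b.
    # Under LSGAN the least-squares penalty is applied directly to this quantity.
    return c_a - c_b.mean()

def global_discriminator_loss(c_real, c_fake):
    # L_D^Global = E[(D_Ra(z_r, z_f) - 1)^2] + E[D_Ra(z_f, z_r)^2]
    return ((d_ra(c_real, c_fake) - 1) ** 2).mean() + (d_ra(c_fake, c_real) ** 2).mean()

def global_generator_loss(c_real, c_fake):
    # L_G^Global = E[(D_Ra(z_f, z_r) - 1)^2] + E[D_Ra(z_r, z_f)^2]
    return ((d_ra(c_fake, c_real) - 1) ** 2).mean() + (d_ra(c_real, c_fake) ** 2).mean()

def local_discriminator_loss(c_real_patches, c_fake_patches):
    # Plain LSGAN loss on randomly cropped patches (non-relativistic)
    return ((c_real_patches - 1) ** 2).mean() + (c_fake_patches ** 2).mean()

def local_generator_loss(c_fake_patches):
    return ((c_fake_patches - 1) ** 2).mean()

# c_real / c_fake are raw critic outputs C(z_r), C(z_f) for real and generated images
c_real, c_fake = torch.randn(8, 1), torch.randn(8, 1)
print(global_discriminator_loss(c_real, c_fake), global_generator_loss(c_real, c_fake))
```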
To ensure consistency, many methodologies employ a pre-trained VGG model to measure the feature-space distance between images. However, our experiments revealed that, in non-paired training scenarios, relying solely on such self-regularization fails to restore certain fine structural details in extremely dark navigation images. Moreover, our observations indicate that although a dehazing algorithm [45] distorts spacecraft structure and color when compensating for overexposure, it simultaneously and significantly enhances the visibility of structural details in the dark regions of the image. As a result, we introduce a global structural feature loss based on the Laplacian pyramid representations of both the pre-enhanced and generated images. Specifically, we leverage a VGG16 model pre-trained on ImageNet to extract features from these Laplacian pyramids. The pre-enhancement module employs a dehazing algorithm that treats the image as if it were hazy by applying pixel inversion, followed by gamma correction to enhance visibility. Although the visual quality of the pre-enhanced images is substandard, this process significantly improves the capture of spacecraft structural details in extremely dark regions. The formula can be expressed as follows:
I_{\mathrm{out}} = 255 - \left( 255 - I_{\mathrm{in}} \right)^{\gamma}
I_in represents the input image, and γ is the gamma correction factor. The Gaussian pyramid is constructed by sequentially downsampling I_out, with each level I_out^i obtained from the preceding level I_out^{i-1}. The Laplacian pyramid at each level is defined as follows:
L_i = I_{\mathrm{out}}^{\,i} - \mathrm{upsample}\left( I_{\mathrm{out}}^{\,i+1} \right)
The upsample function denotes a bilinear interpolation method. The Laplacian pyramid is used to capture multiscale fine details from the pre-enhanced image, as illustrated in Figure 4.
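The pre-enhancement and pyramid construction can be sketched as follows with OpenCV; the normalized gamma correction, the gamma value, the pyramid depth, and the input path are assumptions for illustration.

```python
import cv2
import numpy as np

def pre_enhance(img, gamma=0.6):
    # Invert, apply gamma correction, and invert back, which brightens extremely dark regions.
    # Normalizing by 255 before the power is an assumption; the formula above is written without it.
    inv = 255.0 - img.astype(np.float32)
    inv = 255.0 * np.power(inv / 255.0, gamma)
    return 255.0 - inv

def laplacian_pyramid(img, levels=4):
    # Gaussian pyramid by repeated downsampling, Laplacian levels as differences:
    # L_i = I_out^i - upsample(I_out^{i+1}), with bilinear upsampling as stated in the text.
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.resize(gauss[i + 1], (gauss[i].shape[1], gauss[i].shape[0]),
                        interpolation=cv2.INTER_LINEAR)
        lap.append(gauss[i] - up)
    return lap

img = cv2.imread("navigation_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input path
pyramid = laplacian_pyramid(pre_enhance(img))
```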
As elucidated earlier, the definition of the global structural loss is as follows:
L_S(I) = \alpha L_{LS}(I) + \beta L_C(I)
L_{LS}(I) = \frac{1}{W_{i,j} \times H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left( \psi_{i,j}\left( I_{\mathrm{out}}^{\,k} - \mathrm{upsample}(I_{\mathrm{out}}^{\,k+1}) \right) - \psi_{i,j}\left( G(I) \right) \right)^2
L_C(I) = \frac{1}{W_{i,j} \times H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left( \psi_{i,j}(I) - \psi_{i,j}\left( G(I) \right) \right)^2
α and β serve as weight factors that adjust the significance of the two components of the global structural loss. W_{i,j} × H_{i,j} represents the dimensions of the feature map. I represents the input image, G(I) is the output of the generation network, I_out^k denotes the k-th level output of the Laplacian pyramid, and ψ_{i,j} is the feature extracted using a pre-trained VGG model, where i refers to the i-th max-pooling layer and j is the j-th convolutional layer following the i-th max-pooling layer. The specific layers used are i = 5 and j = 1. The overall loss function is defined as follows:
L_{LOSS} = L_S^{Global} + L_S^{Local} + L_G^{Global} + L_G^{Local}
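A sketch of how the global structural loss could be assembled is shown below: features from an ImageNet-pre-trained VGG16 are compared between a Laplacian level of the pre-enhanced image and the generator output, and between the input and the output. The VGG cut-off index (standing in for ψ_{5,1}) and the unit weights are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG16 feature extractor; the cut-off index is an assumption standing in for psi_{i,j}
vgg_features = vgg16(pretrained=True).features[:26].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def perceptual_mse(a, b):
    # Mean squared difference of VGG feature maps (the 1/(W x H) normalization is the mean)
    return F.mse_loss(vgg_features(a), vgg_features(b))

def global_structural_loss(generated, original, laplacian_level, alpha=1.0, beta=1.0):
    """
    generated:       G(I), output of the enhancement generator, (B, 3, H, W)
    original:        I, the input navigation image, (B, 3, H, W)
    laplacian_level: the k-th Laplacian level of the pre-enhanced image, resized to (H, W)
    """
    l_ls = perceptual_mse(laplacian_level, generated)  # structure term from the pre-enhanced pyramid
    l_c = perceptual_mse(original, generated)          # content term against the input image
    return alpha * l_ls + beta * l_c
```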

4. Experiment

4.1. Datasets

(1) Spacecraft Semantic Segmentation Datasets:
  (a) The dataset includes a total of 3307 images, with a ratio of 9:1 between virtual images and those captured in a laboratory setting.
  (b) The dataset is randomly partitioned into a validation set (10%) and a training set (90%).
(2) Navigation Image Enhancement Datasets:
  (a) Training Datasets: comprising unpaired images that are either extremely dark or extremely bright.
    • Train A consists of 1427 images, combining laboratory-captured images and Unity-generated virtual images in a 3:1 ratio.
    • Train B consists of 1020 images.
  (b) Test Datasets: combining 7741 laboratory-captured images and 73 real on-orbit navigation images across 5 datasets, all with a resolution of 2248 × 2048.
    • Test A (Laboratory-Captured Dataset): 627 images captured under low-light conditions simulating an approach towards satellites.
    • Test B (Laboratory-Captured Dataset): 3231 images capturing a rotating satellite under oblique illumination from a faint light source.
    • Test C (Laboratory-Captured Dataset): 449 images capturing the proximity approach of satellites under changing illumination conditions.
    • Test D (Laboratory-Captured Dataset): 3434 images capturing the proximity approach of rotating satellites under varying illumination conditions.
    • Test E (On-Orbit Navigation Imaging Dataset): 73 images captured by a spacecraft in low Earth orbit, featuring various imaging environments including Earth’s illuminated regions, shadowed regions, and the transitional zone between illuminated and shadowed areas.

4.2. Implementation Details

This paper utilizes PyTorch to construct the network within a Python 3.8 environment, executed on a 12th Gen Intel(R) Core(TM) i7-12700H processor (Intel Corporation, Santa Clara, CA, USA) at 2.30 GHz and an NVIDIA GeForce RTX 3090 GPU. The spacecraft semantic parsing network was trained for 200 epochs using the Adam optimizer. Training commenced with an initial learning rate of 0.0001, which was held constant for the first 120 epochs and then reduced linearly over the remaining 80 epochs. To speed up training, the network input was set to [512, 512], and all convolutional layers were configured with a kernel size of [3, 3]. The training process lasted approximately 10 h. The spacecraft semantic parsing network contains approximately 24.89 million trainable parameters and occupies 2253 MB of memory during training.
The navigation image enhancement network was similarly trained for 100 epochs, with the entire network optimized using the Adam optimizer at a learning rate of 0.0001. The learning rate remained constant for the initial 60 epochs, then linearly decayed to zero over the subsequent 40 epochs. The training process lasted approximately 20 h. To enhance the diversity of the generated images, data augmentation techniques such as random cropping and image pooling were employed. The navigation image enhancement network contains approximately 6.96 million trainable parameters and occupies approximately 3540 MB of memory during training.
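For reference, the constant-then-linearly-decaying learning-rate schedule described above can be reproduced with a LambdaLR scheduler; the sketch uses the enhancement network's settings (100 epochs, constant for 60, decaying to zero over the final 40) and a placeholder module in place of the actual generator.

```python
import torch

# Placeholder module standing in for the enhancement network's generator
generator = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)  # initial learning rate 0.0001

def lr_lambda(epoch, constant_epochs=60, decay_epochs=40):
    # Keep the learning rate constant, then decay linearly to zero
    if epoch < constant_epochs:
        return 1.0
    return max(0.0, 1.0 - (epoch - constant_epochs) / float(decay_epochs))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(100):
    # ... one training epoch over Train A / Train B ...
    scheduler.step()
```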
For comparative evaluation, eight enhancement models were rigorously tested and their outcomes documented for analysis: SpaceLight (ours), LIME [25], KinD [33], EnlightenGAN [38], PSENet [46], SCI [47], Night [48], and HEP [49].

4.3. Evaluation Metrics

The approaches for assessing image quality are broadly classified into subjective and objective methods. Subjective methods rely on human observers to evaluate the fidelity of an image compared to real-world visuals. Objective methods, in contrast, use mathematical algorithms to measure image quality based on image characteristics and statistical information. This paper employs four distinct quantitative metrics specifically chosen for evaluating image enhancement and restoration effects.
The peak signal-to-noise ratio (PSNR) evaluates the pixel-level fidelity between two images; a higher PSNR value indicates better image quality. I_original(i) and I_processed(i) denote the pixel values at corresponding locations in the original and enhanced images, respectively, MAX represents the maximum possible pixel value, and N is the number of pixels. PSNR is calculated using the following equations:
MSE = \frac{1}{N} \sum_{i=1}^{N} \left( I_{\mathrm{original}}(i) - I_{\mathrm{processed}}(i) \right)^2
PSNR = 10 \log_{10} \frac{MAX^2}{MSE}
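The two formulas translate directly into a few lines of NumPy; 8-bit images are assumed, so MAX = 255.

```python
import numpy as np

def psnr(original, processed, max_value=255.0):
    original = original.astype(np.float64)
    processed = processed.astype(np.float64)
    mse = np.mean((original - processed) ** 2)   # MSE over all N pixels
    if mse == 0:
        return float("inf")                      # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```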
The structural similarity (SSIM) metric evaluates the similarity between an original image and its enhanced version by analyzing luminance, contrast, and structural elements [50,51]. SSIM values range from [−1, 1], with values near 1 indicating higher image fidelity. In the SSIM formula, x and y are the images being compared. The local means, standard deviations, and covariance of pixel values in the corresponding image regions are represented by μ_x, μ_y, σ_x, σ_y, and σ_{xy}, respectively. Constants c_1 and c_2 stabilize the calculation in cases of weak denominators. I, C, and S denote the luminance, contrast, and structure comparison functions.
The SSIM index is defined by
SSIM(x, y) = \left[ I(x, y) \right]^{\alpha} \left[ C(x, y) \right]^{\beta} \left[ S(x, y) \right]^{\gamma}
α, β, and γ are weights used to adjust the relative importance of luminance similarity, contrast similarity, and structural similarity, respectively. In certain situations, it may be desirable to give more weight to contrast similarity or structural similarity, which can be achieved by adjusting α, β, and γ. Simplifying with α = β = γ = 1:
SSIM(x, y) = \frac{\left( 2\mu_x \mu_y + c_1 \right)\left( 2\sigma_{xy} + c_2 \right)}{\left( \mu_x^2 + \mu_y^2 + c_1 \right)\left( \sigma_x^2 + \sigma_y^2 + c_2 \right)}
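In practice, SSIM is usually computed with an existing implementation rather than from scratch; a minimal example using scikit-image is shown below, with random arrays standing in for the original and enhanced images.

```python
import numpy as np
from skimage.metrics import structural_similarity

x = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in for the original image
y = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in for the enhanced image
score = structural_similarity(x, y, data_range=255)
print(f"SSIM: {score:.4f}")   # values near 1 indicate high structural fidelity
```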
Multi-scale structural similarity (MS-SSIM) is an enhancement of the original SSIM metric, extending the evaluation of structural similarity across multiple scales. This method combines weighted SSIM measurements at different scales, providing a more comprehensive evaluation of image quality that captures both localized and overall image structures. The MS-SSIM values range from 0 to 1, where higher values indicate better image fidelity. Here, x and y are the two images being compared, M is the total number of scales used in the evaluation, SSIM_i(x, y) is the SSIM at the i-th scale, and w_i is the weighting factor for each scale.
The MS-SSIM for images x and y is calculated as
MS\text{-}SSIM(x, y) = \prod_{i=1}^{M} \left[ SSIM_i(x, y) \right]^{w_i}
MS\text{-}SSIM(x, y) = \left[ l(x, y) \right]^{\alpha_M} \cdot \prod_{i=1}^{M} \left[ c_i(x, y) \right]^{\beta_i} \cdot \left[ S_i(x, y) \right]^{\gamma_i}
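A simplified MS-SSIM sketch following the first (product) form above is given below: SSIM is evaluated on progressively downsampled copies of the images and combined with the standard five-scale weights. This is an approximation, since the reference definition applies the luminance term only at the coarsest scale.

```python
import numpy as np
from skimage.metrics import structural_similarity
from skimage.transform import rescale

def ms_ssim(x, y, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    # x, y: 8-bit grayscale images of the same size; normalized to [0, 1] floats
    xs = x.astype(np.float64) / 255.0
    ys = y.astype(np.float64) / 255.0
    score = 1.0
    for i, w in enumerate(weights):
        ssim_i = structural_similarity(xs, ys, data_range=1.0)
        score *= max(ssim_i, 1e-6) ** w      # clamp to keep fractional powers well defined
        if i < len(weights) - 1:             # halve the resolution for the next scale
            xs = rescale(xs, 0.5, anti_aliasing=True)
            ys = rescale(ys, 0.5, anti_aliasing=True)
    return score
```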
Learned perceptual image patch similarity (LPIPS) is a deep-learning-based metric that evaluates perceptual differences between images by analyzing attributes such as color, texture, and structure. The metric produces values between 0 and 1, where values closer to 0 signify higher image quality. In this metric, I_original and I_processed are the images under comparison. Features extracted from these images at layer i by a pre-trained network are denoted as f_i(I_original) and f_i(I_processed).
The LPIPS index is calculated using
LPIPS(I_{\mathrm{original}}, I_{\mathrm{processed}}) = \sum_{i=1}^{n} w_i \cdot \left\| f_i(I_{\mathrm{original}}) - f_i(I_{\mathrm{processed}}) \right\|_2
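LPIPS is typically computed with the third-party lpips package, which wraps a pre-trained backbone providing the features f_i and expects RGB tensors scaled to [-1, 1]; the AlexNet backbone below is an assumption.

```python
import torch
import lpips

loss_fn = lpips.LPIPS(net="alex")   # pre-trained backbone providing the features f_i

# Images as (1, 3, H, W) tensors scaled to [-1, 1]; random tensors stand in for real data
original = torch.rand(1, 3, 256, 256) * 2 - 1
processed = torch.rand(1, 3, 256, 256) * 2 - 1
distance = loss_fn(original, processed)
print(distance.item())   # lower values indicate closer perceptual similarity
```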

5. Results Analysis

5.1. Quantitative Comparison

This paper presents a quantitative comparison of our algorithm, SpaceLight, against established methods: LIME [25], a classical method; KinD [33], a supervised method; and five unsupervised algorithms: EnlightenGAN [38], Night [48], HEP [49], SCI [47], and PSENet [46]. The implementations of these algorithms were obtained from the code repositories specified by their respective authors. The quantitative evaluations were conducted using five distinct datasets: Datasets 1 through 4 consist of laboratory-captured images, while Dataset 5 features authentic on-orbit navigation images. As demonstrated in Table 1, Table 2, Table 3 and Table 4, the SpaceLight algorithm exhibits significant enhancement effects, showing a clear advantage across various metrics. However, a decline in these metrics is observed in Table 5 during tests with on-orbit navigation images. This decline predominantly occurs in images featuring Earth or cloud backgrounds. The primary objective of our algorithm is to minimize the enhancement of these backgrounds, focusing instead on the robust restoration of satellite structures. As a result, the performance metrics are lower in Experiment 5.
To address this challenge, we annotated the satellite regions within the images and computed the metrics specifically for these areas. The results, presented in Table 6, indicate that our metrics maintain optimal values, highlighting the algorithm’s advanced capabilities. Moreover, our framework, running on an NVIDIA GeForce RTX 3090 (NVIDIA Corporation, Santa Clara, CA, USA), processes 256 × 256 images in just 0.01766227 s. This processing speed significantly exceeds the operational requirements for on-orbit navigation spacecraft, demonstrating exceptional efficiency.
The data presented in Table 1, Table 2, Table 3 and Table 4 cover various experimental scenarios under extreme low-light conditions. Table 1 features satellite navigation images captured under consistent, direct lighting, whereas Table 2 displays images under tilted, low-intensity lighting. Table 3 and Table 4 include images of rotating and approaching satellites under dynamic lighting conditions. The numerical assessments across these tables demonstrate the superior adaptability of our algorithm to varying lighting conditions and viewing angles, underscoring its potential for on-orbit applications.
The analysis across Table 1 through Table 6 reveals significant insights into the performance of our framework. Notably, SpaceLight consistently outperforms other methods, delivering exceptional metrics and producing clear visual results. It effectively distinguishes spacecraft from deep space backgrounds, focusing on and restoring the structure of the spacecraft with high fidelity. Figure 5 illustrates these distinctions, displaying results processed by SpaceLight alongside those from other state-of-the-art methods. The red bounding boxes highlight areas with significant discrepancies within the images, aiding in visual comparison. The top three images in the examples are captured from laboratory settings, while the last two depict real on-orbit navigation images captured in both shadowed and illuminated areas.
As shown in Figure 6, SpaceLight performs exceptionally well across all four metrics. In terms of PSNR, SpaceLight achieves markedly better pixel-level agreement between the enhanced and reference images, indicating its superior performance in reducing noise and distortion. For the SSIM metric, SpaceLight demonstrates high levels of brightness, contrast, and structural similarity, suggesting its strong ability to maintain image structural integrity and prevent the loss of important details. The MS-SSIM results further confirm SpaceLight’s capacity to preserve image structural similarity across multiple scales, which is crucial for handling complex images and scenes. On the LPIPS metric, SpaceLight maintains high visual consistency, making the enhanced images appear more natural and realistic. To ensure consistency in the observation of all four metrics, the LPIPS values have been inverted. Compared to other algorithms, SpaceLight occupies the largest area in the radar chart, indicating its overall superior performance across all metrics. In summary, SpaceLight effectively balances the various metrics during the enhancement process, improving both objective image quality (such as PSNR and SSIM) and subjective visual quality (such as LPIPS). This comparison vividly demonstrates the exceptional performance of the SpaceLight algorithm across a range of scenarios, both in laboratory and on-orbit settings.

5.2. Qualitative Comparison

As depicted in Figure 5, the KinD, Night, and HEP algorithms significantly brighten images but fail to preserve image originality, resulting in oversaturation and texture loss. This often leads to excessive noise and color distortion, obscuring or completely losing the structure of the spacecraft. Conversely, EnlightenGAN tends to overemphasize exceptionally dark backgrounds, causing unnecessary alterations and some color distortion in spacecraft structure restoration. LIME aggressively enhances images, amplifying irrelevant background elements, reducing spacecraft visibility, and introducing significant noise. PSENet and SCI maintain inherent image characteristics better, resulting in relatively satisfactory enhancement effects. However, they still struggle to effectively augment the spacecraft structure in darker areas. In contrast, our proposed method, SpaceLight, achieves optimal quality metrics by enhancing spacecraft structures clearly and accurately. SpaceLight avoids pseudoshadows caused by overexposure, minimizes interference from background elements, and consistently outperforms other methods. The favorable outcomes achieved with SpaceLight confirm its reliability as a framework for on-orbit navigation processing. Furthermore, these results demonstrate SpaceLight’s capability to effectively support subsequent vision-related tasks.

5.3. Ablation Study

In this section, we conduct a series of experiments to evaluate the stability of SpaceLight and assess the influence of its key components. Specifically, we examine the impact of the spacecraft semantic parsing network (SSP-Network) and the structural preservation loss (SP Loss). Figure 7 illustrates the individual enhancement effect of each module when operated independently.
It is evident that employing either the SSP-Network or the SP Loss alone provides a certain level of effectiveness in enhancing structures within darker regions. However, the combined integration of the SP Loss and SSP-Network significantly increases the overall enhancement intensity. Consequently, we propose using a semantic parsing network to produce self-attention maps and applying a structure-preserving loss in the generative network. This approach aims to generate more competitively enhanced images for on-orbit navigation.

5.4. Applications

In this section, we evaluate the efficacy of the SpaceLight algorithm for semantic segmentation tasks on navigation images, particularly those under extremely dark conditions and with extreme lighting variations. We utilized navigation images enhanced by various algorithms, as illustrated in Figure 5, as inputs for a trained semantic segmentation network. This allowed us to assess the accuracy of the segmentation maps produced. The first three images used in the assessment were captured under diverse lighting conditions in a laboratory setting, while the last two images were obtained from satellites in low Earth orbit.
Figure 8 demonstrates the effectiveness of SpaceLight in restoring structural details of spacecraft in extremely dark, deep-space environments. This capability significantly enhances the accuracy of semantic segmentation in such navigation contexts, preserving the integrity of spacecraft components substantially. In the segmentation maps, red indicates solar panels, green denotes the satellite body, and yellow marks the antenna. By comparing the original images with those enhanced by SpaceLight, a marked improvement in the segmentation of spacecraft components is observed. This enhancement is primarily due to SpaceLight’s ability to minimize background interference, facilitating the on-orbit identification of spacecraft elements. Notably, the region highlighted by the blue box showcases the exceptional effectiveness of our algorithm.

5.5. Limitations

While SpaceLight’s unsupervised image enhancement methodology is effective, it has inherent limitations. The incorporation of the structural preservation loss boosts structural recovery performance but may affect color fidelity. Furthermore, a key component of our framework, the semantic parsing network, is specifically designed for entities such as satellites and spacecraft. Consequently, applying it to other types of objects would necessitate retraining the network.

6. Conclusions

This article has introduced a novel navigation image enhancement framework, SpaceLight, which comprises two primary components: the spacecraft semantic parsing network and the on-orbit navigation image enhancement network. The principal objective of this framework is to enhance on-orbit navigation images by specifically restoring spacecraft structures within environments characterized by extreme darkness, which is crucial for subsequent tasks such as visual navigation and on-orbit servicing. The key findings from this study are summarized as follows:
In contrast to ground-based low-light images, on-orbit navigation images often exhibit extreme darkness or high-contrast lighting conditions, making it challenging to discern structures within dark areas against the predominantly dark deep space background. Moreover, the acquisition of paired on-orbit navigation image datasets, unlike those for ground images, is impractical. To address these challenges, we developed a navigation image enhancement framework based on spacecraft semantic priors. The semantic parsing network, which generates attention maps, employs a lightweight U-Net structure. The enhancement network utilizes a combination of GANs and structural preservation loss, along with attention maps, to restore the intricate structures of spacecraft. Notably, this study includes experiments conducted for the first time with navigation images captured by on-orbit spacecraft, further validating the natural visual effects and structural preservation capabilities of our method. Comprehensive experimental results demonstrate the exceptional performance of SpaceLight in enhancing navigation images. Additionally, the experiments detailed in the application section of the paper further demonstrate the effectiveness of SpaceLight in visual tasks, underscoring its practical utility in navigation scenarios.
Looking forward, we plan to focus on extracting prior information from on-orbit navigation images to further enhance visual navigation systems. Our aim is to integrate image enhancement techniques with advanced tasks such as on-orbit target tracking, attitude estimation [2], component identification [3], and visual multi-sensor fusion [4]. This integration will provide robust visual support for future space on-orbit servicing missions. The continued research and innovation in this field are expected to significantly improve the robustness and accuracy of visual navigation systems, thereby enhancing the efficacy and safety of space on-orbit service missions.

Author Contributions

Conceptualization, Z.Z. and L.C.; methodology, Z.Z.; software, Z.Z. and J.F.; validation, Z.Z., J.F. and Z.Z.; formal analysis, J.F.; investigation, L.D.; resources, L.C.; data curation, L.C.; writing—original draft preparation, Z.Z.; writing—review and editing, J.F.; visualization, C.S.; supervision, D.L.; project administration, C.S.; funding acquisition, L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Major Project grant number Y3ZDYFWN01.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy restrictions.

Acknowledgments

We would like to thank the laboratory at Shanghai Tech University for their assistance in data collection.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Flores-Abad, A.; Ma, O.; Pham, K.; Ulrich, S. A review of space robotics technologies for on-orbit servicing. Prog. Aerosp. Sci. 2014, 68, 1–26. [Google Scholar] [CrossRef]
  2. Peng, J.; Xu, W.; Yan, L.; Pan, E.; Liang, B.; Wu, A.G. A pose measurement method of a space noncooperative target based on maximum outer contour recognition. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 512–526. [Google Scholar] [CrossRef]
  3. Shao, Y.; Wu, A.; Li, S.; Shu, L.; Wan, X.; Shao, Y.; Huo, J. Satellite Component Semantic Segmentation: Video Dataset and Real-time Pyramid Attention and Decoupled Attention Network. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 7315–7333. [Google Scholar] [CrossRef]
  4. Palmerini, G.B. Combining thermal and visual imaging in spacecraft proximity operations. In Proceedings of the 2014 13th International Conference on Control Automation Robotics and Vision (ICARCV), Singapore, 10–12 December 2014; pp. 383–388. [Google Scholar] [CrossRef]
  5. Zhang, Z.; Deng, L.; Feng, J.; Chang, L.; Li, D.; Qin, Y. A survey of precision formation relative state measurement technology for distributed spacecraft. Aerospace 2022, 9, 362. [Google Scholar] [CrossRef]
  6. Volpe, R.; Sabatini, M.; Palmerini, G.B. Reconstruction of the Shape of a Tumbling Target from a Chaser in Close Orbit. In Proceedings of the 2020 IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2020; pp. 1–11. [Google Scholar] [CrossRef]
  7. Miao, L.; Zhou, D.; Li, X. Spatial non-cooperative object detection based on deep learning. In Proceedings of the 2021 China Automation Congress (CAC), Beijing, China, 22–24 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 116–121. [Google Scholar]
  8. Peng, J.; Xu, W.; Yuan, H. An efficient pose measurement method of a space non-cooperative target based on stereo vision. IEEE Access 2017, 5, 22344–22362. [Google Scholar] [CrossRef]
  9. Islam, S.M.; Mondal, H.S. Image enhancement based medical image analysis. In Proceedings of the 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kanpur, India, 6–8 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5. [Google Scholar]
  10. You, Q.; Wan, C.; Sun, J.; Shen, J.; Ye, H.; Yu, Q. Fundus image enhancement method based on CycleGAN. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 4500–4503. [Google Scholar]
  11. Krishnan, H.; Lakshmi, A.A.; Anamika, L.; Athira, C.; Alaikha, P.; Manikandan, V. A novel underwater image enhancement technique using ResNet. In Proceedings of the 2020 IEEE 4th Conference on Information & Communication Technology (CICT), Chennai, India, 3–5 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–5. [Google Scholar]
  12. Li, H.; Zhuang, P.; Wei, W.; Li, J. Underwater image enhancement based on dehazing and color correction. In Proceedings of the 2019 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), Xiamen, China, 16–18 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1365–1370. [Google Scholar]
  13. Kaplan, N.; Erer, I.; Gulmus, N. Remote sensing image enhancement via bilateral filtering. In Proceedings of the 2017 8th International Conference on Recent Advances in Space Technologies (RAST), Istanbul, Turkey, 19–22 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 139–142. [Google Scholar]
  14. Liu, P.; Wang, H.; Li, J.; Dong, W.; Zhang, J. Study of Enhanced Multi-spectral Remote-sensing-satellite Image Technology Based on Improved Retinex-Net. In Proceedings of the 2022 2nd International Conference on Algorithms, High Performance Computing and Artificial Intelligence (AHPCAI), Guangzhou, China, 21–23 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 484–489. [Google Scholar]
  15. Guo, X.; Hu, Q. Low-light image enhancement via breaking down the darkness. Int. J. Comput. Vis. 2023, 131, 48–66. [Google Scholar] [CrossRef]
  16. Qi, Y.; Yang, Z.; Sun, W.; Lou, M.; Lian, J.; Zhao, W.; Deng, X.; Ma, Y. A comprehensive overview of image enhancement techniques. Arch. Comput. Methods Eng. 2021, 29, 583–607. [Google Scholar] [CrossRef]
  17. Kim, W. Low-light image enhancement: A comparative review and prospects. IEEE Access 2022, 10, 84535–84557. [Google Scholar] [CrossRef]
  18. Bau, D.; Strobelt, H.; Peebles, W.; Wulff, J.; Zhou, B.; Zhu, J.Y.; Torralba, A. Semantic photo manipulation with a generative image prior. arXiv 2020, arXiv:2005.07727. [Google Scholar] [CrossRef]
  19. Shen, Z.; Lai, W.S.; Xu, T.; Kautz, J.; Yang, M.H. Deep semantic face deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8260–8269. [Google Scholar]
  20. Chan, K.C.; Wang, X.; Xu, X.; Gu, J.; Loy, C.C. Glean: Generative latent bank for large-factor image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 14245–14254. [Google Scholar]
  21. Cap, Q.H.; Fukuda, A.; Iyatomi, H. A Practical Framework for Unsupervised Structure Preservation Medical Image Enhancement. arXiv 2023, arXiv:2304.01864. [Google Scholar]
  22. Li, C.; Guo, C.; Han, L.; Jiang, J.; Cheng, M.M.; Gu, J.; Loy, C.C. Low-light image and video enhancement using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 9396–9416. [Google Scholar] [CrossRef]
  23. Parihar, A.S.; Singh, K. A study on Retinex based method for image enhancement. In Proceedings of the 2018 2nd International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 19–20 January 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 619–624. [Google Scholar]
  24. Pizer, S.M.; Johnston, R.E.; Ericksen, J.P.; Yankaskas, B.C.; Muller, K.E. Contrast-limited adaptive histogram equalization: Speed and effectiveness. In Proceedings of the First Conference on Visualization in Biomedical Computing, Atlanta, GA, USA, 22–25 May 1990; Volume 337, p. 2. [Google Scholar]
  25. Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef]
  26. Chi, J.N.; Zhang, C.; Qin, Y.J.; Wang, C.J. A novel image enhancement approach based on wavelet transformation. In Proceedings of the 2009 Chinese Control and Decision Conference, Guilin, China, 17–19 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 4387–4392. [Google Scholar]
  27. Wang, Y.; Chang, R.; He, B.; Liu, X.; Guo, J.H.; Lendasse, A. Underwater image enhancement strategy with virtual retina model and image quality assessment. In Proceedings of the OCEANS 2016 MTS/IEEE Monterey, Monterey, CA, USA, 19–23 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–5. [Google Scholar]
  28. Jiao, L.; Zhao, J. A survey on the new generation of deep learning in image processing. IEEE Access 2019, 7, 172231–172263. [Google Scholar] [CrossRef]
  29. Ni, Z.; Yang, W.; Wang, S.; Ma, L.; Kwong, S. Towards Unsupervised Deep Image Enhancement With Generative Adversarial Network. IEEE Trans. Image Process. 2020, 29, 9140–9151. [Google Scholar] [CrossRef] [PubMed]
  30. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K., Eds.; Curran Associates, Inc.: New York, NY, USA, 2014; Volume 27. [Google Scholar]
  31. Liu, Z.; Yuan, T.; Lin, Y.; Zeng, B. Recent Advances of Generative Adversarial Networks. In Proceedings of the 2022 IEEE 2nd International Conference on Electronic Technology, Communication and Information (ICETCI), Changchun, China, 27–29 May 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 558–562. [Google Scholar]
  32. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  33. Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640. [Google Scholar]
  34. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
  35. Li, C.; Guo, J.; Porikli, F.; Pang, Y. LightenNet: A convolutional neural network for weakly illuminated image enhancement. Pattern Recognit. Lett. 2018, 104, 15–22. [Google Scholar] [CrossRef]
  36. Yang, W.; Wang, W.; Huang, H.; Wang, S.; Liu, J. Sparse gradient regularized deep retinex network for robust low-light image enhancement. IEEE Trans. Image Process. 2021, 30, 2072–2086. [Google Scholar] [CrossRef]
  37. Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3063–3072. [Google Scholar]
  38. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef]
  39. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 1780–1789. [Google Scholar]
  40. Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 10561–10570. [Google Scholar]
  41. Yu, R.; Liu, W.; Zhang, Y.; Qu, Z.; Zhao, D.; Zhang, B. Deepexposure: Learning to expose photos with asynchronously reinforced adversarial learning. Adv. Neural Inf. Process. Syst. 2018, 31, 1673–1682. [Google Scholar]
  42. Armstrong, W.; Draktontaidis, S.; Lui, N. Semantic Image Segmentation of Imagery of Unmanned Spacecraft Using Synthetic Data; Technical Report; Montana State University: Bozeman, MT, USA, 2021. [Google Scholar]
  43. Dung, H.A.; Chen, B.; Chin, T.J. A spacecraft dataset for detection, segmentation and parts recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 2012–2019. [Google Scholar]
  44. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
45. Dong, X.; Pang, Y.A.; Wen, J.G. Fast efficient algorithm for enhancement of low lighting video. In Proceedings of the ACM SIGGRAPH 2010 Posters (SIGGRAPH ’10), Los Angeles, CA, USA, 25–29 July 2010. [Google Scholar] [CrossRef]
  46. Nguyen, H.; Tran, D.; Nguyen, K.; Nguyen, R. PSENet: Progressive Self-Enhancement Network for Unsupervised Extreme-Light Image Enhancement. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 1756–1765. [Google Scholar]
  47. Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5637–5646. [Google Scholar]
  48. Jin, Y.; Yang, W.; Tan, R.T. Unsupervised night image enhancement: When layer decomposition meets light-effects suppression. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 404–421. [Google Scholar]
  49. Zhang, F.; Shao, Y.; Sun, Y.; Zhu, K.; Gao, C.; Sang, N. Unsupervised low-light image enhancement via histogram equalization prior. arXiv 2021, arXiv:2112.01766. [Google Scholar]
  50. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  51. Wang, Z.; Bovik, A. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
Figure 1. Spacecraft Semantic Parsing Network Structure.
Figure 2. Attention Map.
Figure 3. On-Orbit Navigation Image Enhancement Network.
Figure 4. Pre-Enhancement Laplacian Pyramid.
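Figure 4 depicts the Laplacian pyramid used by the pre-enhancement module. As a generic illustration of this kind of multi-scale decomposition, and not the paper's implementation, the sketch below builds and inverts a Laplacian pyramid with OpenCV; the function names, the four-level setting, and the assumption that image dimensions are divisible by 2^levels are ours.

```python
# Generic Laplacian-pyramid decomposition and reconstruction (illustrative sketch).
# Assumes an 8-bit image whose height and width are divisible by 2**levels.
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels=4):
    """Decompose an image into `levels` band-pass layers plus a low-pass residual."""
    current = img.astype(np.float32)
    pyramid = []
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)      # high-frequency detail at this scale
        current = down
    pyramid.append(current)               # low-frequency residual
    return pyramid

def reconstruct_from_pyramid(pyramid):
    """Invert the decomposition by upsampling and adding the detail layers back."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        current = cv2.pyrUp(current, dstsize=(detail.shape[1], detail.shape[0]))
        current = current + detail
    return np.clip(current, 0, 255).astype(np.uint8)

# Usage with a hypothetical file name:
# image = cv2.imread("frame.png")
# layers = build_laplacian_pyramid(image)
# restored = reconstruct_from_pyramid(layers)
```

A typical use of such a decomposition in pre-enhancement is to adjust only the low-frequency residual (for example, brightening it) before reconstruction, so that the detail layers carrying fine structure are preserved.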
Figure 5. Visual Outcomes of Different Image Enhancement Algorithms.
Figure 6. Comprehensive Performance Comparison Across All Tests.
Figure 7. Impact of Component Variations on Image Generation.
Figure 8. Comparative Semantic Segmentation Results of Various Image Enhancement Algorithms.
Table 1. Performance Metrics Comparison for TestA.
Metric | SpaceLight | LIME | KinD | EnlightGAN | Night | HEP | SCI | PSENet
PSNR | 25.8375 ² | 25.7300 | 5.3682 | 10.9198 | 13.3153 | 12.7971 | 26.6539 ¹ | 22.4496
SSIM | 0.4842 ¹ | 0.2830 | 0.0147 | 0.0434 | 0.0377 | 0.0529 | 0.4267 ² | 0.1971
MS-SSIM | 0.7165 ¹ | 0.2808 | 0.0237 | 0.0469 | 0.0545 | 0.05988 | 0.5697 ² | 0.2964
LPIPS | 0.0393 ¹ | 0.2217 | 0.4183 | 0.1644 | 0.1721 | 0.0651 ² | 0.0967 | 0.1822
¹ the superior outcomes; ² the next-best results.
Table 2. Performance Metrics Comparison for TestB.
Metric | SpaceLight | LIME | KinD | EnlightGAN | Night | HEP | SCI | PSENet
PSNR | 26.4139 ¹ | 24.5643 | 5.3266 | 10.9286 | 12.9719 | 12.8326 | 26.2267 ² | 21.9694
SSIM | 0.5575 ¹ | 0.2156 | 0.0122 | 0.03950 | 0.0171 | 0.0540 | 0.3550 ² | 0.1607
MS-SSIM | 0.7174 ¹ | 0.2686 | 0.0249 | 0.05020 | 0.0579 | 0.0665 | 0.5562 ² | 0.2920
LPIPS | 0.0435 ¹ | 0.2857 | 0.4471 | 0.2097 | 0.2319 | 0.0641 ² | 0.1544 | 0.2382
¹ the superior outcomes; ² the next-best results.
Table 3. Performance Metrics Comparison for TestC.
Metric | SpaceLight | LIME | KinD | EnlightGAN | Night | HEP | SCI | PSENet
PSNR | 26.8031 ¹ | 23.3959 | 5.6495 | 10.9257 | 12.5737 | 12.8928 | 24.3550 ² | 20.1984
SSIM | 0.5582 ¹ | 0.2041 | 0.0153 | 0.0395 | 0.0292 | 0.0384 | 0.3039 ² | 0.1318
MS-SSIM | 0.7295 ¹ | 0.2497 | 0.0285 | 0.0545 | 0.0579 | 0.0825 | 0.5693 ² | 0.2978
LPIPS | 0.0583 ¹ | 0.2529 | 0.4057 | 0.1924 | 0.1972 | 0.0669 ² | 0.1445 | 0.2099
¹ the superior outcomes; ² the next-best results.
Table 4. Performance Metrics Comparison for TestD.
Metric | SpaceLight | LIME | KinD | EnlightGAN | Night | HEP | SCI | PSENet
PSNR | 28.5612 ¹ | 24.5100 | 4.5959 | 10.5278 | 12.0562 | 12.6895 | 26.7876 ² | 22.5661
SSIM | 0.5074 ¹ | 0.1570 | 0.0063 | 0.0229 | 0.0092 | 0.0425 | 0.2937 ² | 0.1220
MS-SSIM | 0.7066 ¹ | 0.2700 | 0.0200 | 0.0430 | 0.0489 | 0.0611 | 0.5562 ² | 0.2874
LPIPS | 0.0370 ¹ | 0.2116 | 0.4173 | 0.1751 | 0.2189 | 0.0642 ² | 0.1168 | 0.1806
¹ the superior outcomes; ² the next-best results.
Table 5. Performance Metrics Comparison for TestE.
Metric | SpaceLight | LIME | KinD | EnlightGAN | Night | HEP | SCI | PSENet
PSNR | 25.9703 ¹ | 23.9686 ² | 7.9310 | 14.3434 | 11.8444 | 14.1392 | 23.9627 | 22.7644
SSIM | 0.4978 | 0.4737 | 0.0103 | 0.2174 | 0.1724 | 0.2658 | 0.6510 ¹ | 0.5273 ²
MS-SSIM | 0.6961 | 0.8221 | 0.1710 | 0.1778 | 0.1211 | 0.2383 | 0.7881 ² | 0.8268 ¹
LPIPS | 0.2234 ² | 0.3180 | 0.6690 | 0.2761 | 0.3107 | 0.1318 ¹ | 0.2568 | 0.2581
¹ the superior outcomes; ² the next-best results.
Table 6. Performance Metrics Comparison for TestF.
Metric | SpaceLight | LIME | KinD | EnlightGAN | Night | HEP | SCI | PSENet
PSNR | 36.6030 ¹ | 34.2695 ² | 22.0033 | 29.5256 | 26.8166 | 25.5238 | 33.4020 | 32.9006
SSIM | 0.9080 ¹ | 0.8629 | 0.5740 | 0.5478 | 0.5138 | 0.4605 | 0.8869 ² | 0.8781
MS-SSIM | 0.9899 ¹ | 0.9867 | 0.9646 | 0.9665 | 0.9669 | 0.9640 | 0.9891 ² | 0.9887
LPIPS | 1.23 × 10⁻⁶ ¹ | 1.71 × 10⁻⁶ ² | 1.22 × 10⁻⁵ | 1.84 × 10⁻⁶ | 3.24 × 10⁻⁶ | 3.47 × 10⁻⁶ | 2.19 × 10⁻⁶ | 2.31 × 10⁻⁶
¹ the superior outcomes; ² the next-best results.
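For readers who wish to reproduce full-reference scores of the kind reported in Tables 1–6, the sketch below shows one common way to compute PSNR, SSIM, MS-SSIM, and LPIPS with off-the-shelf packages. The library choices (scikit-image, pytorch-msssim, lpips), the AlexNet LPIPS backbone, and the file names are assumptions for illustration only, not the authors' evaluation code.

```python
# Minimal sketch of computing the full-reference metrics listed in Tables 1-6.
# Packages, backbone choice, and file names are illustrative assumptions.
import torch
import lpips                                    # pip install lpips
from pytorch_msssim import ms_ssim              # pip install pytorch-msssim
from skimage import io, img_as_float32
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def to_tensor(img):
    """HWC float image in [0, 1] -> 1xCxHxW torch tensor."""
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

# Hypothetical file names: an enhanced frame and its well-exposed RGB reference.
enhanced = img_as_float32(io.imread("enhanced.png"))
reference = img_as_float32(io.imread("reference.png"))

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=1.0)

x, y = to_tensor(enhanced), to_tensor(reference)
with torch.no_grad():
    # MS-SSIM's default 5 scales need images larger than roughly 160 px per side.
    msssim = ms_ssim(x, y, data_range=1.0).item()
    lpips_fn = lpips.LPIPS(net="alex")          # perceptual distance, lower is better
    lpips_d = lpips_fn(x * 2 - 1, y * 2 - 1).item()   # LPIPS expects inputs in [-1, 1]

print(f"PSNR={psnr:.4f}  SSIM={ssim:.4f}  MS-SSIM={msssim:.4f}  LPIPS={lpips_d:.4f}")
```

Note that PSNR, SSIM, and MS-SSIM are higher-is-better measures, whereas LPIPS is a perceptual distance for which lower values indicate better agreement with the reference, which matches how the best and second-best entries are marked in the tables above.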
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
