Article

A Fast Star-Detection Algorithm under Stray-Light Interference

1 Key Laboratory of Science and Technology on Space Optoelectronic Precision Measurement, Chinese Academy of Sciences, Chengdu 610209, China
2 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
* Author to whom correspondence should be addressed.
Photonics 2023, 10(8), 889; https://doi.org/10.3390/photonics10080889
Submission received: 4 July 2023 / Revised: 26 July 2023 / Accepted: 27 July 2023 / Published: 1 August 2023
(This article belongs to the Special Issue Advanced Photonic Sensing and Measurement)

Abstract

Interference from stray light causes star sensors in orbit to output invalid attitudes, which degrades the attitude control of satellites. To overcome this problem, this paper proposes a fast star-detection algorithm with a strong stray-light suppression ability. The first step of the proposed method is stray-light suppression: the highlighted pixels are unified, and erosion and dilation operations based on a large template are then performed. This yields a background image containing only the stray light, and a cleaner star image is obtained by subtracting this background from the unified image. The second step is binarization: a binary star image is obtained using a line-segment strategy combined with a local threshold. The third step is star labeling: it comprises connected-domain labeling based on the preordering of pixels and the calculation of the centroid coordinates of the stars in each connected domain. The experimental results show that the proposed algorithm extracts stars stably under different kinds of stray-light interference. The proposed method consumes few resources, and the output delay is only 18.256 μs. Moreover, the successful identification rate is 98%, and the attitude accuracy of the X and Y axes is better than 5″ (3σ) when the star sensor operates at a speed of zero.

1. Introduction

A star sensor [1] is a high-precision instrument for measuring the attitude of satellites. Star sensors are installed on the outside of satellites and usually operate in the presence of stray-light interference [2], such as sunlight, moonlight, and earth-atmosphere light [3]. Due to this interference, the attitude data acquired by star sensors are easily corrupted and cannot provide accurate attitude information. Traditional star sensors rely on a lens hood [4] to block stray light. To achieve a suitable shading effect, a common technique is to increase the number of vanes inside the lens hood and the length of the hood; however, the resulting increase in the volume and weight of the star sensor is an obvious disadvantage. Recently, with the advancement of China’s satellite network missions, multi-satellite launching has become commonplace, so the development of star sensors now focuses on miniaturization [5] and low power consumption. A smaller star sensor implies a smaller lens hood and therefore a significantly reduced shading effect. It is therefore particularly important to develop a new star-detection method [6,7] that performs stably and in real time under stray-light interference.
Traditional star-detection algorithms include threshold segmentation [8], window filtering [9,10,11], etc. These methods work well on star images with clean backgrounds; however, when stray light is present in the field of view, their performance degrades significantly. The research community has therefore proposed various methods to address this issue. Yu et al. [12] designed a new 7 × 7 filtering template, improved the background estimation, and developed a full-frame background filtering method; however, the fixed template cannot adapt to different amounts of stray light. Wang et al. [13] studied a local adaptive threshold method that achieves good star detection under some complex backgrounds; however, due to its high computational complexity, the local threshold method often cannot run in real time. Inspired by the biological vision mechanism, Wei et al. [14] used multi-scale segmentation to realize a multiscale patch-based contrast measure (MPCM). This method adjusts the contrast between the target and the background and detects bright and dark targets based on threshold segmentation; however, it suppresses relatively bright, thick clutter poorly. Lu et al. [15] proposed a multidirectional derivative-based weighted contrast measure (MDWCM) for small-target detection in the presence of complex background interference, but its performance is limited by its computational complexity and long delay.
In order to deal with the stray-light interference in different scenarios, this paper proposes a new star-detection algorithm with strong stray-light suppression ability. The proposed method has good engineering applicability and real-time performance.
The main innovations of this work are summarized below.
First, the highlighted pixels are unified, and horizontal erosion and dilation operations using a large template are then performed. In this way, a background comprising all the stray light is easily obtained, and an enhanced, clean star image is then obtained by image subtraction. Second, the star labeling implements connected-domain labeling based on the preordered pixels; the labeling method distinguishes different cases in which the distance between stars is very small or the shapes of the stars are heterotypic. The proposed algorithm is verified experimentally. The experimental results show that the proposed algorithm effectively detects stars in the presence of interference caused by different kinds of stray light. Moreover, the proposed algorithm has low computational complexity and achieves real-time performance on different platforms. The field experiment shows that the proposed algorithm improves the successful identification rate and guarantees the attitude accuracy of the star sensor at different speeds.
The rest of this paper is organized as follows.
Section 2 presents the proposed algorithm. Section 3 presents the experimental results. Section 4 discusses the results, and Section 5 concludes this paper and presents directions for future work.

2. Materials and Methods

As presented in Figure 1, the proposed algorithm is divided into three steps: stray-light suppression, binarization and star labeling.
The star image f can be represented as the accumulation of the background f_B, the stray light f_C, and the stars f_s:

$$ f = f_B + f_C + f_s, $$

where the background f_B is closely related to the detector parameters adc_gain and digital_gain, which are usually determined during the dark-field calibration. By adjusting the exposure time and the gain parameters, the star targets in the star image have a high signal-to-noise ratio without being saturated. It is noteworthy that the stray light f_C generally exhibits divergent characteristics in star images, including continuous light spots around the stars; the grayscale value of the light spots is high near the stray-light center. As presented in Figure 2, under the interference of stray light, the contrast between the stars and the background decreases significantly, and low-magnitude stars are overwhelmed by the light spots, resulting in a serious decline in star-detection performance. The three-dimensional distribution of grayscale values presented in Figure 2 shows that the grayscale value of stray light is indefinite. Therefore, this work proposes a new star-detection algorithm with a strong ability to suppress stray light. The proposed algorithm offers real-time performance and strong engineering applicability.
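As a quick illustration of this image model, the following sketch (not from the paper; the image size, star positions, amplitudes, and spot widths are assumed purely for illustration) builds a synthetic frame as the sum of a dark-field background, a smooth stray-light spot, and a few Gaussian star spots. It can also serve as test input for the later sketches.

```python
# Hedged sketch: a synthetic star image following f = f_B + f_C + f_s.
# All numbers below are illustrative assumptions, not values from the paper.
import numpy as np

def synthetic_star_image(h=256, w=256, seed=0):
    rng = np.random.default_rng(seed)
    f_B = rng.normal(20.0, 2.0, (h, w))          # dark-field background with readout noise
    yy, xx = np.mgrid[0:h, 0:w]
    # Broad, smooth spot standing in for the stray light f_C.
    f_C = 120.0 * np.exp(-((xx - 40) ** 2 + (yy - 40) ** 2) / (2 * 60.0 ** 2))
    # A few narrow Gaussian spots standing in for the stars f_s.
    f_s = np.zeros((h, w))
    for cy, cx, amp in [(80, 180, 200.0), (150, 60, 120.0), (200, 200, 90.0)]:
        f_s += amp * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 1.5 ** 2))
    return np.clip(f_B + f_C + f_s, 0, 255)

frame = synthetic_star_image()
```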

2.1. Stray-Light Suppression

The purpose of this step is to eliminate the stray light f_C from the star image f and improve the signal-to-noise ratio of the stars. We use background prediction and background subtraction to eliminate the stray light, and a suitable filter template W_B is designed to realize the background prediction.
Unlike normal backgrounds, the impact of stray light on the background prediction must be considered. Due to the inconsistency in the size and distribution of stray light, a direct template filter is ineffective. Therefore, this work proposes an innovative technique to address this issue: unifying the highlighted pixels and then combining horizontal erosion and dilation operations.
In this step, the highlighted pixels are first unified: pixels with gray values greater than the highlighted threshold M are uniformly set to M. The aim is to fuse the inhomogeneous boundary and inner region of the stray light, eliminating its local gradient characteristics and thus improving the background segmentation:
$$ f_i' = \begin{cases} M, & f_i > M \\ f_i, & \text{otherwise}, \end{cases} $$
where M denotes the highlighted threshold, f_i denotes the original image, and f_i' denotes the unified image. Based on prior knowledge, the average gray value Ave(f_{i-1}) of the previous star image f_{i-1} and the average gray value Ave(f_B) of the black background image f_B from the dark field are considered together, and the smaller of the two derived values is selected as the highlighted threshold M:
$$ M = \min\left(1.5\,\mathrm{Ave}(f_{i-1}),\; 5\,\mathrm{Ave}(f_B)\right). $$
As presented in Figure 3, after the unification of the highlighted pixels, the grayscale consistency between the boundary and the inner region of the stray light is improved.
Second, the filter template W_B is used for background segmentation. Considering that the window sizes of stars range from 3 × 3 to 9 × 9 [16], the small template SE_1 is used for the erosion operation, which eliminates the star objects and discrete single-pixel noise. We select a horizontal 1 × 20 template because it reduces the data cache during the implementation. Afterwards, the dilation operation is performed using the large template SE_2, which recovers the scale information of the stray light in the star image as much as possible. In order to guarantee sufficient background suppression, the size of the dilation template can be expanded appropriately; a horizontal 1 × 40 template is selected as SE_2. The background f_B is obtained after the erosion and dilation operations:
$$ f_B = (f_i' \ominus SE_1) \oplus SE_2. $$
Third, we obtain the clean star image f_c by subtracting the background f_B from the unified image f_i'. This is mathematically expressed as follows:
$$ f_c = f_i' - f_B. $$
As presented in Figure 4, the stray-light suppression is realized through background subtraction. The resulting star points have a higher signal-to-noise ratio in the star image.
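A minimal software sketch of this suppression step is given below, assuming NumPy/SciPy stand-ins (scipy.ndimage.grey_erosion and grey_dilation) for the FPGA pipeline; f_prev_mean and f_dark_mean are assumed inputs corresponding to Ave(f_{i-1}) and Ave(f_B). It is an approximation of the method described above, not the actual implementation.

```python
# Hedged sketch of the stray-light suppression step:
# unify -> erode (1 x 20) -> dilate (1 x 40) -> subtract.
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def suppress_stray_light(f, f_prev_mean, f_dark_mean):
    # Highlighted-pixel unification: clip everything above the threshold M to M.
    M = min(1.5 * f_prev_mean, 5.0 * f_dark_mean)
    f_u = np.minimum(f, M)
    # Horizontal erosion removes star-sized bright structures; dilation restores
    # the extent of the remaining stray light, giving the predicted background.
    f_bg = grey_dilation(grey_erosion(f_u, size=(1, 20)), size=(1, 40))
    # Background subtraction leaves the cleaner star image f_c.
    return np.clip(f_u - f_bg, 0.0, None)

# Example usage on an arbitrary frame (dark-field mean assumed to be 20):
frame = np.random.default_rng(1).normal(30.0, 3.0, (128, 512))
f_c = suppress_stray_light(frame, frame.mean(), 20.0)
```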

2.2. Binarization

After the suppression of stray light, a star image with a clean background is obtained. In order to segment the stars from the background, threshold segmentation is required. A traditional global threshold cannot adapt to stars of different magnitudes, and the computational complexity of a traditional block threshold is high. Therefore, this work proposes a horizontal local threshold, which is calculated while the star image is scanned. The horizontal local threshold also reduces the storage consumption in engineering applications.
As shown in Figure 5, the pixels are scanned from left to right. We compare the current pixel I_c with the mean value I_a of the eight pixels to its left. If I_c > I_a + G_th is satisfied, the current pixel I_c is considered a grayscale step point, which initially meets the criterion for a star edge. The output binary value I_b at the corresponding position is set to 1, and the length L_i of the star line segment is also set to 1. The gray threshold T_h for the subsequent pixels is then locked at I_a + G_th, and the comparison continues with the next pixel. If the subsequent pixel I_c satisfies I_c > T_h, the output binary value I_b at the corresponding position is set to 1, and the length of the star line segment is incremented by 1. If the subsequent pixel I_c satisfies I_c ≤ T_h, the output binary value I_b at the corresponding position is set to 0. At the same time, it is necessary to check whether the length L_i is greater than 1. If it is not, the previous grayscale step point is considered single-pixel noise, and the binary value I_{b-1} of the previous pixel is reset to 0. When the search for one star ends and the search for a new star begins, the current pixel I_c is again compared with the mean value I_a of the eight pixels to its left, and the gray threshold T_h is not locked until a new star edge is found.
Therefore, the lines with lengths greater than 1 are set to 1, while other pixels are set to 0.
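The row-scan logic above can be sketched in software as follows (G_th = 20, as used in Section 3.1); the variable names mirror I_c, I_a, T_h, and L_i, and the array-based loop is only an approximation of the streaming FPGA version.

```python
# Hedged sketch of the line-segment local-threshold binarization described above.
import numpy as np

def binarize_rows(f_c, g_th=20.0):
    h, w = f_c.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for r in range(h):
        row = f_c[r]
        th = None           # locked threshold while a star line segment is being tracked
        length = 0          # current star-line length L_i
        for c in range(8, w):
            if th is None:
                i_a = row[c - 8:c].mean()          # mean of the eight pixels to the left
                if row[c] > i_a + g_th:            # grayscale step point: candidate star edge
                    out[r, c] = 1
                    th = i_a + g_th
                    length = 1
            elif row[c] > th:                      # still inside the star line segment
                out[r, c] = 1
                length += 1
            else:                                  # line segment ends here
                if length <= 1:                    # lone step point: single-pixel noise
                    out[r, c - 1] = 0
                th = None                          # unlock and search for the next star
                length = 0
    return out
```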

2.3. Labeling

After binarization, it is necessary to label the connected domains of the star points. Traditional connected-domain labeling methods [17] use eight-connectivity or four-connectivity and usually require at least two traversals of the image; moreover, they consume more memory and cause a larger output delay. As shown in Figure 6a, the shapes of normal stars tend to follow Gaussian-like distributions; however, shapes similar to those shown in Figure 6b,c also appear. In order to accommodate stars of different shapes and achieve higher real-time performance, this work proposes a new connected-domain labeling method based on preordered pixels. As presented in Figure 7, we design a two-line filter template and a judgment strategy for the connected domain. CP denotes the current pixel, and L, UL, U, UR, UR1, and UR2 are the preordered pixels.
As presented in Figure 8, there are scenarios in which the distance between two stars is only 1 pixel. In addition, considering that the shapes of stars may be heterotypic, as in Figure 6, the process of marking connected domains must be divided into different cases.
When CP = 1 and the current pixel already has a connected-domain label number (the label numbers of all pixels are initially 0), we skip the current pixel and analyze the next one.
When CP = 1 and the current label number LN = 0, the following cases are distinguished.
As presented in Figure 9, if L = 0, UL = 0, U = 0, UR = 0, UR1 = 0, and UR2 = 0, the current pixel CP is marked as a new connected domain, and its label number LN is set to m (m ≠ 0).
If at least one of L, UL, U, UR, UR1, and UR2 equals 1, we first check whether UR1 or UR2 equals 1. As shown in Figure 10, if one of them equals 1, we search the pixels towards the left in the upper row until a 0 is found and record the previous column number as j. At the same time, we search the pixels from CP towards the right until a 0 is found and record the previous column number as i. Finally, we compare the two column numbers.
As shown in Figure 11, if i < j − 1 is satisfied, CP is not related to UR1 and UR2. In this case, we examine the preordered pixels L, UL, U, and UR. (1) If all of them equal 0, the label number LN of CP is set to a new value m_n. (2) If one or more of them equal 1 and their label numbers are all equal to the same value m_n, the label number LN of CP is set to m_n; on the contrary, if their label numbers differ, the label number LN of CP is set to the maximum value m_b among them, and the mapping between the minimum value m_s and the corresponding maximum value m_b is recorded in a mapping table.
As presented in Figure 12, if i ≥ j − 1 is satisfied, CP is related to UR1 and UR2. We then examine the preordered pixels L, UL, U, and UR. (1) If all of them equal 0, the label numbers of the pixels from CP to the rightmost growing point (column number i) are set to the label number m_c of UR1 and UR2. (2) If one of them equals 1, the label numbers of the pixels from CP to the rightmost growing point (column number i) are set to the maximum value m_b of L, UL, U, UR, UR1, and UR2, and the mapping between the minimum value m_s and the corresponding maximum value m_b is recorded in the mapping table. When the entire star image has been traversed, all the minimum label values are replaced with their corresponding maximum values.
As presented in Figure 13, after the image is completely traversed, the label numbers of the stars are obtained effectively.
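For reference, the sketch below is a simplified software analogue of this labeling idea: a raster scan that assigns labels from the preordered neighbours and records a smaller-to-larger label mapping (m_s : m_b) that is resolved in a second pass. It deliberately omits the UR1/UR2 gap handling for heterotypic stars and the single-pass hardware optimizations, so it is an approximation rather than the exact scheme of Figures 9-13.

```python
# Hedged sketch: simplified connected-domain labeling with a smaller->larger label
# mapping, as a software stand-in for the preordered-pixel scheme.
import numpy as np

def label_stars(binary):
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    mapping = {}                 # m_s -> m_b (smaller label merged into larger label)
    next_label = 1

    def resolve(m):
        while m in mapping:      # follow the mapping chain to the final label
            m = mapping[m]
        return m

    for r in range(h):
        for c in range(w):
            if binary[r, c] == 0:
                continue
            # Preordered neighbours: L, UL, U, UR (UR1/UR2 handling omitted here).
            neigh = []
            if c > 0 and labels[r, c - 1]:
                neigh.append(labels[r, c - 1])
            if r > 0:
                for dc in (-1, 0, 1):
                    if 0 <= c + dc < w and labels[r - 1, c + dc]:
                        neigh.append(labels[r - 1, c + dc])
            if not neigh:
                labels[r, c] = next_label          # start a new connected domain
                next_label += 1
            else:
                roots = [resolve(n) for n in neigh]
                m_b = max(roots)
                labels[r, c] = m_b
                for m_s in roots:                  # record m_s : m_b for later merging
                    if m_s != m_b:
                        mapping[m_s] = m_b
    # Second pass: replace every remaining smaller label with its final larger one.
    for r in range(h):
        for c in range(w):
            if labels[r, c]:
                labels[r, c] = resolve(labels[r, c])
    return labels
```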
During the connected-domain marking, for each label m we accumulate the gray values I(x, y) into G_m, the products of the coordinate x and the gray values I(x, y) into X_m, and the products of the coordinate y and the gray values I(x, y) into Y_m:
$$ G_m = G_m + I(x, y), \quad X_m = X_m + x\,I(x, y), \quad Y_m = Y_m + y\,I(x, y). $$
As presented in Figure 14, based on the mapping relationship m_s : m_b between the minimum values m_s and the maximum values m_b recorded during the marking of the connected domains, we merge G_{m_s} into G_{m_b}, X_{m_s} into X_{m_b}, and Y_{m_s} into Y_{m_b}.
When the merging operation is completed, the coarse centroid coordinates (x_q, y_q) of each connected domain are calculated as follows:
$$ x_q = \frac{X_m}{G_m}, \quad y_q = \frac{Y_m}{G_m}. $$
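A compact sketch of this accumulation is shown below; it assumes the labels have already been merged (as in the previous sketch), so the per-label sums G_m, X_m, and Y_m can be read off directly and turned into coarse centroids.

```python
# Hedged sketch of the coarse-centroid accumulation: running sums G_m, X_m, Y_m
# per label m, followed by (x_q, y_q) = (X_m / G_m, Y_m / G_m).
import numpy as np

def coarse_centroids(labels, f_c):
    sums = {}                                  # m -> (G_m, X_m, Y_m)
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            m = int(labels[y, x])
            if m == 0:
                continue
            g = float(f_c[y, x])
            G, X, Y = sums.get(m, (0.0, 0.0, 0.0))
            sums[m] = (G + g, X + g * x, Y + g * y)
    return {m: (X / G, Y / G) for m, (G, X, Y) in sums.items() if G > 0}
```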
When the coarse centroid coordinates (x_q, y_q) are obtained, the precise centroid coordinates (x_c, y_c) of the stars are calculated using the method of centroid extraction with a threshold [18]. As presented in Figure 15, taking the coarse centroid coordinates (x_q, y_q) as the center, a local 15 × 15 window is extracted. Then, the average gray value of the 56 pixels in the surrounding area N is calculated and set as the background value B. The difference between each pixel value I(x, y) and the background value B in the central area M (14 × 14) is used as the actual gray value of the star after stray-light suppression. The precise coordinates (x_c, y_c) are obtained as follows:
$$ x_c = \frac{\sum_{x=1}^{14}\sum_{y=1}^{14} I'(x, y)\,x}{\sum_{x=1}^{14}\sum_{y=1}^{14} I'(x, y)}, \quad y_c = \frac{\sum_{x=1}^{14}\sum_{y=1}^{14} I'(x, y)\,y}{\sum_{x=1}^{14}\sum_{y=1}^{14} I'(x, y)}, \quad I'(x, y) = \begin{cases} I(x, y) - B, & I(x, y) > B \\ 0, & \text{otherwise}. \end{cases} $$
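The refinement step can be sketched as follows; the 15 × 15 window, the 56 border pixels used for the background B, and the background-subtracted weighting follow the description above, while window clipping at the image edge and the exact 14 × 14 central-area bookkeeping are simplified (the border ring is simply excluded here).

```python
# Hedged sketch of the threshold-based centroid refinement around a coarse centroid.
import numpy as np

def refine_centroid(f_c, xq, yq):
    x0, y0 = int(round(xq)) - 7, int(round(yq)) - 7        # top-left of the 15x15 window
    win = f_c[y0:y0 + 15, x0:x0 + 15].astype(float)
    # Local background B: mean of the 56 pixels on the window border.
    border = np.concatenate([win[0, :], win[-1, :], win[1:-1, 0], win[1:-1, -1]])
    B = border.mean()
    yy, xx = np.mgrid[0:15, 0:15]
    I = np.where(win > B, win - B, 0.0)                    # threshold and subtract background
    I[0, :] = I[-1, :] = I[:, 0] = I[:, -1] = 0.0          # keep only the central area
    if I.sum() == 0:
        return xq, yq                                      # fall back to the coarse centroid
    xc = x0 + (I * xx).sum() / I.sum()
    yc = y0 + (I * yy).sum() / I.sum()
    return xc, yc
```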

2.4. The Hardware Implementation of the Proposed Algorithm

As shown in Figure 16, the hardware implementation scheme comprises three pipeline modules, including stray-light suppression, binarization, and star labeling.
Among them, the key steps of hardware implementation [19] are stray-light suppression and star labeling.
The first key step is the suppression of stray light, which consists of highlighted-pixel unification, horizontal erosion, horizontal dilation, and background subtraction. If a traditional s × t rectangular window were used for the erosion and dilation operations, multiple lines of data would have to be stored, requiring multiple FIFOs to cache the row pixels. In order to save resources and reduce the processing time, a 1 × s window is used for the erosion and dilation operations. Therefore, only one FIFO is required to cache an image line, and the minimum and maximum values in the neighborhood can be found simply by comparing the adjacent cached data. After the erosion and dilation operations, the background subtraction step subtracts the dilated image from the original image after pixel alignment; the original image is therefore delay-calibrated using a two-stage FIFO.
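The reason a 1 × s window is cheap to stream can be illustrated with a sliding-window extreme: a monotonic deque yields the running minimum (erosion) or maximum (dilation) of the last s pixels of one row, so no multi-line frame buffer is needed. This is only a software illustration of the idea (a causal window whose alignment would be handled by the delay calibration mentioned above), not the actual FPGA comparator chain.

```python
# Hedged sketch: running min/max over the last s pixels of a row via a monotonic deque.
from collections import deque

def running_extreme(row, s, mode="min"):
    keep = (lambda a, b: a <= b) if mode == "min" else (lambda a, b: a >= b)
    dq, out = deque(), []                 # dq holds (index, value), monotonic in value
    for i, v in enumerate(row):
        while dq and keep(v, dq[-1][1]):  # drop values dominated by the new pixel
            dq.pop()
        dq.append((i, v))
        if dq[0][0] <= i - s:             # drop the value that has left the window
            dq.popleft()
        out.append(dq[0][1])
    return out

eroded = running_extreme([5, 7, 3, 9, 8, 2, 6], s=3, mode="min")   # [5, 5, 3, 3, 3, 2, 2]
dilated = running_extreme([5, 7, 3, 9, 8, 2, 6], s=3, mode="max")  # [5, 7, 7, 9, 9, 9, 8]
```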
The second key step is star labeling, whose purpose is to find the connected domains and calculate the centroid coordinates of each of them. As shown in Figure 17, the algorithm proposed in this work differs from the traditional four-connected or eight-connected labeling: the preordered binary pixels of two adjacent rows are compared logically, so only one FIFO is needed to store the binary image. At the same time, in order to calculate the coarse centroid coordinates, it is necessary to accumulate the gray values G_m, the products X_m of the coordinate x and the gray values I(x, y), and the products Y_m of the coordinate y and the gray values I(x, y). Therefore, three internal dual-port RAMs are instantiated. The label number of the connected domain is used as the RAM address, and G_m, X_m, and Y_m are the data that are read and written.

3. Results

3.1. Experimental Conditions

In this section, the algorithm proposed in this work is verified experimentally. Based on empirical results, we set G_th = 20, SE_1 = 1 × 20, and SE_2 = 1 × 40.
In Section 3.2.1, simulations are performed to verify the ability of the proposed algorithm under different working conditions. Eight real star-image sequences (S1–S8) are used for the experiments; the star images contain moonlight, sunlight, earth-atmosphere light, or daylight interference. The simulation platform is a computer with a 2.5 GHz Intel i7 CPU and 16 GB of memory. The simulation software is MATLAB R2012b, and the operating system is Windows 7.
In Section 3.2.2, we compare the resource consumption on three different field programmable gate array (FPGA) platforms and calculate the delay between the last line of the image and accumulations. The simulation software is Modelsim SE 6.4e.
In Section 3.2.3, the field experiment is performed. The experimental platform is a self-developed miniaturized star sensor, which is installed on a two-dimensional rotation table and faces the moon directly. The image resolution is 2048 × 2048 pixels, the frame frequency is 4 Hz, and the integration time of the star sensor is 100 ms. The real-time attitude quaternion [20,21] and exposure time are output and used to analyze the attitude accuracy and successful identification rate of the star sensor.
Monte Carlo analysis is used to calculate the successful identification rate [22] over successive frames of a video sequence. The attitude accuracy is analyzed when the two-axis rotation table operates at speeds of 0 (the observed stars still move slowly due to the rotation of the Earth) and 1°/s, respectively. In order to obtain the attitude accuracy, we compute the fitted attitude quaternion (Q_0, Q_1, Q_2, Q_3) from the actual attitude quaternion (q_0, q_1, q_2, q_3). Finally, we obtain the standard deviations STD_x, STD_y, and STD_z from the error values Err_x, Err_y, and Err_z as follows:
$$ \begin{aligned} X_{q0} &= q_0 Q_0 - q_1 Q_1 - q_2 Q_2 - q_3 Q_3, \\ X_{q1} &= q_0 Q_1 + q_1 Q_0 + q_2 Q_3 - q_3 Q_2, \\ X_{q2} &= q_0 Q_2 + q_2 Q_0 + q_3 Q_1 - q_1 Q_3, \\ X_{q3} &= q_0 Q_3 + q_3 Q_0 + q_1 Q_2 - q_2 Q_1, \end{aligned} $$
$$ M = \begin{bmatrix} X_{q1}^2 - X_{q2}^2 - X_{q3}^2 + X_{q0}^2 & 2(X_{q1}X_{q2} + X_{q3}X_{q0}) & 2(X_{q1}X_{q3} - X_{q2}X_{q0}) \\ 2(X_{q1}X_{q2} - X_{q3}X_{q0}) & -X_{q1}^2 + X_{q2}^2 - X_{q3}^2 + X_{q0}^2 & 2(X_{q2}X_{q3} + X_{q1}X_{q0}) \\ 2(X_{q1}X_{q3} + X_{q2}X_{q0}) & 2(X_{q2}X_{q3} - X_{q1}X_{q0}) & -X_{q1}^2 - X_{q2}^2 + X_{q3}^2 + X_{q0}^2 \end{bmatrix}, $$
$$ \mathrm{Err}_x = \arctan\!\left(\frac{M_{2,3}}{M_{3,3}}\right)\frac{180 \times 3600}{\pi}, \quad \mathrm{Err}_y = \arcsin\!\left(M_{1,3}\right)\frac{180 \times 3600}{\pi}, \quad \mathrm{Err}_z = \arctan\!\left(\frac{M_{1,2}}{M_{1,1}}\right)\frac{180 \times 3600}{\pi}. $$
The units of the error values Err_x, Err_y, and Err_z are arc-seconds.
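The error computation above can be sketched as follows; np.arctan2 is used in place of a plain arctangent for quadrant safety, and the fitted quaternion (Q_0, Q_1, Q_2, Q_3) is assumed to come from an external fitting step, so this is an approximation of the evaluation procedure rather than the exact flight code.

```python
# Hedged sketch of the attitude-error evaluation: error quaternion -> rotation
# matrix -> per-axis angles in arc-seconds. arctan2 replaces atan for robustness.
import numpy as np

def attitude_error_arcsec(q, Q):
    q0, q1, q2, q3 = q            # measured attitude quaternion
    Q0, Q1, Q2, Q3 = Q            # fitted attitude quaternion
    x0 = q0 * Q0 - q1 * Q1 - q2 * Q2 - q3 * Q3
    x1 = q0 * Q1 + q1 * Q0 + q2 * Q3 - q3 * Q2
    x2 = q0 * Q2 + q2 * Q0 + q3 * Q1 - q1 * Q3
    x3 = q0 * Q3 + q3 * Q0 + q1 * Q2 - q2 * Q1
    M = np.array([
        [x1*x1 - x2*x2 - x3*x3 + x0*x0, 2*(x1*x2 + x3*x0),              2*(x1*x3 - x2*x0)],
        [2*(x1*x2 - x3*x0),             -x1*x1 + x2*x2 - x3*x3 + x0*x0, 2*(x2*x3 + x1*x0)],
        [2*(x1*x3 + x2*x0),             2*(x2*x3 - x1*x0),              -x1*x1 - x2*x2 + x3*x3 + x0*x0],
    ])
    to_arcsec = 180.0 * 3600.0 / np.pi
    err_x = np.arctan2(M[1, 2], M[2, 2]) * to_arcsec              # M(2,3)/M(3,3) in 1-based indexing
    err_y = np.arcsin(np.clip(M[0, 2], -1.0, 1.0)) * to_arcsec    # M(1,3)
    err_z = np.arctan2(M[0, 1], M[0, 0]) * to_arcsec              # M(1,2)/M(1,1)
    return err_x, err_y, err_z

# 3-sigma accuracy per axis over a sequence of frames (q_seq, Q_seq assumed given):
# errs = np.array([attitude_error_arcsec(q, Q) for q, Q in zip(q_seq, Q_seq)])
# std_xyz = 3.0 * errs.std(axis=0)
```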

3.2. Experimental Results

3.2.1. The Analysis of Star Detection in Real Image Sequences

In this experiment, the proposed algorithm is applied to real image sequences. Eight real star-image sequences (S1–S8), all acquired under stray-light interference, are used. Sequence S1 corresponds to the condition in which earth-atmosphere light enters the field of view; S2 to S7 correspond to conditions with moonlight or sunlight interference; and S8 corresponds to a star image acquired during the daytime. These sequences represent different conditions of stray-light interference and therefore allow us to verify the effect of the proposed algorithm under different working conditions.
As presented in Figure 18, the first and third rows show the original star images with stray-light interference, and the second and fourth rows show the corresponding images with the detected stars marked by 10 × 10 white boxes. In S1, due to the influence of the earth-atmosphere light in the field of view, the average gray value of the whole image is much higher than that of the black background. Therefore, the proposed algorithm selects five times the gray value of the black background as the highlighted threshold by default. No false stars are extracted in the earth-atmosphere light, and several stars with low signal-to-noise ratios in the black background are extracted successfully.
In S2 to S8, although the scenes differ, they share a common feature: the proportion of the target surface occupied by stray light is small, and its energy is not strong. Therefore, 1.5 times the average gray value of the whole image is set as the highlighted threshold by default. In S2, S3, S5, S6, and S7, a large number of star points are extracted in the field of view. In S4 and S8, the brightest stars in the field of view are also extracted correctly.
In order to guarantee the accuracy of the calculations, at least four of the brightest stars are chosen for the subsequent attitude calculations. As shown in Figure 18, the algorithm proposed in this work effectively extracts the brightest stars in the star image under stray-light interference, which also supports the subsequent identification algorithms.

3.2.2. The Analysis of Resource Consumption and Delay

First, we implement the algorithm on different FPGA platforms and analyze its resource consumption. The three platforms are commonly used in the development of star sensors.
Table 1 shows the resource consumption of the algorithm proposed in this paper on three FPGA platforms. The reported consumption also includes the detector driver and the data-acquisition and storage logic. As shown in Table 1, the logic consumption does not exceed half of each chip, and the RAM consumption is caused by the FIFO cache of the image rows and the dual-port RAM storage. According to the maximum clock frequency, the algorithm can run at a higher frequency and meets the timing requirements of the star sensor.
Second, we analyze the output delay of the star coordinates. As presented in Figure 19, Data_i denotes the original star image output from the detector, and star_gray_data, star_x_data, and star_y_data denote the accumulations of the gray values and coordinates, which are used to calculate the precise star coordinates. From Figure 19, it is evident that the delay between the last line of the image and the accumulations is 18.256 μs (with an 80 MHz master clock). This means that the centroid coordinates of the stars can be calculated as soon as the star image has been read out, guaranteeing a sufficient time margin for the subsequent matching and identification operations.

3.2.3. The Field Experiment

The field experiment is performed in Hami, Xinjiang. As presented in Figure 20a, we apply the algorithm in a self-developed star sensor based on the FPGA chip A3PE3000L-484FBGA. The FPGA performs the star-detection algorithm, and the adjacent ARM chip performs the matching and identification. Finally, as shown in Figure 20b, the star sensor outputs the effective attitude quaternion and exposure time.
In this experiment, the star sensor faces the moon, and the three-axis attitude accuracy is calculated using the effective attitude quaternion and exposure time. We calculate the attitude accuracy with the proposed algorithm when the two-axis rotation table operates at speeds of 0 and 1°/s.
As presented in Figure 21 and Figure 22, the attitude accuracy of the X and Y axes is better than 5″ (3σ) when the speed of the star sensor is zero and better than 20″ (3σ) when the speed reaches 1°/s. Therefore, the attitude accuracy meets the requirements of the star sensor, which proves that the proposed algorithm can guarantee the attitude accuracy at different speeds.
In addition to the accuracy calculation, in the long-term moon-alignment test (with the star sensor at a speed of zero), we also analyze the successful identification rate when the moon enters the field of view. As shown in Table 2, the successful identification rate reaches 98% with the proposed algorithm, whereas it is only 53% with the local threshold segmentation algorithm.
The experimental results show that the algorithm proposed in this work significantly improves the successful identification rate of the star sensor and guarantees the accuracy requirements of attitude when the star sensor is operated at different speeds.

4. Discussion

In order to meet the strict launch requirements of satellite network missions, the development of star sensors focuses on smaller size and lower power consumption. A smaller size means a smaller lens hood and a worse shading effect. Therefore, star-detection algorithms must take the interference of stray light into account.
Compared with the local adaptive threshold method and the multiscale patch-based contrast measure, the output delay of the proposed algorithm is much smaller due to its simple implementation architecture. Furthermore, other algorithms, including the window filtering methods, are limited by the size of the filtering template and the weighting values. Therefore, the existing algorithms cannot deal with stray-light interference in different scenarios.
In this study, we mainly analyze the algorithm performance using three metrics: the intuitive star-detection capability, the resource consumption and output delay in engineering applications, and the attitude accuracy and successful identification rate of the star sensor.
The experimental results show that the proposed algorithm possesses excellent stray-light-suppression and star-detection abilities, as well as strong engineering applicability. The proposed algorithm fully exploits the FPGA pipeline and parallelization techniques and adopts a small number of FIFOs together with row-pixel processing. The experimental results show that the resource consumption of the proposed algorithm is small and the output delay is only 18.256 μs. Using the proposed algorithm, the successful identification rate reaches 98%, and the attitude accuracy of the X and Y axes is better than 5″ (3σ) when the speed of the star sensor is zero and better than 20″ (3σ) when the speed reaches 1°/s.
However, the proposed algorithm has certain limitations. When sunlight enters the field of view at a very small incident angle, the gray levels of most pixels in the image are close to saturation and the stars are completely covered. In this situation, the proposed algorithm cannot detect the stars correctly. Therefore, further research is necessary to deal with these special scenarios and further improve the star-detection ability.

5. Conclusions

This work proposed a new algorithm for extracting stars under the interference of stray light in an efficient manner. The proposed algorithm innovatively unifies the highlighted pixels and performs horizontal erosion and dilation operations based on a large template. The background containing stray light is obtained after erosion and dilation operations, and the stray light is suppressed by subtracting the background from the unified image. In addition, the proposed algorithm marks the connected domain based on the preordered pixels and calculates the centroid coordinates of the stars in each connected domain.
The experimental results show that the proposed algorithm detects stars even under interference caused by different stray-light sources. The proposed algorithm also consumes fewer resources and has a smaller output delay. Moreover, the proposed algorithm improves the successful identification rate and guarantees the attitude accuracy of the star sensor at different speeds.
In the future, the proposed algorithm will be applied to an on-orbit task for further verification and to further improve the stray-light-suppression capability of the star sensor. This will improve the adaptability of the star sensor in different maneuvering states.

Author Contributions

Conceptualization, K.L.; methodology, K.L.; software, K.L.; validation, K.L. and L.L.; formal analysis, H.L.; investigation, K.L.; resources, R.Z. (Renjie Zhao); data curation, K.L.; writing—original draft preparation, K.L.; writing—review and editing, R.Z. (Rujin Zhao); visualization, K.L.; supervision, E.L.; project administration, K.L.; funding acquisition, R.Z. (Rujin Zhao). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China under Grant No. 2019YFA0706001.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This research was supported by the Sichuan Outstanding Youth Science and Technology Talent Project (2022JDJQ0027). This research was also supported by CAS “Light of West China” Program, and Special support for talents from the Organization Department of Sichuan Provincial Party Committee.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liebe, C.C. Accuracy performance of star trackers—A tutorial. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 587–599.
  2. Clermont, L.; Michel, C.; Stockman, Y. Stray Light Correction Algorithm for High Performance Optical Instruments: The Case of Metop-3MI. Remote Sens. 2022, 14, 1354.
  3. Roger, J.C.; Santer, R.; Herman, M.; Deuzé, J.L. Polarization of the solar light scattered by the earth-atmosphere system as observed from the U.S. shuttle. Remote Sens. Environ. 1994, 48, 275–290.
  4. Liu, W.D. Lens-Hood Design of Starlight Semi-Physical Experimental Platform. Laser Optoelectron. Prog. 2012, 49, 162–167.
  5. Xu, M.Y.; Shi, R.B.; Jin, Y.M.; Wang, W. Miniaturization Design of Star Sensors Optical System Based on Baffle Size and Lens Lagrange Invariant. Acta Opt. Sin. 2016, 36, 0922001.
  6. Kwang-Yul, K.; Yoan, S. A Distance Boundary with Virtual Nodes for the Weighted Centroid Localization Algorithm. Sensors 2018, 18, 1054.
  7. Fialho, M.; Mortari, D. Theoretical Limits of Star Sensor Accuracy. Sensors 2019, 19, 5355.
  8. He, Y.Y.; Wang, H.L.; Feng, L.; You, S.H.; Lu, J.H.; Jiang, W. Centroid extraction algorithm based on grey-gradient for autonomous star sensor. Opt.-Int. J. Light Electron Opt. 2019, 194, 162932.
  9. Seyed, M.F.; Reza, M.M.; Mahdi, N. Flying small target detection in IR images based on adaptive toggle operator. IET Comput. Vis. 2018, 12, 527–534.
  10. Gonzalez, R.C.; Woods, R.E.; Masters, B.R. Digital Image Processing, Third Edition. J. Biomed. Opt. 2009, 14, 029901.
  11. Zhang, Y.; Du, B.; Zhang, L. A spatial filter based framework for target detection in hyperspectral imagery. In Proceedings of the 2013 5th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Gainesville, FL, USA, 26–28 June 2013; pp. 1–4.
  12. Yu, L.W.; Mao, X.N.; Jin, H.; Hu, X.C.; Wu, Y.K. Study on Image Process Method of Star Tracker for Stray Lights Resistance Filtering Based on Background. Aerosp. Shanghai 2016, 33, 26–31.
  13. Wang, H.T.; Luo, C.Z.; Wang, Y.; Wang, X.Z.; Zhao, S.F. Algorithm for star detection based on self-adaptive background prediction. Opt. Tech. 2009, 35, 412–414.
  14. Wei, Y.; You, X.; Li, H. Multiscale patch-based contrast measure for small infrared target detection. Pattern Recognit. 2016, 58, 216–226.
  15. Lu, R.T.; Yang, X.G.; Li, W.P.; Ji, W.F.; Li, D.L.; Jing, X. Robust infrared small target detection via multidirectional derivative-based weighted contrast measure. IEEE Geosci. Remote Sens. Lett. 2020, 1, 1–5.
  16. Lu, K.L.; Liu, E.H.; Zhao, R.J.; Zhang, H.; Lin, L.; Tian, H. A Curvature-Based Multidirectional Local Contrast Method for Star Detection of a Star Sensor. Photonics 2022, 9, 13.
  17. Perri, S.; Spagnolo, F.; Corsonello, P. A Parallel Connected Component Labeling Architecture for Heterogeneous Systems-on-Chip. Electronics 2020, 9, 292.
  18. Wan, X.W.; Wang, G.Y.; Wei, X.G.; Li, J.; Zhang, G.J. Star Centroiding Based on Fast Gaussian Fitting for Star Sensors. Sensors 2018, 18, 2836.
  19. Chen, W.; Zhao, W.; Li, H.; Dai, S.; Han, C.; Yang, J. Iterative Decoding of LDPC-Based Product Codes and FPGA-Based Performance Evaluation. Electronics 2020, 9, 122.
  20. Han, J.L.; Yang, X.B.; Xu, T.T.; Fu, Z.Q.; Chang, L.; Yang, C.L.; Jin, G. An End-to-End Identification Algorithm for Smearing Star Image. Remote Sens. 2021, 13, 4541.
  21. Schiattarella, V.; Spiller, D.; Curti, F. A novel star identification technique robust to high presence of false objects: The multi-poles algorithm. Adv. Space Res. 2017, 59, 2133–2147.
  22. Rijlaarsdam, D.; Yous, H.; Byrne, J.; Oddenino, D.; Furano, G.; Moloney, D. Efficient star identification using a neural network. Sensors 2020, 20, 3684.
Figure 1. The flowchart of the proposed algorithm: (a) stray-light suppression; (b) binarization; (c) star labeling.
Figure 2. The stray light and the three-dimensional distribution of grayscale values.
Figure 3. The three-dimensional distribution of stray light before and after the unification of the highlighted pixels.
Figure 4. The subtraction of the unified image f_i' and the background f_B, and the three-dimensional distribution of the clean star image f_c.
Figure 5. The process of star-image binarization.
Figure 6. Different shapes of stars: (a) the Gaussian-like distributions; (b,c) the heterotypic distributions.
Figure 7. Two-line filter template based on the preordered pixels.
Figure 8. A situation where the distance between two stars is only 1 pixel.
Figure 9. The beginning of a new connected domain.
Figure 10. The process of searching for a pixel in two rows.
Figure 11. CP is not related to UR1 and UR2 when i < j − 1.
Figure 12. CP is related to UR1 and UR2 when i ≥ j − 1.
Figure 13. The labeling process of the heterotypic star.
Figure 14. The merging operation of the minimum label values and maximum label values.
Figure 15. The method of centroid extraction with the threshold.
Figure 16. The hardware implementation scheme of the proposed algorithm.
Figure 17. The implementing method of accumulation.
Figure 18. The detection result for eight different sequences.
Figure 19. The output delay for star coordinates.
Figure 20. The field experiment: (a) the star sensor installed on the two-dimensional rotation table; (b) the output effective attitude accuracy.
Figure 21. The attitude accuracy when the dynamic speed of the star sensor reaches zero.
Figure 22. The attitude accuracy when the dynamic speed of the star sensor reaches 1°/s.
Table 1. The resource consumption results of the algorithm under three platforms.

Platform          | Resource Consumption
xc3s700an-4fgg484 | Slice Flip Flops: 50%; 4-input LUTs: 65%; BRAMs: 20%; MULT18×18SIOs: 10%; Maximum frequency: 174.642 MHz
A3PE3000L-484FBGA | CORE: 30.87%; GLOBAL: 33.33%; RAM/FIFO: 48.21%; Maximum frequency: 179.856 MHz
M2S090T-1FG484I   | LUT: 6.57%; DFF: 5.34%; RAM64×18: 3.57%; RAM1K18: 13.76%; Chip Global: 37.50%; Maximum frequency: 172.1 MHz
Table 2. The successful identification rate obtained using different algorithms.

Situation                              | Successful Identification Rate
Local threshold segmentation algorithm | 53%
The proposed algorithm                 | 98%