Article

Stray Light Nonuniform Background Elimination Method Based on Image Block Self-Adaptive Gray-Scale Morphology for Wide-Field Surveillance

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(14), 7299; https://doi.org/10.3390/app12147299
Submission received: 24 June 2022 / Revised: 17 July 2022 / Accepted: 18 July 2022 / Published: 20 July 2022

Abstract

Space-based wide-field surveillance systems are of great significance in maintaining the security of space resources by avoiding collisions between space targets. However, their performance is hindered by stray light phenomena. The nonuniform background noise caused by stray light significantly hampers subsequent target detection, leading to a high frequency of false alarms. To solve this problem, we propose a robust and accurate nonuniform background elimination method based on image block self-adaptive gray-scale morphology (IBSGM). First, we define two kinds of structural operators with different sizes and domains, which make full use of the difference between the target pixels and surrounding background pixels. Then, we block the original surveillance image and find the size of the largest target in each block by the minimum bounding rectangle method to determine the optimal size of the structural operator suitable for each block. Finally, we perform morphological processing using the defined structural operators to eliminate nonuniform backgrounds from images. Experimental results on simulated and real image datasets demonstrate that the proposed IBSGM method has higher precision in eliminating the nonuniform background when compared to other methods.

1. Introduction

Since Sputnik-1 was launched in 1957, the pace of space exploration has quickened [1,2], and the number of targets in space (including satellites and space debris) has increased rapidly [3,4]. According to the Union of Concerned Scientists’ satellite database, as of 1 January 2022, there were 4852 satellites in orbit, accompanied by a mass of space debris. The famous scientist Kessler pointed out that once these targets collide, every collision could trigger a cascade of further collisions that create more space debris and pose a massive threat to operational satellites [5]. For example, space debris of about 10 cm in diameter can destroy any operational satellite it collides with, which would have severe implications for human space activities [6]. To address the growing demand for space exploration while also ensuring the security of the space environment, countries around the world are deploying space-based surveillance systems as part of their strategic layouts; examples include the Fengyun satellite program of China [7], the Canadian Space Surveillance System (CSSS), the Space-Based Space Surveillance system (SBSS) [8,9], and the Space Tracking and Surveillance System (STSS) and Space-Based Infrared System (SBIRS) [10] of the USA. However, as space-based surveillance systems operate in outer space, most of the detected space targets are dim, with high magnitudes. Accordingly, stray light can have a severe impact on the detection performance of space-based surveillance systems [11]. In such cases, the surveillance image is the only data source. If stray light cannot be effectively suppressed, it creates a harsh nonuniform background signal in the surveillance image, reducing its clarity and dynamic range. Furthermore, substantial degradation effects can appear in surveillance images, which greatly obstruct the subsequent recognition and detection of space targets [12,13]. In conclusion, stray light noise badly affects the normal operation and detection ability of these systems.
In addition, a surveillance image may represent a vast number of targets of various sizes [14]. Precisely eliminating nonuniform backgrounds generated by stray light without losing targets of various sizes is a challenge that remains. As a result, research on nonuniform background elimination methods for surveillance images is urgently needed.
In recent years, the reduction of stray light has become a problem not only in the field of instrument measurement, but also in the preprocessing stage of faint target detection and tracking. At present, nonuniform background elimination methods for surveillance images can be roughly divided into two types: (1) parametric model-based methods and (2) image feature-based methods. In parametric model-based methods, the nonuniform background is eliminated by constructing a parametric model. Specifically, abundant images are collected in advance and then classified to create a training dataset for building a parametric model, such as a spline function, a polynomial model, or a recent deep learning-based model [15,16]. However, these algorithms are not very effective at eliminating nonuniform backgrounds from surveillance images. On the one hand, the parametric model must be specified accurately, including, for example, the order or the number of terms of the polynomial. In practice, however, there are huge differences in the distributions of nonuniform backgrounds across different images, which makes it difficult to determine the parameters accurately under different conditions or scenes. In addition, even with enough surveillance images for training a parametric model, there may still be unknown cases in which fixed-parameter models fail to eliminate nonuniform backgrounds from surveillance images not included in the dataset. On the other hand, some targets with low signal-to-noise ratios (SNRs) exist in surveillance images, and minor flaws in the parametric model development process can result in target losses or greater false alarm rates, lowering the accuracy of subsequent target recognition [17].
With image feature-based methods, the problem of nonuniform background elimination can be solved well in unknown situations because only the imaging features of the image itself need to be considered. These methods can be divided into frequency-domain and spatial-domain methods. Among frequency-domain methods, wavelet-based and curvelet-based approaches are the most common [18,19]. Due to the required domain conversions, these methods are complex and require considerable computation time. They rely on the difference between the target and the background noise in the frequency domain to eliminate the background; however, the two are very similar in the frequency domain, which can cause confusion. Since it is difficult to distinguish a low-SNR target from a nonuniform background [20], the accuracy of background elimination is limited. Spatial-domain methods process the image directly in the spatial domain and thus tend to run faster than frequency-domain filtering. They include mean iterative filtering [21], filtering based on average or gradient thresholding [22], new star target segmentation (NSTS) [23], morphology operations [24] and improved new top-hat transformation (INTHT) [25]. Although these spatial-domain filtering methods can eliminate nonuniform backgrounds from surveillance images, they have two disadvantages: (1) the calculation includes all pixels in the filter, which significantly impacts the accuracy of nonuniform background estimation, and (2) they are quite sensitive to the size of the filter; an unreasonable size will degrade the background elimination effect and may even cause some targets to be lost. Therefore, effective and reliable elimination of nonuniform backgrounds from surveillance images while retaining targets remains an open challenge.
To solve these problems, a new, accurate and robust stray light nonuniform background elimination method is proposed, named image block self-adaptive gray-scale morphology (IBSGM). A flowchart of the procedure is shown in Figure 1, which can be broken down into three steps: (1) definition of structural operators, (2) division of the original surveillance image into blocks and determination of the optimal size of the structural operator suitable for each block by the method of the minimum bounding rectangle, and (3) a morphological operation based on the constructed structural operators to estimate and eliminate the nonuniform background. In the first step, two structural operators with varying sizes and domains are defined. We analyze the features of surveillance images and the gray value difference between the target pixels and surrounding background pixels, which provides evidence for constructing the new structural operators. In the second step, we block the original surveillance image and find the size of the largest target in each image block by the method of the minimum bounding rectangle. This establishes the optimal size of the structural operator suitable for each image block. In the third step, we perform morphological processing on the surveillance images using the designed structural operators. With structural operators of the optimal size, we can reliably conserve target pixels while only using pixels from the surrounding background in the morphological operations. In this way, we solve the problem of surveillance images being sensitive to the size of the filter, which plagued previous methods. Finally, experiments with simulated image datasets and real acquired image datasets show that the proposed IBSGM approach eliminates stray light nonuniform backgrounds with greater accuracy and robustness than other methods.

2. Principles of the Formation and Elimination of Stray Light Backgrounds

Stray light is non-imaging light that radiates to the detector surface, or imaging light that propagates via an abnormal path and reaches the detector. The formation mechanism of stray light and the principle of eliminating nonuniform backgrounds from surveillance images are discussed in detail in this section. There are different sources of stray light, which are of three main types: (1) internal radiation stray light, which is the infrared heat radiation generated by high-temperature optomechanical components such as control motors and temperature-controlled optics inside the system during its normal operation; (2) non-target stray light beyond the field of view, which refers to non-target optical signals that propagate directly or indirectly onto the focal plane of the image sensor and come from radiation sources located outside the field of view of an optical telescope; and (3) imaging target stray light in the field of view, which can be understood as light rays from targets that reach the focal plane of the detector via abnormal paths.
Regarding the internal radiation stray light, its wavelengths are mostly distributed on the micrometer scale, so this kind of stray light mainly influences the infrared imaging system rather than the space-based wide-field surveillance systems applied to visible light in this paper. Hence, we ignore its effect on surveillance images. Non-target stray light beyond the field of view exists widely in all kinds of optical telescopes, especially in the large field optical telescopes used to detect faint targets. After strong stray light enters the optical system, it causes a nonuniform noise signal to reach the detector, increasing the image gray values and spreading the gray scale from one edge to the other. The main reason for this is a gradual change in the material scattering intensity. Furthermore, since stray light is beyond the field of view, the gray value is maximized at the corresponding edge of a surveillance image, as shown in Figure 2a. We name this the first type of stray light background. Imaging target stray light in the field of view also causes a nonuniform noise signal and increases the gray values of surveillance images, resulting in a relatively bright area in the surveillance image with gray scale spreading from the center to the periphery, as shown in Figure 2b. The primary cause of this is the complete reflection in the lens that occurs when light from brighter stars reaches the lens at a specific angle. We name this the second type of stray light background. Any complex image can be regarded as a combination of these two forms of background.
To better comprehend the features of nonuniform backgrounds in surveillance images, we analyze them using energy transfer equations [26]. To simplify the complex stray light transmission process, we can divide the transmission path into several parts, comprising several emitting and receiving surfaces, where the receiving surface of each process is the emitting surface of the next. The transmission of stray light between any two surfaces conforms to radiative transfer theory [27], the principle of which is shown in Figure 3.
According to the radiative transfer theory, between two media surfaces, light energy propagates as follows [28]:
$$\mathrm{d}\Phi_C = \frac{L_S\, \mathrm{d}A_S\, \mathrm{d}A_C \cos\theta_C \cos\theta_S}{R_{SC}^2} \tag{1}$$
where $A_C$ and $A_S$ are the areas of the receiving and source surfaces, respectively; $R_{SC}$ refers to the center-to-center distance between the source and receiving surfaces; $L_S$ is the radiance of the source surface; $\Phi_C$ represents the receiving-surface flux; and $\theta_C$ and $\theta_S$ are the angles between the center line and the normal of the respective surfaces. Equation (1) can be simplified by breaking it into three parts, as follows [29]:
$$\mathrm{d}\Phi_C = \frac{L_S}{E_S}\; E_S\, \mathrm{d}A_S\; \frac{\cos\theta_S \cos\theta_C\, \mathrm{d}A_C}{R_{SC}^2} \tag{2}$$
$$\mathrm{d}\Phi_C = \mathrm{BRDF}\; \mathrm{d}\Phi_S\; \mathrm{d}\Omega_{SC} \tag{3}$$
where $E_S$ represents the incident irradiance; $\mathrm{d}\Phi_S$ is the output flux; BRDF is the bidirectional reflectance distribution function, which describes the scattering properties of the material surface and is defined as the ratio of the scattered radiance to the incident irradiance of a rough surface; and $\mathrm{d}\Omega_{SC}$ represents the projected solid angle between the source and receiving surfaces. From the differential form of Equations (2) and (3), we can see that the distribution of a nonuniform background caused by stray light exhibits a gradual form rather than an abrupt change.
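As a small numerical illustration of the radiative transfer relation above, the sketch below (with hypothetical surface values, not taken from the paper's experiments) evaluates the differential flux of Equation (1) for one source/receiver patch pair:

```python
import math

def received_flux(L_S, dA_S, dA_C, theta_S, theta_C, R_SC):
    """Differential flux on the receiving surface per Equation (1):
    dPhi_C = L_S * dA_S * dA_C * cos(theta_C) * cos(theta_S) / R_SC**2.
    Angles are in radians; all values here are illustrative."""
    return L_S * dA_S * dA_C * math.cos(theta_C) * math.cos(theta_S) / R_SC ** 2

# Hypothetical patches facing each other head-on (theta_S = theta_C = 0):
flux_near = received_flux(1.0, 1.0, 1.0, 0.0, 0.0, 1.0)
flux_far = received_flux(1.0, 1.0, 1.0, 0.0, 0.0, 2.0)  # doubled distance
```

Doubling $R_{SC}$ quarters the received flux, matching the inverse-square dependence in Equation (1).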
In general, for space-based wide-field surveillance systems, the influence of stray light can be suppressed by optomechanical structures such as the hood and the light-blocking ring [30]. However, the absorption rates of baffles and blades are always between 95% and 97%, so the remaining stray light will still be received by the detector, resulting in a nonuniform background. We therefore still need to eliminate the background through image processing.
The surveillance image is described in the following way:
$$F(i,j) = T(i,j) + S(i,j) + B(i,j) + N(i,j) \tag{4}$$
where $(i,j)$ denotes the pixel coordinates, $F$ is an original surveillance image, $T$ refers to the space targets, $S$ is the stars (light from bright stars is the cause of the second type of stray light background), $B$ represents the background, and $N$ refers to the noise. The fundamental principle of nonuniform background elimination is to estimate $B$ accurately and robustly while preserving $S$ and $T$. Based on the above analysis, an original surveillance image $F$ has the following two features: (1) most of the pixels in $F$ are occupied by background-region pixels, whose gray values differ from those of target-region pixels, and (2) the nonuniform background caused by stray light exhibits a gradual rather than an abrupt change. Based on the first feature of the original surveillance image, we define two structural operators to take advantage of this difference. Based on the second feature, a morphology-based method can be used to eliminate the nonuniform background, since it changes slowly.
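The additive image model above can be illustrated with a toy frame (hypothetical pixel values; pure Python lists stand in for a real detector frame):

```python
# Toy illustration of the additive model F = T + S + B + N (pixel-wise sum).
def compose_frame(T, S, B, N):
    """Pixel-wise sum of target, star, background and noise components."""
    rows, cols = len(T), len(T[0])
    return [[T[i][j] + S[i][j] + B[i][j] + N[i][j]
             for j in range(cols)] for i in range(rows)]

# 2x2 toy frame: one target pixel, one star pixel, a ramp background, no noise.
T = [[5, 0], [0, 0]]   # space target
S = [[0, 8], [0, 0]]   # star
B = [[1, 2], [3, 4]]   # slowly varying stray light background
N = [[0, 0], [0, 0]]   # noise (zero here for clarity)
F = compose_frame(T, S, B, N)
```

Background elimination amounts to estimating `B` from `F` alone and subtracting it, while leaving the `T` and `S` contributions untouched.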

3. Nonuniform Background Elimination

In this section, we will introduce the IBSGM method in detail for eliminating nonuniform background while retaining stars and space targets.

3.1. Definition of Structural Operators

Based on the features of surveillance images described in Section 2, we can eliminate stray light nonuniform backgrounds by using the difference in gray values between the target and background pixels. We define two structural operators, $\Delta B$ and $B_b$, to better utilize this difference, as shown in Figure 4a,b.
In Figure 4, $B_o$ and $B_i$ are defined as the outer and inner structural operators of $\Delta B$, respectively; $\Delta B = B_o - B_i$ refers to the ring-like area between $B_o$ and $B_i$. $B_b$ represents a uniform square area whose size is bigger than that of $B_o$. $K$, $L$ and $Q$ refer to the sizes of $B_i$, $B_o$ and $B_b$, respectively, with $K < L < Q$.
For the structural operator $\Delta B$, the coordinates of its center point $O$ are $(x_0, y_0)$, with $x \in [-L/2,\, L/2]$ and $y \in [-L/2,\, L/2]$:
$$\Delta B = f(x,y) = \begin{cases} 1, & (x-x_0)^2 + (y-y_0)^2 > (K/2)^2 \\ 0, & (x-x_0)^2 + (y-y_0)^2 \le (K/2)^2 \end{cases} \tag{5}$$
For the structural operator $B_b$, $x \in [-Q/2,\, Q/2]$ and $y \in [-Q/2,\, Q/2]$:
$$B_b = g(x,y) = 1 \tag{6}$$
With current algorithms, because every pixel covered by the filter is used in the calculation, it is difficult to distinguish a complex background region from a target region with high accuracy. The structural operators defined here help resolve this problem.
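The two operators can be sketched as binary domain masks, assuming $K$, $L$ and $Q$ denote side lengths (so the excluded central disc of $\Delta B$ has radius $K/2$) and using 1 to mark positions inside the operator's defined domain:

```python
def ring_operator(L, K):
    """Delta_B: an L x L window whose central disc of radius K/2 (the inner
    operator B_i) is excluded from the domain, per Equation (5)."""
    c = (L - 1) / 2.0  # window center
    return [[1 if (x - c) ** 2 + (y - c) ** 2 > (K / 2.0) ** 2 else 0
             for x in range(L)] for y in range(L)]

def flat_operator(Q):
    """B_b: a uniform Q x Q window; every position is in the domain."""
    return [[1] * Q for _ in range(Q)]
```

For example, `ring_operator(5, 2)` keeps the outer ring of a 5 × 5 window while masking out the small central disc, which is what lets the later dilation ignore target pixels.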

3.2. Self-Adaptive Size Adjustment

Surveillance images contain targets of different sizes, so a structural operator with a fixed size will impair the background elimination effect and may even lose the information of faint targets. Before performing morphological processing, we therefore block the original surveillance image and find the optimal size of the structural operator suitable for each image block by the minimum bounding rectangle method. To solve the problem of surveillance images being sensitive to the fixed-size structural operators used in other methods, the self-adaptive size adjustment of the structural operator is conducted as follows:
An original surveillance image $F_0$ with $1024 \times 1024$ imaging pixels is divided into several image blocks $f_c$ of $32 \times 32$ pixels, where $c$ is the index of the image block. Each block is binarized with the threshold obtained by the following equations:
$$Th_c = \mu_c + \alpha \sigma_c, \quad 1 \le c \le k \tag{7}$$
$$\mu_c = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} f(i,j) \tag{8}$$
$$\sigma_c = \sqrt{\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \bigl(f(i,j) - \mu_c\bigr)^2} \tag{9}$$
where $k$ represents the number of image blocks; $\mu_c$ and $\sigma_c$ are the mean and standard deviation of each block, respectively; $M$ and $N$ refer to the numbers of rows and columns of each image block, respectively; and $\alpha$ is a coefficient, where $\alpha = 2$ is selected according to the features of the surveillance images.
The binarized image $b_c$ can be obtained by:
$$b_c(i,j) = \begin{cases} 0, & f(i,j) < Th_c \\ 1, & f(i,j) \ge Th_c \end{cases} \tag{10}$$
In order to determine the sizes ($K$ and $L$) of the optimal structural operator $\Delta B$ for each image block $f_c$, we find the set of minimum bounding rectangle side lengths of the connected domains in each binary image block $b_c$. As shown in Figure 5, the largest side length in each set is used as the $K$ value of the structural operator $\Delta B$, while the value of the parameter $L$ is obtained from the value of $K$. Since the sizes of the largest targets differ between image blocks, we use self-adaptive adjustable structural operators to perform the morphological operations.
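The per-block thresholding and bounding-side search can be sketched as follows. This is a simplification of the paper's procedure: it takes one combined axis-aligned bounding box over all above-threshold pixels rather than per-connected-domain minimum bounding rectangles, which gives the same answer when the block holds a single target:

```python
import math

def block_threshold(block, alpha=2.0):
    """Th_c = mu_c + alpha * sigma_c for one image block (Equations (7)-(9))."""
    pix = [v for row in block for v in row]
    mu = sum(pix) / len(pix)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in pix) / len(pix))
    return mu + alpha * sigma

def largest_bounding_side(block, th):
    """Binarize the block (Equation (10)) and return the longest side of the
    bounding box over all above-threshold pixels; 0 if the block is empty."""
    coords = [(i, j) for i, row in enumerate(block)
              for j, v in enumerate(row) if v >= th]
    if not coords:
        return 0
    rows = [i for i, _ in coords]
    cols = [j for _, j in coords]
    return max(max(rows) - min(rows) + 1, max(cols) - min(cols) + 1)
```

The returned side length plays the role of $K$ for that block, from which $L$ follows (Section 3.3 assigns $L = K + 2$).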

3.3. Gray-Scale Morphological Operation

First, we use the self-adaptive adjustable structural operator $\Delta B$ to execute a dilation operation $\oplus$ on each image block $f_c$, and then stitch the operated image blocks back to the size of the original image to obtain image $F_1$, as shown in Equation (11).
$$F_1 = \bigcup_{c=1}^{k} f_c(i,j) \oplus \Delta B = \bigcup_{c=1}^{k} \max\bigl\{ f_c(i-m,\, j-n) + \Delta B(m,n) \;\big|\; (i-m,\, j-n) \in D_{f_c},\ (m,n) \in D_{\Delta B} \bigr\} \tag{11}$$
where $D_{f_c}$ and $D_{\Delta B}$ refer to the domains of $f_c$ and $\Delta B$, respectively. By the definition of the dilation operation, if a target lies exactly within the internal structural operator $B_i$, then since $B_i$ is not in the defined domain of $\Delta B$, the pixel values of this part do not participate in the dilation procedure. As a result, the dilation process only uses pixels from the background surrounding the targets. In this way, target pixels are replaced by background pixels, protecting the targets from being eliminated. We choose the largest side length in each image block as the $K$ value of the structural operator $\Delta B$ to ensure that targets fall within the internal structural operator $B_i$. As most stars and space targets are circular, the internal structural operator $B_i$, which is also circular, can perform the replacement most accurately. This is equivalent to a “keying” operation and fundamentally improves the accuracy of nonuniform background elimination.
Regarding the two parameters of $\Delta B$: $K$ is ascertained in Section 3.2, while $L$ controls the number of background pixels involved in the calculation. If $L$ is too large, neighboring target pixels may be mistaken for surrounding background pixels during the dilation operation; such incorrect replacement causes mutual interference between targets. Therefore, we assign $L$ a value of $K + 2$, which not only avoids interference, but also ensures that enough background pixels are included in the calculation, thereby ensuring accuracy. The result of this process is shown in Figure 6b.
Then, as described by Equation (12), we employ the structural operator $B_b$ to execute an erosion operation $\Theta$ on image $F_1$:
$$F_2 = F_1\, \Theta\, B_b = \min\bigl\{ F_1(i+m,\, j+n) - B_b(m,n) \;\big|\; (i+m,\, j+n) \in D_{F_1},\ (m,n) \in D_{B_b} \bigr\} \tag{12}$$
where $D_{F_1}$ and $D_{B_b}$ refer to the domains of $F_1$ and $B_b$, respectively. The optimal structural operator $\Delta B$ for each image block was presented in Section 3.2. If a target is too large (bigger than $32 \times 32$ pixels) or does not appear completely within an image block, some target pixels will be mistaken for background pixels and will remain in image $F_1$. Hence, we need to retrieve these lost targets by performing an erosion operation with the structural operator $B_b$ on image $F_1$. Considering previously acquired surveillance image information, the majority of targets will not exceed $50 \times 50$ pixels. Furthermore, in terms of the surveillance image features described in Section 2, most of the pixels in $F$ are occupied by background-region pixels. As a result, we set the value of the parameter $Q$ to 50, which not only ensures that large targets are retrieved, but also ensures that stray light backgrounds are not mistaken for targets.
Moreover, the dilation operation in Equation (11) raises the overall gray level of the image and dilates the extent of the nonuniform background region. The erosion operation not only adjusts the overall brightness of image $F_1$, but also further reduces the gray values in the target regions that were replaced by the surrounding background during dilation, which ensures that the brightness of the targets is not overly diminished. The result is shown in Figure 6c.
Since the size of $B_b$ used in the erosion process is larger than that of $\Delta B$ used in the dilation process ($Q > L$), the erosion shrinks the extent of the nonuniform background region that needs to be eliminated. Consequently, we reuse $B_b$ to execute a dilation operation on image $F_2$, as shown in Equation (13), so that the extent of the nonuniform background region to be eliminated is left unchanged.
$$F_3 = F_2 \oplus B_b = \max\bigl\{ F_2(i-m,\, j-n) + B_b(m,n) \;\big|\; (i-m,\, j-n) \in D_{F_2},\ (m,n) \in D_{B_b} \bigr\} \tag{13}$$
The above covers the cases where targets are fully or partially within the definition domain of the inner structural operator $B_i$. However, if there are no target pixels in the definition domain of $B_i$, the relationship of this substitution is uncertain. Therefore, we take the pixel-wise minimum of $F_3$ and the original image $F_0$, as shown in Equation (14):
$$F_4 = \min(F_0, F_3) \tag{14}$$
The result is shown in Figure 6d, where $F_4$ is the final nonuniform background to be eliminated.
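The four morphological steps (dilation with the ring operator, erosion and re-dilation with the flat operator, then the pixel-wise minimum) can be sketched end-to-end in pure Python on a toy frame. The mask sizes below are illustrative, not the self-adaptively chosen K, L and Q, and the structuring elements are flat (height 0), so the masks act only as domain indicators:

```python
def gray_dilate(img, mask):
    """Gray-scale dilation with a flat structuring element;
    mask[m][n] == 1 marks positions inside the operator's domain."""
    R, C = len(img), len(img[0])
    r, c = len(mask), len(mask[0])
    ro, co = r // 2, c // 2
    return [[max(img[i + ro - m][j + co - n]
                 for m in range(r) for n in range(c)
                 if mask[m][n]
                 and 0 <= i + ro - m < R and 0 <= j + co - n < C)
             for j in range(C)] for i in range(R)]

def gray_erode(img, mask):
    """Gray-scale erosion with a flat structuring element."""
    R, C = len(img), len(img[0])
    r, c = len(mask), len(mask[0])
    ro, co = r // 2, c // 2
    return [[min(img[i + m - ro][j + n - co]
                 for m in range(r) for n in range(c)
                 if mask[m][n]
                 and 0 <= i + m - ro < R and 0 <= j + n - co < C)
             for j in range(C)] for i in range(R)]

def ibsgm_background(F0, ring, flat):
    """Background estimate in the spirit of Equations (11)-(14): dilate with
    the ring operator (target pixels inside B_i are excluded from the max),
    erode and re-dilate with the flat operator, then take the pixel-wise
    minimum with the original image."""
    F1 = gray_dilate(F0, ring)   # targets replaced by surrounding background
    F2 = gray_erode(F1, flat)    # brightness restored, large targets retrieved
    F3 = gray_dilate(F2, flat)   # background extent restored
    return [[min(a, b) for a, b in zip(r0, r3)]
            for r0, r3 in zip(F0, F3)]  # F4, the background to eliminate

# Toy frame: flat background of 10 with one bright target pixel.
F0 = [[10] * 5 for _ in range(5)]
F0[2][2] = 100
ring = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]  # 3x3 ring, center (B_i) excluded
flat = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # 3x3 flat window (stand-in for B_b)
bg = ibsgm_background(F0, ring, flat)
cleaned = [[f - b for f, b in zip(rf, rb)] for rf, rb in zip(F0, bg)]
```

On this toy frame the estimated background is flat (the ring's excluded center keeps the target pixel out of the max), so subtracting it removes the background while the target survives at full contrast.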
Based on the IBSGM method, we can eliminate nonuniform backgrounds accurately while retaining stars and space targets.

4. Experiments and Discussion

To verify the advantages of the IBSGM method, we compared it with three other methods: top-hat transformation (THT), mean iterative filtering (MIF), and improved new top-hat transformation (INTHT). All methods were applied to the same simulated and real images.

4.1. Simulation Experimental Principles and Results

A flowchart of the simulation image experiment is shown in Figure 7.
First, an image containing only stars and space targets with a uniform background was simulated and generated according to the Tycho-2 Catalogue and the relevant parameters of an optical system with an 80° × 35° field of view. The Tycho-2 Catalogue contains a large number of stars, with a limiting magnitude of V ≈ 11 and a mean sky density of 60 objects per square degree. The result was used as a surveillance image without a nonuniform background, as shown in Figure 8a. In the simulation process, the parameters of the GSENSE6060 detector produced by Gpixel Inc. were referenced, with a 10 μm pixel size and a 3 s exposure time. The typical point spread function (PSF) was fitted with a Gaussian function. Then, we simulated the two types of stray light backgrounds described in Section 2 by ray tracing, as shown in Figure 2, and synthesized them with the simulated uniform surveillance image. The two types of simulated surveillance images with nonuniform backgrounds are shown in Figure 8b,c. Finally, the different algorithms were used to eliminate the two types of stray light nonuniform backgrounds in the simulated surveillance images. The experimental results are shown in Figure 9 and Figure 10.
For evaluation against the reference image, we use the root mean square error (RMSE) to assess the effect of nonuniform background elimination. Here, $O$ and $I$ are the simulated surveillance image with a uniform background and the processed surveillance image, respectively:
$$RMSE = \sqrt{\frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} \bigl(O(i,j) - I(i,j)\bigr)^2} \tag{15}$$
where $m$ and $n$ refer to the numbers of rows and columns of $O$ and $I$, respectively.
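A direct implementation of the RMSE metric, for reference:

```python
import math

def rmse(O, I):
    """Root mean square error between reference image O and processed
    image I, per Equation (15); images are lists of equal-length rows."""
    m, n = len(O), len(O[0])
    total = sum((O[i][j] - I[i][j]) ** 2
                for i in range(m) for j in range(n))
    return math.sqrt(total / (m * n))
```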
The smaller the RMSE value, the closer the corrected image is to the original image with a uniform background, and the better the ability of the algorithm to eliminate the stray light nonuniform background. The RMSE results are shown in Table 1.
Combining the results of Figure 9, Figure 10 and Table 1, the proposed IBSGM method has higher accuracy in eliminating both types of stray light nonuniform backgrounds. Therefore, the image processed by our method is closer to the original simulated image with a uniform background. Since more complex situations cannot be studied by simulation, we used real images to further verify the performance of IBSGM and analyze its advantages in comparison with other methods in the next subsection.

4.2. Real Image Experimental Results and Discussion

Real surveillance images were captured by a CMOS (complementary metal-oxide-semiconductor) sensor with a 3 s exposure time. The detector has 10 K × 10 K pixels, a 10° × 10° field of view and a 12-bit gray level. To present the experimental results more intuitively, we cropped the images to $1024 \times 1024$ pixels. The results of nonuniform background elimination are shown in Figure 11, where (a–c) are the original surveillance images and (d–o) refer to those obtained after eliminating the nonuniform backgrounds in various ways.
Unlike the simulation experiment, in the real image experiment, we cannot obtain images without nonuniform backgrounds that only contain stars and space targets. As a result, we cannot evaluate the effect of nonuniform background elimination in terms of RMSE.

4.2.1. Accuracy of Nonuniform Background Elimination

To quantitatively compare the accuracy of the above four methods, we adopt residual analysis. In the residual image, since the majority of pixels are background pixels, a greater mean indicates a higher overall gray value of the image, and thus a greater residual background of the first type of stray light. A greater standard deviation indicates a higher degree of gray-value fluctuation, and thus that more of the second type of stray light background remains. In short, the smaller the mean and standard deviation, the higher the accuracy of the correction algorithm.
In the residual image, we applied an exclusion domain to remove the interference of targets before calculating the mean and standard deviation. Specifically, a threshold $Th$ is obtained by adaptive threshold segmentation, and we then establish the exclusion domain $e_B$ as:
$$e_B(i,j) = \begin{cases} 0, & f_B(i,j) \ge Th \\ 1, & f_B(i,j) < Th \end{cases} \tag{16}$$
where $f_B$ refers to the processed surveillance image. Finally, we obtain the residual image $R$, as shown in Equation (17):
$$R(i,j) = f_B(i,j) \cdot e_B(i,j) \tag{17}$$
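The exclusion-domain residual and its statistics can be sketched as follows (the threshold in the usage line is a hypothetical value; the paper obtains $Th$ by adaptive threshold segmentation):

```python
def residual_image(fB, th):
    """Mask out above-threshold (target) pixels per Equations (16)-(17),
    returning the residual image R = fB * eB plus its mean and standard
    deviation, which serve as the accuracy metrics of Table 2."""
    R = [[0 if v >= th else v for v in row] for row in fB]
    pix = [v for row in R for v in row]
    mu = sum(pix) / len(pix)
    sigma = (sum((v - mu) ** 2 for v in pix) / len(pix)) ** 0.5
    return R, mu, sigma

# Toy processed image with one bright target pixel and a hypothetical threshold.
R, mu, sigma = residual_image([[1, 2], [3, 100]], 50)
```

The bright pixel is zeroed by the exclusion domain, so the mean and standard deviation reflect only the remaining background residue.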
The results of the residual image means and standard deviations obtained by different algorithms are shown in Table 2.
In the THT method, all pixels (target and background regions) covered by the fixed structural operator participate in the calculation used to eliminate the background, which leaves a great deal of nonuniform background noise in the residual image; therefore, the accuracy of THT is relatively low. In the MIF method (five iterations), as in the THT method, the size of the filter is fixed and all covered pixels (both target and background) participate in the calculation, making it difficult to further improve the accuracy in complex environments even by increasing the number of iterations. The INTHT method, in contrast to the above two methods, uses structural operators with different domains to reduce the target regions involved in the calculation. However, it uses a fixed-size structural operator to distinguish target and background pixels, which does not work well for smaller targets and does not account for sensitivity to the structural operator's size. The IBSGM method defines two structural operators whose sizes continually self-adapt to the size of the targets, and the optimal structural operators are used to perform the morphological operations in each image block; thereby, the accuracy of nonuniform background elimination is greatly improved. We can see that IBSGM has substantially greater accuracy in eliminating stray light nonuniform backgrounds compared with the other methods.

4.2.2. Accuracy of Target Retention

The purpose of nonuniform background elimination is to better support subsequent target recognition; therefore, the algorithm should preserve the targets in surveillance images as much as possible. To present the result more intuitively, the nonuniform backgrounds requiring elimination are shown in Figure 12, where (a–c) are the original surveillance images and (d–o) refer to the nonuniform backgrounds that are estimated and eliminated by the different algorithms.
To gain a better idea of how accurately the different algorithms retain targets, we counted the number of targets retained after background elimination using a connected domain method. The target retention results are shown in Table 3.
In the THT method, as explained in Section 4.2.1, the accuracy of nonuniform background elimination is relatively low. This means that some nonuniform background noise remains in the processed image, significantly reducing target detection accuracy. In the MIF method (five iterations), the target gray values are decreased to some extent after several iterations, resulting in the loss of low-SNR targets; moreover, certain brighter targets are mistaken for nonuniform background noise to be deleted. The INTHT method is limited by its fixed-size structural operator, leading to some targets being mistaken for background and lost; hence, its target retention accuracy still needs improvement. Meanwhile, in the IBSGM method, two structural operators are constructed whose sizes continually self-adapt to the target size, combined with morphological operations. This eliminates nonuniform backgrounds with high precision and robustness while retaining more targets. In some instances, the rate of target retention cannot reach 100% due to interference from too many targets in the surveillance image.

4.2.3. Computation Time

Table 4 compares the computation times of the different methods on a 1024 × 1024 test image. Image processing was performed in MATLAB R2020b on a PC with an Intel Core i5-9400F CPU (2.90 GHz) and 16 GB of main memory.
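For reproducing this kind of timing comparison outside MATLAB, a minimal wall-clock harness (our own convention, not the paper's benchmarking script) might look like:

```python
import time

def average_runtime(fn, image, repeats=5):
    # Average wall-clock time over several runs; an initial warm-up
    # call keeps one-time costs (caching, allocation) out of the mean.
    fn(image)
    start = time.perf_counter()
    for _ in range(repeats):
        fn(image)
    return (time.perf_counter() - start) / repeats
```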
Although the computation time of the proposed IBSGM method was higher owing to the self-adaptation process, its accuracy in nonuniform background elimination and target retention was greatly improved, which is exactly what target recognition requires.

5. Conclusions

Stray light nonuniform background elimination is not only a requirement of space-based surveillance, but also an essential prerequisite for subsequent target detection and tracking. To overcome the inability of current methods to accurately eliminate stray light nonuniform backgrounds, we proposed a robust and accurate elimination method based on image block self-adaptive gray-scale morphology (IBSGM).
In this study, we first analyzed the formation and elimination principles of stray light nonuniform backgrounds. Then, we defined two structural operators with different sizes and domains, which make full use of the difference information between the target region and the surrounding background region. Finally, we divided the original surveillance image into blocks to obtain the optimal structural operator sizes for performing the morphological operations on each block. The experimental results on simulated and real image datasets show that, compared with other methods, IBSGM eliminates nonuniform backgrounds with higher precision and nearly no target losses.
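The per-block sizing step summarized above can be sketched as follows; the k-sigma segmentation threshold and the margin added to the largest bounding-rectangle side are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np
from scipy.ndimage import label, find_objects

def block_operator_size(block, k_sigma=3.0, margin=2):
    # Segment candidate targets in one image block with a k-sigma
    # threshold, take the minimum bounding rectangle of each connected
    # component, and use the largest rectangle side (plus a margin) as
    # the structural operator size for this block.
    thresh = block.mean() + k_sigma * block.std()
    labels, num = label(block > thresh)
    if num == 0:
        return margin + 1          # no targets: fall back to a small operator
    sides = []
    for sl in find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        sides.append(max(h, w))
    return max(sides) + margin
```

Sizing the operator per block, rather than once per image, is what lets the morphological operations adapt to locally varying target sizes.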

Author Contributions

Conceptualization, J.W. and X.W.; methodology, J.W.; software, J.W.; validation, J.W. and Y.L.; formal analysis, J.W.; data curation, J.W.; writing—original draft preparation, J.W.; writing—review and editing, J.W. and Y.L.; supervision, X.W. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Strategic Priority Research Program of Chinese Academy of Sciences, grant number XDA17010205.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study did not report any data.

Acknowledgments

The authors are grateful for the anonymous reviewers’ critical comments and constructive suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Flowchart of the proposed IBSGM method.
Figure 2. Two types of simulated stray light background gray-scale images. (a) the first type; (b) the second type.
Figure 3. Radiation transfer schematic.
Figure 4. Defined structural operators used in the IBSGM method. (a) structural operator ΔB; (b) structural operator Bb.
Figure 5. Process of finding the maximum target size in an image block.
Figure 6. Morphological operation results. (a) Original real surveillance image; (b) dilation operation result after Equation (11); (c) erosion operation result after Equation (12); and (d) minimum value result after Equation (14).
Figure 7. Flowchart of the simulation image experiment.
Figure 8. Simulated surveillance images with (a) a uniform background; (b) the first type of stray light background; and (c) the second type of stray light background.
Figure 9. Elimination result for the first type of stray light background. (a–d) Simulated original surveillance images; (e–h) nonuniform backgrounds estimated by the THT, MIF, INTHT, and IBSGM methods, respectively; and (i–l) background elimination results of the THT, MIF, INTHT, and IBSGM methods, respectively.
Figure 10. Elimination result for the second type of stray light background. (a–d) Simulated original surveillance images; (e–h) nonuniform backgrounds estimated by the THT, MIF, INTHT, and IBSGM methods, respectively; and (i–l) background elimination results of the THT, MIF, INTHT, and IBSGM methods, respectively.
Figure 11. Nonuniform background elimination results. (a–c) Original real surveillance images; background elimination results of (d–f) THT; (g–i) MIF; (j–l) INTHT; and (m–o) IBSGM.
Figure 12. Nonuniform background estimation by different algorithms. (a–c) Real original surveillance images. Nonuniform backgrounds estimated by (d–f) THT; (g–i) MIF; (j–l) INTHT; and (m–o) IBSGM.
Table 1. RMSEs of the surveillance images by different algorithms.

| Stray Light Background | THT | MIF | INTHT | IBSGM |
|---|---|---|---|---|
| First type | 8.4982 | 12.4052 | 5.0044 | 4.8320 |
| Second type | 4.7121 | 7.2477 | 3.1371 | 2.6711 |
Table 2. Means and standard deviations of the residual image obtained by different methods.

| Background Residual | Figure 11a Mean | Figure 11a Standard Deviation | Figure 11b Mean | Figure 11b Standard Deviation | Figure 11c Mean | Figure 11c Standard Deviation |
|---|---|---|---|---|---|---|
| THT | 21.2518 | 9.3417 | 21.7236 | 9.1999 | 21.8202 | 9.3119 |
| MIF | 3.1559 | 3.0220 | 2.1904 | 2.4297 | 2.4088 | 2.5845 |
| INTHT | 0.4408 | 0.4983 | 0.4270 | 0.4077 | 0.5071 | 0.5887 |
| IBSGM | 0.0231 | 0.1502 | 0.0249 | 0.1559 | 0.0245 | 0.1736 |
Table 3. Comparison of target retention rates using different methods.

| Method | Figure 12a | Figure 12b | Figure 12c |
|---|---|---|---|
| THT | 83% | 85% | 81% |
| MIF | 86% | 88% | 85% |
| INTHT | 93% | 96% | 92% |
| IBSGM | 98% | 99% | 97% |
Table 4. Comparison of the computation times of different methods.

| Method | Computation Time (s) |
|---|---|
| THT | 0.517 |
| MIF | 3.521 |
| INTHT | 0.436 |
| IBSGM | 6.934 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation: Wang, J.; Wang, X.; Li, Y. Stray Light Nonuniform Background Elimination Method Based on Image Block Self-Adaptive Gray-Scale Morphology for Wide-Field Surveillance. Appl. Sci. 2022, 12, 7299. https://doi.org/10.3390/app12147299