Article

Noise Parameter Estimation Two-Stage Network for Single Infrared Dim Small Target Image Destriping

The College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(19), 5056; https://doi.org/10.3390/rs14195056
Submission received: 20 August 2022 / Revised: 19 September 2022 / Accepted: 5 October 2022 / Published: 10 October 2022

Abstract

The existing nonuniformity correction methods generally suffer from defects such as image blur, artifacts, over-smoothing, and nonuniformity residuals, and they struggle to meet the requirements of image enhancement in various complex application scenarios. In particular, when these methods are applied to dim small target images, they may remove dim small targets as noise points due to over-smoothing. Drawing on the idea of residual networks, this paper proposes a two-stage learning network based on the imaging mechanism of an infrared line-scan system. We adopt a multi-scale feature extraction unit and design a gain correction sub-network and an offset correction sub-network. We first pre-train the two sub-networks independently and then cascade them into a two-stage network and train it. The experimental results show that the PSNR gain of our method can exceed 15 dB and that it achieves excellent performance under different backgrounds and different intensities of nonuniform noise. Moreover, our method effectively removes nonuniform noise without losing texture details or dim small targets.

1. Introduction

With the development of infrared remote sensing detection systems, infrared line-scan detectors have been widely used in military and civilian fields such as battlefield surveying, maritime surveillance, and urban traffic monitoring. However, limited by factors such as the manufacturing process, the response of the detection elements of a line-scan sensor is nonuniform, which produces a stripe effect in the raw infrared image. Nonuniformity correction must therefore be performed on the image before target detection [1]. In addition, an infrared remote sensing detection system not only observes over long distances but is also often disturbed by complex background clutter and noise. As a result, targets often appear as dim small targets with a low signal-to-noise ratio and a lack of effective structural information. According to the definition of the Society of Photo-Optical Instrumentation Engineers (SPIE), a target with a local signal-to-noise ratio < 5 dB and a pixel size ≤ 9 × 9 is regarded as a weak target [2]. These characteristics make dim small targets very easy to mistake for noise points, which requires us to pay special attention to the problem of image over-smoothing when removing stripe noise.
For the fixed pattern noise (FPN) generated by infrared imaging systems, researchers have proposed two kinds of nonuniformity correction methods: calibration-based and scene-based [3].
The calibration-based correction method utilizes the two-point or multi-point response of the sensor's detection elements to a black-body radiation source and calculates the correction parameters through mathematical fitting. The advantage of this kind of method is its simplicity, but its correction performance is affected by factors such as changes in ambient temperature and integration time. Therefore, calibration-based correction needs to be performed periodically, and the detection system must suspend its mission while observing the black-body radiation source for calibration [4,5].
The scene-based correction methods, which correct the nonuniformity of the image according to the correlation of the scene, can be divided into multi-frame and single-frame methods. Since these methods do not need to observe a calibration source, they have the advantage of not affecting the normal operation of the infrared system.
The classic multi-frame methods mainly include Neural Network [6], Constant Statistics [7], Kalman Filtering [8], and Inter-frame Registration [9]. However, the multi-frame methods produce artifacts when the scene motion changes suddenly, and, due to their algorithmic complexity, their real-time performance in practical applications is often unsatisfactory. Although some recent multi-frame methods, such as the Adaptive Deghosting Method in Neural Network [10], Temporal–Spatial Nonlinear Filtering [11], Parameter Estimation [12], and Feature Pattern Matching [13], have mitigated these shortcomings, the single-frame method remains the main research direction at present.
Single-frame methods mainly include Midway Histogram Equalization (MHE) [14], 1-D Guided Filtering (1D-GF) [15], Sparse Unidirectional Hybrid Total Variation [16], Enhanced Low-rank Prior and Total Variation Regulation [17], Wavelet Decomposition and Total Variation-Guided Filtering [18], Least Squares and Gradient Domain-Guided Filtering [19], and so on. In addition, some methods focus on over-smoothing. For example, Mingxuan Li et al. proposed an adaptive edge-preserving operator (AEPO) based on edge contrast to prevent the loss of edge details [20]. In another paper, the Multi-Scale Wavelet Transform adopted by Mingxuan Li et al. can also protect non-stripe information in infrared images [21]. The Fuzzy Matrix and Statistical Filtering method proposed by Sichong Huang et al. also has a certain ability to retain detailed information [22].
In recent years, with the success of deep learning methods in various fields such as image classification [23], target detection [24], face recognition [25], and even short-term power load forecasting [26], more and more researchers are also applying deep learning to the field of image nonuniformity correction. Currently, open-source deep learning nonuniformity correction methods include SNRCNN [27], ICSRN [28], DLS-NUC [29], and SNRWDNN [30]. In addition, there are Deep Multi-scale Residual Network [31], Cascade Residual Attention CNN [32], Deep Multi-scale Dense Connection CNN [33], and so on. Significantly, Timing Li et al. proposed that the use of long connections in deep networks can solve the problem of image information loss caused by transposed convolution [34], and Sifan Zhang et al. proposed a regularization method based on features extracted by wavelet to help restore image details [35].
However, although some existing methods have achieved certain results in preserving image details, they have not fundamentally solved the problem of image over-smoothing. In this regard, based on the nonuniform noise model of line-scan detectors, we transform the problem of nonuniformity correction into the problem of estimating noise parameters. We design a two-stage fully convolutional network comprising a gain correction sub-network and an offset correction sub-network, and we produce datasets to pre-train and train the network. In general, our contributions are as follows:
  • A destriping method based on a noise parameter estimation two-stage network is proposed, which can adapt to the input image size and effectively correct the real nonuniformity infrared image;
  • According to the nonuniformity response model of the line-scan detector, a deep learning dataset for strip noise parameter estimation and image reconstruction is produced;
  • A multi-scale feature extraction unit is designed to use image information more effectively, and the proposed network has excellent generalization to different intensities of nonuniform noise and different backgrounds;
  • The noise parameter estimation mechanism in our network can fundamentally solve the problem that texture details and dim small targets may be removed due to image over-smoothing.

2. Methods

In this paper, two deep learning sub-networks for estimating the gain correction coefficient and estimating the offset correction factor are designed, respectively. After pre-training the sub-networks, we cascade the two sub-networks through a multiplication structure and an addition structure as a two-stage network. Finally, we train the two-stage network, and the resulting network model can perform nonuniformity correction on real infrared images.

2.1. Nonuniformity Response Model and Datasets

Our two-stage deep learning network model requires a large amount of data for training; however, because there has been little research on deep learning methods for nonuniformity correction, no ready-made datasets are available. We therefore downloaded the MS-COCO dataset [36], converted its images into grayscale images to serve as clean infrared images, and then added nonuniform noise to these clean images according to the nonuniformity response model of the infrared line-scan detector to produce our datasets.
The nonuniformity response of an infrared line-scan detector can be expressed as follows:
$$ y_{ij} = g_i x_{ij} + o_i, \qquad (1) $$
where $x_{ij}$ and $y_{ij}$ represent the actual response and the observed value of detector unit $i$ at scan row $j$, respectively, and $g_i$ and $o_i$ represent the gain and offset of detector unit $i$, respectively.
Image nonuniformity correction essentially removes the nonuniform noise generated by each detector unit, as modeled in Equation (1), from the observed signal $y_{ij}$, so as to obtain the estimate $\hat{x}_{ij}$ of the actual response $x_{ij}$. The process can be expressed as follows:
$$ \hat{x}_{ij} = G_i y_{ij} + O_i, \qquad (2) $$
where $G_i$ and $O_i$ are the gain correction coefficient and the offset correction factor, respectively:

$$ G_i = 1 / g_i, \qquad (3) $$

$$ O_i = -o_i / g_i, \qquad (4) $$

where the sign in Equation (4) follows from rearranging Equation (1), since $\hat{x}_{ij} = (y_{ij} - o_i)/g_i$.
Therefore, the problem of image nonuniformity correction reduces to the problem of estimating the gain correction coefficient $G_i$ and the offset correction factor $O_i$.
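To make Equations (1)–(4) concrete, the following NumPy sketch (an illustration added here, not code from the paper) applies column-wise stripe noise to a clean image and verifies that the correction of Equation (2) recovers the image exactly when the true noise parameters are known:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 255, size=(128, 128))         # clean image; column index i, row index j

g = rng.uniform(0.9, 1.1, size=128)              # per-column gain g_i
o = rng.normal(0.0, 5.0, size=128)               # per-column offset o_i
y = g[np.newaxis, :] * x + o[np.newaxis, :]      # Equation (1): striped observation

G = 1.0 / g                                      # Equation (3): gain correction coefficient
O = -o / g                                       # Equation (4): offset correction factor
x_hat = G[np.newaxis, :] * y + O[np.newaxis, :]  # Equation (2): corrected image

assert np.allclose(x_hat, x)                     # exact recovery with the true parameters
```

The network's task is precisely to estimate $G_i$ and $O_i$ from the observed image alone.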
Based on the above nonuniformity response model, we produced our datasets. First, we cropped a 128 × 128 patch from an image in the MS-COCO dataset. Second, we converted the cropped image to a grayscale image and backed it up as "Ori". Third, we added noise to the grayscale image according to Equation (1) and saved the noisy image as "Nuf". At the same time, we substituted the noise factors $g_i$ and $o_i$ into Equations (3) and (4), respectively, to obtain the correction factors $G_i$ and $O_i$. Finally, we obtained the mat data {"Nuf", "Ori", "G", "O"}, where "Nuf" is the 128 × 128 grayscale image with nonuniform noise added, "Ori" is the corresponding un-noised 128 × 128 original grayscale image, "G" contains the corresponding 1 × 128 nonuniformity gain correction coefficients, and "O" contains the corresponding 1 × 128 nonuniformity offset correction factors. The specific operation of producing the datasets is shown in Figure 1.
It should be noted that, in step 3, the multiplicative noise $g_i$ within one image obeys a uniform distribution on $(1 - \sigma_g, 1 + \sigma_g)$, the additive noise $o_i$ within one image follows a Gaussian distribution with a mean of 0 and a standard deviation of $\sigma_o$, and each image corresponds to different random standard deviations, with $\sigma_g \in (0, 0.2)$ and $\sigma_o \in (0, 32)$.
As above, we generated 500,000 nonuniformity images, of which 200,000 images were used to pre-train our two sub-networks, and the other 300,000 images were used to train the two-stage network with the two sub-networks cascaded.
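A compact sketch of this dataset pipeline is given below. It is our reconstruction of the steps in Figure 1; the file handling and function name are illustrative assumptions, and the sketch assumes the source image is at least 128 × 128 pixels.

```python
import numpy as np
from PIL import Image

def make_sample(img_path, rng, patch=128):
    """Produce one {"Nuf", "Ori", "G", "O"} training sample from a source image."""
    img = Image.open(img_path).convert("L")                # step 2: grayscale
    arr = np.asarray(img, dtype=np.float64)
    top = rng.integers(0, arr.shape[0] - patch + 1)        # step 1: random 128 x 128 crop
    left = rng.integers(0, arr.shape[1] - patch + 1)
    ori = arr[top:top + patch, left:left + patch]          # clean backup "Ori"

    sigma_g = rng.uniform(0.0, 0.2)                        # per-image noise levels
    sigma_o = rng.uniform(0.0, 32.0)
    g = rng.uniform(1 - sigma_g, 1 + sigma_g, size=patch)  # multiplicative noise g_i
    o = rng.normal(0.0, sigma_o, size=patch)               # additive noise o_i

    nuf = g[np.newaxis, :] * ori + o[np.newaxis, :]        # step 3, Equation (1): "Nuf"
    return {"Nuf": nuf, "Ori": ori,
            "G": 1.0 / g,                                  # Equation (3), size 1 x 128
            "O": -o / g}                                   # Equation (4), size 1 x 128

rng = np.random.default_rng(42)
sample = make_sample("coco_image.jpg", rng)                # hypothetical file name
```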

2.2. Network Design

Our two-stage network consists of two parts: a gain correction sub-network and an offset correction sub-network. The network adopts the design principle of a fully convolutional network (FCN), so it can adapt to the size of the input images [37]. In our network, each convolutional layer is a multi-scale feature extraction unit, as shown in Figure 2.
As can be seen from Figure 2, we cascade 32 asymmetric convolution kernels of 1 × 5 with 32 asymmetric convolution kernels of 5 × 1 and connect this cascade in parallel with 32 convolution kernels of 1 × 1 and 64 convolution kernels of 3 × 3. The cascade of a 1 × 5 asymmetric kernel and a 5 × 1 asymmetric kernel is equivalent to a 5 × 5 convolution kernel in terms of receptive field [38]. Moreover, each convolution operation is followed by a Leaky ReLU activation function [39] to introduce nonlinearity. We then concatenate the multi-scale features obtained by the convolutions along the channel direction. Finally, we adopt a 1 × 1 convolution kernel to fuse the features across channels and reduce the dimension. By adopting multi-scale feature extraction units, we not only increase the adaptability to targets of different scales but also improve the expressive ability of the network without increasing its depth, thereby avoiding gradient dispersion.
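Read literally, the unit can be sketched in Keras as follows. This is our reconstruction from the description above and Figure 2; the number of fused output channels is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

def msfe_unit(x, fused_channels=64):
    """Multi-scale feature extraction unit: a cascaded (1x5 -> 5x1) branch in
    parallel with 1x1 and 3x3 branches, concatenation, then 1x1 fusion."""
    act = lambda t: layers.LeakyReLU()(t)                    # Leaky ReLU after each conv
    b1 = act(layers.Conv2D(32, (1, 5), padding="same")(x))   # asymmetric 1x5
    b1 = act(layers.Conv2D(32, (5, 1), padding="same")(b1))  # asymmetric 5x1 (5x5-equivalent)
    b2 = act(layers.Conv2D(32, (1, 1), padding="same")(x))   # parallel 1x1 branch
    b3 = act(layers.Conv2D(64, (3, 3), padding="same")(x))   # parallel 3x3 branch
    cat = layers.Concatenate()([b1, b2, b3])                 # concatenate along channels
    # Cross-channel fusion and dimension reduction with a 1x1 kernel
    return act(layers.Conv2D(fused_channels, (1, 1), padding="same")(cat))
```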
Figure 3 shows the gain correction sub-network structure. We use 8 convolutional layers built from multi-scale feature extraction units and then a 1 × 1 convolution kernel to reduce the output to a single channel, so as to obtain an output with the same size as the input image. Finally, we use the column mean of this output as the estimate of the gain correction coefficient.
In this sub-network, we used the nonuniformity infrared image "Nuf" as the input, the saved gain correction coefficient $G_i$ as the ground-truth label, and the mean square error as the loss function for pre-training, where the loss function of the gain correction sub-network can be expressed as:
$$ L_{G,\mathrm{MSE}} = \frac{1}{W} \sum_{i=1}^{W} \left( \hat{G}_i - G_i \right)^2, \qquad (5) $$
where $W$ is the width of the image, $\hat{G}_i$ is the gain correction coefficient estimated by the gain correction sub-network, and $G_i$ is the saved true value of the gain correction coefficient.
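Combining the unit above with the column-mean readout, the gain sub-network might be assembled as below. This is a sketch under our reading of the text; the output head and label shapes are assumptions, and `msfe_unit` is the function from the previous sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers

def gain_subnet():
    inp = layers.Input(shape=(None, None, 1))         # fully convolutional: any input size
    x = inp
    for _ in range(8):                                # 8 multi-scale feature extraction layers
        x = msfe_unit(x)
    x = layers.Conv2D(1, (1, 1), padding="same")(x)   # reduce to a single channel
    # Column mean over the row axis yields one gain correction coefficient per column
    g_hat = layers.Lambda(lambda t: tf.reduce_mean(t, axis=1))(x)  # shape (batch, W, 1)
    return tf.keras.Model(inp, g_hat)

model = gain_subnet()
model.compile(optimizer="adam", loss="mse")           # Equation (5); labels shaped (W, 1)
```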
Figure 4 shows the offset correction sub-network structure. In this sub-network, we use 15 convolutional layers built from multi-scale feature extraction units followed by a 1 × 1 convolution kernel, and we finally use the column mean of the output as the estimate of the offset correction factor.
Since the offset correction sub-network and the gain correction sub-network are pre-trained independently, both sub-networks can use the same nonuniformity infrared image datasets. Unlike the gain correction sub-network, the offset correction sub-network takes as input the image obtained by multiplying the saved nonuniformity infrared image "Nuf" by the gain correction coefficient $G_i$ and uses the saved offset correction factor $O_i$ as the ground-truth label. In this sub-network, we also use the mean square error as the loss function for pre-training, as shown in the formula:
$$ L_{O,\mathrm{MSE}} = \frac{1}{W} \sum_{i=1}^{W} \left( \hat{O}_i - O_i \right)^2, \qquad (6) $$
where $W$ is the width of the image, $\hat{O}_i$ is the offset correction factor estimated by the offset correction sub-network, and $O_i$ is the saved true value of the offset correction factor.
Figure 5 shows the two-stage deep learning network structure for gain and offset correction. In our network, the gain correction coefficient and offset correction factor of the line-scan detector are estimated column by column, and the "small" extent of a dim small target cannot decisively influence a whole-column estimate. This avoids the problem of traditional methods that reconstruct images pixel by pixel based on image correlation, which may over-smooth the image and remove the dim small targets.
It is important to emphasize that the training set of the cascaded two-stage network must differ from the training set of the two sub-networks. When training the cascaded two-stage network, we use the nonuniformity infrared image "Nuf" as input, the saved clean infrared image "Ori" as the ground-truth label, and the mean square error as the loss function, as shown in the formula:
$$ L_{I,\mathrm{MSE}} = \frac{1}{W \cdot H} \sum_{i=1}^{W} \sum_{j=1}^{H} \left( \hat{I}_{ij} - I_{ij} \right)^2, \qquad (7) $$
where $W$ is the width of the image, $H$ is the height of the image, $\hat{I}_{ij}$ is the reconstructed image output by the network model, $I_{ij}$ is the saved clean image, and $i$ and $j$ are the pixel coordinates of the image.
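Putting the pieces together, the multiplication and addition structures that cascade the two pre-trained sub-networks can be sketched as below (our reconstruction; `gain_subnet` and `offset_subnet` are the sub-network models sketched earlier, each mapping an image to per-column estimates of shape (batch, W, 1)):

```python
import tensorflow as tf
from tensorflow.keras import layers

def two_stage_network(gain_subnet, offset_subnet):
    inp = layers.Input(shape=(None, None, 1))              # noisy image "Nuf"
    g_hat = gain_subnet(inp)                               # estimated G_i per column
    # Stage 1 (multiplication structure): scale each column by its gain coefficient
    stage1 = layers.Lambda(
        lambda z: z[0] * z[1][:, tf.newaxis, :, :])([inp, g_hat])
    o_hat = offset_subnet(stage1)                          # estimated O_i per column
    # Stage 2 (addition structure): add the per-column offset correction factor
    out = layers.Lambda(
        lambda z: z[0] + z[1][:, tf.newaxis, :, :])([stage1, o_hat])
    return tf.keras.Model(inp, out)   # trained end to end with the MSE of Equation (7)
```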

3. Results

In experiments, to verify the effectiveness of our method, we selected the most commonly used traditional methods for comparison, including MHE [14] and 1D-GF [15]. At the same time, we also compare our method with already open-source deep learning methods, including SNRCNN [27], ICSRN [28], DLS-NUC [29], and SNRWDNN [30].

3.1. Network Model Training

We used TensorFlow software (San Francisco, CA, USA) to build the network model and used the adaptive momentum optimizer (AdamOptimizer) in TensorFlow to pre-train and train the network model on a single GPU NVIDIA Tesla V100S-PCIe-32GB (Santa Clara, CA, USA).
In the pre-training of the gain correction sub-network and the offset correction sub-network, we randomly initialized the parameters: the batch size was set to 16, the initial learning rate was set to 0.001 and reduced to 0.0001 after 25 epochs, and the total number of training epochs was set to 50. In the training of the cascaded two-stage network, we initialized with the parameters obtained from the pre-training of the two sub-networks: the batch size was set to 16, the initial learning rate was set to 0.0001 and reduced to 0.00001 after 25 epochs, and the total number of training epochs was set to 50.
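The step decay described above might be expressed with a Keras callback as follows (a sketch of our reading of the schedule, not the authors' released code):

```python
import tensorflow as tf

def step_decay(initial_lr):
    # Learning rate drops by a factor of 10 after epoch 25 (50 epochs in total)
    def schedule(epoch, lr):
        return initial_lr if epoch < 25 else initial_lr / 10.0
    return tf.keras.callbacks.LearningRateScheduler(schedule)

# Pre-training of a sub-network (lr 0.001 -> 0.0001):
#   model.compile(optimizer=tf.keras.optimizers.Adam(0.001), loss="mse")
#   model.fit(nuf, labels, batch_size=16, epochs=50, callbacks=[step_decay(0.001)])
# Training of the cascaded two-stage network uses lr 0.0001 -> 0.00001.
```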

3.2. Quality Evaluation Metrics

In the experiments on simulated nonuniformity infrared dim small target images, we mainly compare the correction performance of each method for different intensity nonuniformity, and the influence of these methods on the dim small targets in nonuniformity infrared images. In the simulation experiments, we conducted a comparative analysis through five objective evaluation metrics, including root mean square error (RMSE) [40], peak signal-to-noise ratio (PSNR) [41], structural similarity (SSIM) [42], image roughness (IR) [43], and signal-to-clutter ratio (SCR) [44,45].
RMSE is the root mean square error of the clean reference image and the de-noised image, which is the error of image reconstruction. The smaller the RMSE, the better the performance of image reconstruction, which is defined as:
$$ \mathrm{RMSE} = \sqrt{ \frac{1}{W \cdot H} \sum_{i=1}^{W} \sum_{j=1}^{H} \left( e_{ij} - r_{ij} \right)^2 }, \qquad (8) $$
where $W$ is the width of the image, $H$ is the height of the image, $e_{ij}$ is the image to be evaluated output by the network model, $r_{ij}$ is the clean reference image, and $i$ and $j$ are the pixel coordinates of the image.
PSNR is one of the most commonly used evaluation metrics for image reconstruction quality, which is calculated from RMSE. The larger the PSNR, the better the performance of image reconstruction, which is defined as:
$$ \mathrm{PSNR} = 20 \times \lg \left( 255 / \mathrm{RMSE} \right). \qquad (9) $$
Structural similarity is a metric to evaluate the similarity of the clean reference image and the de-noised image based on the characteristics of the human visual system, and mainly considers the three key features of image luminance, contrast, and structure. The value range of the structural similarity is (0, 1); the larger the value, the more similar the two images are, and the better the image reconstruction performance is. The definition is as follows:
$$ \mathrm{SSIM}(r, e) = \frac{\left( 2 \mu_r \mu_e + c_1 \right) \left( 2 \sigma_{re} + c_2 \right)}{\left( \mu_r^2 + \mu_e^2 + c_1 \right) \left( \sigma_r^2 + \sigma_e^2 + c_2 \right)}, \qquad (10) $$
where $r$ and $e$ are the reference image and the image to be evaluated, respectively; $\mu_r$ and $\mu_e$ are the means of $r$ and $e$, respectively; $\sigma_r^2$ and $\sigma_e^2$ are the variances of $r$ and $e$, respectively; and $\sigma_{re}$ is the covariance of $r$ and $e$. The constants are $c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$, where $L = 255$ is the dynamic range of pixel values (for 8-bit grayscale images), and the default values of $k_1$ and $k_2$ are 0.01 and 0.03, respectively.
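The three reference metrics can be computed as in the following NumPy sketch (our illustration; note that the common SSIM implementation evaluates Equation (10) in a sliding window and averages the local values, whereas this sketch evaluates it globally for brevity):

```python
import numpy as np

def rmse(r, e):
    # Equation (8): root mean square error between reference r and evaluated image e
    return np.sqrt(np.mean((e.astype(np.float64) - r.astype(np.float64)) ** 2))

def psnr(r, e):
    # Equation (9): peak signal-to-noise ratio for 8-bit images
    return 20.0 * np.log10(255.0 / rmse(r, e))

def ssim_global(r, e, k1=0.01, k2=0.03, L=255.0):
    # Equation (10), evaluated over the whole image
    r, e = r.astype(np.float64), e.astype(np.float64)
    mu_r, mu_e = r.mean(), e.mean()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    cov = ((r - mu_r) * (e - mu_e)).mean()
    return ((2 * mu_r * mu_e + c1) * (2 * cov + c2)) / \
           ((mu_r ** 2 + mu_e ** 2 + c1) * (r.var() + e.var() + c2))
```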
Image roughness is an important metric to measure image sharpness. The smaller the image roughness, the better the image quality, which is defined as:
$$ \mathrm{IR} = \frac{\lVert h * e \rVert_1 + \lVert h^{T} * e \rVert_1}{\lVert e \rVert_1}, \qquad (11) $$
where $e$ is the reconstructed image to be evaluated, $h = [1, -1]$ is the horizontal mask, $h^T$ (the transpose of $h$) is the vertical mask, the asterisk denotes discrete convolution, and $\lVert \cdot \rVert_1$ denotes the $L_1$ norm; $\lVert e \rVert_1$ is the sum of all pixel values of the image, and $\lVert h * e \rVert_1$ and $\lVert h^T * e \rVert_1$ are the sums of differences between pixels in the horizontal and vertical directions of the reconstructed image, respectively.
The signal-to-clutter ratio is an important metric to measure the difficulty of detecting dim small targets: the higher the signal-to-clutter ratio of a dim small target, the easier it is to detect. Its definition is as follows:
$$ \mathrm{SCR} = \frac{\mu_t - \mu_b}{\sigma_b}, \qquad (12) $$
where $\mu_t$ is the average pixel value of the target, $\mu_b$ is the average pixel value of the target neighborhood background, and $\sigma_b$ is the standard deviation of the pixel values of the target neighborhood background. Figure 6 shows the small target and its neighborhood background, in which the height $h$ and width $w$ of the small target are not greater than 5, and we set the width $d$ of the neighborhood background to 5.
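IR and SCR can likewise be computed directly from their definitions; the sketch below is our illustration, with the target and neighborhood-background masks (per Figure 6) supplied by the caller:

```python
import numpy as np
from scipy.signal import convolve2d

def image_roughness(e):
    # Equation (11): IR = (||h*e||_1 + ||h^T*e||_1) / ||e||_1 with h = [1, -1]
    h = np.array([[1.0, -1.0]])
    num = np.abs(convolve2d(e, h, mode="same")).sum() \
        + np.abs(convolve2d(e, h.T, mode="same")).sum()
    return num / np.abs(e).sum()

def scr(image, target_mask, background_mask):
    # Equation (12): SCR = (mu_t - mu_b) / sigma_b; boolean masks delimit the
    # target and its neighborhood background
    mu_t = image[target_mask].mean()
    mu_b = image[background_mask].mean()
    return (mu_t - mu_b) / image[background_mask].std()
```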
In the experiments on real nonuniformity infrared images, we mainly verified the correction performance of each method on real images and compared their effects on texture details. Since there is no corresponding clean infrared image for a real nonuniformity infrared image, we can only use non-reference evaluation metrics. In the field of image processing, non-reference metrics generally have certain defects compared with reference metrics. In addition to the above IR metric, in order to comprehensively evaluate the denoising performance and the detail-preserving ability of each method, we also adopted the inverse coefficient of variation (ICV) [46,47] and the mean relative deviation (MRD) [48]. However, all of these metrics are affected by factors other than algorithm performance. Therefore, we performed edge detection [49] on the images processed by each method to help us compare the impact of each method on image texture details more objectively.
The inverse coefficient of variation (ICV) is an indicator for evaluating the smoothness of an image homogeneous region, which can measure the destriping performance of each method. It is defined as:
$$ \mathrm{ICV} = \frac{\mu_h}{\sigma_h}, \qquad (13) $$
where $\mu_h$ and $\sigma_h$ are the mean and standard deviation of the pixel values in a homogeneous region, respectively. In general, the larger the ICV, the smoother the homogeneous region of the image, and the better the destriping performance.
In contrast to ICV, the mean relative deviation (MRD) is an indicator for evaluating the relative distortion of a sharp region, which is defined as:
$$ \mathrm{MRD} = \frac{1}{W \cdot H} \sum_{i=1}^{W} \sum_{j=1}^{H} \frac{\left| \hat{s}_{ij} - s_{ij} \right|}{s_{ij}}, \qquad (14) $$
where $s_{ij}$ and $\hat{s}_{ij}$ are the pixel values before and after destriping in a sharp region. In general, the smaller the MRD, the less distortion in the sharp region of the image, and the better the ability to retain details.
However, although ICV and MRD are more targeted than the IR metric, these two metrics also have certain limitations because the homogeneous and sharp regions of the image must be delineated manually.
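For completeness, the two non-reference metrics reduce to a few lines once the regions are delineated (our sketch; the region arrays are assumed to be manually selected crops):

```python
import numpy as np

def icv(homogeneous_region):
    # Equation (13): inverse coefficient of variation of a homogeneous region
    region = homogeneous_region.astype(np.float64)
    return region.mean() / region.std()

def mrd(sharp_before, sharp_after):
    # Equation (14): mean relative deviation over a manually delineated sharp region
    s = sharp_before.astype(np.float64)
    s_hat = sharp_after.astype(np.float64)
    return np.mean(np.abs(s_hat - s) / s)
```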
Edge detection highlights structural information by finding the points with obvious brightness changes in the image. In this paper, we use the Sobel operator for edge detection. We convolve the image to be detected with the Sobel operator to obtain the gradient values of each pixel in the horizontal and vertical directions and then use the gradient magnitude of each pixel as the pixel value of the edge detection image, as shown in the formula:
$$ G_x(x, y) = e(x, y) * g_x, \quad G_y(x, y) = e(x, y) * g_y, \quad EI(x, y) = \sqrt{G_x^2(x, y) + G_y^2(x, y)}, \qquad (15) $$
where the Sobel operator is as follows:
$$ g_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \quad g_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}. \qquad (16) $$
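A direct implementation of Equations (15) and (16) is sketched below (our illustration; note that convolve2d performs true convolution, which flips the kernels and thus flips the gradient sign, but the gradient magnitude is unaffected):

```python
import numpy as np
from scipy.signal import convolve2d

def sobel_edges(e):
    # Equations (15) and (16): gradient magnitude via the Sobel operator
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    gy = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=np.float64)
    Gx = convolve2d(e, gx, mode="same", boundary="symm")
    Gy = convolve2d(e, gy, mode="same", boundary="symm")
    return np.sqrt(Gx ** 2 + Gy ** 2)
```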

3.3. Method Comparison on Simulated Data with Different Intensities of Nonuniformity

We selected one real, clean infrared small target image from the SIRST dataset [50]. As shown in Figure 7, the dim small target is located in the middle of the red box we marked.
We added different intensities of nonuniform noise to this image. For low-intensity nonuniformity, the multiplicative noise $g_i$ obeys a uniform distribution on $(1 - 0.05, 1 + 0.05)$ and the additive noise $o_i$ obeys a Gaussian distribution with a mean of 0 and a standard deviation of 5. For medium-intensity nonuniformity, $g_i$ obeys a uniform distribution on $(1 - 0.10, 1 + 0.10)$ and $o_i$ obeys a Gaussian distribution with a mean of 0 and a standard deviation of 15. For high-intensity nonuniformity, $g_i$ obeys a uniform distribution on $(1 - 0.15, 1 + 0.15)$ and $o_i$ obeys a Gaussian distribution with a mean of 0 and a standard deviation of 25.
The correction results for the different intensities of nonuniformity in infrared dim small target images are shown in Figure 8, Figure 9 and Figure 10, respectively. Shown in Figure 8 are the correction results of the different methods on a low-intensity nonuniformity infrared dim small target image.
Shown in Figure 9 are the correction results of different methods on a medium-intensity nonuniformity infrared dim small target image.
Shown in Figure 10 are the correction results of different methods on a high-intensity nonuniformity infrared dim small target image.
According to the evaluation metrics described in Section 3.2, we compare the correction performance of each method for different intensities of nonuniform noise, as shown in Table 1.
In order to more intuitively compare the generalization performance of each method for different intensities of nonuniformity, we created a line graph of each method's PSNR gain, as shown in Figure 11.

3.4. Method Comparison on Simulated Data with Different Backgrounds

We selected 10 real, clean infrared dim small target images from the SIRST dataset [50], measured the background complexity by the pixel value variance $B_{\mathrm{var}}$ (dynamic range of 0–255), and named these images Test-1 to Test-10 in order of increasing complexity. We then added nonuniform noise of equal intensity to these images: the multiplicative noise $g_i$ obeys a uniform distribution on $(1 - 0.12, 1 + 0.12)$, and the additive noise $o_i$ obeys a Gaussian distribution with a mean of 0 and a standard deviation of 12. Figure 12, Figure 13 and Figure 14 show the three most representative experimental results we selected.
  • Figure 12 shows the nonuniformity correction results of Test-1 ($B_{\mathrm{var}} = 19$).
  • Figure 13 shows the nonuniformity correction results of Test-6 ($B_{\mathrm{var}} = 663$).
  • Figure 14 shows the nonuniformity correction results of Test-10 ($B_{\mathrm{var}} = 2880$).
At the same time, we use the objective metrics described in Section 3.2 to quantitatively evaluate the experimental results, as shown in Table 2.
In order to compare the performance of each method on PSNR more intuitively, we created a line graph, as shown in Figure 15.
In order to verify the effectiveness of our method in solving the problem of image over-smoothing, we used the clean infrared image roughness as the abscissa reference and created a line graph of each method's performance on the IR, as shown in Figure 16.
At the same time, in order to verify the friendliness of our method to dim small targets, we used the clean infrared image SCR as the abscissa reference and created a line graph of each method's performance on the SCR, as shown in Figure 17.

3.5. Method Comparison on Real Data

We compared the methods on a real nonuniformity infrared image, which was captured by an uncooled long-wave infrared camera and is particularly rich in texture details [29]. As shown in Figure 18a, we cropped the (1:400, 1:480) area of the image as real data 1 and the (271:480, 481:640) area as real data 2; the red dividing line in the figure marks the cropping. Figure 18b is the edge detection image; the homogeneous region inside the blue box and the sharp region outside the blue box were used to calculate the evaluation metrics ICV and MRD.
Figure 19 shows the correction results of the real data 1 and the edge detection images of the correction results. Figure 20 shows the correction results of the real data 2 and the edge detection images of the correction results. In the experiment with real nonuniformity infrared images, we also compared the correction performance of each method according to the above evaluation metrics, as shown in Table 3.

4. Discussion

The experimental results show that our method generalizes well to different backgrounds, effectively removes nonuniform noise of different intensities, and was validated on real data. In addition, our method avoids the image over-smoothing problem and, by its mechanism, ensures that texture details and dim small targets are not removed in the process of nonuniformity correction.

4.1. Analysis of Simulation Experiment Results for Different Intensities of Nonuniformity

In the simulation experiments for different intensities of nonuniformity, MHE reduces the IR but performs poorly on the PSNR and SSIM metrics. The 1D-GF method corrects low- and medium-intensity nonuniformity images effectively, but a certain degree of residual stripe noise remains, and it struggles to meet practical needs on images with high-intensity nonuniform noise. As an early application of deep learning to image nonuniformity correction, SNRCNN can remove nonuniform noise to a certain extent, but its correction ability cannot meet the needs of practical applications. Compared with SNRCNN, the correction ability of ICSRN is slightly improved, but it can only meet the needs of low-intensity nonuniformity correction. The correction ability of DLS-NUC is comparable to that of the classic traditional algorithm 1D-GF and can basically meet nonuniformity correction requirements. SNRWDNN performs well on images with different intensities of nonuniformity.
Compared with the above methods, our method has the best correction performance for nonuniform noise of all tested intensities. The PSNR of our corrected images improves by at least 15 dB over that of the uncorrected images, and the SSIM in the simulation experiments always reaches above 0.995. In addition, it can be seen from Figure 11 that the greater the intensity of nonuniformity, the higher the PSNR gain of MHE and DLS-NUC and the lower the PSNR gain of SNRCNN, ICSRN, and 1D-GF, while the PSNR gains of SNRWDNN and our method change little. In comparison, our method generalizes best to different intensities of nonuniform noise among these methods.

4.2. Analysis of Simulation Experiment Results for Different Backgrounds

In the experiment of Section 3.4, we added the same intensity of nonuniform noise to 10 infrared dim small target images with different backgrounds. From Table 2 and Figure 15, it can be seen that our method not only has the highest average PSNR but also the highest PSNR for each individual image. At the same time, our method also outperforms the other methods on SSIM. More notably, the standard deviation of the PSNR in our experiments is also the smallest, which shows that our method generalizes excellently to infrared images with different backgrounds. In contrast, the PSNR standard deviations of MHE and DLS-NUC are high, indicating that their correction performance is easily affected by different backgrounds. Furthermore, as can be seen from Figure 16 and Figure 17, our method fits the baseline best in terms of IR and SCR, which demonstrates the effectiveness of our proposed noise parameter estimation mechanism. Compared with other methods for suppressing image over-smoothing, our method fundamentally avoids, through this change of mechanism, the removal of dim small targets or texture details during correction.

4.3. Analysis of Real Experimental Results

In the correction experiments on real nonuniformity infrared images, the MHE, 1D-GF, and SNRCNN methods all leave very obvious stripe noise residuals. The edge detection images of the correction results show that all of the other methods lose texture details to some extent.
In terms of the objective metrics in Table 3, although our method performs worse on IR than some other methods, it is not far behind. In the experiments on real data 1, DLS-NUC achieves the best performance on ICV, SNRWDNN achieves the best performance on MRD, and our method is second best on both metrics. Combined with the correction results for real data 1 in Figure 19, our method can be considered to strike a very good balance between these two metrics; that is, it effectively removes stripe noise while preserving image details well. In the experiments on real data 2, we again achieve second-best performance. However, on the MRD metric, both SNRCNN and ICSRN, which performed poorly in the qualitative results, achieve top scores; we hypothesize that this is because the residual stripe noise reduces the deviation between the correction result and the noisy image.

5. Conclusions

The main contribution of this paper is a two-stage deep learning network based on a noise parameter estimation mechanism, which can perform nonuniformity correction on a single-frame infrared line-scan image. Our network model abandons the pixel-by-pixel estimation mechanism of traditional end-to-end image reconstruction networks and instead estimates column by column, so that when applied to dim small target images it ensures that dim small targets are not removed as noise points. Based on the infrared line-scan imaging mechanism, we produced a noise parameter estimation and image reconstruction dataset and trained our nonuniformity correction model on it. In general, compared with existing methods, our method has many advantages, including excellent performance, intelligence, and friendliness to texture details and dim small targets. Of course, our model still has room for improvement in real-time performance. Therefore, in future work, we plan to simplify our model through transfer learning to improve its processing speed and obtain better practical value.

Author Contributions

Conceptualization, T.W.; methodology, T.W.; software, T.W.; validation, T.W. and Q.Y.; formal analysis, T.W.; investigation, T.W.; resources, F.C. and M.L.; data curation, T.W.; writing—original draft preparation, T.W.; writing—review and editing, T.W., Q.Y. and F.C.; visualization, T.W.; supervision, M.L.; project administration, Z.L. and W.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62001478.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The synthetic data underlying this article will be shared upon reasonable request to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Song, S.; Yang, G.P.; Zhou, X. Line array time delay integral CCD sweep image non-uniformity correction method. Procedia Comput. Sci. 2020, 174, 216–223.
  2. Zhang, W.; Cong, M.; Wang, L. Algorithms for optical weak small targets detection and tracking: Review. In Proceedings of the International Conference on Neural Networks and Signal Processing, Istanbul, Turkey, 14–17 December 2003.
  3. Zhang, T.X.; Shi, C.C.; Li, J.J.; Liu, H.N.; Yuan, Y.J.; Zhou, Y. Overview of research on the adaptive algorithms for nonuniformity correction of Infrared Focal Plane Array. J. Infrared Millim. Waves 2007, 26, 409–413.
  4. Han, K.L.; He, C.F. A Nonuniformity Correction Algorithm for IRFPAs Based on Two Points and It's Realization by DSP. Infrared Technol. 2007, 9, 541–544.
  5. Wang, J.; Hong, W. Non-uniformity correction for infrared cameras with variable integration time based on two-point correction. In Infrared Device and Infrared Technology; AOPC: Philadelphia, PA, USA, 2021.
  6. Scribner, D.A.; Sarkady, K.A.; Kruer, M.R.; Caulfield, J.T.; Hunt, J.D.; Herman, C. Adaptive nonuniformity correction for IR focal-plane arrays using neural networks. SPIE Proc. 1991, 1541, 100–109.
  7. Harris, J.G.; Chiang, Y.M. Nonuniformity correction using the constant-statistics constraint: Analog and digital implementations. SPIE Proc. 1997, 3061, 895–905.
  8. Torres, S.N.; Hayat, M.M. Kalman filtering for adaptive nonuniformity correction in infrared focal-plane arrays. J. Opt. Soc. Am. A 2003, 20, 470.
  9. Zuo, C.; Chen, Q. Scene-based nonuniformity correction algorithm based on interframe registration. J. Opt. Soc. Am. A 2011, 28, 1164.
  10. Li, Y.; Jin, W.; Zhu, J.; Zhang, X.; Li, S. An adaptive deghosting method in neural network-based infrared detectors nonuniformity correction. Sensors 2018, 18, 211.
  11. Li, J.; Qin, H.; Yan, X.; Zeng, Q.; Yang, T. Temporal-spatial nonlinear filtering for infrared focal plane array stripe nonuniformity correction. Symmetry 2019, 11, 673.
  12. Lu, C.H. Stripe non-uniformity correction of infrared images using parameter estimation. Infrared Phys. Technol. 2020, 107, 103313.
  13. Seo, S.G.; Jeon, J.W. Real-time scene-based nonuniformity correction using feature pattern matching. In Proceedings of the 2021 15th International Conference on Ubiquitous Information Management and Communication (IMCOM), Seoul, Korea, 4–6 January 2021.
  14. Tendero, Y.; Landeau, S.; Gilles, J. Non-uniformity correction of infrared images by Midway Equalization. Image Process. On Line 2012, 2, 134–146.
  15. Cao, Y.; Yang, M.Y.; Tisse, C.-L. Effective strip noise removal for low-textured infrared images based on 1-D guided filtering. IEEE Trans. Circ. Syst. Video Technol. 2016, 26, 2176–2188.
  16. Zhang, T.; Li, X.; Li, J.; Xu, Z. CMOS fixed pattern noise elimination based on sparse unidirectional hybrid total variation. Sensors 2020, 20, 5567.
  17. Song, Q.; Huang, Z.; Ni, H.; Bai, K.; Li, Z. Remote Sensing Images destriping with an enhanced low-rank prior and total variation regulation. Signal Image Video Process. 2022, 16, 1895–1903.
  18. Wang, E.; Jiang, P.; Li, X.; Cao, H. Infrared Stripe Correction algorithm based on wavelet decomposition and total variation-guided filtering. J. Eur. Opt. Soc.-Rapid Publ. 2019, 16, 2971.
  19. Shao, Y.; Sun, Y.; Zhao, M.; Chang, Y.; Zheng, Z.; Tian, C.; Zhang, Y. Infrared image stripe noise removing using least squares and gradient domain guided filtering. Infrared Phys. Technol. 2021, 119, 103968.
  20. Li, M.; Nong, S.; Nie, T.; Han, C.; Huang, L.; Qu, L. A novel stripe noise removal model for infrared images. Sensors 2022, 22, 2971.
  21. Li, M.; Nong, S.; Nie, T.; Han, C.; Huang, L. An infrared stripe noise removal method based on multi-scale wavelet transform and multinomial sparse representation. Comput. Intell. Neurosci. 2022, 2022, 1–18.
  22. Huang, S.; Lu, T.; Lu, Z.; Rong, J.; Zhao, X.; Li, J. CMOS image sensor fixed pattern noise calibration scheme based on digital filtering method. Microelectron. J. 2022, 124, 105431.
  23. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
  24. Xiao, C.; Yin, Q.; Ying, X.; Li, R.; Wu, S.; Li, M.; Liu, L.; An, W.; Chen, Z. DSFNet: Dynamic and static fusion network for moving object detection in satellite videos. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  25. Kohli, N.; Yadav, D.; Vatsa, M.; Singh, R.; Noore, A. Deep face-representation learning for kinship verification. Deep Learn. Biometr. 2018, 130, 127–152.
  26. Ullah, F.U.; Ullah, A.; Khan, N.; Lee, M.Y.; Rho, S.; Baik, S.W. Deep learning-assisted short-term power load forecasting using deep convolutional LSTM and stacked GRU. Complexity 2022, 2022, 1–15.
  27. Kuang, X.; Sui, X.; Chen, Q.; Gu, G. Single infrared image stripe noise removal using deep convolutional networks. IEEE Photon. J. 2017, 9, 1–13.
  28. Xiao, P.; Guo, Y.; Zhuang, P. Removing stripe noise from infrared cloud images via deep convolutional networks. IEEE Photon. J. 2018, 10, 1–14.
  29. He, Z.; Cao, Y.; Dong, Y.; Yang, J.; Cao, Y.; Tisse, C.-L. Single-image-based nonuniformity correction of uncooled long-wave infrared detectors: A deep-learning approach. Appl. Opt. 2018, 57, D162.
  30. Guan, J.; Lai, R.; Xiong, A. Wavelet Deep Neural Network for stripe noise removal. IEEE Access 2019, 7, 44544–44554.
  31. Chang, Y.; Yan, L.; Liu, L.; Fang, H.; Zhong, S. Infrared Aerothermal nonuniform correction via deep multiscale residual network. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1120–1124.
  32. Guan, J.; Lai, R.; Xiong, A.; Liu, Z.; Gu, L. Fixed pattern noise reduction for infrared images based on cascade residual attention CNN. Neurocomputing 2020, 377, 301–313.
  33. Xu, K.; Zhao, Y.; Li, F.; Xiang, W. Single infrared image stripe removal via deep multi-scale dense connection convolutional neural network. Infrared Phys. Technol. 2022, 121, 104008.
  34. Li, T.; Zhao, Y.; Li, Y.; Zhou, G. Non-uniformity correction of infrared images based on improved CNN with long-short connections. IEEE Photon. J. 2021, 13, 1–13.
  35. Zhang, S.; Sui, X.; Yao, Z.; Gu, G.; Chen, Q. Research on nonuniformity correction based on Deep Learning. In Infrared Device and Infrared Technology; AOPC: Philadelphia, PA, USA, 2021.
  36. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of Computer Vision–ECCV 2014, Zurich, Switzerland, 6–12 September 2014; pp. 740–755.
  37. Li, S.; Huo, L. Remote sensing image change detection based on fully convolutional network with Pyramid Attention. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Kuala Lumpur, Malaysia, 17–22 July 2021.
  38. Ma, X.; Yang, Z. A new multi-scale backbone network for object detection based on asymmetric convolutions. Sci. Prog. 2021, 104, 003685042110113.
  39. Jebadurai, J.; Jebadurai, I.J. Super-resolution of digital images using CNN with Leaky ReLU. Int. J. Recent Technol. Eng. 2019, 8, 210–212.
  40. Pezoa, J.E.; Hayat, M.M.; Torres, S.N.; Saifur Rahman, M. Multimodel Kalman filtering for adaptive nonuniformity correction in infrared sensors. J. Opt. Soc. Am. A 2006, 23, 1282.
  41. Liu, Y.; Zhu, H.; Zhao, Y. Nonuniformity correction algorithm based on infrared focal plane array readout architecture. Opt. Precis. Eng. 2008, 1, 128–133.
  42. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  43. Hayat, M.M.; Torres, S.N. Statistical algorithm for nonuniformity correction in focal-plane arrays. Appl. Opt. 1999, 38, 772.
  44. Gao, C.Q.; Zhang, T.Q.; Li, Q. Small infrared target detection using sparse ring representation. IEEE Aerosp. Electr. Syst. Mag. 2012, 27, 21–30.
  45. Li, M.; Zhang, T.X.; Zuo, Z.R.; Sun, X.C.; Yang, W.D. Novel dim target detection and estimation algorithm based on double threshold partial differential equation. Opt. Eng. 2006, 45, 090502.
  46. Rakwatin, P.; Takeuchi, W.; Yasuoka, Y. Stripe noise reduction in MODIS data by combining histogram matching with facet filter. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1844–1856.
  47. Kang, Y.; Pan, L.; Sun, M.; Liu, X.; Chen, Q. Destriping High-resolution satellite imagery by improved moment matching. Int. J. Remote Sens. 2017, 38, 6346–6365.
  48. Jia, J.; Wang, Y.; Cheng, X.; Yuan, L.; Zhao, D.; Ye, Q.; Zhuang, X.; Shu, R.; Wang, J. Destriping algorithms based on statistics and spatial filtering for visible-to-thermal infrared pushbroom hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4077–4091.
  49. Waghule, D.R.; Ochawar, R.S. Overview on edge detection methods. In Proceedings of the 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies, Nagpur, India, 9–11 January 2014.
  50. Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Asymmetric contextual modulation for infrared small target detection. In Proceedings of the 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2021.
Figure 1. The specific operation of producing the datasets.
Figure 2. A multi-scale feature extraction unit.
Figure 3. The gain correction sub-network structure.
Figure 4. The offset correction sub-network structure.
Figure 5. The two-stage deep learning network structure of gain and offset correction.
Figure 6. Small target and its neighborhood background.
Figure 7. A real, clean infrared small target image.
Figure 8. Comparison of methods using a low-intensity nonuniformity infrared dim small target image: (a) infrared dim small target image with low-intensity nonuniform noise added; (b) MHE; (c) 1D-GF; (d) SNRCNN; (e) ICSRN; (f) DLS-NUC; (g) SNRWDNN; (h) ours.
Figure 9. Comparison of methods using a medium-intensity nonuniformity infrared dim small target image: (a) infrared dim small target image with medium-intensity nonuniform noise added; (b) MHE; (c) 1D-GF; (d) SNRCNN; (e) ICSRN; (f) DLS-NUC; (g) SNRWDNN; (h) ours.
Figure 10. Comparison of methods using a high-intensity nonuniformity infrared dim small target image: (a) infrared dim small target image with high-intensity nonuniform noise added; (b) MHE; (c) 1D-GF; (d) SNRCNN; (e) ICSRN; (f) DLS-NUC; (g) SNRWDNN; (h) ours.
Figure 11. The PSNR gain of each method on images with different intensities of nonuniformity. The greater the PSNR along the abscissa direction, the lower the intensity of image nonuniformity.
Figure 12. The nonuniformity correction results of Test-1: (a) the real, clean infrared dim small target image ($B_{\mathrm{var}} = 19$); the dim small target is located in the middle of the red box we manually marked; (b) infrared dim small target image with nonuniform noise added; (c) MHE; (d) 1D-GF; (e) SNRCNN; (f) ICSRN; (g) DLS-NUC; (h) SNRWDNN; (i) ours.
Figure 13. The nonuniformity correction results of Test-6: (a) the real, clean infrared dim small target image ($B_{\mathrm{var}} = 663$); the dim small target is located in the middle of the red box we manually marked; (b) infrared dim small target image with nonuniform noise added; (c) MHE; (d) 1D-GF; (e) SNRCNN; (f) ICSRN; (g) DLS-NUC; (h) SNRWDNN; (i) ours.
Figure 14. The nonuniformity correction results of Test-10: (a) the real, clean infrared dim small target image ($B_{\mathrm{var}} = 2880$); the dim small target is located in the middle of the red box we manually marked; (b) infrared dim small target image with nonuniform noise added; (c) MHE; (d) 1D-GF; (e) SNRCNN; (f) ICSRN; (g) DLS-NUC; (h) SNRWDNN; (i) ours.
Figure 15. Line graph of each method's performance on PSNR. The abscissa is the variance of the image background pixel value, and the ordinate is the difference between the corrected image PSNR and the nonuniformity image PSNR.
Figure 16. The line graph of each method's performance on the IR. The abscissa is the clean image roughness, and the ordinate is the corrected image roughness. The better the fit with the baseline, the lower the possibility of image over-smoothing.
Figure 17. The line graph of each method's performance on the SCR. The abscissa is the clean image SCR, and the ordinate is the corrected image SCR. The better the fit with the baseline, the lower the possibility of removing dim small targets as noise points.
Figure 18. Real nonuniformity infrared image and its edge detection image: (a) real nonuniformity infrared image, with the upper left region as real data 1 and the bottom right region as real data 2; (b) the edge detection image.
Figure 19. The correction results of real data 1 and the edge detection images of the correction results. The red circles mark our manual annotation of texture details in the image: (a) real data 1; (b) MHE; (c) 1D-GF; (d) SNRCNN; (e) ICSRN; (f) DLS-NUC; (g) SNRWDNN; (h) ours.
Figure 20. The correction results of real data 2 and the edge detection images of the correction results. The red circles mark our manual annotation of texture details in the image: (a) real data 2; (b) MHE; (c) 1D-GF; (d) SNRCNN; (e) ICSRN; (f) DLS-NUC; (g) SNRWDNN; (h) ours.
Table 1. Evaluation metric results of the methods on simulated data with different intensities of nonuniform noise.

| Experiment | Methods | RMSE | PSNR (dB) | SSIM | IR | SCR |
|---|---|---|---|---|---|---|
| Low-Intensity Nonuniformity | Noise Image | 5.77 | 32.9012 | 0.7309 | 0.0706 | 2.5708 |
| | MHE [14] | 53.49 | 13.5652 | 0.7712 | 0.0381 | 3.1419 |
| | 1D-GF [15] | 1.51 | 44.5529 | 0.9965 | 0.0157 | 3.4791 |
| | SNRCNN [27] | 2.21 | 41.2492 | 0.9741 | 0.0205 | 3.0936 |
| | ICSRN [28] | 1.62 | 43.9166 | 0.9937 | 0.0143 | 3.6239 |
| | DLS-NUC [29] | 3.77 | 36.6104 | 0.9643 | 0.0145 | 3.5481 |
| | SNRWDNN [30] | 1.87 | 42.7037 | 0.9906 | 0.0157 | 3.6770 |
| | Ours | 1.03 | 47.8425 | 0.9993 | 0.0150 | 3.7283 |
| Medium-Intensity Nonuniformity | Noise Image | 16.84 | 23.6059 | 0.2904 | 0.1885 | 1.3230 |
| | MHE [14] | 51.04 | 13.9724 | 0.7529 | 0.0424 | 3.3484 |
| | 1D-GF [15] | 5.40 | 33.4841 | 0.9623 | 0.0217 | 3.4443 |
| | SNRCNN [27] | 12.28 | 26.3438 | 0.4895 | 0.1183 | 1.9014 |
| | ICSRN [28] | 8.77 | 29.2671 | 0.7530 | 0.0451 | 3.2835 |
| | DLS-NUC [29] | 5.21 | 33.8014 | 0.9490 | 0.0183 | 3.4608 |
| | SNRWDNN [30] | 3.21 | 38.0057 | 0.9864 | 0.0168 | 3.8070 |
| | Ours | 2.45 | 40.3645 | 0.9978 | 0.0156 | 3.7163 |
| High-Intensity Nonuniformity | Noise Image | 25.62 | 19.96 | 0.1559 | 0.2914 | 0.7753 |
| | MHE [14] | 56.02 | 13.16 | 0.7218 | 0.0444 | 3.4024 |
| | 1D-GF [15] | 6.06 | 32.48 | 0.9137 | 0.0334 | 3.4181 |
| | SNRCNN [27] | 21.78 | 21.37 | 0.2382 | 0.2273 | 0.8014 |
| | ICSRN [28] | 16.01 | 24.05 | 0.4009 | 0.1155 | 1.1388 |
| | DLS-NUC [29] | 7.93 | 30.14 | 0.8922 | 0.0272 | 2.7439 |
| | SNRWDNN [30] | 4.72 | 34.66 | 0.9796 | 0.0191 | 3.8752 |
| | Ours | 3.83 | 36.46 | 0.9953 | 0.0170 | 3.5277 |
Table 2. Evaluation metric results of the methods on simulated data with different backgrounds.

| Experiment | Methods | RMSE | PSNR (dB) | SSIM | IR | SCR |
|---|---|---|---|---|---|---|
| Test-1 ($B_{\mathrm{var}} = 19$) | Noise Image | 14.73 | 24.7683 | 0.2990 | 0.1691 | 1.1610 |
| | MHE [14] | 71.15 | 11.0873 | 0.5375 | 0.0530 | 2.3561 |
| | 1D-GF [15] | 3.78 | 36.5751 | 0.9757 | 0.0137 | 4.2120 |
| | SNRCNN [27] | 10.09 | 28.0508 | 0.5531 | 0.0946 | 1.6311 |
| | ICSRN [28] | 7.30 | 30.8667 | 0.7791 | 0.0374 | 2.9876 |
| | DLS-NUC [29] | 3.26 | 37.8533 | 0.9756 | 0.0118 | 5.4496 |
| | SNRWDNN [30] | 2.32 | 40.8198 | 0.9909 | 0.0104 | 6.1065 |
| | Ours | 1.47 | 44.8026 | 0.9984 | 0.0099 | 6.7038 |
| Test-6 ($B_{\mathrm{var}} = 663$) | Noise Image | 12.63 | 26.1042 | 0.3894 | 0.2218 | 1.6516 |
| | MHE [14] | 75.02 | 10.6272 | 0.6675 | 0.0518 | 2.2701 |
| | 1D-GF [15] | 3.39 | 37.5167 | 0.9799 | 0.0318 | 3.2332 |
| | SNRCNN [27] | 8.04 | 30.0243 | 0.6737 | 0.1139 | 2.0920 |
| | ICSRN [28] | 4.91 | 34.3105 | 0.9033 | 0.0429 | 3.0031 |
| | DLS-NUC [29] | 4.01 | 36.0683 | 0.9741 | 0.0285 | 3.5229 |
| | SNRWDNN [30] | 2.90 | 38.8949 | 0.9817 | 0.0286 | 3.5026 |
| | Ours | 1.98 | 42.1847 | 0.9973 | 0.0271 | 3.7157 |
| Test-10 ($B_{\mathrm{var}} = 2880$) | Noise Image | 15.39 | 24.3859 | 0.4616 | 0.1848 | 1.0346 |
| | MHE [14] | 12.54 | 26.1627 | 0.9194 | 0.0677 | 0.6328 |
| | 1D-GF [15] | 9.23 | 28.8261 | 0.9491 | 0.0600 | 0.8926 |
| | SNRCNN [27] | 11.44 | 26.9595 | 0.6576 | 0.1227 | 1.1425 |
| | ICSRN [28] | 8.58 | 29.4648 | 0.8342 | 0.0731 | 1.2652 |
| | DLS-NUC [29] | 16.00 | 24.0461 | 0.7921 | 0.0521 | 1.2424 |
| | SNRWDNN [30] | 7.52 | 30.6026 | 0.9400 | 0.0605 | 0.8147 |
| | Ours | 2.12 | 41.6190 | 0.9958 | 0.0587 | 0.8182 |
| Test-1–10 Average | Noise Image | 15.17 | 24.56 | 0.3562 | 0.1791 | 1.0014 |
| | MHE [14] | 45.90 | 16.39 | 0.7360 | 0.0622 | 2.0396 |
| | 1D-GF [15] | 4.89 | 34.82 | 0.9667 | 0.0311 | 2.3881 |
| | SNRCNN [27] | 10.71 | 27.64 | 0.5824 | 0.1062 | 1.3462 |
| | ICSRN [28] | 7.75 | 30.59 | 0.7827 | 0.0515 | 2.0673 |
| | DLS-NUC [29] | 8.75 | 31.10 | 0.8967 | 0.0280 | 2.5928 |
| | SNRWDNN [30] | 3.89 | 36.87 | 0.9763 | 0.0293 | 2.6957 |
| | Ours | 1.87 | 42.90 | 0.9970 | 0.0275 | 2.8335 |
| Test-1–10 Standard Deviation | Noise Image | 1.6458 | 0.9419 | 0.0578 | 0.0728 | 0.4267 |
| | MHE [14] | 21.083 | 5.8496 | 0.1008 | 0.0285 | 0.9088 |
| | 1D-GF [15] | 1.8051 | 2.7607 | 0.0178 | 0.0237 | 1.2596 |
| | SNRCNN [27] | 1.6787 | 1.3723 | 0.0644 | 0.0423 | 0.5564 |
| | ICSRN [28] | 1.7758 | 2.0871 | 0.0887 | 0.0229 | 1.0465 |
| | DLS-NUC [29] | 6.3683 | 5.3226 | 0.0930 | 0.0226 | 1.4446 |
| | SNRWDNN [30] | 1.4669 | 2.9825 | 0.0141 | 0.0250 | 1.5914 |
| | Ours | 0.3615 | 1.9141 | 0.0012 | 0.0229 | 1.6913 |
Table 3. Evaluation metric results of the methods on real data.

| Methods | Real Data 1 IR | Real Data 1 ICV | Real Data 1 MRD | Real Data 2 IR | Real Data 2 ICV | Real Data 2 MRD |
|---|---|---|---|---|---|---|
| Noise Image | 0.1309 | 2.0711 | — | 0.2907 | 2.4919 | — |
| MHE [14] | 0.0859 | 1.9908 | 0.0647 | 0.1772 | 2.4026 | 1.9515 |
| 1D-GF [15] | 0.0786 | 2.0977 | 0.0647 | 0.1557 | 2.6051 | 0.2139 |
| SNRCNN [27] | 0.0803 | 2.0942 | 0.0501 | 0.1651 | 2.5885 | 0.1627 |
| ICSRN [28] | 0.0537 | 2.1001 | 0.0676 | 0.1097 | 2.5880 | 0.2167 |
| DLS-NUC [29] | 0.0591 | 2.1453 | 0.0930 | 0.1142 | 2.6736 | 0.3349 |
| SNRWDNN [30] | 0.0790 | 2.0897 | 0.0648 | 0.1573 | 2.6013 | 0.2228 |
| Ours | 0.0792 | 2.1017 | 0.0649 | 0.1574 | 2.6247 | 0.2303 |