Article

Structure Tensor-Based Infrared Small Target Detection Method for a Double Linear Array Detector

Jinyan Gao, Luyuan Wang, Jiyang Yu and Zhongshi Pan
1 Institute of Spacecraft System Engineering, China Academy of Space Technology, Beijing 100094, China
2 Institute of Remote Sensing Satellite, China Academy of Space Technology, Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(19), 4785; https://doi.org/10.3390/rs14194785
Submission received: 4 August 2022 / Revised: 20 September 2022 / Accepted: 21 September 2022 / Published: 25 September 2022

Abstract

The paper focuses on the mathematical modeling of a new double linear array detector. The special feature of the detector is that image pairs can be generated at short intervals within one scan. After registration and removal of dynamic cloud edges in each image, an image differentiation-based change detection method in the temporal domain is proposed and combined with a structure tensor edge suppression method in the spatial domain. Finally, experiments are conducted, and our results are compared with theoretical analyses. It is found that a high signal-to-clutter ratio (SCR) of the camera input is required to obtain an acceptable detection rate and false alarm rate in real scenes. Experimental results also show that the proposed cloud edge removal solution can be used to successfully detect targets with a very low false alarm rate and an acceptable detection rate.

1. Introduction

Dim small target detection is a major problem in numerous fields, such as infrared search and track (IRST) systems and external intrusion warnings [1,2,3]. Since the imaging distance is long in these applications, the target usually occupies only one or a few pixels [4,5,6], and there is insufficient texture and shape information for target detection [7,8,9]. Furthermore, the intensity value of the infrared target is very low due to reflection, refraction, the sensor’s aperture diffraction effects and geometric aberrations [10,11,12,13]. Therefore, it is difficult to separate infrared small targets from complex backgrounds.

1.1. Related Works

Existing infrared target detection approaches can be divided into spatial, temporal, and spatio-temporal detection methods. Most spatial detection methods use spatial filtering techniques, and they are usually based on the assumption that the target has a larger intensity value than the background. However, this assumption does not always hold in real scenes [14,15,16,17]. The temporal detection methods usually use the temporal profiles of each pixel in a sequence of infrared images to extract the small target of interest. They have a good detection performance when a small target appears in slowly evolving backgrounds [18,19,20,21]. However, these methods often consume more time than single-frame detection methods. The spatio-temporal detection methods are complementary to purely spatial or purely temporal detection methods [22,23,24]. They use features in both spatial (e.g., the gray difference feature) and temporal (e.g., the motion difference feature) domains to completely separate targets from clutter.
Several optical systems have been proposed to detect small infrared targets over the past few decades [25,26,27,28]. They can be divided into two classes: scanning camera-based optical systems and staring camera-based optical systems. Scanning cameras have a relatively wide field of view and are suitable for early warning over large areas [29]. However, imaging in this way results in a long time delay between adjacent frames. Consequently, it is difficult to perform data association, and the time needed to discover a target is long. On the other hand, the staring camera is usually used for target tracking, as its imaging coverage is usually small [30], which makes it unsuitable for searching in early warning.

1.2. Contributions

In order to overcome these limitations of traditional optical systems, we propose to use a double linear array detector to detect targets with cross-pixel motion. For a double linear array detector, two images (an image pair) are generated when the detector scans from top to bottom only once. Figure 1a shows an IR image pair acquired by a double linear array detector under slowly changing cloud clutter. A double linear array detector has the following three advantages. First, it retains the wide field of view of traditional scanning systems. Second, its exposure time per pixel is longer than that of traditional scanning systems. Third, the interval between adjacent frames is shorter than that of traditional scanning systems, which makes it easy to perform data association and subsequently estimate the velocities and relative positions of cross-pixel moving targets.
Our other task is to automatically detect targets with cross-pixel motion in image pairs acquired by a double linear array detector. Considering the special spatial arrangement and imaging modes of the detector (as analyzed in Section 2), the two images of a pair are observed under almost the same solar angle and atmospheric conditions, and the slow change in the cloud background is almost negligible over such a short interval. Therefore, an image differentiation-based change detection method is suitable for detecting targets after image registration. Figure 1b shows the detection results after image differentiation. In the filtered result, a candidate target from an image pair produces both a positive and a negative gray-scale response, namely, the positive and negative target pair referred to in this paper. The mathematical model of image differentiation is:
$$Dx_{ij} = x_{ij}(t_2) - x_{ij}(t_1)$$
where $x(t_1)$ represents the image acquired by the first linear array, $x(t_2)$ represents the image acquired by the second linear array, $i$ and $j$ indicate the pixel location, and $Dx_{ij}$ represents the residual after image differentiation. After differentiation, a moving target leaves one negative response (at its position in $x(t_1)$) and one positive response (at its position in $x(t_2)$) in the residual image. We call these positive and negative target pairs in this paper.
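As a minimal illustration of Equation (1), the sketch below (a synthetic NumPy example; the array names, image size, and target amplitude are ours, not taken from the paper) shows how differencing a registered image pair leaves the background near zero and produces one negative and one positive response at the target's two positions:

```python
import numpy as np

# Synthetic registered image pair: identical background, plus a point target
# that moves by three pixels between the two acquisitions.
rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=(64, 64))

x_t1 = background.copy()
x_t2 = background.copy()
x_t1[30, 20] += 40.0   # target position seen by the first linear array
x_t2[30, 23] += 40.0   # target position seen by the second linear array

# Equation (1): residual after image differentiation.
D = x_t2 - x_t1

print(D[30, 20], D[30, 23])   # approx. -40 (negative response) and +40 (positive response)
```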
Since false alarms after image differentiation are mainly caused by cloud edges [31,32], edge suppression has been found to be a useful spatial method complementary to our temporal change detection method [33,34,35]. The structure tensor has been widely used for cloud edge suppression in recent years. Dai et al. [23,24,36,37] allocated the structure tensor as an adaptive weighting parameter to suppress strong cloud edges. Liu et al. [38] introduced the gradient direction diversity (GDD) method to suppress sharp cloud edges. The GDD measure is also inspired by the structure tensor. Li et al. [39] used the local steering kernel to encode the infrared image patch, as it can represent different intrinsic structures in different image regions (e.g., the cloud edge region, the flat region, the textural clutter region and the small target region). Thus, the structure tensor is used for cloud edge removal in our paper.
The overview of our proposed method is as follows.
  • The structure tensor is used as an adaptive weighting parameter to suppress strong cloud edges for infrared small target detection.
  • Considering that using information of image sequences requires more prior information and a large amount of data processing, the temporal image differentiation filter is used to extract target pairs using movement information of the target.
  • Adaptive thresholding-based constant false alarm (H-CFAR) is performed to obtain candidate targets, and data association is performed to extract positive and negative target pairs.
The rest of this paper is organized as follows. Section 2 analyzes the optical path and mathematical model of the double linear array. Section 3 presents the target detection model in detail. Section 4 tests the performance of our proposed method. The paper is concluded in Section 5.

2. The Double Linear Array Detector

For a double linear array detector, the region of interest within the instantaneous field of view (IFOV) is measured twice within a short period of time as the detector scans once from top to bottom. The optical path of a double linear array detector is shown in Figure 2. The device consists of a scanning mechanism, an optical system and a focal plane. The scanning mechanism mainly includes a pendulum mirror and a driving shaft, while the focal plane is constructed with two linear arrays arranged in parallel, as shown in the right part of Figure 2. Incident light containing the radiation energy information of targets and the background is first projected onto the pendulum mirror; the driving shaft is then used to rotate the pendulum mirror. After several reflections and refractions in the optical system, the light finally converges on the focal plane to generate an image pair of the scene.
For a scanning system, the ground sample distance (GSD) determines the maximum spatial resolution of a camera and the minimum detectable velocity of a target. In practice, the GSD mainly depends on the instantaneous field of view (IFOV). The IFOV is the angular cone of visibility of the camera and determines the area on the Earth's surface that can be seen from a given altitude at a particular moment [40]. The geometry of the detecting system, including the GSD, IFOV, camera height (H), pixel size (d), and focal length of the optical system (f), is shown in Figure 3, and the relationship between them can be expressed as:
$$GSD = 2H \tan\left(\frac{IFOV}{2}\right) = \frac{Hd}{f}$$
Considering that our detection method is based on two frames and has to use a distance constraint to associate the positive and negative target pairs, the following data association constraints are derived.
The target velocity in the image plane, $v_{pixel}$, is used to predict the target velocity $v$ in real scenes. Since the double linear array can only detect cross-pixel moving targets, the detectable velocity in the focal plane is:
$$\frac{d}{\Delta t} < v_{pixel} < \frac{l}{\Delta t}$$
where d is the pixel size, l is the width of the linear array, and $\Delta t$ is the time interval between the two linear arrays.
Then, the range of the detectable target velocity in real scenes is:
$$\frac{GSD}{\Delta t} < v < \frac{l}{\Delta t} \cdot \frac{GSD}{d}$$
The left and right sides of Equation (4) are the minimum and maximum detectable velocities of the target in real scenes, which are denoted by $v_{min}$ and $v_{max}$, respectively.
From Equations (3) and (4), we can obtain the distance constraints as follows:
$$\frac{v_{min} \cdot \Delta t}{GSD} < \Delta D < \frac{v_{max} \cdot \Delta t}{GSD}$$
where $\Delta D$ is the target moving distance, in pixels, between the two linear-array images.
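The sketch below turns Equations (2)–(5) into a small calculator. The numbers are illustrative only: the 1 km GSD and 2 s frame interval follow the simulation settings of Section 4.1, while the altitude, focal length, pixel size and array width are hypothetical values of our own choosing.

```python
def gsd(height_m: float, pixel_size_m: float, focal_length_m: float) -> float:
    """Ground sample distance, Equation (2): GSD = H * d / f."""
    return height_m * pixel_size_m / focal_length_m

def velocity_bounds(gsd_m: float, pixel_size_m: float, array_width_m: float, dt_s: float):
    """Detectable target velocity range in the scene, Equation (4)."""
    v_min = gsd_m / dt_s
    v_max = (array_width_m / dt_s) * (gsd_m / pixel_size_m)
    return v_min, v_max

def distance_constraint(v_min: float, v_max: float, dt_s: float, gsd_m: float):
    """Allowed displacement (in pixels) of a positive/negative target pair, Equation (5)."""
    return v_min * dt_s / gsd_m, v_max * dt_s / gsd_m

# Hypothetical detector parameters: GEO-like altitude, 30 um pixels and a 1.08 m focal
# length reproduce the 1 km GSD of Section 4.1; the array width l is also illustrative.
H, d, f, l, dt = 36_000e3, 30e-6, 1.08, 210e-6, 2.0
GSD = gsd(H, d, f)                                   # 1000.0 m
v_min, v_max = velocity_bounds(GSD, d, l, dt)        # 500.0 m/s, 3500.0 m/s
print(distance_constraint(v_min, v_max, dt, GSD))    # (1.0, 7.0) pixels
```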

3. Target Detection Model

The proposed small target detection method for a double linear array detector is shown in Figure 4. We first use the structure tensor method to suppress cloud edges. Then, the temporal image differentiation filter is used to extract target pairs using motion information of the target. After background suppression, adaptive thresholding-based constant false alarm (H-CFAR) is performed to obtain candidate targets. Finally, data association is performed to extract positive and negative target pairs using the constraints given in Equation (5).
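The following sketch outlines the last two stages of this pipeline. The mean-plus-k-sigma threshold rule and all function names are our own simplifications, since the exact H-CFAR formulation is not spelled out here; the distance bounds come from Equation (5).

```python
import numpy as np

def cfar_candidates(residual: np.ndarray, k: float = 5.0):
    """Adaptive two-sided thresholding of the residual image (a simplified
    stand-in for H-CFAR): returns positive and negative candidate coordinates."""
    mu, sigma = residual.mean(), residual.std()
    pos = np.argwhere(residual > mu + k * sigma)
    neg = np.argwhere(residual < mu - k * sigma)
    return pos, neg

def associate_pairs(pos, neg, d_min: float = 2.0, d_max: float = 7.0):
    """Keep (positive, negative) candidate pairs whose separation satisfies
    the distance constraint of Equation (5)."""
    pairs = []
    for p in pos:
        for n in neg:
            if d_min <= np.hypot(*(p - n)) <= d_max:
                pairs.append((tuple(p), tuple(n)))
    return pairs
```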
The structure tensor was proposed on the basis of the edge shock filter and variational functionals [41,42,43]. It turns out to be very effective for enhancing corner structures and presents different characteristics in homogeneous regions, edges, and textured regions of an image [44,45]. Therefore, it is used in our infrared small target detection method for cloud edge suppression in a single image.
The structure tensor is essentially a steering matrix [46,47]. It describes the local structural information about the image, and it can be represented as:
$$C_i = \sum_{x_i \in \Omega_i} \begin{bmatrix} I^2_{x_{i1}} & I_{x_{i1}} I_{x_{i2}} \\ I_{x_{i1}} I_{x_{i2}} & I^2_{x_{i2}} \end{bmatrix}$$
where $I$ represents the image and $x_i = (x_{i1}, x_{i2})$ represents the two-dimensional coordinate vector of the central pixel in a rectangular window $\Omega_i$.
The steering matrix captures the principal directions of local texture from the gradient distribution in a small neighborhood (mostly 5 × 5 [48]). Therefore, the structure tensor $C_i$ can first be calculated by $G_i^T G_i$ with:
$$G_i = \begin{bmatrix} Z_{x_1}(x_1) & Z_{x_2}(x_1) \\ \vdots & \vdots \\ Z_{x_1}(x_P) & Z_{x_2}(x_P) \end{bmatrix}$$
where $Z_{x_1}(\cdot)$ and $Z_{x_2}(\cdot)$ denote the first derivatives along the horizontal and vertical axes, respectively, and $P$ is the number of pixels in the local window $\Omega_i$. However, since it is difficult to calculate the gradient distribution, the covariance matrix $C_i$ can be estimated by singular value decomposition [22,49] as:
$$C_i = \gamma_i U_{\theta_i} \Lambda_i U_{\theta_i}^T$$
where $\gamma_i$ is the scaling parameter; it is large in homogeneous regions but small in textured regions. $\theta_i$ is the rotation parameter defining the dominant orientation angle, and $U_{\theta_i}$ is the corresponding rotation matrix. $\Lambda_i$ is the elongation matrix.
$$\gamma_i = \left( \frac{s_1 s_2 + \lambda''}{M} \right)^{\frac{1}{2}}$$
$$U_{\theta_i} = \begin{bmatrix} \cos\theta_i & \sin\theta_i \\ -\sin\theta_i & \cos\theta_i \end{bmatrix}$$
$$\Lambda_i = \begin{bmatrix} \sigma_i & 0 \\ 0 & \sigma_i^{-1} \end{bmatrix}$$
$$\sigma_i = \frac{s_1 + \lambda'}{s_2 + \lambda'}$$
where $\sigma_i$ is the elongation parameter, $s_1$ and $s_2$ are the singular values obtained from the decomposition, $M$ is the number of samples in the local window, $\lambda' = 1$, and $\lambda'' = 10^{-1}$. The structure tensor can be calculated using Equations (9)–(12).
The eigenvalues obtained from the singular value decomposition are denoted as $\lambda_1$ and $\lambda_2$. They can be used as two features to describe the local structural information [50]. The larger $\lambda_1$ is relative to $\lambda_2$, the more likely the measured region is a cloud edge region. Therefore, the cloud edge suppression measure can be defined as follows:
$$K = \exp\left(-h \cdot (\lambda_1 - \lambda_2)\right)$$
Figure 5 shows the cloud edge suppression results on images with three different shapes of clouds. It demonstrates that the proposed structure tensor-based measure achieves good performance in cloud edge suppression.
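A compact sketch of this edge-suppression step is given below. It averages the gradient products over a local window to obtain the structure tensor, takes the closed-form eigenvalues of the 2 × 2 matrix, and applies Equation (13). The Sobel gradients, window size, and the value of h are our own choices (h is scale-dependent and must be tuned to the image's intensity range), not values prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def edge_suppression_weight(image: np.ndarray, window: int = 5, h: float = 1e-3):
    """Per-pixel cloud-edge suppression weight from the structure tensor,
    Equation (13): K = exp(-h * (lambda1 - lambda2))."""
    img = image.astype(float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    # Structure tensor entries, averaged over the local window Omega_i.
    jxx = uniform_filter(gx * gx, size=window)
    jyy = uniform_filter(gy * gy, size=window)
    jxy = uniform_filter(gx * gy, size=window)
    # Closed-form eigenvalues of the symmetric 2x2 tensor (lambda1 >= lambda2).
    trace = jxx + jyy
    delta = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    lam1, lam2 = 0.5 * (trace + delta), 0.5 * (trace - delta)
    return np.exp(-h * (lam1 - lam2))   # weight drops toward 0 where lambda1 >> lambda2
```

The resulting weight K can then be multiplied into the image (or residual) before thresholding, so that strong cloud edges are attenuated while flat regions and point-like targets are preserved.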

Temporal Differentiation of Image Pairs

Assuming that the noise in an image pair follows a Gaussian distribution $N(0, \sigma^2)$ and the registration error is zero, the residual after image differentiation follows a Gaussian distribution $N(0, 2\sigma^2)$. Since the grayscale value of the residual image can be positive or negative, we use bilateral (two-sided) thresholding to extract candidate points: the positive threshold $th\_p$ is used to extract positive candidate points, and the negative threshold $th\_n$ is used to extract negative candidate points. If both the positive and negative grayscale values of a target exceed the corresponding thresholds, the data association step follows (according to Equation (5)). Finally, the detection rate $p_d$ and false alarm rate $p_f$ can be calculated by Equations (14) and (15).
$$p_d = p_d^+ \cdot p_d^-, \qquad p_d^+ = Q\left(\frac{th\_p - (T_{max} - \mu_{BG})}{\sqrt{2}\,\sigma_{BG}}\right), \qquad p_d^- = 1 - Q\left(\frac{th\_n - (\mu_{BG} - T_{max})}{\sqrt{2}\,\sigma_{BG}}\right)$$
where $p_d^+$ and $p_d^-$ are the target detection rates for the positive and negative gray values, respectively. $T_{max}$ is the highest intensity value in the target region, and $\mu_{BG}$ and $\sigma_{BG}$ are the mean and standard deviation of the intensity values in a residual image. $Q(x)$ is a right-tailed distribution function, as defined in Equation (16).
$$p_f = N \cdot p_f^+ \cdot p_f^-, \qquad p_f^+ = Q\left(\frac{th\_p}{\sqrt{2}\,\sigma_{BG}}\right), \qquad p_f^- = 1 - Q\left(\frac{th\_n}{\sqrt{2}\,\sigma_{BG}}\right)$$
where $p_f^+$ and $p_f^-$ are the false alarm rates for the positive and negative gray values, respectively, and $N$ is the number of pixels that can possibly be associated.
$$Q(x) = \int_x^{+\infty} \frac{1}{\sqrt{2\pi\sigma_{BG}^2}} \exp\left(-\frac{(\xi - \mu_{BG})^2}{2\sigma_{BG}^2}\right) \mathrm{d}\xi$$
To ensure that both positive and negative targets can be detected, the signal-to-clutter ratio (SCR) is defined as:
$$SCR = \frac{|T_{max} - \mu_{BG}|}{\sigma_{BG}}$$
Combining Equations (14)–(17), the relationship between the detection rate, the false alarm rate and the SCR can be expressed as:
$$Q^{-1}\left(\left(p_f/N\right)^{\frac{1}{2}}\right) - Q^{-1}\left(p_d^{\frac{1}{2}}\right) = \frac{SCR}{\sqrt{2}}$$
The theoretic receiver operating characteristic (ROC) curves validated through Monto Carlo simulations for the double linear array detector are analyzed as shown in Figure 6. It can be seen that the detection probability becomes higher as the SNR increases. Specifically, When the SNR is 6.1 and the false alarm rate is 1 × 10 4 , our double linear array detector can achieve a detection rate of 97 % ; when the SNR is above 6.1 and the false alarm rate is 1 × 10 5 , our double linear array detector can achieve a detection rate of 93 % .

4. Experimental Results and Discussions

4.1. Simulation Scenes

In this section, experiments are conducted to test the performance of our method. Given an image, another image was generated with a one-pixel jitter in an arbitrary direction to simulate the image pair produced by the double linear array detector. Then, simulated targets were added to each image: the target position and motion direction were randomly determined, the intensity of the targets was set according to a specific SCR, and the target velocity was set to 2∼3 km/s. Next, the camera ground resolution was set to 1 km × 1 km, and the interval of an image pair was set to 2 s. Finally, considering that the target can move along the diagonal direction as well as the horizontal direction, the distance constraint for a target pair after clutter suppression was set to 2∼7 pixels, as shown in Figure 7.
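A sketch of this simulation setup is given below. The function name, image size and background statistics are ours; the one-pixel jitter, the SCR-scaled target amplitude, and the 2∼7 pixel displacement follow the description above.

```python
import numpy as np

def make_image_pair(base: np.ndarray, scr: float, seed: int = 1):
    """Simulate an image pair from a double linear array detector: the second
    frame is the first shifted by one pixel in a random direction, and a point
    target of amplitude scr * sigma is injected with a 2-7 pixel displacement."""
    rng = np.random.default_rng(seed)
    sigma = base.std()
    dy, dx = rng.integers(-1, 2, size=2)                  # one-pixel registration jitter
    frame1 = base.copy()
    frame2 = np.roll(base, shift=(int(dy), int(dx)), axis=(0, 1))
    y, x = rng.integers(10, base.shape[0] - 10, size=2)   # random target position
    step = int(rng.integers(2, 8))                        # 2-7 pixel target displacement
    frame1[y, x] += scr * sigma
    frame2[y, x + step] += scr * sigma
    return frame1, frame2

background = np.random.default_rng(2).normal(100.0, 5.0, (128, 128))
frame1, frame2 = make_image_pair(background, scr=6.0)
```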

4.2. Experiments on Simulation Scenes

Three experiments were conducted on scenes with different shapes of cloud (i.e., Figure 8a–d, ragged cloud scenes; Figure 8e–h, strong cloud scenes; Figure 8i–l, fluffy cloud scenes), as shown in the first column of Figure 8. The second column in Figure 8 shows the filtered results after image differentiation, and the third column shows the filtered results after cloud edge suppression and image differentiation. It can be seen from the third column of Figure 8 that false alarms in dynamic cloud edges have been significantly reduced. Finally, accurate association results are obtained, as shown in the fourth column of Figure 8.

4.3. ROC Curves Evaluation

The ROC curves obtained through Monte Carlo simulations were used to test our proposed spatio-temporal method for a double linear array detector. An ROC curve represents the probability of detection $P_d$ as a function of the false alarm rate $P_f$. $P_d$ and $P_f$ can be calculated as:
$$P_d = \frac{n_t}{n_c}$$
$$P_f = \frac{n_f}{n}$$
where $n_t$, $n_c$, $n_f$ and $n$ represent the number of detected true target pixels, ground-truth target pixels, false alarm pixels and the total number of image pixels, respectively.
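For completeness, a direct transcription of Equations (19) and (20) is shown below, assuming binary NumPy masks for the detections and the ground truth (the function name is ours):

```python
import numpy as np

def pd_pf(detected: np.ndarray, truth: np.ndarray):
    """P_d and P_f from binary detection and ground-truth masks, Equations (19)-(20)."""
    n_t = np.count_nonzero(detected & truth)     # correctly detected target pixels
    n_c = np.count_nonzero(truth)                # ground-truth target pixels
    n_f = np.count_nonzero(detected & ~truth)    # false alarm pixels
    n = truth.size                               # total number of image pixels
    return n_t / n_c, n_f / n
```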
As shown in Figure 9, under a given SCR, the detection probability increases at the cost of an increase in the false alarm rate. When the SCR is increased, the detection probability increases under a given false alarm rate. It can be observed that the detection rate and the false alarm rate are a pair of contradictory variables. Furthermore, the theoretical ROC results are better than the results achieved by the other two methods in real scenes. This is because the residual after background suppression does not completely follow a Gaussian distribution.
Table 1 shows the probability of detection in the three experiments under different SCRs and $P_f$ values. Table 2 shows the false alarm rates in the three experiments under different SCRs and $P_d$ values. We can see that with the cloud edge suppression method, the detection rate is significantly improved and the false alarm rate is largely reduced.

4.4. Evaluation of the Proposed Detection Framework

The data association results for positive and negative targets under different operating conditions are shown in Figure 10. From Figure 10a–p, we can see that after image differentiation and cloud edge suppression, the target pairs can be correctly associated, and no false detection is found in the association results. From Figure 10q–t, we find that one target can be associated with two or more targets under a few conditions. To address this problem, we plan to extend the double linear array to a multi-linear array in the future, so that false associations can be removed using the direction of the target motion trajectory. Furthermore, the input SCR of the camera can be improved to mitigate this problem.

5. Conclusions

This paper has presented a complete framework for small infrared target detection using a double linear array detector. First, a new double linear array detector was modeled to generate image pairs at short intervals. Second, considering the limitations of purely spatial or purely temporal detection methods, an image differentiation-based change detection method in the temporal domain was proposed and combined with the structure tensor edge suppression method in the spatial domain. The experimental results showed that targets can be extracted accurately with a very low false alarm rate and an acceptable detection rate.

Author Contributions

Conceptualization, J.G. and Z.P.; methodology, J.G.; software, J.G.; validation, J.G. and J.Y.; formal analysis, J.G.; investigation, J.G.; resources, L.W.; data curation, Z.P.; writing—original draft preparation, J.G.; writing—review and editing, J.G.; visualization, J.G.; supervision, J.G.; project administration, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, M.; Dong, L.; Ma, D.; Xu, W. Infrared target detection in marine images with heavy waves via local patch similarity. Infrared Phys. Technol. 2022, 125, 104283. [Google Scholar] [CrossRef]
  2. Rawat, S.S.; Verma, S.K.; Kumar, Y. Infrared small target detection based on non-convex triple tensor factorisation. IET Image Process. 2021, 15, 556–570. [Google Scholar] [CrossRef]
  3. Zhu, Q.; Zhu, S.; Liu, G.; Peng, Z. Infrared Small Target Detection Using Local Feature-Based Density Peaks Searching. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6507805. [Google Scholar] [CrossRef]
  4. He, X.; Ling, Q.; Zhang, Y.; Lin, Z.; Zhou, S. Detecting Dim Small Target in Infrared Images via Sub-Pixel Sampling Cuneate Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 3189225. [Google Scholar] [CrossRef]
  5. Lu, R.; Yang, X.; Li, W.; Fan, J.; Li, D.; Jing, X. Robust infrared small target detection via multidirectional derivative-based weighted contrast measure. IEEE Geosci. Remote Sens. Lett. 2020, 19, 7000105. [Google Scholar] [CrossRef]
  6. Yang, X.; Wu, T.; Wang, N.; Huang, Y.; Song, B.; Gao, X. HCNN-PSI: A hybrid CNN with partial semantic information for space target recognition. Pattern Recognit. 2020, 108, 107531. [Google Scholar] [CrossRef]
  7. Bai, X.h.; Xu, S.w.; Guo, Z.x.; Shui, P.l. Floating Small Target Detection Based on the Dual-polarization Cross-time-frequency Distribution in Sea Clutter. Digital Signal Process. 2022, 129, 103625. [Google Scholar] [CrossRef]
  8. Liu, T.; Yang, J.; Li, B.; Xiao, C.; Sun, Y.; Wang, Y.; An, W. Nonconvex Tensor Low-Rank Approximation for Infrared Small Target Detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5614718. [Google Scholar] [CrossRef]
  9. Gao, J.; Guo, Y.; Lin, Z.; An, W.; Li, J. Robust infrared small target detection using multiscale gray and variance difference measures. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 5039–5052. [Google Scholar] [CrossRef]
  10. Huang, Z.; Zhang, Y.; Yue, X.; Li, X.; Fang, H.; Hong, H.; Zhang, T. Joint horizontal-vertical enhancement and tracking scheme for robust contact-point detection from pantograph-catenary infrared images. Infrared Phys. Technol. 2020, 105, 103156. [Google Scholar] [CrossRef]
  11. Gao, Z.; Dai, J.; Xie, C. Dim and small target detection based on feature mapping neural networks. J. Vis. Commun. Image Represent. 2019, 62, 206–216. [Google Scholar] [CrossRef]
  12. Ding, L.; Xu, X.; Cao, Y.; Zhai, G.; Yang, F.; Qian, L. Detection and tracking of infrared small target by jointly using SSD and pipeline filter. Digit. Signal Process. 2021, 110, 102949. [Google Scholar] [CrossRef]
  13. Wang, N.; Li, B.; Wei, X.; Wang, Y.; Yan, H. Ship detection in spaceborne infrared image based on lightweight CNN and multisource feature cascade decision. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4324–4339. [Google Scholar] [CrossRef]
  14. Li, Y.; Li, Z.; Shen, Y.; Guo, Z. Infrared Small Target Detection Via Center-surround Gray Difference Measure with Local Image Block Analysis. IEEE Trans. Aerosp. Electron. Syst. 2022. [Google Scholar] [CrossRef]
  15. Zuo, Z.; Tong, X.; Wei, J.; Su, S.; Wu, P.; Guo, R.; Sun, B. AFFPN: Attention Fusion Feature Pyramid Network for Small Infrared Target Detection. Remote Sens. 2022, 14, 3412. [Google Scholar] [CrossRef]
  16. Zhang, T.; Peng, Z.; Wu, H.; He, Y.; Li, C.; Yang, C. Infrared small target detection via self-regularized weighted sparse model. Neurocomputing 2021, 420, 124–148. [Google Scholar] [CrossRef]
  17. Wang, H.; Li, H.; Zhou, H.; Chen, X. Low-altitude infrared small target detection based on fully convolutional regression network and graph matching. Infrared Phys. Technol. 2021, 115, 103738. [Google Scholar] [CrossRef]
  18. Chen, G.; Wang, W.; Tan, S. IRSTFormer: A Hierarchical Vision Transformer for Infrared Small Target Detection. Remote Sens. 2022, 14, 3258. [Google Scholar] [CrossRef]
  19. Qin, Y.; Bruzzone, L.; Gao, C.; Li, B. Infrared small target detection based on facet kernel and random walker. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7104–7118. [Google Scholar] [CrossRef]
  20. Zhao, M.; Li, L.; Li, W.; Tao, R.; Li, L.; Zhang, W. Infrared small-target detection based on multiple morphological profiles. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6077–6091. [Google Scholar] [CrossRef]
  21. Yang, P.; Dong, L.; Xu, W. Detecting Small Infrared Maritime Targets Overwhelmed in Heavy Waves by Weighted Multidirectional Gradient Measure. IEEE Geosci. Remote Sens. Lett. 2021, 19, 3080389. [Google Scholar] [CrossRef]
  22. Zhang, T.; Wu, H.; Liu, Y.; Peng, L.; Yang, C.; Peng, Z. Infrared small target detection based on non-convex optimization with Lp-norm constraint. Remote Sens. 2019, 11, 559. [Google Scholar] [CrossRef]
  23. Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Attentional local contrast networks for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9813–9824. [Google Scholar] [CrossRef]
  24. Zhou, F.; Wu, Y.; Dai, Y. Infrared small target detection via incorporating spatial structural prior into intrinsic tensor sparsity regularization. Digit. Signal Process. 2021, 111, 102966. [Google Scholar] [CrossRef]
  25. Tian, Y.; Liu, J.; Zhu, S.; Xu, F.; Bai, G.; Liu, C. Ship Detection in Visible Remote Sensing Image Based on Saliency Extraction and Modified Channel Features. Remote Sens. 2022, 14, 3347. [Google Scholar] [CrossRef]
  26. Deng, L.; Zhang, J.; Xu, G.; Zhu, H. Infrared small target detection via adaptive M-estimator ring top-hat transformation. Pattern Recognit. 2021, 112, 107729. [Google Scholar] [CrossRef]
  27. Hou, Q.; Wang, Z.; Tan, F.; Zhao, Y.; Zheng, H.; Zhang, W. RISTDnet: Robust infrared small target detection network. IEEE Geosci. Remote Sens. Lett. 2021, 19, 3050828. [Google Scholar] [CrossRef]
  28. Shahraki, H.; Aalaei, S.; Moradi, S. Infrared small target detection based on the dynamic particle swarm optimization. Infrared Phys. Technol. 2021, 117, 103837. [Google Scholar] [CrossRef]
  29. Zhao, M.; Li, W.; Li, L.; Ma, P.; Cai, Z.; Tao, R. Three-order tensor creation and tucker decomposition for infrared small-target detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5000216. [Google Scholar] [CrossRef]
  30. Peršić, J.; Petrović, L.; Marković, I.; Petrović, I. Spatiotemporal multisensor calibration via gaussian processes moving target tracking. IEEE Trans. Robot. 2021, 37, 1401–1415. [Google Scholar] [CrossRef]
  31. Cui, H.; Li, L.; Liu, X.; Su, X.; Chen, F. Infrared Small Target Detection Based on Weighted Three-Layer Window Local Contrast. IEEE Geosci. Remote Sens. Lett. 2021, 19, 3133649. [Google Scholar] [CrossRef]
  32. Xue, W.; Qi, J.; Shao, G.; Xiao, Z.; Zhang, Y.; Zhong, P. Low-Rank Approximation and Multiple Sparse Constraint Modeling for Infrared Low-Flying Fixed-Wing UAV Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 4150–4166. [Google Scholar] [CrossRef]
  33. Stojnić, V.; Risojević, V.; Muštra, M.; Jovanović, V.; Filipi, J.; Kezić, N.; Babić, Z. A method for detection of small moving objects in UAV videos. Remote Sens. 2021, 13, 653. [Google Scholar] [CrossRef]
  34. Aghaziyarati, S.; Moradi, S.; Talebi, H. Small infrared target detection using absolute average difference weighted by cumulative directional derivatives. Infrared Phys. Technol. 2019, 101, 78–87. [Google Scholar] [CrossRef]
  35. Pang, D.; Shan, T.; Li, W.; Ma, P.; Liu, S.; Tao, R. Infrared dim and small target detection based on greedy bilateral factorization in image sequences. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3394–3408. [Google Scholar] [CrossRef]
  36. Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Asymmetric contextual modulation for infrared small target detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 950–959. [Google Scholar]
  37. Zhou, F.; Wu, Y.; Dai, Y.; Ni, K. Robust infrared small target detection via jointly sparse constraint of l 1/2-metric and dual-graph regularization. Remote Sens. 2020, 12, 1963. [Google Scholar] [CrossRef]
  38. Liu, D.; Cao, L.; Li, Z.; Liu, T.; Che, P. Infrared Small Target Detection Based on Flux Density and Direction Diversity in Gradient Vector Field. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2018, 2528–2554. [Google Scholar] [CrossRef]
  39. Li, Y.; Zhang, Y. Robust infrared small target detection using local steering kernel reconstruction. Pattern Recognit. 2018, 77, 113–125. [Google Scholar] [CrossRef]
  40. Li, C.; Chen, N.; Zhao, H.; Yu, T. Multiple-beam lidar detection technology. In Proceedings of the Seventh Asia Pacific Conference on Optics Manufacture and 2021 International Forum of Young Scientists on Advanced Optical Manufacturing (APCOM and YSAOM 2021), SPIE, Hong Kong, China, 13–16 August 2022; Volume 12166, pp. 1773–1778. [Google Scholar]
  41. Liu, H.K.; Zhang, L.; Huang, H. Small target detection in infrared videos based on spatio-temporal tensor model. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8689–8700. [Google Scholar] [CrossRef]
  42. Pang, D.; Shan, T.; Li, W.; Ma, P.; Tao, R.; Ma, Y. Facet derivative-based multidirectional edge awareness and spatial–temporal tensor model for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5001015. [Google Scholar] [CrossRef]
  43. Zhang, P.; Zhang, L.; Wang, X.; Shen, F.; Pu, T.; Fei, C. Edge and corner awareness-based spatial–temporal tensor model for infrared small-target detection. IEEE Trans. Geosci. Remote Sens. 2020, 59, 10708–10724. [Google Scholar] [CrossRef]
  44. Hu, Y.; Ma, Y.; Pan, Z.; Liu, Y. Infrared Dim and Small Target Detection from Complex Scenes via Multi-Frame Spatial–Temporal Patch-Tensor Model. Remote Sens. 2022, 14, 2234. [Google Scholar] [CrossRef]
  45. Sun, Y.; Yang, J.; An, W. Infrared dim and small target detection via multiple subspace learning and spatial-temporal patch-tensor model. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3737–3752. [Google Scholar] [CrossRef]
  46. Pang, D.; Ma, P.; Shan, T.; Li, W.; Tao, R.; Ma, Y.; Wang, T. STTM-SFR: Spatial–Temporal Tensor Modeling with Saliency Filter Regularization for Infrared Small Target Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18. [Google Scholar] [CrossRef]
  47. Kong, X.; Yang, C.; Cao, S.; Li, C.; Peng, Z. Infrared small target detection via nonconvex tensor fibered rank approximation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–21. [Google Scholar] [CrossRef]
  48. Biswas, S.K.; Milanfar, P. Linear support tensor machine with LSK channels: Pedestrian detection in thermal infrared images. IEEE Trans. Image Process. 2017, 26, 4229–4242. [Google Scholar] [CrossRef]
  49. Guan, X.; Zhang, L.; Huang, S.; Peng, Z. Infrared small target detection via non-convex tensor rank surrogate joint local contrast energy. Remote Sens. 2020, 12, 1520. [Google Scholar] [CrossRef]
  50. Deng, L.; Song, J.; Xu, G.; Zhu, H. When Infrared Small Target Detection Meets Tensor Ring Decomposition: A Multiscale Morphological Framework. IEEE Trans. Aerosp. Electron. Syst. 2022, 3162–3176. [Google Scholar] [CrossRef]
Figure 1. An example of image pair and positive and negative target pairs. (a) An image pair with slowly changing cloud clutter. (b) Local detection results after image differentiation.
Figure 2. The optical path of a double linear array detector.
Figure 3. Geometry of the detecting system.
Figure 4. Overall flowchart for a double linear array detector.
Figure 5. Illustrations of cloud edge suppression based on structure tensor measures. (a) Ragged cloud edge; (b) strong cloud edge; (c) fluffy cloud edge.
Figure 6. The theoretic ROC curves for the double linear array detector.
Figure 7. Schematic diagram of distance constraint for a target pair. (a) Horizontal direction, (b) Vertical direction, (c) Diagonal direction.
Figure 8. An example of results achieved by the proposed small target detection methods on three images. The red and blue boxes in each image represent the candidate points passing the positive and negative thresholds (as mentioned in Section 3), respectively. (a–d) Ragged clouds; (e–h) strong clouds; (i–l) fluffy clouds.
Figure 9. ROC curves achieved by three methods under different SCRs. (a) Performance achieved by temporal image differentiation. (b) Performance achieved by spatial cloud edge suppression and temporal image differentiation. (c) Performance achieved by theoretic analysis.
Figure 10. The data association results with targets in different positions in various complex backgrounds. (a–p) Target pairs were associated correctly; (q–t) one target was associated with two or more targets.
Table 1. $P_d$ achieved by three methods under different SCRs and $P_f$ values.
            P_f = 5 × 10⁻⁶                          P_f = 10⁻⁵
SCR    Temporal   Spatial-Temporal   Theory     Temporal   Spatial-Temporal   Theory
6      11.39%     54.40%             88.98%     44.03%     63.93%             91.90%
7      65.53%     74.16%             99.02%     70.67%     81.06%             99.38%
8      84.93%     86.53%             99.97%     88.39%     92.74%             99.98%
‘Temporal’ stands for the temporal image differentiation method; ‘Spatial-temporal’ stands for the method combining the spatial cloud edge suppression and the temporal image differentiation method.
Table 2. $P_f$ achieved by three methods under different SCRs and $P_d$ values.
            P_d = 85%                                         P_d = 90%
SCR    Temporal      Spatial-Temporal   Theory            Temporal      Spatial-Temporal   Theory
6      3.30 × 10⁻⁴   2.32 × 10⁻⁴        3.00 × 10⁻⁵       8.82 × 10⁻⁴   5.14 × 10⁻⁴        6.00 × 10⁻⁶
7      4.34 × 10⁻⁵   2.10 × 10⁻⁵        2.00 × 10⁻⁷       6.88 × 10⁻⁴   6.16 × 10⁻⁵        4.00 × 10⁻⁸
8      4.97 × 10⁻⁶   4.28 × 10⁻⁶        8.00 × 10⁻¹⁰      1.40 × 10⁻⁵   8.08 × 10⁻⁶        5.00 × 10⁻⁹
‘Temporal’ stands for the temporal image differentiation method; ‘Spatial-temporal’ stands for the method combining the spatial cloud edge suppression and the temporal image differentiation method.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
