Article

Multi-Scale Strengthened Directional Difference Algorithm Based on the Human Vision System

Information Science and Engineering Department, Xinjiang University, Urumqi 830017, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 10009; https://doi.org/10.3390/s222410009
Submission received: 16 September 2022 / Revised: 8 December 2022 / Accepted: 15 December 2022 / Published: 19 December 2022
(This article belongs to the Section Intelligent Sensors)

Abstract

The human visual system (HVS) mechanism has been successfully introduced into the field of infrared small target detection. However, most of the current detection algorithms based on the mechanism of the human visual system ignore the continuous direction information and are easily disturbed by highlight noise and object edges. In this paper, a multi-scale strengthened directional difference (MSDD) algorithm is proposed. It is mainly divided into two parts: local directional intensity measure (LDIM) and local directional fluctuation measure (LDFM). In LDIM, an improved window is used to suppress most edge clutter, highlights, and holes and enhance true targets. In LDFM, the characteristics of the target area, the background area, and the connection between the target and the background are considered, which further highlights the true target signal and suppresses the corner clutter. Then, the MSDD saliency map is obtained by fusing the LDIM map and the LDFM map. Finally, an adaptive threshold segmentation method is employed to capture true targets. The experiments show that the proposed method achieves better detection performance in complex backgrounds than several classical and widely used methods.

1. Introduction

Infrared imaging systems have been widely used in civil fields such as diseased cell diagnosis, industrial flaw detection, and agricultural and industrial inspection [1,2,3,4,5]. Their application value in military fields such as reconnaissance, early warning, guidance, and video surveillance is even more evident [4]. The infrared search and track (IRST) system is one of their core components: it detects targets that radiate infrared energy in infrared images and tracks and predicts their trajectories [6]. Infrared small target detection and tracking is, in turn, one of the core technologies of the IRST system.
However, in most practical infrared imaging systems, because the target is physically small or far from the detector, the target in the output image occupies very few pixels (typically no more than 80 pixels according to the SPIE definition [7]) and lacks color and texture features [8,9]. Small targets in real scenes share the following characteristics: the target area shows an observable discontinuity with respect to the surrounding background, the number of target pixels is small, the contrast between the target and the background is low, the background is complex, and texture information is lacking. These detection difficulties and challenges have attracted increasing attention from researchers worldwide.
At present, many infrared small target detection algorithms have been proposed. In general, the existing algorithms can be roughly divided into single-frame and multi-frame algorithms [10]. Given the need for early warning and the good real-time potential of single-frame detection algorithms [11], this paper focuses only on single-frame-based detection. Next, we give a brief overview of HVS-based small target detection methods and other single-frame detection methods.
HVS-based small target detection methods. In recent years, mechanisms of the human visual system have been successfully introduced into the field of infrared small target detection [12,13,14,15]. Theoretical mechanisms such as local contrast, visual saliency maps, multi-feature fusion, and multi-scale analysis have become new theoretical bases for infrared small target detection. Chen et al. proposed the local contrast measure (LCM) algorithm [12], which contrasts the current central region with its surrounding neighborhood to obtain a contrast factor, thus enhancing the target and suppressing the background. However, this method is not suitable for detecting dark targets and has a limited ability to suppress noise and background. Based on LCM, Han et al. proposed the improved LCM (ILCM) algorithm [16], which uses the subblock average as a parameter to better suppress random point noise, though true small targets may also be smoothed away. Inspired by biological vision mechanisms, Wei et al. proposed the multi-scale patch-based contrast measure (MPCM) algorithm [17], which defines a local contrast measure based on patch differences for background suppression and target enhancement. Although this method can detect both bright and dark targets in IR images, it is not robust against thick clutter. Subsequently, the novel LCM (NLCM) algorithm and the weighted local difference measure (WLDM) algorithm [18,19] combined the advantages of local differential contrast and local contrast to enhance the target, but they do not effectively suppress high-brightness backgrounds. To address this problem, Han et al. proposed the relative LCM (RLCM) algorithm [20]; however, it is more sensitive to scattering noise. Recently, Han et al. proposed the weighted strengthened local contrast measure (WSLCM) algorithm [21], which uses matched filtering and background estimation to enhance the target and suppress the background and adjusts the final result with a weighting function. It achieves better detection performance, but its time cost is high, making it unsuitable for real-time detection.
Other small target detection methods. Early researchers mostly worked on filter-based methods, in which a specific filter is constructed according to the shape of the target or background to enhance the target signal and suppress the background. Examples include the top-hat filter [22], the max-mean/max-median filter [23], and an improved anisotropic partial difference filter [24]; these can enhance the target and suppress complex backgrounds, but their robustness is limited. Some higher-order filters, such as the Laplacian of Gaussian filter [25] and the bilateral filter [26], have also been designed and improved for small target detection. These algorithms are simple in design and fast to compute. However, they suffer from a high false positive rate when the image signal-to-clutter ratio (SCR) is low or the target shape is heterogeneous, and thus fail to detect real targets correctly. Moreover, many scholars regard infrared small target images as a superposition of low-rank and sparse components and have proposed many detection algorithms based on robust principal component analysis (RPCA) [27]. Hu et al. [28] proposed a small target detection algorithm based on saliency and principal component analysis. Cao et al. [29] proposed an algorithm based on probabilistic principal component analysis (PPCA), which maps the image's input vector to a subspace by computing the PPCA parameters; the distance between the original vector and the reconstructed vector indicates whether the input vector corresponds to a small target. Gao et al. [1] proposed the infrared patch-image (IPI) model, which transforms small target detection into an optimization problem of recovering a low-rank sparse matrix.
Generally, this type of algorithm has good robustness and a high detection rate but is slow on large-scale images and has poor real-time performance. In addition, methods based on deep learning have become an increasingly popular research direction. Inspired by the application of generative adversarial networks (GANs) to unsupervised learning, Wang et al. [30] proposed a deep learning framework to balance target missed detections (MD) and false alarms (FA). Zhao et al. [31] proposed an algorithm that uses a GAN model to autonomously learn small target features and constructs a five-layer discriminator to enhance the data-fitting ability of the generator. The works in [32,33] used convolutional neural networks (CNNs) to propose new infrared image enhancement methods that highlight the target and suppress background clutter. In general, this class of methods extracts small target features in a self-learning manner to distinguish them from the background, avoiding tedious manual feature engineering, and can improve detection accuracy to a certain extent. However, due to the lack of large and diverse training data, such methods must generate large datasets that simulate the properties of infrared images, including various forms of targets and backgrounds. Applying deep-learning-based methods therefore remains challenging.
Building on the previous work, this paper proposes a new method to enhance the directional difference. The algorithm makes full use of the anisotropy of the true target, and takes into account the features of the true target itself, the background neighborhood features, and the features between the two. Extensive data experiments show that the proposed method outperforms existing algorithms in detecting complex backgrounds. Furthermore, the method is robust to different target shapes, target sizes, and noise types.
This paper makes three contributions:
  • We improve the previous scanning window: the center pixel of the window does not participate in the calculation, which effectively handles high-brightness pixel-level noise (PNHB).
  • Using the new scanning window and the anisotropy of the small target itself, we propose a local directional intensity measure (LDIM).
  • Considering the features of the true target itself, the features of the background neighborhood, and the features linking the two, we propose a local directional fluctuation measure (LDFM).
The article is organized as follows: In Section 2, the related work is presented. The proposed method and its components are described in detail in Section 3. In Section 4, the experimental results are given, comparing the proposed method with other methods. In Section 5, the analysis and discussion of the algorithm are presented. The article concludes in Section 6.

2. Related Work

Most of the current detection algorithms based on the mechanisms of the human visual system ignore continuous direction information, which is potentially very valuable. Recently, Saed Moradi et al. [34] used a concept similar to the average absolute gray difference [35] to construct a new directional small target detection algorithm called the absolute directional mean difference (ADMD).
In ADMD, a double-nested window is first defined as shown in Figure 1a, where T represents the target block and B represents the eight background cells. The main idea of the ADMD algorithm is as follows:
D_k = \left( m_0(i,j) - m_k(i,j) \right)^2 \times F\left( m_0(i,j) - m_k(i,j) \right), \quad k = 1, 2, \ldots, 8
where m_0 represents the average gray value of the target cell T, and m_k represents the average gray value of the k-th background cell. F(\cdot) is a function defined as follows:
F(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}
The function F(\cdot) suppresses the negative responses generated in the calculation, since true small targets are usually brighter than their background neighbors. ADMD is therefore defined as follows:
\mathrm{ADMD}(i,j) = \min \left\{ D_1(i,j), D_2(i,j), \ldots, D_8(i,j) \right\}
In general, the true small target area is brighter than its surroundings, which means that all of the directional contrast values D_k are large, whereas non-target areas do not have this property. This definition thus implies the ability to enhance the target and suppress the background.
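To make the ADMD computation concrete, the following minimal sketch (our own illustration; the 9 × 9 patch, 3 × 3 cell size, and toy values are assumptions, not from the paper) evaluates the ADMD response at a single window position:

```python
import numpy as np

def admd_response(patch):
    """ADMD response at one window position.

    `patch` is a square window split into a 3x3 grid of cells: the
    center cell is the target block T, the remaining eight are the
    background cells B1..B8.
    """
    k = patch.shape[0] // 3
    means = [patch[r*k:(r+1)*k, c*k:(c+1)*k].mean()
             for r in range(3) for c in range(3)]
    m0 = means[4]                        # average gray value of T
    bg = means[:4] + means[5:]           # averages of the eight B cells
    # D_k = (m0 - mk)^2 * F(m0 - mk): F zeroes out negative contrasts
    d = [(m0 - mk) ** 2 if m0 - mk >= 0 else 0.0 for mk in bg]
    return min(d)                        # ADMD keeps the minimum over k

# A bright 3x3 blob centered in a flat background responds strongly;
# a pure-background window responds with zero.
target_patch = np.zeros((9, 9)); target_patch[3:6, 3:6] = 10.0
flat_patch = np.zeros((9, 9))
print(admd_response(target_patch))  # 100.0
print(admd_response(flat_patch))    # 0.0
```

Taking the minimum over the eight directions is what suppresses edges: an edge leaves at least one background cell as bright as the center, driving one D_k, and hence the minimum, to zero.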
However, this method does not fully consider the anisotropy of the target point: corner points and PNHB strongly affect the algorithm, and both the enhancement of the true target and the background suppression are unsatisfactory. The goal of the algorithm proposed in this paper is to improve the ADMD algorithm so that it can effectively deal with complex noise.

3. Materials and Methods

The flowchart of the proposed algorithm is shown in Figure 2; it mainly consists of two parts, LDIM and LDFM. First, LDIM is computed on the raw image to obtain candidate points. Second, LDFM is used to correct wrong candidate points and enhance true small targets. The LDIM map and LDFM map are then fused to obtain the final saliency map (SM), and the algorithm is extended to multiple scales. Finally, the target is extracted by a threshold operation.
Figure 1. (a) Double-nested detection window. (b) Improved double-nested detection window. (c) Situations that the algorithm needs to handle. (d) Gaussian shape.
Sensors 22 10009 g001
Figure 2. Flowchart of our MSDD small target detection method.
Sensors 22 10009 g002

3.1. Local Directional Intensity Measure

In general, as shown in Figure 1d, true small targets have Gaussian-shaped profiles whose gradient directions are omnidirectional; that is, the intensity decays in all directions toward the surroundings. True small targets are brighter than their background neighborhood and form a high contrast with it.
As shown in Figure 1b, an improved double-nested window is designed. The window is divided into 9 cells, where cell 0 is the target region and the remaining cells are background neighborhoods. Note that the center pixel of the target area does not participate in the calculation, which effectively avoids PNHB, as shown in Figure 1c.
Given an infrared image, candidate regions satisfying the above properties can be obtained by the following calculations:
D(x,y) = \max \left\{ \min_{i} \left( m_0 - m_i \right), 0 \right\}, \quad i = 1, 2, \ldots, 8
m_i = \frac{1}{N_i} \sum_{j=1}^{N_i} G_j^i, \quad i = 0, 1, \ldots, 8
where (x, y) represents the center point of the target area, m_i is the average intensity of the i-th cell, N_i is the number of pixels in the i-th cell, and G_j^i is the intensity of the j-th pixel in the i-th cell. \min(\cdot) and \max(\cdot) are the minimum and maximum operations, respectively. The local directional intensity measure is then obtained as follows:
\mathrm{LDIM}(x,y) = D(x,y)^2
that is, LDIM is the square of D, which further enhances true targets.
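The definitions above can be sketched at a single window position as follows (the cell size, helper names, and toy patches are our own illustrative assumptions):

```python
import numpy as np

def ldim_response(patch):
    """LDIM at one window position (a sketch; cell size is illustrative).

    The window is a 3x3 grid of cells; cell 0 (center) is the target
    region whose central pixel is EXCLUDED from the mean, so a single
    high-brightness noise pixel (PNHB) cannot inflate m0.
    """
    k = patch.shape[0] // 3
    center = patch[k:2*k, k:2*k].astype(float)
    mid = k // 2
    m0 = (center.sum() - center[mid, mid]) / (center.size - 1)  # center pixel dropped
    bg_means = [patch[r*k:(r+1)*k, c*k:(c+1)*k].mean()
                for r in range(3) for c in range(3) if (r, c) != (1, 1)]
    d = max(min(m0 - mi for mi in bg_means), 0.0)
    return d ** 2  # LDIM = D^2; squaring further enhances true targets

# A lone hot pixel (PNHB) is suppressed: excluding it leaves m0 at the
# background level, while a genuine 3x3 blob still responds strongly.
pnhb = np.zeros((9, 9)); pnhb[4, 4] = 255.0
blob = np.zeros((9, 9)); blob[3:6, 3:6] = 10.0
print(ldim_response(pnhb))  # 0.0 -- the noise pixel is ignored
print(ldim_response(blob))  # 100.0
```

Because the center pixel is excluded from m_0, a single hot pixel no longer dominates the target-cell mean, which is exactly the PNHB case illustrated in Figure 1c.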

3.2. Local Directional Fluctuation Measure

The fluctuations of the target area, the neighborhood background, and the target-neighborhood background should all be taken into account. First, consider the target-neighborhood background fluctuation. In Figure 3, the three neighborhoods and the central area are combined into a new region, and four directional blocks are obtained after dividing this region.
The fluctuation of each block is calculated as follows:
\sigma_{\mathrm{block}}(x,y) = \min \left( \sigma_1, \sigma_2, \sigma_3, \sigma_4 \right)
\sigma_i = \frac{1}{N_i} \sum_{j=1}^{N_i} \left( I_j^i - M_i \right)^2, \quad i = 1, 2, 3, 4
M_i = \frac{1}{N_i} \sum_{k=1}^{N_i} I_k^i
where M i is the average intensity value of the ith block. Then, the target area and the neighborhood background are calculated separately.
\sigma_{\mathrm{center}}(x,y) = \sigma_{\mathrm{cell}_0}(x,y) = \frac{1}{N_{\mathrm{cell}_0}} \sum_{j=1}^{N_{\mathrm{cell}_0}} \left( I_j - M_{\mathrm{cell}_0} \right)^2
\sigma_{\mathrm{bg}}(x,y) = \mathrm{mean}\left( \sigma_{\mathrm{cell}_i} \right), \quad i = 1, 2, \ldots, 8
\sigma_{\mathrm{cell}_i}(x,y) = \frac{1}{N_{\mathrm{cell}_i}} \sum_{j=1}^{N_{\mathrm{cell}_i}} \left( I_j - M_{\mathrm{cell}_i} \right)^2
where mean ( · ) is the mean operation. Note that the calculation here is for each cell. Finally, LDFM is obtained.
\mathrm{LDFM}(x,y) = \frac{ \sigma_{\mathrm{block}}(x,y) \times \sigma_{\mathrm{center}}(x,y) }{ \max \left\{ \sigma_{\mathrm{bg}}(x,y)^2, \zeta \right\} }
where ζ is a constant to prevent the denominator from being zero and is set to 0.01 in this paper.
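A minimal sketch of LDFM at one window position follows. Since Figure 3 is not reproduced here, the block partition is an assumption on our part: each of the four directional blocks joins the center cell with the two opposite neighbor cells along one direction, and σ denotes the mean squared deviation as in the equations above.

```python
import numpy as np

def ldfm_response(patch, zeta=0.01):
    """LDFM at one 3x3-cell window position (block layout is assumed)."""
    k = patch.shape[0] // 3
    cell = lambda r, c: patch[r*k:(r+1)*k, c*k:(c+1)*k].astype(float)
    var = lambda a: float(((a - a.mean()) ** 2).mean())  # mean squared deviation

    # Four directional blocks: each joins the center cell with the two
    # opposite neighbor cells along one direction.
    blocks = [np.concatenate([cell(1, 0), cell(1, 1), cell(1, 2)], axis=None),  # horizontal
              np.concatenate([cell(0, 1), cell(1, 1), cell(2, 1)], axis=None),  # vertical
              np.concatenate([cell(0, 0), cell(1, 1), cell(2, 2)], axis=None),  # diagonal
              np.concatenate([cell(0, 2), cell(1, 1), cell(2, 0)], axis=None)]  # anti-diagonal
    sigma_block = min(var(b) for b in blocks)

    sigma_center = var(cell(1, 1))                        # fluctuation of the target cell
    sigma_bg = np.mean([var(cell(r, c)) for r in range(3) for c in range(3)
                        if (r, c) != (1, 1)])             # mean fluctuation of 8 bg cells
    return sigma_block * sigma_center / max(sigma_bg ** 2, zeta)

blob = np.zeros((9, 9)); blob[4, 3:6] = 10.0   # target inside the center cell
flat = np.zeros((9, 9))
print(ldfm_response(blob) > ldfm_response(flat))  # True: the target responds more strongly
```

A true target makes both σ_block and σ_center large while its flat neighborhood keeps σ_bg small, so the ratio is large; a flat patch yields zero.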

3.3. Small Target Detection Using MSDD

As discussed in the previous sections, the LDIM map exploits the characteristics of small target areas and edge areas to obtain candidate target regions in the original image and can effectively handle edge clutter and PNHB while enhancing the true target. The LDFM map further suppresses corner clutter and enhances the true target. Therefore, the strengthened directional difference (SDD) is defined as the LDIM map weighted by the LDFM map, which greatly improves the reliability of target detection and effectively suppresses the background:
\mathrm{SDD}(x,y) = \mathrm{LDIM}(x,y) \times \mathrm{LDFM}(x,y)
In practice, since the size of a true small target is not fixed in the IR image, multi-scale detection is necessary. The proposed method is easily extended to a suitable detection range: the cell size is set to different scale values, the SDD at each scale is calculated, and the final multi-scale SDD is obtained by a maximum operation:
\mathrm{MSDD}(x,y) = \max_{s} \left( \mathrm{SDD}^{(s)}(x,y) \right), \quad s = 1, 2, \ldots, n
where s represents the sth scale, and n represents the total number of scales.
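The multi-scale fusion step can be sketched as follows; `toy_sdd` is a stand-in single-scale score (a simple mean-difference response of our own devising), not the actual SDD, used only to show the per-pixel maximum over scales:

```python
import numpy as np

def msdd(img, scales, sdd_at_scale):
    """Multi-scale fusion: MSDD = per-pixel max of SDD over cell scales.
    `sdd_at_scale` is any function mapping (image, scale) -> saliency map."""
    maps = [sdd_at_scale(img, s) for s in scales]
    return np.maximum.reduce(maps)

def toy_sdd(img, s):
    """Toy single-scale score: squared positive contrast of each pixel
    against the mean of its (2s+1)x(2s+1) neighborhood. Illustration only."""
    pad = np.pad(img.astype(float), s, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = pad[y:y + 2*s + 1, x:x + 2*s + 1]
            out[y, x] = max(img[y, x] - win.mean(), 0.0) ** 2
    return out

img = np.zeros((16, 16)); img[7:9, 7:9] = 10.0   # 2x2 target on flat background
sm = msdd(img, scales=[1, 2, 3], sdd_at_scale=toy_sdd)
print(sm.argmax() // 16, sm.argmax() % 16)  # 7 7 -- the peak lands on the target
```

Because each scale responds best to targets near its own cell size, taking the maximum keeps the strongest response regardless of the actual target size.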

3.4. Threshold Operation

In this paper, SDD is computed for each pixel of the original image from top to bottom and left to right, finally yielding a new matrix called the saliency map (SM). In the SM, the true small target is the most salient, so it can be extracted using a threshold. The threshold operation is defined as:
\mathrm{Th} = \lambda \times \mathrm{SM}_{\max} + (1 - \lambda) \times \mathrm{SM}_{\mathrm{mean}}
where SM_max and SM_mean are the maximum and mean values of the SM, respectively, and λ is an experimental constant between 0 and 1; experiments show that values between 0.4 and 0.6 are most suitable. If a pixel intensity in the SM is greater than Th, the pixel is classified as a target pixel.
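The threshold operation is straightforward to sketch (λ = 0.5 here, within the 0.4 to 0.6 range suggested by the experiments; the toy saliency map is our own):

```python
import numpy as np

def segment_targets(sm, lam=0.5):
    """Adaptive threshold on a saliency map:
    Th = lam * SM_max + (1 - lam) * SM_mean, lam in [0.4, 0.6]."""
    th = lam * sm.max() + (1 - lam) * sm.mean()
    return sm > th  # boolean mask of detected target pixels

sm = np.ones((8, 8)); sm[3, 3] = 101.0  # one salient peak over a flat response
mask = segment_targets(sm)
print(mask.sum())  # 1 -- only the peak survives the threshold
```

Mixing the maximum with the mean makes the threshold adaptive: a flat, well-suppressed map yields a low Th, while a map with a strong peak pushes Th up so that only the peak survives.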

4. Experimental Results

In this section, we test the proposed method on four infrared sequences with different background clutter, as well as on an open dataset, and compare its performance with six related baseline methods. The relevant performance metrics are also given to verify the effectiveness and robustness of the proposed algorithm. All experiments were run in MATLAB R2016a on a computer with 16 GB of RAM and a 2.50 GHz Intel i5-7300HQ processor.

4.1. Experimental Settings

4.1.1. Related Metrics

We use background suppression factor (BSF) and signal-to-clutter ratio gain (SCRG) as metrics to evaluate the clutter suppression ability of the algorithm [35,36,37]. The definition of specific experimental metrics is as follows:
\mathrm{BSF} = \frac{\sigma_{\mathrm{in}}}{\sigma_{\mathrm{out}}}, \quad \mathrm{SCRG} = \frac{\mathrm{SCR}_{\mathrm{out}}}{\mathrm{SCR}_{\mathrm{in}}}, \quad \mathrm{SCR} = \frac{\left| m_t - m_b \right|}{\sigma_b}
where σ_in and σ_out are the standard deviations of the original image and the saliency map, respectively; SCR_in and SCR_out are the signal-to-clutter ratios of the original image and the saliency map, respectively; m_t and m_b denote the mean values of the target area and the surrounding background area, respectively; and σ_b is the standard deviation of the background neighborhood. In this paper, the target area is the area around the center of the object, and the background neighborhood is the 15 × 15 neighborhood around the object's central area, excluding the object area. BSF reflects the degree of background clutter and noise remaining after processing. In the original image, especially an IR image with a complex background, true small targets are often submerged in clutter, making them difficult to detect; such a complex background produces a high standard deviation σ_in. After processing, the complex background should be suppressed and flattened, yielding a lower standard deviation σ_out. Therefore, the larger the ratio of σ_in to σ_out, the better the background suppression and the easier the detection of small targets. SCRG indicates how much the true target is enhanced after processing: typically, processing increases the intensity of the true small target while flattening the background neighborhood. Higher values of SCRG and BSF thus indicate better target enhancement and background suppression, respectively.
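These metrics can be sketched as follows. The 15 × 15 background ring follows the paper's description, while the 5 × 5 target box and the synthetic test image are our own assumptions:

```python
import numpy as np

def scr(img, center, r_t=2, r_b=7):
    """SCR = |m_t - m_b| / sigma_b around a known target `center`.
    r_t=2 gives a 5x5 target box (an assumption); r_b=7 gives the
    paper's 15x15 background neighborhood, with the target excluded."""
    y, x = center
    t = img[y - r_t:y + r_t + 1, x - r_t:x + r_t + 1]
    b = img[y - r_b:y + r_b + 1, x - r_b:x + r_b + 1].astype(float).copy()
    b[r_b - r_t:r_b + r_t + 1, r_b - r_t:r_b + r_t + 1] = np.nan  # drop target area
    m_t, m_b, s_b = t.mean(), np.nanmean(b), np.nanstd(b)
    return abs(m_t - m_b) / max(s_b, 1e-12)

def bsf(original, saliency):
    """BSF = sigma_in / sigma_out: ratio of whole-image standard deviations."""
    return original.std() / max(saliency.std(), 1e-12)

# SCRG = SCR_out / SCR_in compares the SCR before and after processing.
rng = np.random.default_rng(0)
img = rng.normal(50, 5, (32, 32)); img[15:18, 15:18] += 30   # noisy scene + target
sm = np.zeros((32, 32)); sm[16, 16] = 1.0                    # idealized saliency map
print(bsf(img, sm) > 1, scr(sm, (16, 16)) > scr(img, (16, 16)))  # True True
```

An ideal saliency map flattens everything except the target, so both the BSF and the post-processing SCR rise sharply.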
In addition, to better evaluate detection accuracy, thresholds within a specific range are used to segment the saliency map, yielding the true positive rate (TPR) and false positive rate (FPR), which define the receiver operating characteristic (ROC) curve [35,36,37]. TPR and FPR are given as follows:
\mathrm{TPR} = \frac{\text{number of detected true targets}}{\text{total number of real targets}}
\mathrm{FPR} = \frac{\text{number of detected false targets}}{\text{total number of pixels in the whole image}}
In the ROC curve, the closer the curve lies to the upper left corner, the better the detection performance; the closer it lies to the lower right corner, the weaker the detection performance.
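The TPR/FPR sweep can be sketched as follows; counting detections per pixel rather than per target is our simplification of the paper's definition:

```python
import numpy as np

def roc_points(sm, gt_mask, thresholds):
    """Sweep thresholds over a saliency map to trace (FPR, TPR) points.
    `gt_mask` marks ground-truth target pixels; a 'detection' here is
    any pixel above the threshold (per-pixel simplification)."""
    pts = []
    for th in thresholds:
        det = sm > th
        tpr = float((det & gt_mask).sum()) / max(gt_mask.sum(), 1)
        fpr = float((det & ~gt_mask).sum()) / sm.size
        pts.append((fpr, tpr))
    return pts

sm = np.zeros((8, 8)); sm[2, 2] = 1.0       # one true target, perfectly salient
gt = sm > 0
pts = roc_points(sm, gt, [0.5, 1.5])
print(pts)  # [(0.0, 1.0), (0.0, 0.0)]: low threshold hits it, high one misses
```

Sweeping the threshold from low to high traces the curve from the permissive regime (high TPR, higher FPR) toward the conservative one, which is why a curve hugging the upper left indicates a better detector.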

4.1.2. Test Datasets and Baseline Method

The experiments use five datasets to evaluate the algorithm, comprising four consecutive sequences and a set of single-frame infrared images. Sequence datasets 1–4 are taken from [37,38,39]. Sequence 1 contains a single small target with considerable PNHB in the background. Sequence 2 is heavily polluted by noise. In sequence 3, flying objects are submerged in a complex background. In sequence 4, objects fly past the sky and buildings under overexposed conditions. For the single-frame images, we use the open dataset SIRST initiated by Dai et al. [40], the first explicitly open single-frame dataset, built by selecting only one representative image from each sequence [40]. Notably, SIRST contains small targets of different sizes, types, brightness levels, and backgrounds: target types include aerial objects, ships, and vehicles, and backgrounds include clouds, ground, rivers, and buildings. Together, the five datasets test both the detection ability and the robustness of the algorithm. Further details of the datasets are given in Table 1.
To better evaluate the performance of the proposed method, some classical, as well as newer infrared small target detection algorithms, are selected for comparison in this paper, including LCM [12], MPCM [17], RLCM [20], ADMD [34], TLLCM [36], and VAR-DIFF [37].

4.2. Comparison to Baseline Methods

We selected one representative image from each dataset and compared the baseline methods with the proposed method; the final results are shown in Figure 4. In each dataset, the size of the small targets is not fixed, and the background is complex, with varying degrees of noise. The first image contains considerable PNHB and sharp edges, which cause some baseline methods to fail and leave residual noise. The second image contains many target-like points, and the true small targets have low contrast with their background neighborhoods; the true small targets are submerged in the background, and most baseline methods cannot handle the target-like points. The background of the third image is complex, but the true small target has high contrast with its background neighborhood, so most baseline methods detect it effectively. The fourth image is bright overall and contains buildings, and some baseline methods fail. The fifth image contains complex buildings with many corners, and the true small target has low contrast with its background neighborhood; except for the proposed algorithm, none of the baseline methods detect it correctly. In short, the proposed algorithm effectively captures true small targets and outperforms the baseline methods.
As shown in Table 2 and Table 3, the proposed algorithm performs well. Across the four sequences and SIRST, our method achieves the best SCRG among the compared methods. In terms of BSF, the proposed algorithm performs well on sequence 1, sequence 2, and SIRST and is lower than VAR-DIFF on sequences 3 and 4. Overall, the proposed algorithm has better target enhancement and background suppression abilities than the other algorithms.
In addition, to demonstrate the detection ability of the proposed algorithm, we conduct experiments using receiver operating characteristic (ROC) curves. Figure 5 shows the ROC curves of the baseline methods and our method on the four sequences and SIRST. The proposed algorithm performs well on sequence 1, sequence 2, sequence 4, and SIRST, while the baseline methods are affected by background clutter to varying degrees, resulting in instability. On sequence 3, the proposed algorithm and the VAR-DIFF and RLCM baselines all perform well.

5. Discussion

The different kinds of detection interference that may be encountered deserve discussion. Below, we consider the cases where the window lies on a true target, pure background, a background edge, a corner edge, and PNHB.
  • If (x, y) is the center of a true target: since true small targets usually have a large positive contrast with their neighborhood, D will be large, so LDIM will be large. Meanwhile, σ_block and σ_center will be large and σ_bg will be small, so LDFM will be large. Therefore, MSDD will be large.
  • If (x, y) lies in pure background: since the pixel intensities in such areas differ little, D will be close to 0, so LDIM will be small. Meanwhile, σ_block ≈ σ_center ≈ σ_bg, so LDFM is approximately equal to 1. Therefore, MSDD will be small.
  • If (x, y) lies on a background edge: such regions are usually locally directional, so D will be small and LDIM will be small. Meanwhile, σ_block ≈ σ_center ≈ σ_bg, so LDFM is approximately equal to 1. Therefore, MSDD will be small.
  • If (x, y) lies on a corner edge: such areas often appear at the edges of clouds or buildings and tend to have a large positive contrast with certain neighborhoods. However, the LDIM computation is directional, so LDIM will be small; σ_block is also computed directionally, so it will be small too. Overall, the final MSDD_corner will be smaller than MSDD_target.
  • If (x, y) is a PNHB: although it has high brightness, it tends to occupy a single pixel. Since the center point of the newly constructed target window does not participate in the calculations, the proposed algorithm handles this special case. The exact result depends on the surrounding area and follows from the cases above.
From the above discussion, it can be seen that the algorithm proposed in this paper is robust and can effectively deal with different interference situations.

6. Conclusions

This paper proposes an infrared small target detection algorithm that uses local directional differences for target enhancement and background suppression. It considers the target features, the background neighborhood features, and the relationship between the two, performs multi-scale fusion, and finally extracts the true target with a threshold operation. The experimental results show that the algorithm detects true infrared small targets of different sizes, types, and brightness levels well and achieves satisfactory results in terms of detection rate, signal-to-clutter ratio gain, and background suppression.

Author Contributions

Conceptualization, Y.Z. (Yuye Zhang) and Y.Z. (Ying Zheng); methodology, Y.Z. (Yuye Zhang); software, Y.Z. (Yuye Zhang); validation, Y.Z. (Yuye Zhang), Y.Z. (Ying Zheng) and X.L.; formal analysis, Y.Z. (Ying Zheng); investigation, Y.Z. (Ying Zheng); resources, Y.Z. (Ying Zheng); data curation, Y.Z. (Ying Zheng); writing—original draft preparation, Y.Z. (Ying Zheng); writing—review and editing, Y.Z. (Yuye Zhang); visualization, Y.Z. (Yuye Zhang); supervision, X.L.; project administration, X.L.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Xinjiang Natural Science Foundation, grant number 2020D01C026, and the National Natural Science Foundation of China, grant numbers U1911401 and 61433012.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

I am thankful to my mentor and colleagues for their help and support. In particular, many thanks to colleague Ruichen Ding for his help in collecting materials, datasets, and revising the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gao, C.; Meng, D.; Yang, Y.; Wang, Y.; Zhou, X.; Hauptmann, A.G. Infrared patch-image model for small target detection in a single image. IEEE Trans. Image Process. 2013, 22, 4996–5009.
  2. Cui, Z.; Yang, J.; Li, J.; Jiang, S. An infrared small target detection framework based on local contrast method. Measurement 2016, 91, 405–413.
  3. Bi, Y.; Bai, X.; Jin, T.; Guo, S. Multiple feature analysis for infrared small target detection. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1333–1337.
  4. Deng, H.; Sun, X.; Liu, M.; Ye, C.; Zhou, X. Entropy-based window selection for detecting dim and small infrared targets. Pattern Recognit. 2017, 61, 66–77.
  5. Gao, J.; Lin, Z.; An, W. Infrared small target detection using a temporal variance and spatial patch contrast filter. IEEE Access 2019, 7, 32217–32226.
  6. Srivastava, H.B.; Limbu, Y.B.; Saran, R.; Kumar, A. Airborne infrared search and track systems. Def. Sci. J. 2007, 57, 739.
  7. Zhang, S.; Huang, F.; Liu, B.; Yu, H.; Chen, Y. Infrared dim target detection method based on the fuzzy accurate updating symmetric adaptive resonance theory. J. Vis. Commun. Image Represent. 2019, 60, 180–191.
  8. Yang, C.; Ma, J.; Qi, S.; Tian, J.; Zheng, S.; Tian, X. Directional support value of Gaussian transformation for infrared small target detection. Appl. Opt. 2015, 54, 2255–2265.
  9. Zhang, P.; Wang, X.; Wang, X.; Fei, C.; Guo, Z. Infrared small target detection based on spatial-temporal enhancement using quaternion discrete cosine transform. IEEE Access 2019, 7, 54712–54723.
  10. Nie, J.; Qu, S.; Wei, Y.; Zhang, L.; Deng, L. An infrared small target detection method based on multiscale local homogeneity measure. Infrared Phys. Technol. 2018, 90, 186–194.
  11. Shao, X.; Fan, H.; Lu, G.; Xu, J. An improved infrared dim and small target detection algorithm based on the contrast mechanism of human visual system. Infrared Phys. Technol. 2012, 55, 403–408.
  12. Chen, C.P.; Li, H.; Wei, Y.; Xia, T.; Tang, Y.Y. A local contrast method for small infrared target detection. IEEE Trans. Geosci. Remote Sens. 2013, 52, 574–581.
  13. Ma, J.; Zhao, J.; Ma, Y.; Tian, J. Non-rigid visible and infrared face registration via regularized Gaussian fields criterion. Pattern Recognit. 2015, 48, 772–784.
  14. Han, J.; Ma, Y.; Huang, J.; Mei, X.; Ma, J. An infrared small target detecting algorithm based on human visual system. IEEE Geosci. Remote Sens. Lett. 2016, 13, 452–456.
  15. Qiang, W.; Hua-Kai, L. An infrared small target fast detection algorithm in the sky based on human visual system. In Proceedings of the 2018 4th Annual International Conference on Network and Information Systems for Computers (ICNISC), Wuhan, China, 20–22 April 2018; pp. 176–181.
  16. Han, J.; Ma, Y.; Zhou, B.; Fan, F.; Liang, K.; Fang, Y. A robust infrared small target detection algorithm based on human visual system. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2168–2172.
  17. Wei, Y.; You, X.; Li, H. Multiscale patch-based contrast measure for small infrared target detection. Pattern Recognit. 2016, 58, 216–226.
  18. Qin, Y.; Li, B. Effective infrared small target detection utilizing a novel local contrast method. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1890–1894.
  19. Deng, H.; Sun, X.; Liu, M.; Ye, C.; Zhou, X. Small infrared target detection based on weighted local difference measure. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4204–4214.
  20. Han, J.; Liang, K.; Zhou, B.; Zhu, X.; Zhao, J.; Zhao, L. Infrared small target detection utilizing the multiscale relative local contrast measure. IEEE Geosci. Remote Sens. Lett. 2018, 15, 612–616.
  21. Han, J.; Moradi, S.; Faramarzi, I.; Zhang, H.; Zhao, Q.; Zhang, X.; Li, N. Infrared small target detection based on the weighted strengthened local contrast measure. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1670–1674.
  22. Rivest, J.F.; Fortin, R. Detection of dim targets in digital infrared imagery by morphological image processing. Opt. Eng. 1996, 35, 1886–1893.
  23. Deshpande, S.D.; Er, M.H.; Venkateswarlu, R.; Chan, P. Max-mean and max-median filters for detection of small targets. In Proceedings of SPIE's International Symposium on Optical Science, Engineering, and Instrumentation, Denver, CO, USA, 4 October 1999; pp. 74–83.
  24. Zhang, B.; Zhang, T.; Cao, Z.; Zhang, K. Fast new small-target detection algorithm based on a modified partial differential equation in infrared clutter. Opt. Eng. 2007, 46, 106401.
  25. Kim, S.; Lee, J. Scale invariant small target detection by optimizing signal-to-clutter ratio in heterogeneous background for infrared search and track. Pattern Recognit. 2012, 45, 393–406.
  26. Bae, T.W.; Sohng, K.I. Small target detection using bilateral filter based on edge component. J. Infrared Millim. Terahertz Waves 2010, 31, 735–743.
  27. Dai, Y.; Wu, Y.; Song, Y. Infrared small target and background separation via column-wise weighted robust principal component analysis. Infrared Phys. Technol. 2016, 77, 421–430.
  28. Hu, T.; Zhao, J.J.; Cao, Y.; Wang, F.L.; Yang, J. Infrared small target detection based on saliency and principle component analysis. J. Infrared Millim. Waves 2010, 29, 303–306.
  29. Cao, Y.; Liu, R.M.; Yang, J. Infrared small target detection using PPCA. Int. J. Infrared Millim. Waves 2008, 29, 385–395.
  30. Wang, H.; Zhou, L.; Wang, L. Miss detection vs. false alarm: Adversarial learning for small object segmentation in infrared images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8509–8518.
  31. Zhao, B.; Wang, C.; Fu, Q.; Han, Z. A novel pattern for infrared small target detection with generative adversarial network. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4481–4492.
  32. Fan, Z.; Bi, D.; Xiong, L.; Ma, S.; He, L.; Ding, W. Dim infrared image enhancement based on convolutional neural network. Neurocomputing 2018, 272, 396–404.
  33. Zhao, D.; Zhou, H.; Rang, S.; Jia, X. An adaptation of CNN for small target detection in the infrared. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 669–672.
  34. Moradi, S.; Moallem, P.; Sabahi, M.F. Fast and robust small infrared target detection using absolute directional mean difference algorithm. Signal Process. 2020, 177, 107727.
  35. Deng, H.; Sun, X.; Liu, M.; Ye, C.; Zhou, X. Infrared small-target detection using multiscale gray difference weighted image entropy. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 60–72.
  36. Han, J.; Moradi, S.; Faramarzi, I.; Liu, C.; Zhang, H.; Zhao, Q. A local contrast method for infrared small-target detection utilizing a tri-layer window. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1822–1826.
  37. Nasiri, M.; Chehresa, S. Infrared small target enhancement based on variance difference. Infrared Phys. Technol. 2017, 82, 107–119.
  38. Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Asymmetric contextual modulation for infrared small target detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 950–959.
  39. Han, J.; Xu, Q.; Moradi, S.; Fang, H.; Yuan, X.; Qi, Z.; Wan, J. A ratio-difference local feature contrast method for infrared small target detection. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  40. Li, Y.; Li, Z.; Li, W.; Liu, Y. Infrared small target detection based on gradient-intensity joint saliency measure. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7687–7699.
Figure 3. Newly divided into four directional blocks structure. (a) Upper left subblock. (b) Upper right subblock. (c) Lower right subblock. (d) Lower left subblock.
Figure 4. Processing results of different algorithms. (a) Original image, (b) LCM, (c) MPCM, (d) TLLCM, (e) RLCM, (f) ADMD, (g) VAR-DIFF, and (h) proposed method. Note that the gray values of all images are normalized to the [0–255] interval.
Figure 5. ROC curves of the four sequences and SIRST.
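The ROC curves in Figure 5 plot detection probability against false-alarm rate as the segmentation threshold is swept. As a point of reference, a minimal sketch using the pixel-level definitions common in this literature (Pd = detected target pixels / total target pixels, Fa = detected background pixels / total background pixels) is given below; the exact counting protocol used in the paper may differ, and the function name is illustrative.

```python
import numpy as np

def detection_rates(saliency, gt_mask, threshold):
    """One ROC point: probability of detection and false-alarm rate
    at a given threshold, counted at the pixel level."""
    detected = saliency >= threshold
    pd = (detected & gt_mask).sum() / max(gt_mask.sum(), 1)
    fa = (detected & ~gt_mask).sum() / (~gt_mask).sum()
    return pd, fa

# Sweeping a range of thresholds over one saliency map yields one curve:
# thresholds = np.linspace(saliency.min(), saliency.max(), 100)
```

Plotting Fa on the x-axis against Pd on the y-axis for each algorithm's saliency maps reproduces curves of the kind shown in the figure.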
Table 1. Information of the test datasets.

| Datasets | Frames | Resolution | Target Size | Target Details | Background Details |
|---|---|---|---|---|---|
| Seq-1 [39] | 100 | 320 × 240 | 5 × 5 to 7 × 7 | Keeping little motion; small in size | Multiple PNHB; heavy noise |
| Seq-2 [38,39] | 100 | 320 × 240 | 5 × 5 to 7 × 7 | Keeping motion; low SCR value | Complex clouds; heavy noise |
| Seq-3 [37] | 100 | 256 × 256 | 5 × 5 to 7 × 7 | Keeping motion; irregular shape | Multiple complex objects; heavy noise |
| Seq-4 [38] | 100 | 256 × 239 | 3 × 3 to 7 × 7 | Keeping motion; low SCR value | Multiple buildings; heavy noise |
| SIRST [40] | 427 | Variety | 3 × 3 to 11 × 11 | Variety | Variety |
Table 2. SCRG of different algorithms.

| Datasets | LCM | MPCM | RLCM | TLLCM | VAR-DIFF | ADMD | Proposed |
|---|---|---|---|---|---|---|---|
| Seq-1 | 2.9678 | 7.1558 | 4.9549 | 3.4675 | 143.2399 | 45.8036 | 220.5620 |
| Seq-2 | 2.2637 | 4.0981 | 4.8788 | 7.9981 | 40.2805 | 90.1688 | 174.6367 |
| Seq-3 | 2.8371 | 4.4802 | 10.0950 | 5.7227 | 65.7572 | 65.5795 | 208.1402 |
| Seq-4 | 1.2835 | 4.4898 | 2.1285 | 2.9342 | 35.1933 | 19.5947 | 97.7643 |
| SIRST | 2.0452 | 3.7768 | 3.8955 | 3.1540 | 47.3826 | 42.0477 | 100.4070 |
Table 3. BFS of different algorithms.

| Datasets | LCM | MPCM | RLCM | TLLCM | VAR-DIFF | ADMD | Proposed |
|---|---|---|---|---|---|---|---|
| Seq-1 | 1.2985 | 6.5079 | 3.1104 | 2.1061 | 32.7192 | 93.1016 | 653.6033 |
| Seq-2 | 1.3230 | 7.3366 | 3.5213 | 1.9475 | 54.6992 | 25.6519 | 222.1676 |
| Seq-3 | 2.3014 | 10.8582 | 3.8682 | 3.5872 | 179.8385 | 34.4299 | 132.4181 |
| Seq-4 | 1.2641 | 4.3856 | 3.4904 | 2.7104 | 695.0363 | 44.2576 | 425.8237 |
| SIRST | 1.4672 | 8.2238 | 3.9480 | 2.6238 | 1.5051 × 10³ | 189.7067 | 1.6933 × 10³ |
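Tables 2 and 3 report SCRG and BFS without restating their formulas. Under the definitions standard in the infrared small-target literature (SCR = |μt − μb| / σb, SCRG = SCRout / SCRin, and background suppression measured as σin / σout over background pixels), these metrics can be sketched as follows; the function names and the small epsilon guard against division by zero are illustrative assumptions, not the authors' code.

```python
import numpy as np

EPS = 1e-12  # guard against division by zero on flat patches

def scr(patch, target_mask):
    """Signal-to-clutter ratio of a local patch: |mu_t - mu_b| / sigma_b."""
    target = patch[target_mask]
    background = patch[~target_mask]
    return abs(target.mean() - background.mean()) / (background.std() + EPS)

def scrg(patch_in, patch_out, target_mask):
    """SCR gain: SCR of the saliency map over SCR of the input image."""
    return scr(patch_out, target_mask) / (scr(patch_in, target_mask) + EPS)

def bsf(patch_in, patch_out, target_mask):
    """Background suppression: sigma_in / sigma_out over background pixels."""
    return patch_in[~target_mask].std() / (patch_out[~target_mask].std() + EPS)
```

With these definitions, values well above 1 (as in the "Proposed" columns above) indicate that the saliency map both enhances the target relative to the clutter and flattens the background.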
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Zhang, Y.; Zheng, Y.; Li, X. Multi-Scale Strengthened Directional Difference Algorithm Based on the Human Vision System. Sensors 2022, 22, 10009. https://doi.org/10.3390/s222410009
