Article

Maritime Infrared Small Target Detection Based on the Appearance Stable Isotropy Measure in Heavy Sea Clutter Environments

1 School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
2 Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(24), 9838; https://doi.org/10.3390/s23249838
Submission received: 29 October 2023 / Revised: 29 November 2023 / Accepted: 11 December 2023 / Published: 15 December 2023
(This article belongs to the Section Optical Sensors)

Abstract:
Infrared small target detection plays a crucial role in maritime security. However, detecting small targets within heavy sea clutter environments remains challenging, and existing methods often fail to deliver satisfactory performance in the presence of substantial clutter interference. This paper analyzes the spatial–temporal appearance characteristics of small targets and sea clutter. Based on this analysis, we propose a novel detection method based on the appearance stable isotropy measure (ASIM). First, the original images are processed using the Top-Hat transformation to obtain the salient regions. Second, a preliminary threshold operation is employed to extract the candidate targets from these salient regions, forming a candidate target array image. Third, to distinguish between small targets and sea clutter, we introduce two characteristics: the gradient histogram equalization measure (GHEM) and the local optical flow consistency measure (LOFCM). GHEM evaluates the isotropy of the candidate targets by examining their gradient histogram equalization, while LOFCM assesses their appearance stability based on local optical flow consistency. To effectively combine the complementary information provided by GHEM and LOFCM, we propose ASIM as a fusion characteristic, which can effectively enhance the real target. Finally, a threshold operation is applied to determine the final targets. Experimental results demonstrate that our proposed method exhibits superior comprehensive performance compared to baseline methods.

1. Introduction

The Infrared (IR) Search and Track (IRST) system is widely applied in maritime patrols, border surveillance, and maritime rescue operations, making it a critical device in the field of maritime security. IR small target detection is a key technology of the IRST system [1]. The timely detection and localization of small targets on the sea surface, such as distant ships, are essential for mission success and platform safety. Therefore, IR small target detection technology holds significant relevance to maritime security and has attracted extensive research efforts. Most scenarios present common challenges: for instance, small targets occupy only a limited number of pixels and lack distinct contour and texture features. Numerous detection methods have been proposed to address these difficulties and have achieved good results.
However, new challenges arise when heavy clutter, such as waves, sun glints, and island edges, is present on the sea surface. Sea clutter can cause dramatic fluctuations in the background, leading to reduced local contrast of small targets and potential missed detections [2]. Additionally, certain sea clutter exhibits high brightness and shares similar appearance characteristics with small targets, making it challenging to suppress and resulting in serious false alarms. Existing algorithms do not perform well when faced with these issues. As a result, maritime IR small target detection in heavily cluttered environments remains challenging and requires further in-depth research.

1.1. Related Work

According to their basic principles, existing detection methods can be divided into four categories: methods based on background subtraction, methods based on target enhancement (many of which are inspired by the human visual system, HVS), methods based on optimization, and methods based on deep learning.
The first category comprises the methods based on background subtraction, which predict the background image and then subtract it from the original image to obtain the target image. Classical background subtraction methods include the Top-Hat transformation [3], Max-mean/Max-median filtering [4], two-dimensional least mean square filtering (TDLMS) [5], and the low-pass filter (LPF). Many modifications of these classic methods have been proposed and are widely applied. The histogram rightwards cyclic shift binarization (HRCSB) combines background subtraction with histogram curve transformation [6]. The new white top-hat (NWTH) transformation improves detection performance through structure element construction and operation reorganization [7]. The double-layer two-dimensional least mean square filter uses different filter settings for background suppression and target enhancement [8]. These methods assume that the background is flat. They work well against simple backgrounds, but they cannot cope with complex backgrounds, because the heterogeneous regions in complex backgrounds make background prediction difficult.
The second category is the methods based on target enhancement, which directly suppress the background and enhance the target by filtering. The difference of Gaussian (DoG) [9] and the Laplacian of Gaussian (LoG) [10] are two classic methods that consider the Gaussian model of a small target's gray distribution. Inspired by the human visual system (HVS) mechanism, researchers developed many detection methods using local contrast or gray difference [11]. The calculation of these methods usually takes the form of a center-surround difference. For example, the average absolute gray difference (AAGD) is a typical center-surround difference method, but AAGD does not utilize directional information, so it cannot suppress clutter edges [12]. The absolute average difference with cumulative directional derivatives (AADCDD) improved AAGD by introducing direction information [13]. The absolute directional mean difference (ADMD) also used direction information to suppress structural backgrounds [14]. The local contrast measure (LCM) proposed a classic nine-cell sliding window [15], and many improvements to LCM have since been developed. For example, the improved LCM (ILCM) [16] and the novel LCM (NLCM) [17] divided the image into many sub-blocks to reduce computation. The multi-scale patch-based contrast measure (MPCM) adopted the product of the differences in opposite directions [18]. The tri-layer LCM (TLLCM) adopted a tri-layer nested window [19]. The homogeneity-weighted LCM (HWLCM) combined the LCM with the homogeneity of the cells [20]. The weighted strengthened local contrast measure (WSLCM) combined the strengthened LCM with a weighting function [21]. The above methods belong to the spatial filtering methods, which are easy to implement and fast to calculate, but they do not take advantage of the target's motion information. Considering the gray fluctuation caused by target motion in the time domain, researchers developed many spatial–temporal filtering methods. The spatial–temporal local contrast filter (STLCF) [22] and the spatial–temporal local contrast map (STLCM) [23] calculated the temporal and spatial local contrast of moving small targets separately and then fused them by multiplication. The spatial–temporal local difference measure (STLDM) directly detected targets in the 3-D spatial–temporal domain [24]. The novel spatiotemporal saliency method (NSTSM) was proposed for low-altitude slow small (LSS) IR target detection; it utilizes the variance and gray intensity characteristics of target pixels and background pixels in the spatiotemporal domain [25]. These methods based on target enhancement can effectively suppress the heterogeneous regions in complex backgrounds, but they cannot deal with clutter that resembles a real small target.
The third category is the methods based on optimization, which transform the small target detection problem into an optimization problem of recovering sparse and low-rank matrices. The IR patch-image (IPI) model assumed that the background matrix is low-rank and the target matrix is sparse, and then restored the target image by optimization [26]. This model was subsequently researched extensively, and many researchers have proposed improved methods based on it. Dai et al. proposed the weighted IR patch-image (WIPI) model to solve the problem of excessive target shrinkage, in which the target likelihood coefficient is designed as the weight of the target patch-image [27]. Dai et al. also proposed the non-negative IR patch-image model based on partial sum minimization of singular values (NIPPS) to better suppress strong edges, which replaces the kernel norm in the IPI model with the partial sum of singular values [28]. Like NIPPS, Zhao et al. also tried to improve the sparse term in the IPI model and proposed the method based on non-convex optimization with Lp-norm constraint (NOLC), which replaces the nuclear norm with the Lp-norm [29]. Considering that the single-subspace assumption is not suitable for the estimation of complex backgrounds, many researchers proposed models based on multiple subspaces. He et al. proposed the low-rank and sparse representation (LRSR) model, which constructs over-complete dictionaries for sparse representation of small targets [30]. Wang et al. proposed stable multi-subspace learning (SMSL), which adopts a subspace learning strategy to improve robustness against complex background and noise [31]. To make the most of prior information, Dai et al. extended the IPI model to the tensor field and proposed the IR patch-tensor (IPT) model [32]. A large number of methods based on the IPT model then appeared. The partial sum of the tensor nuclear norm (PSTNN) achieves very fast computing speed [33]. The non-convex tensor rank surrogate joint local contrast energy (NTRS) utilized a non-convex tensor rank surrogate merging the tensor nuclear norm and the Laplace function for the background patch constraint [34]. Many researchers have extended the tensor model to the spatial–temporal domain, such as the multiple subspace learning and spatial–temporal IPT (MSL-STIPT) [35], the spatial–temporal tensor model (STTM) [36], and the novel spatial–temporal tensor model with saliency filter regularization (STTM-SFR) [37]. These optimization-based methods can continuously improve performance through model refinement. However, their low-rank background assumption makes them highly demanding of background uniformity, and some clutter also conforms to the local sparsity assumption. Meanwhile, they are generally time-consuming due to iterative operations.
The fourth category is the methods based on deep learning, which first train a model to mine image features from a large number of samples and then detect small targets using the trained model. The convolutional neural network (CNN) is a commonly used architecture well suited to learning hierarchical features of infrared images. Zhao et al. proposed a novel lightweight convolutional neural network called TBC-Net, which consists of two modules, target extraction and semantic constraint; with these two modules, TBC-Net can incorporate high-level semantic constraint information into training [38]. Since pooling layers in a CNN can cause targets to be lost in deep layers, Li et al. proposed the dense nested attention network (DNA-Net), which preserves deep target features through progressive interaction between high-level and low-level features [39]. Considering that CNNs cannot capture large-scale dependencies, many researchers have begun to use transformers based on the self-attention mechanism. Liu et al. proposed an IR small-dim target detection method with a transformer, which uses a feature enhancement module to improve feature learning for small-dim targets [40]. Chen et al. proposed a hierarchical vision transformer-based method called IRSTFormer, specifically designed for small target detection in large-size images [41]. Due to the small size and few features of IR small targets, it is difficult for purely data-driven methods to achieve high performance, so researchers have tried to combine neural networks with traditional small-target models. Dai et al. proposed a novel model-driven deep network, which introduces local contrast and extends the application of traditional small target features to the field of neural networks [42].
The performance of deep learning-based methods relies heavily on a large number of training samples. However, obtaining an ample amount of training samples for IR small target detection is challenging in military scenarios, which hinders the application of deep learning approaches. Therefore, there is a need for further development of models based on small samples.
While the aforementioned existing methods demonstrate good performance in many scenarios, they struggle to handle heavy sea clutter environments. Extensive clutter on the sea surface can significantly interfere with the detection process. Some clutter may exhibit a very similar appearance to real small targets, posing difficulties for existing methods to suppress them effectively.

1.2. Motivation

At a long imaging distance, the projection of a small target onto the camera focal plane typically covers only a few pixels or even less than one pixel. However, due to scattering, diffraction, and focusing effects, the IR radiation emitted by the target undergoes diffusion, producing an image spot that is larger than the target's physical size. Although this spot is not perfectly circular, it is approximately isotropic. The IR radiation of the target and the properties of the optical system do not change dramatically over a short period, allowing the appearance of the target spot to remain relatively stable. Figure 1a illustrates an IR image containing a small target and heavy sea clutter, with the small target marked by a red box. The first line of Figure 1b shows the local images of the small target captured in five consecutive frames. It can be observed that the appearance of the small target varies little over these five frames and consistently maintains an approximately isotropic appearance. The combination of isotropy in the spatial domain and stability in the temporal domain is referred to as Appearance Stable Isotropy (ASI).
Sea clutter primarily consists of waves and sun glints. The IR radiation emitted by the waves gives rise to heterogeneous regions in the background, with its appearance being influenced by the shape of the waves. Sun glints, on the other hand, are bright spots formed due to sunlight reflecting off the sea surface, and their appearance depends on local reflective surfaces. The sea surface itself exhibits significant randomness, resulting in irregular shapes and varying sizes of clutter. In an image, both isotropic and anisotropic clutter can coexist simultaneously. Furthermore, the dynamic nature of the sea surface leads to continuous deformation of clutter. An initially isotropic clutter can rapidly transform into an anisotropic one within a short duration. Consequently, sea clutter does not maintain a consistent appearance and lacks the characteristic of Appearance Stable Isotropy.
In Figure 1a, two examples of clutter are marked with yellow boxes, labeled as Clutter A and Clutter B, respectively. The second and third lines of Figure 1b display the local images of Clutter A and B, respectively, captured over five consecutive frames. It can be observed that Clutter A initially appears isotropic in the first frame but undergoes subsequent changes in shape. By the fifth frame, Clutter A has become significantly anisotropic. On the other hand, Clutter B consistently demonstrates anisotropy.
The above analysis and examples demonstrate substantial differences between small targets and sea clutter in terms of ASI. Leveraging this distinction can effectively differentiate between them.
Based on the above analysis, this paper proposes a detection method that utilizes the appearance stable isotropy measure (ASIM). The contributions of this paper can be summarized as follows:
(1)
The Gradient Histogram Equalization Measure (GHEM) is proposed to effectively characterize the spatial isotropy of local regions. It aids in distinguishing small targets from anisotropic clutter.
(2)
The Local Optical Flow Consistency Measure (LOFCM) is proposed to assess the temporal stability of local regions. It facilitates the differentiation of small targets from isotropic clutter.
(3)
By combining GHEM, LOFCM, and Top-Hat, ASIM is developed as a comprehensive characteristic for distinguishing between small targets and different types of sea clutter. We also construct an algorithm based on ASIM for IR small target detection in heavy sea clutter environments.
(4)
Experimental results validate the superior performance of the proposed method compared to the baseline methods in heavy sea clutter environments.
The remainder of this paper is organized as follows: Section 2 presents the proposed method, detailing its key components. Subsequently, in Section 3, comprehensive experimental results and analysis are provided. Finally, this paper is concluded in Section 4.

2. Proposed Method

Figure 2 shows the flowchart of the proposed method. First, the original images are processed using the Top-Hat transformation to obtain the salient regions. Second, a preliminary threshold operation is performed to extract candidate targets from the salient regions, and the characteristic neighborhoods of these candidate targets are scaled and stitched to form a candidate target array. Third, the oriented gradient histogram of the candidate targets is computed, and GHEM is used to characterize their isotropy; at the same time, the local optical flow vectors of the candidate targets are calculated, and LOFCM is used to characterize their appearance stability. Fourth, ASIM is obtained by fusing GHEM, LOFCM, and the Top-Hat image; ASIM serves as an effective comprehensive feature for distinguishing between small targets and various types of sea clutter. Finally, by applying a threshold operation on ASIM, the true target is extracted.

2.1. Candidate Target Extraction

Both small targets and sea clutter are salient regions in IR images, characterized by their higher brightness than the surrounding background. In this paper, the Top-Hat transformation is utilized to extract these salient regions as candidate targets. As a background subtraction method, the Top-Hat transformation can effectively preserve the grayscale distribution of the salient areas, facilitating the subsequent description of their appearance features.
The Top-Hat transformation is defined as
$$T(I) = I - (I \circ S),$$
$$I \circ S = (I \ominus S) \oplus S,$$
where $I$ is the original image; $T(I)$ is the Top-Hat image; $S$ is the structure element; $\circ$ denotes the morphological opening operation; and $\oplus$ and $\ominus$ denote dilation and erosion, respectively.
The size of the structure element should be adaptable to the salient regions [7]. If the structure element is too small to cover the candidate targets, it may result in the loss of some target pixels. Conversely, if the structure element is too large, background suppression will be incomplete, increasing the difficulty of subsequent processing. The ideal structure element should be slightly larger than the candidate targets. According to the definition provided by the Society of Photo-Optical Instrumentation Engineers (SPIE), small targets typically range in size from 2 × 2 to 9 × 9 [15]. Although sea clutter has a wider range of sizes, only small ones can easily be mistaken for small targets. Large-sized clutter is easy to suppress since it responds poorly to small target detection algorithms. Therefore, in this paper, the size of the structure element is set to 11 × 11, slightly larger than the 9 × 9 size of small targets, as shown in Figure 3.
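As a concrete illustration of Equations (1) and (2), the following minimal Python sketch applies the white Top-Hat transformation with an 11 × 11 flat structuring element. The use of NumPy and SciPy and the function name are our own choices for illustration; the authors' implementation was in MATLAB.

```python
import numpy as np
from scipy import ndimage as ndi

def top_hat(img: np.ndarray, se_size: int = 11) -> np.ndarray:
    """White Top-Hat of Eqs. (1)-(2): I minus the opening of I by S.

    A flat se_size x se_size structuring element is assumed, slightly
    larger than the 9 x 9 upper bound on small-target size.
    """
    # ndi.white_tophat computes input - grey_opening(input, size) internally
    return ndi.white_tophat(img.astype(np.float64), size=(se_size, se_size))
```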
Figure 4a shows the Top-Hat image of Figure 1a. In Figure 4a, homogeneous backgrounds are suppressed while salient regions are retained. Then, the salient regions are extracted using a preliminary threshold segmentation:
$$TH_{\mathrm{TopHat}} = \mu_{\mathrm{TopHat}} + k_{\mathrm{TopHat}} \times \sigma_{\mathrm{TopHat}},$$
where $TH_{\mathrm{TopHat}}$ is the threshold; $\mu_{\mathrm{TopHat}}$ and $\sigma_{\mathrm{TopHat}}$ are the mean and standard deviation of the Top-Hat image, respectively; and $k_{\mathrm{TopHat}}$ is a coefficient. Since the brightness of the target may be lower than that of much of the sea clutter, the threshold can be set to a small value to ensure that the target is retained. In this paper, $k_{\mathrm{TopHat}}$ is set to 1 based on development experience. The binary image $BW$ is obtained by the preliminary threshold segmentation:
$$BW = \begin{cases} 1, & T \geq TH_{\mathrm{TopHat}} \\ 0, & T < TH_{\mathrm{TopHat}} \end{cases}$$
The binary image is shown in Figure 4b. There are many connected domains in Figure 4b, each representing a salient region. These salient regions include small targets and sea clutter. According to the analysis of the small target size mentioned above, only connected domains with areas ranging from 2 × 2 to 9 × 9 are considered candidate targets, while the others are disregarded. The coordinates of the candidate target are represented by the centroid of the connected domain.
$$(x, y) = \left( \frac{\sum_{i=1}^{n_p} x_i \times T(x_i, y_i)}{\sum_{i=1}^{n_p} T(x_i, y_i)}, \; \frac{\sum_{i=1}^{n_p} y_i \times T(x_i, y_i)}{\sum_{i=1}^{n_p} T(x_i, y_i)} \right),$$
where $(x, y)$ are the coordinates of the centroid, and $n_p$ and $(x_i, y_i)$ are the number and coordinates of the pixels in the connected domain, respectively.
Assuming the presence of N candidate targets, their sizes range randomly from 2 × 2 to 9 × 9, which is still a wide range. When a candidate target possesses too few pixels, the calculations of its appearance features can become inaccurate. Hence, it is necessary to scale the candidate targets to adjust their size appropriately. The region within the minimum enclosing square of the connected domain is defined as the characteristic neighborhood of the candidate target, which is subsequently scaled to 11 × 11 using bilinear interpolation. The equal-aspect-ratio scaling ensures that the appearance features of the candidate targets remain unchanged. If the candidate target is 9 × 9, its characteristic neighborhood is coincidentally 11 × 11, and no further scaling is needed.
After the scaling is completed, all candidate targets are sorted based on their maximum values in the Top-Hat image. The characteristic neighborhoods of these candidate targets are then concatenated into a square image named the candidate target array image. The concatenation is performed in a top-to-bottom and left-to-right order. If the number of candidate targets is not a perfect square, any empty positions in the array will be filled with zero matrices. Figure 4c illustrates the candidate target array image extracted from Figure 4b. Additionally, a position mapping function is employed to record the correspondence between the positions in the candidate target array image and the original image.
$$P_{original} = M(P_{array}),$$
where $M$ is the position mapping function, $P_{array}$ is a position in the candidate target array image, and $P_{original}$ is the corresponding position in the original image.
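Continuing the sketch above, the candidate extraction of Equations (3)–(5) might look as follows. The one-pixel margin around the minimum enclosing square (so that a 9 × 9 candidate maps to 11 × 11) is our reading of the text, border handling is simplified, and variable names are illustrative.

```python
def extract_candidates(T: np.ndarray, k: float = 1.0):
    """Candidate extraction, Eqs. (3)-(5): threshold, size filter, centroid, rescale."""
    th = T.mean() + k * T.std()                      # Eq. (3), preliminary threshold
    labels, _ = ndi.label(T >= th)                   # connected domains of Eq. (4)
    patches, centroids = [], []
    for idx, sl in enumerate(ndi.find_objects(labels), start=1):
        area = int((labels[sl] == idx).sum())
        if not (4 <= area <= 81):                    # keep 2x2 .. 9x9 candidates only
            continue
        cy, cx = ndi.center_of_mass(T, labels, idx)  # Eq. (5), Top-Hat-weighted centroid
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        side = max(h, w) + 2                         # enclosing square + 1-pixel margin,
                                                     # so a 9x9 target maps to 11x11
        y0, x0 = int(round(cy)) - side // 2, int(round(cx)) - side // 2
        patch = T[max(y0, 0):y0 + side, max(x0, 0):x0 + side]
        zoom = (11.0 / patch.shape[0], 11.0 / patch.shape[1])
        patches.append(ndi.zoom(patch, zoom, order=1))   # bilinear rescale to 11x11
        centroids.append((cy, cx))
    return patches, centroids
```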

2.2. Gradient Histogram Equalization Measure (GHEM)

The histogram of oriented gradients (HOG) is a popular feature descriptor extensively employed in computer vision for target detection and recognition tasks. HOG describes the appearance features of local regions by quantifying the distribution of pixel gradients and orientations. In this paper, the HOG technique is utilized to analyze the appearance features of candidate targets. Specifically, if a candidate target exhibits isotropy, the gradients within its characteristic neighborhood should be evenly distributed across different directions. In contrast, anisotropic candidate targets will have gradients concentrated in certain directions. Consequently, the gradient histogram equalization measure (GHEM) is proposed to measure the isotropy level of candidate targets.
The gradient and orientation of the candidate target array are calculated as follows:
$$A_x(x, y) = A(x, y) - A(x-1, y),$$
$$A_y(x, y) = A(x, y) - A(x, y-1),$$
$$G(x, y) = \sqrt{A_x^2(x, y) + A_y^2(x, y)},$$
$$\theta(x, y) = \arctan\left( \frac{A_y(x, y)}{A_x(x, y)} \right),$$
where $A(x, y)$ is the pixel value in the candidate target array image; $A_x$ and $A_y$ are the horizontal and vertical gradients, respectively; $G$ is the gradient magnitude; and $\theta$ is the gradient direction, whose range is $[0^\circ, 180^\circ]$ after taking the absolute value.
Next, Figure 5a shows the process of constructing the oriented gradient histogram. The range $[0^\circ, 180^\circ]$ is divided equally into nine parts, each covering 20 degrees, so an empty histogram with nine bins is created to accommodate the distribution of gradient values. Within the characteristic neighborhood, each pixel has a gradient magnitude and a corresponding gradient direction, and the magnitude is assigned to the bin matching that direction. For example, in Figure 5a, the pixel marked by the blue circle has a direction of 0 degrees, so its gradient magnitude of 2 is filled into the 0-degree bin. The pixel marked by the red circle has a direction of 25 degrees, which falls between the adjacent bins of 20 and 40 degrees; in this case, its gradient magnitude of 4 is distributed to these two bins by linear distance weighting, with the larger portion assigned to the 20-degree bin because 25 degrees is closer to 20 degrees. After the gradients of all pixels within the characteristic neighborhood have been assigned, the final value of each of the nine bins is computed as the ratio of the gradient value accumulated in that bin to the overall sum of gradient values across all bins:
$$b_i = \frac{G_i}{\sum_{i=1}^{9} G_i},$$
where $G_i$ is the gradient value assigned to the i-th bin, and $b_i$ is the final value of the i-th bin.
Figure 5b shows the gradient histogram of the first candidate target in Figure 4c. Figure 5c is the visual result of HOG. In Figure 5c, each line segment represents the bin whose direction is perpendicular to the line segment, and the length of the line segment represents the bin’s value. Visual HOG can intuitively display the gradient distribution of candidate targets.
Then, the GHEM of candidate targets is defined as:
$$\mathrm{GHEM} = \left(1 - \frac{\sigma_b}{\sigma_{\max}}\right)^2,$$
where $\sigma_b$ is the standard deviation of the nine bins; $\sigma_{\max}$ is the maximum possible value of $\sigma_b$; and the square in the formula enhances the differentiation. The value of $\sigma_{\max}$ is the solution of the following nonlinear programming problem:
$$\max \; f(\mathbf{b}) = \sqrt{\frac{1}{8} \sum_{i=1}^{9} (b_i - \bar{b})^2} \quad \mathrm{s.t.} \quad 0 \leq b_i \leq 1, \quad \sum_{i=1}^{9} b_i = 1,$$
where $\bar{b}$ is the mean of the $b_i$. The practical meaning of this problem is to find the maximum standard deviation attainable by the $b_i$. Solving it yields $\sigma_{\max} = 0.3333$. If a candidate target is ideally isotropic, all bins are equal, its standard deviation is 0, and its GHEM is 1. If one bin of a candidate target is 1 and the other bins are 0, as in a step edge region, its standard deviation is 0.3333 and its GHEM is 0.
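The histogram construction of Figure 5a and the GHEM definition can be sketched as follows; the vectorized bin splitting is our own formulation of the linear distance weighting described above, and the equation numbers in the comments follow our reading of the text.

```python
def ghem(patch: np.ndarray) -> float:
    """GHEM of one characteristic neighborhood, Eqs. (7)-(12)."""
    A = patch.astype(np.float64)
    Ax = np.zeros_like(A); Ay = np.zeros_like(A)
    Ax[:, 1:] = A[:, 1:] - A[:, :-1]              # horizontal gradient, Eq. (7)
    Ay[1:, :] = A[1:, :] - A[:-1, :]              # vertical gradient, Eq. (8)
    G = np.hypot(Ax, Ay)                          # magnitude, Eq. (9)
    theta = np.degrees(np.arctan2(Ay, Ax)) % 180.0   # unsigned direction in [0, 180)
    pos = theta / 20.0                            # fractional index into the 9 bins
    lo = np.floor(pos).astype(int) % 9
    hi = (lo + 1) % 9                             # the last bin wraps back to 0 degrees
    w = pos - np.floor(pos)                       # linear distance weighting
    bins = np.zeros(9)
    np.add.at(bins, lo.ravel(), (G * (1.0 - w)).ravel())
    np.add.at(bins, hi.ravel(), (G * w).ravel())
    b = bins / (bins.sum() + 1e-12)               # normalized bin values, Eq. (11)
    sigma_b = np.std(b, ddof=1)                   # sample std, matching the 1/8 factor
    sigma_max = 1.0 / 3.0                         # solution of the programming problem
    return (1.0 - sigma_b / sigma_max) ** 2       # Eq. (12)
```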
Figure 6a shows two characteristic neighborhoods containing a small target and an anisotropic clutter, respectively, along with their visual HOG. In the visual HOG of the small target, all bins are close, indicating a balanced distribution of the gradients in different directions. The balanced HOG is consistent with the isotropic appearance of the small target. On the other hand, in the visual HOG of the anisotropic clutter, several bins are notably longer than the others, indicating a concentration of gradients in specific directions. The GHEM of the small target and clutter in Figure 6a are 0.9570 and 0.3081, respectively. The small target has higher GHEM than the clutter, which reflects the effectiveness of GHEM in distinguishing between small targets and anisotropic clutter.

2.3. Local Optical Flow Consistency Measure (LOFCM)

The optical flow method is a technique used to estimate the motion information of pixels between consecutive frames. It is commonly applied in various fields, such as target tracking, motion analysis, and 3D reconstruction. Optical flow refers to the motion vectors of pixels. As discussed in Section 1.2, a small target has a stable appearance, which means that the relative positions of its different parts will remain fixed over a short period (similar to a rigid body). Therefore, all pixels of a small target will possess consistent motion vectors. Conversely, sea clutter demonstrates an unstable appearance, with its shape constantly changing. As a result, some pixels inevitably have different displacements from others. Therefore, the consistency of the optical flow vectors of a candidate target can be used to characterize its appearance stability. In this regard, we propose the local optical flow consistency measure (LOFCM) based on the Lucas–Kanade (L-K) method.
The L-K method is a simple and classic optical flow algorithm [43]. Applying the L-K method requires the following three basic conditions:
(1)
Brightness constancy: The gray value of a pixel does not change over time.
(2)
Small motion: The displacement of a pixel is small, and the passage of time cannot cause drastic changes in the pixel position.
(3)
Local spatial consistency: The relative positions of neighboring pixels do not change.
Based on the above conditions, the optical flow calculation formula of the L-K method can be derived.
$A(x, y, t)$ denotes the value of pixel $(x, y)$ in the candidate target array of frame t, and $(\Delta x, \Delta y)$ is the displacement of the pixel in time $\Delta t$. According to Condition 1, the values at $(x, y)$ before and after the displacement should be equal, so
$$A(x, y, t) = A(x + \Delta x, y + \Delta y, t + \Delta t).$$
The right-hand side can be expanded as a Taylor series:
$$A(x + \Delta x, y + \Delta y, t + \Delta t) = A(x, y, t) + \frac{\partial A}{\partial x}\Delta x + \frac{\partial A}{\partial y}\Delta y + \frac{\partial A}{\partial t}\Delta t + \varepsilon,$$
where $\varepsilon$ is a higher-order infinitesimal. According to Condition 2, the displacement $(\Delta x, \Delta y)$ is small, so $\varepsilon$ can be ignored, and we obtain
$$A_x \Delta x + A_y \Delta y + A_t \Delta t = 0 \;\Rightarrow\; A_x \frac{\Delta x}{\Delta t} + A_y \frac{\Delta y}{\Delta t} + A_t = 0.$$
The above formula can be abbreviated as
$$A_x v_x + A_y v_y + A_t = 0,$$
where $A_x$, $A_y$, and $A_t$ are the partial derivatives of the candidate target array with respect to x, y, and t, and $(v_x, v_y)$ is the optical flow vector. Based on Condition 3, we can assume that all pixels in a window share the same optical flow. Then we have
$$\begin{bmatrix} A_x^1 & A_y^1 \\ A_x^2 & A_y^2 \\ \vdots & \vdots \\ A_x^n & A_y^n \end{bmatrix} \begin{bmatrix} v_x \\ v_y \end{bmatrix} = \begin{bmatrix} -A_t^1 \\ -A_t^2 \\ \vdots \\ -A_t^n \end{bmatrix},$$
where the superscript $n$ is the number of pixels in the window. Equation (18) can be abbreviated as
$$\mathbf{K} \mathbf{v} = \mathbf{b},$$
where $\mathbf{K}$ is the matrix composed of the $[A_x^i, A_y^i]$ rows and $\mathbf{b}$ is the vector composed of the $-A_t^i$. The optical flow vector can be solved by the least squares method:
$$\mathbf{v} = (\mathbf{K}^T \mathbf{K})^{-1} \mathbf{K}^T \mathbf{b}.$$
After combining the terms, Equation (20) can be expressed as
$$\mathbf{v} = \begin{bmatrix} v_x \\ v_y \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} (A_x^i)^2 & \sum_{i=1}^{n} A_x^i A_y^i \\ \sum_{i=1}^{n} A_x^i A_y^i & \sum_{i=1}^{n} (A_y^i)^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum_{i=1}^{n} A_x^i A_t^i \\ -\sum_{i=1}^{n} A_y^i A_t^i \end{bmatrix}.$$
$A(t)$ denotes the candidate target array of frame t. The local regions of each candidate target in the Top-Hat transformation of frame $t-1$ and frame $t+1$ are also scaled and stitched to form two array images, represented by $A(t-1)$ and $A(t+1)$, respectively. The temporal gradients are calculated as
$$A_t^- = A(t) - A(t-1),$$
$$A_t^+ = A(t+1) - A(t),$$
where $A_t^-$ and $A_t^+$ are the temporal gradients along the negative and positive directions of the time axis, respectively. $A_x$ and $A_y$ can be calculated using Equations (7) and (8). Substituting $A_x$, $A_y$, $A_t^-$, and $A_t^+$ into Equation (21) gives
$$\mathbf{v}^- = \begin{bmatrix} \sum_{i=1}^{n} (A_x^i)^2 & \sum_{i=1}^{n} A_x^i A_y^i \\ \sum_{i=1}^{n} A_x^i A_y^i & \sum_{i=1}^{n} (A_y^i)^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum_{i=1}^{n} A_x^i A_t^{i-} \\ -\sum_{i=1}^{n} A_y^i A_t^{i-} \end{bmatrix},$$
$$\mathbf{v}^+ = \begin{bmatrix} \sum_{i=1}^{n} (A_x^i)^2 & \sum_{i=1}^{n} A_x^i A_y^i \\ \sum_{i=1}^{n} A_x^i A_y^i & \sum_{i=1}^{n} (A_y^i)^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum_{i=1}^{n} A_x^i A_t^{i+} \\ -\sum_{i=1}^{n} A_y^i A_t^{i+} \end{bmatrix},$$
where $\mathbf{v}^-$ and $\mathbf{v}^+$ are the optical flow vectors on the two sides of $A(t)$ in the time domain. The final optical flow vector is the mean of $\mathbf{v}^-$ and $\mathbf{v}^+$:
$$\mathbf{v} = \frac{1}{2} (\mathbf{v}^- + \mathbf{v}^+).$$
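A minimal NumPy sketch of the least-squares solve of Equation (21) and the two-sided averaging of Equation (26) follows; the pseudo-inverse is our own guard against a near-singular normal matrix and is not part of the original formulation.

```python
def lk_vector(Ax: np.ndarray, Ay: np.ndarray, At: np.ndarray) -> np.ndarray:
    """Least-squares L-K solve of Eq. (21) for one window of pixels."""
    M = np.array([[np.sum(Ax * Ax), np.sum(Ax * Ay)],
                  [np.sum(Ax * Ay), np.sum(Ay * Ay)]])
    rhs = -np.array([np.sum(Ax * At), np.sum(Ay * At)])
    return np.linalg.pinv(M) @ rhs  # pinv guards against a singular normal matrix

def two_sided_flow(Ax, Ay, At_minus, At_plus) -> np.ndarray:
    """Mean of the backward and forward flow vectors, Eqs. (24)-(26)."""
    return 0.5 * (lk_vector(Ax, Ay, At_minus) + lk_vector(Ax, Ay, At_plus))
```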
Each characteristic neighborhood is divided as shown in Figure 7. C0 is the 9 × 9 window within the characteristic neighborhood. The optical flow vector of C0 is denoted $\mathbf{v}_0$ and can be calculated using Equations (22)–(26); the number of pixels involved in the calculation, namely n in Equation (21), is 81. C0 can be further subdivided into 3 × 3 cells, each containing 9 pixels. The surrounding eight cells are labeled C1 to C8. Similarly, the optical flow vectors of these cells, denoted $\mathbf{v}_1$–$\mathbf{v}_8$, can also be calculated using Equations (22)–(26), with each cell employing 9 pixels in the calculation. Figure 4e shows the optical flow vectors of the candidate targets in Figure 4c; to enhance the visual effect, all optical flow vectors have been magnified threefold in size.
The LOFCM of a candidate target is defined as
$$\mathrm{LOFCM} = 1 - \frac{\sum_{i=1}^{8} \| \mathbf{v}_i - \mathbf{v}_0 \|^2}{8 \| \mathbf{v}_0 \|^2 + \varepsilon},$$
where $\varepsilon$ is set to 0.001 to prevent the denominator from being 0.
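Building on the solver above, the LOFCM of Equation (27) over the cell layout of Figure 7 might be computed as follows, assuming an 11 × 11 characteristic neighborhood whose central 9 × 9 window is C0.

```python
def lofcm(prev: np.ndarray, cur: np.ndarray, nxt: np.ndarray, eps: float = 1e-3) -> float:
    """LOFCM of one 11x11 neighborhood from Top-Hat patches at frames t-1, t, t+1."""
    A = cur.astype(np.float64)
    Ax = np.zeros_like(A); Ay = np.zeros_like(A)
    Ax[:, 1:] = A[:, 1:] - A[:, :-1]            # horizontal gradient, Eq. (7)
    Ay[1:, :] = A[1:, :] - A[:-1, :]            # vertical gradient, Eq. (8)
    At_m = A - prev.astype(np.float64)          # backward temporal gradient, Eq. (22)
    At_p = nxt.astype(np.float64) - A           # forward temporal gradient, Eq. (23)

    def flow(sl):
        return two_sided_flow(Ax[sl], Ay[sl], At_m[sl], At_p[sl])

    v0 = flow((slice(1, 10), slice(1, 10)))     # C0: central 9x9 window, n = 81
    dev = 0.0
    for r in range(3):                          # C1-C8: the eight surrounding 3x3 cells
        for c in range(3):
            if r == 1 and c == 1:
                continue                        # skip the central cell
            sl = (slice(1 + 3 * r, 4 + 3 * r), slice(1 + 3 * c, 4 + 3 * c))
            dev += np.sum((flow(sl) - v0) ** 2)
    return 1.0 - dev / (8.0 * np.sum(v0 ** 2) + eps)   # Eq. (27)
```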
It should be noted that Condition 3 of the L-K method requires that the relative positions of adjacent pixels do not change, which means that adjacent pixels must move consistently. In the calculation of the optical flow vectors, Equation (18) is built on Condition 3. When all the pixels involved in Equation (18) have the same displacement, the $[v_x, v_y]^T$ obtained by solving Equation (18) represents the true displacement of each pixel. However, as previously discussed, the shape change of sea clutter causes certain pixels to move inconsistently with others, so sea clutter does not satisfy Condition 3. For sea clutter, the pixels involved in Equation (18) may have different displacements, so the calculated $[v_x, v_y]^T$ is not the true displacement of each pixel. In this case, the least squares solution of Equation (18) represents the average motion trend of these pixels, which is equivalent to the displacement of the cell. As can be seen from Equation (27), the calculation of LOFCM relies only on the displacements of the nine cells and does not require exact displacement values for every pixel. Therefore, although sea clutter does not meet Condition 3 of the L-K method, the LOFCM calculation described in Equations (22)–(27) is still applicable to all candidate targets.
In the characteristic neighborhood of a candidate target, the window C0 encompasses all the pixels belonging to the candidate target. The optical flow vector $\mathbf{v}_0$ reflects the overall movement trend of these pixels and can be regarded as the actual displacement of the candidate target. Each surrounding cell contains only a subset of pixels located on the edges of the candidate target, so the optical flow vectors $\mathbf{v}_1$–$\mathbf{v}_8$ reflect the movement trends of individual parts of the candidate target. The difference between $\mathbf{v}_1$–$\mathbf{v}_8$ and $\mathbf{v}_0$ indicates the relative displacement of the individual parts with respect to the whole target, namely the deformation. If the candidate target remains entirely undeformed, $\mathbf{v}_1$–$\mathbf{v}_8$ will equal $\mathbf{v}_0$, resulting in a LOFCM value of 1. If the candidate target deforms, its LOFCM will be less than 1. If the numerator in Equation (27) exceeds the denominator, signifying that the average deformation of the eight cells exceeds the total displacement, the LOFCM value will be negative.
Figure 6b shows the characteristic neighborhoods of a small target and an isotropic clutter, along with their optical flow vectors. On the whole, the optical flow vectors of the small target have a similar direction, indicating that the stable appearance ensures all parts of the small target keep moving consistently. In contrast, the optical flow vectors of the clutter lack consistency, which is attributed to its ongoing deformation process. Although the current appearance of the clutter is isotropic, it cannot be maintained indefinitely. Over time, the clutter will transition from isotropic to anisotropic, thereby exhibiting differences from the small target. However, we can distinguish between the small target and the isotropic clutter in the current moment through optical flow. In essence, LOFCM converts the spatial features that would only appear in the future into spatial–temporal features that are exploitable in the present moment. In Figure 6b, the LOFCM values of the small target and the clutter are 0.7540 and 0.0235, respectively. The small target has a higher LOFCM value than that of the clutter, reflecting the effectiveness of LOFCM in distinguishing between small targets and isotropic clutter.

2.4. Appearance Stable Isotropy Measure

The location mapping function in Equation (6) is used to map the GHEM and LOFCM of each characteristic neighborhood back to the original image. GHEM and LOFCM play a major role in distinguishing anisotropic and isotropic clutter, respectively, so we fuse GHEM and LOFCM as weighting coefficients for Top-Hat images. Finally, ASIM is defined as
$$\mathrm{ASIM} = \begin{cases} T \times \mathrm{GHEM} \times \mathrm{LOFCM}, & \mathrm{GHEM} > 0.5 \text{ and } \mathrm{LOFCM} > 0.1 \\ 0, & \text{otherwise.} \end{cases}$$
The classification conditions in the above formula are derived from statistical experience during development: generally, the GHEM of a small target will not fall below 0.5, and its LOFCM will not fall below 0.1. The calculation process of ASIM is shown in Algorithm 1.
A small target has both a high GHEM and a high LOFCM, so it is retained in ASIM. Sea clutter rarely has both, so it is suppressed in ASIM. Figure 4f shows the ASIM of Figure 4a, in which most of the sea clutter has been suppressed. Compared with the original image in a heavy sea clutter environment, ASIM is far more conducive to extracting small targets. In the subsequent processing, the target can be extracted by a simple threshold segmentation:
$$TH_{\mathrm{ASIM}} = \mu_{\mathrm{ASIM}} + k_{\mathrm{ASIM}} \times \sigma_{\mathrm{ASIM}},$$
where $TH_{\mathrm{ASIM}}$ is the segmentation threshold of ASIM; $\mu_{\mathrm{ASIM}}$ and $\sigma_{\mathrm{ASIM}}$ are the mean and standard deviation of ASIM, respectively; and $k_{\mathrm{ASIM}}$ is the coefficient. The value of $k_{\mathrm{ASIM}}$ determines the threshold level: increasing it lowers both the false alarm rate and the detection rate, while decreasing it raises both. It should be noted that different tasks prioritize either the detection rate or the false alarm rate depending on their specific requirements. For example, in military defense applications, achieving a high detection rate is typically of utmost importance due to the potentially catastrophic consequences of missing a target, whereas in the civil field the false alarm rate is usually required to be low to save resources and time. This paper adopts the detection rate priority strategy: based on development experience, $k_{\mathrm{ASIM}}$ is set to 0.5, which ensures that no target is missed on any of the experimental sequences.
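Putting the pieces together, a simplified end-to-end sketch of the ASIM computation and final segmentation is given below. It re-cuts fixed 11 × 11 neighborhoods instead of reproducing the scaling and array-stitching steps, so it illustrates the data flow rather than the authors' exact implementation.

```python
def asim_detect(prev: np.ndarray, cur: np.ndarray, nxt: np.ndarray, k_asim: float = 0.5):
    """End-to-end sketch: Top-Hat -> candidates -> GHEM/LOFCM -> ASIM -> segmentation."""
    T, Tp, Tn = top_hat(cur), top_hat(prev), top_hat(nxt)
    _, centroids = extract_candidates(T)
    asim = np.zeros_like(T)
    for cy, cx in centroids:
        # Cut the same 11x11 neighborhood from all three Top-Hat images
        # (the patch-scaling and array-stitching steps are omitted for brevity)
        y0 = min(max(int(round(cy)) - 5, 0), T.shape[0] - 11)
        x0 = min(max(int(round(cx)) - 5, 0), T.shape[1] - 11)
        sl = (slice(y0, y0 + 11), slice(x0, x0 + 11))
        g, f = ghem(T[sl]), lofcm(Tp[sl], T[sl], Tn[sl])
        if g > 0.5 and f > 0.1:                 # ASIM fusion rule
            asim[sl] = T[sl] * g * f
    th = asim.mean() + k_asim * asim.std()      # final threshold on ASIM
    return asim >= th, asim
```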
Algorithm 1 ASIM.
Input: frames $t-1$, $t$, and $t+1$
Output: ASIM image
1: Calculate the Top-Hat transformation of the input images using Equations (1) and (2), where the Top-Hat image of frame t is represented by T;
2: Extract candidate targets from T according to Equations (3)–(5);
3: Scale and stitch the characteristic neighborhoods of the candidate targets in the Top-Hat transformations of the input images to obtain the array images $A(t-1)$, $A(t)$, and $A(t+1)$;
4: Calculate $G$ and $\theta$ of the pixels in $A(t)$ according to Equations (7)–(10);
5: Calculate the oriented gradient histogram of each candidate target according to Figure 5a and Equation (11);
6: $\mathrm{GHEM} = (1 - \sigma_b / \sigma_{\max})^2$;
7: Calculate the optical flow vectors $\mathbf{v}_0$–$\mathbf{v}_8$ of each candidate target in $A(t)$ according to Equations (22)–(26);
8: $\mathrm{LOFCM} = 1 - \frac{\sum_{i=1}^{8} \|\mathbf{v}_i - \mathbf{v}_0\|^2}{8\|\mathbf{v}_0\|^2 + \varepsilon}$;
9: $\mathrm{ASIM} = T \times \mathrm{GHEM} \times \mathrm{LOFCM}$ if $\mathrm{GHEM} > 0.5$ and $\mathrm{LOFCM} > 0.1$; otherwise $\mathrm{ASIM} = 0$.

3. Experiments

We conducted experiments to evaluate the performance of the proposed method. Twelve sequences are selected as the dataset, denoted Seq.1–Seq.12. Figure 8 shows the first frame of each sequence; all small targets are marked with a red box, and their enlarged views are placed in the bottom left corner of the images. The background of these sequences is the sea surface with a considerable amount of clutter. Detailed information on the datasets is provided in Table 1. To objectively measure the performance of the proposed method, nine classic or advanced detection methods are selected as baselines, including LCM [15], IPI [26], TLLCM [19], WSLCM [21], NOLC [29], ADMD [14], STLCF [22], STLCM [23], and MSL-STIPT [35]. The experiments were conducted on a computer with an Intel Core i7-6700 CPU (Intel, Santa Clara, CA, USA) and 16 GB of memory, and the relevant code was implemented in MATLAB 2016a.

3.1. Evaluation Metrics

In this paper, the signal-to-clutter ratio (SCR) gain (SCRG), the background suppression factor (BSF), and receiver operating characteristic (ROC) curves are used as evaluation metrics. BSF and SCRG measure the background suppression and target enhancement capabilities of detection methods, respectively [11]. BSF, SCR, and SCRG are defined as
$$\mathrm{BSF} = \frac{\sigma_{in}}{\sigma_{out} + \varepsilon}, \quad \mathrm{SCR} = \frac{|I_T - \mu_B|}{\sigma_B}, \quad \mathrm{SCRG} = \frac{\mathrm{SCR}_{out}}{\mathrm{SCR}_{in} + \varepsilon},$$
where $\sigma_{in}$ and $\sigma_{out}$ are the standard deviations of the input image and the output image, respectively; $I_T$ is the intensity of the target; $\mu_B$ and $\sigma_B$ are the mean and standard deviation of the target neighborhood pixels (excluding the target pixels), respectively; $\mathrm{SCR}_{in}$ and $\mathrm{SCR}_{out}$ are the SCRs of the input and output images, respectively; and $\varepsilon$ is set to 0.001 to prevent the denominator from being 0.
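As an illustration of these definitions, the following sketch computes SCR, BSF, and SCRG for a known target position; the ring radius and the target half-size used to delimit the background neighborhood are our own assumptions.

```python
def scr(img: np.ndarray, ty: int, tx: int, t_half: int = 4, ring: int = 10) -> float:
    """SCR = |I_T - mu_B| / sigma_B over a neighborhood that excludes target pixels."""
    y0, y1 = max(ty - ring, 0), min(ty + ring + 1, img.shape[0])
    x0, x1 = max(tx - ring, 0), min(tx + ring + 1, img.shape[1])
    yy, xx = np.mgrid[y0:y1, x0:x1]
    bg = (np.abs(yy - ty) > t_half) | (np.abs(xx - tx) > t_half)  # background ring
    region = img[y0:y1, x0:x1].astype(np.float64)
    mu_b, sigma_b = region[bg].mean(), region[bg].std()
    return abs(float(img[ty, tx]) - mu_b) / (sigma_b + 1e-3)

def bsf_scrg(inp: np.ndarray, out: np.ndarray, ty: int, tx: int, eps: float = 1e-3):
    """BSF and SCRG of a detection result at a known target position (ty, tx)."""
    bsf = inp.std() / (out.std() + eps)
    scrg = scr(out, ty, tx) / (scr(inp, ty, tx) + eps)
    return bsf, scrg
```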
The ROC curve describes the relationship between the detection rate (DR) and the false alarm rate (FAR). A high ROC curve indicates that the detection method achieves a high detection rate at the same false alarm rate. DR and FAR are defined as
$$\mathrm{DR} = \frac{\text{number of detected true targets}}{\text{total number of true targets}}, \quad \mathrm{FAR} = \frac{\text{number of pixels in false alarms}}{\text{total number of pixels in the whole image}}.$$

3.2. Experimental Results

First, Figure 9 and Figure 10 show the processing results of different methods on the 12 experimental sequences, along with their corresponding 3D surface plots. In the result images, the target positions are marked with red rectangles, while zoomed-in views of these marked regions are presented in the bottom left corner. In the 3D surface plots, peaks belonging to the targets are also marked with red rectangles. These processing results can intuitively demonstrate the ability of different methods in background suppression and target enhancement.
LCM, IPI, NOLC, HWLCM, TLLCM, ADMD, and STLCF can enhance the targets to some extent; however, their effectiveness in suppressing clutter is limited. There are a large number of peaks in their 3D plots, which correspond to residual clutter. STLCM and MSL-STIPT exhibit better background suppression effects with fewer residual clutter peaks observed in their 3D plots. However, both STLCM and MSL-STIPT mistakenly suppressed the real targets in many sequences. Among all the methods, the proposed method in this paper shows the best overall performance in terms of background suppression and target enhancement. Our method can effectively enhance the targets while achieving relatively little clutter residue.
To quantitatively compare the background suppression and target enhancement abilities of different methods, Table 2 provides their BSF and SCRG. Each value in Table 2 is the average over all images within the corresponding sequence, and the first to third places for each sequence are highlighted in bold red, blue, and green, respectively. Among the baseline methods, excluding STLCM and MSL-STIPT, the BSF values are generally low, with most not exceeding 20. Although STLCM and MSL-STIPT perform better than the other baseline methods in terms of BSF, they still fall short of our proposed method. LCM, HWLCM, TLLCM, STLCF, and STLCM have low SCRG across all sequences. IPI, NOLC, ADMD, and MSL-STIPT have very high SCRG on some sequences but very low SCRG on others, indicating unstable performance across different sequences. The proposed method achieves the highest BSF on Seq.1, 2, 3, 4, 6, 7, 8, 9, and 12, surpassing the second-ranked method by 10.92%, 25.65%, 3.29%, 1.55%, 18.88%, 22.77%, 13.81%, and 23.67%, respectively. In terms of SCRG, our proposed method has the highest values for Seq.1–10, surpassing the second-ranked method by 28.84%, 426,692.00%, 296.98%, 130.74%, 191.13%, 90.54%, 3053.18%, 2881.76%, 28.79%, and 82,512.27%, respectively. Additionally, the BSF and SCRG values of our method rank among the top three on all sequences, demonstrating its superior performance in terms of both background suppression and target enhancement.
The performance of different methods in terms of SCRG and BSF can be attributed to their underlying principles. LCM, HWLCM, TLLCM, and ADMD are methods based on center-surround differences; while they can enhance small targets to some extent, they struggle to suppress sea clutter, which also exhibits high local contrast. Among these methods, ADMD performs relatively better in SCRG because it zeros out the pixels in the target neighborhood during computation, which can significantly improve the SCR of the target when there is no other strong clutter within the target neighborhood (e.g., Seq.3, 4, 6, 8, and 9). IPI and NOLC are based on the sparsity model of small targets. They can improve the target SCR by effectively separating small targets from the background, but they show poor background suppression because many instances of sea clutter also satisfy the local sparsity assumption, leading to clutter being mistakenly extracted as targets. STLCF and STLCM utilize spatiotemporal filtering, primarily measuring the saliency of local regions in the spatial and temporal domains. However, since both small targets and sea clutter are bright and moving, both generate strong spatiotemporal saliency. STLCM imposes shape constraints on salient regions and thus performs slightly better overall than STLCF. MSL-STIPT is based on the modified spatial–temporal tensor model, whose constraints are overly strict, making it prone to suppressing targets and clutter simultaneously. In contrast, our proposed method takes full advantage of the distinctions between small targets and clutter in spatiotemporal features, thereby achieving superior overall performance in target enhancement and background suppression compared to the baseline methods.
Second, to evaluate the comprehensive detection performance of different methods, this paper presents their ROC curves, as shown in Figure 11. These ROC curves are obtained by adjusting the segmentation threshold from high to low. While the ROC curve of our method may initially be lower than those of some other methods in the high-threshold stage (e.g., on Seq.1 compared to HWLCM), it should be noted that very high thresholds are typically not employed in practice, since most detection tasks prioritize a high detection rate. At the low-threshold stage, our method exhibits significant advantages over the baseline methods. Table 3 shows the detection rates of different methods at a false alarm rate of $10^{-3}$, with the highest detection rate on each sequence marked in bold red. At this false alarm rate, the detection rate of our method is the highest on most sequences and reaches 1 on Seq.3, 5, 6, 7, 8, 9, 11, and 12. On Seq.4, although the detection rate of our method is temporarily lower than that of TLLCM and HWLCM, the ROC curves of Seq.4 show that the detection rate of our method is still the first to reach 1. These results show that our method achieves a higher detection rate than the baseline methods at the same false alarm rate.
Table 4 shows the false alarm rates of different methods at a detection rate of 1, with the lowest false alarm rate marked in bold red for each sequence. Our method has the lowest false alarm rate across all sequences, except for Seq.9. On Seq.9, while MSL-STIPT has a slightly lower false alarm rate, the difference is not significant compared to our method. Furthermore, MSL-STIPT demonstrates poor performance on other sequences. These results show that our method can achieve a lower false alarm rate at the same detection rate. The analysis of the ROC curve, along with Table 3 and Table 4, leads to the conclusion that our method exhibits superior detection performance compared to the baseline methods.
Finally, Table 5 provides the average computational time for different methods on Seq.1. Generally, methods based on spatial or spatiotemporal filtering exhibit low computational time, such as LCM, HWLCM, ADMD, STLCF, and STLCM. Although TLLCM is also a spatial filtering method, its computation involves numerous sorting operations, resulting in a higher time consumption. Methods based on optimization, including IPI, NOLC, and MSL-STIPT, typically demonstrate high computational overhead due to iterative operations. The heavy computational burden limits their practical application. Our method ranks fifth among all methods. Although LCM, ADMD, STLCF, and STLCM are faster than our method, they cannot achieve both fast speed and good detection performance simultaneously. Considering the comprehensive performance of our method in terms of BSF, SCRG, ROC curves, and computational time, it can be concluded that our method outperforms the baseline methods in heavy sea clutter environments.

4. Conclusions

This paper analyzes the differences in appearance between IR small targets and sea clutter in the spatial and temporal domains. Based on this analysis, a detection method utilizing the appearance stable isotropy measure (ASIM) is proposed. First, the Top-Hat transformation is employed to extract the salient regions and generate the candidate target array image. Then, two novel features, GHEM and LOFCM, are proposed to characterize the isotropy and appearance stability of the candidate targets, respectively. Finally, GHEM and LOFCM are fused with the Top-Hat image to obtain ASIM, and threshold segmentation is used to determine the final target. Experiments are conducted on sequences with heavy sea clutter to evaluate the performance of the proposed method. The experimental results show that the proposed method outperforms the baseline methods in comprehensive detection performance.

Author Contributions

Conceptualization, F.W. and W.Q.; methodology, F.W.; software, F.W.; validation, F.W.; formal analysis, Y.Q. and C.M.; investigation, H.Z. and J.W.; resources, W.Q., M.W. and K.R.; data curation, F.W.; writing—original draft preparation, F.W.; writing—review and editing, W.Q.; visualization, F.W.; supervision, W.Q.; project administration, W.Q.; funding acquisition, W.Q., M.W. and K.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grant 62175111, in part by the Natural Science Foundation of Jiangsu Province under Grant BK20200487, in part by the Fundamental Research Funds for the Central Universities under Grant 30923011015, and in part by the National Natural Science Foundation of China under Grant 62001234.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhang, H.; Qian, W.; Wan, M.; Zhang, K.; Wang, F.; Kong, X.; Chen, Q.; Lu, D. Robust noise hybrid active contour model for infrared image segmentation using orientation column filters. J. Mod. Opt. 2023, 70, 483–502.
2. Lu, Y.; Dong, L.; Zhang, T.; Xu, W. A robust detection algorithm for infrared maritime small and dim targets. Sensors 2020, 20, 1237.
3. Tom, V.T.; Peli, T.; Leung, M.; Bondaryk, J.E. Morphology-based algorithm for point target detection in infrared backgrounds. In Proceedings of the Signal and Data Processing of Small Targets 1993, International Society for Optics and Photonics, Orlando, FL, USA, 12–14 April 1993; Volume 1954, pp. 2–11.
4. Deshpande, S.D.; Er, M.H.; Venkateswarlu, R.; Chan, P. Max-mean and max-median filters for detection of small targets. In Proceedings of the Signal and Data Processing of Small Targets 1999, International Society for Optics and Photonics, Denver, CO, USA, 19–23 July 1999; Volume 3809, pp. 74–83.
5. Hadhoud, M.M.; Thomas, D.W. The two-dimensional adaptive LMS (TDLMS) algorithm. IEEE Trans. Circuits Syst. 1988, 35, 485–494.
6. Wang, B.; Dong, L.; Zhao, M.; Xu, W. Fast infrared maritime target detection: Binarization via histogram curve transformation. Infrared Phys. Technol. 2017, 83, 32–44.
7. Bai, X.; Zhou, F.; Xie, Y. New class of top-hat transformation to enhance infrared small targets. J. Electron. Imaging 2008, 17, 030501.
8. Zhang, Y.; Li, L.; Xin, Y. Infrared small target detection based on adaptive double-layer TDLMS filter. Acta Photonica Sin. 2019, 48, 0910001.
9. Wang, X.; Lv, G.; Xu, L. Infrared dim target detection based on visual attention. Infrared Phys. Technol. 2012, 55, 513–521.
10. Kim, S.; Yang, Y.; Lee, J.; Park, Y. Small target detection utilizing robust methods of the human visual system for IRST. J. Infrared Millim. Terahertz Waves 2009, 30, 994–1011.
11. Wang, X.; Lu, R.; Bi, H.; Li, Y. An Infrared Small Target Detection Method Based on Attention Mechanism. Sensors 2023, 23, 8608.
12. Moradi, S.; Moallem, P.; Sabahi, M.F. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm. Infrared Phys. Technol. 2018, 89, 387–397.
13. Aghaziyarati, S.; Moradi, S.; Talebi, H. Small infrared target detection using absolute average difference weighted by cumulative directional derivatives. Infrared Phys. Technol. 2019, 101, 78–87.
14. Moradi, S.; Moallem, P.; Sabahi, M.F. Fast and robust small infrared target detection using absolute directional mean difference algorithm. Signal Process. 2020, 177, 107727.
15. Chen, C.P.; Li, H.; Wei, Y.; Xia, T.; Tang, Y.Y. A local contrast method for small infrared target detection. IEEE Trans. Geosci. Remote Sens. 2013, 52, 574–581.
16. Han, J.; Ma, Y.; Zhou, B.; Fan, F.; Liang, K.; Fang, Y. A robust infrared small target detection algorithm based on human visual system. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2168–2172.
17. Qin, Y.; Li, B. Effective infrared small target detection utilizing a novel local contrast method. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1890–1894.
18. Wei, Y.; You, X.; Li, H. Multiscale patch-based contrast measure for small infrared target detection. Pattern Recognit. 2016, 58, 216–226.
19. Han, J.; Moradi, S.; Faramarzi, I.; Liu, C.; Zhao, Q. A Local Contrast Method for Infrared Small-Target Detection Utilizing a Tri-Layer Window. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1822–1826.
20. Du, P.; Hamdulla, A. Infrared small target detection using homogeneity-weighted local contrast measure. IEEE Geosci. Remote Sens. Lett. 2019, 17, 514–518.
21. Han, J.; Moradi, S.; Faramarzi, I.; Zhang, H.; Zhao, Q.; Zhang, X.; Li, N. Infrared small target detection based on the weighted strengthened local contrast measure. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1670–1674.
22. Deng, L.; Zhu, H.; Tao, C.; Wei, Y. Infrared moving point target detection based on spatial–temporal local contrast filter. Infrared Phys. Technol. 2016, 76, 168–173.
23. Zhao, B.; Xiao, S.; Lu, H.; Wu, D. Spatial-temporal local contrast for moving point target detection in space-based infrared imaging system. Infrared Phys. Technol. 2018, 95, 53–60.
24. Du, P.; Hamdulla, A. Infrared moving small-target detection using spatial–temporal local difference measure. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1817–1821.
25. Pang, D.; Shan, T.; Ma, P.; Li, W.; Liu, S.; Tao, R. A novel spatiotemporal saliency method for low-altitude slow small infrared target detection. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
26. Gao, C.; Meng, D.; Yang, Y.; Wang, Y.; Zhou, X.; Hauptmann, A.G. Infrared patch-image model for small target detection in a single image. IEEE Trans. Image Process. 2013, 22, 4996–5009.
27. Dai, Y.; Wu, Y.; Song, Y. Infrared small target and background separation via column-wise weighted robust principal component analysis. Infrared Phys. Technol. 2016, 77, 421–430.
28. Dai, Y.; Wu, Y.; Song, Y.; Guo, J. Non-negative infrared patch-image model: Robust target-background separation via partial sum minimization of singular values. Infrared Phys. Technol. 2017, 81, 182–194.
29. Zhang, T.; Wu, H.; Liu, Y.; Peng, L.; Yang, C.; Peng, Z. Infrared small target detection based on non-convex optimization with Lp-norm constraint. Remote Sens. 2019, 11, 559.
30. He, Y.; Li, M.; Zhang, J.; An, Q. Small infrared target detection based on low-rank and sparse representation. Infrared Phys. Technol. 2015, 68, 98–109.
31. Wang, X.; Peng, Z.; Kong, D.; He, Y. Infrared dim and small target detection based on stable multisubspace learning in heterogeneous scene. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5481–5493.
32. Dai, Y.; Wu, Y. Reweighted infrared patch-tensor model with both nonlocal and local priors for single-frame small target detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3752–3767.
33. Zhang, L.; Peng, Z. Infrared Small Target Detection Based on Partial Sum of the Tensor Nuclear Norm. Remote Sens. 2019, 11, 382.
34. Guan, X.; Zhang, L.; Huang, S.; Peng, Z. Infrared small target detection via non-convex tensor rank surrogate joint local contrast energy. Remote Sens. 2020, 12, 1520.
35. Sun, Y.; Yang, J.; An, W. Infrared Dim and Small Target Detection via Multiple Subspace Learning and Spatial-Temporal Patch-Tensor Model. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3737–3752.
36. Liu, H.K.; Zhang, L.; Huang, H. Small Target Detection in Infrared Videos Based on Spatio-Temporal Tensor Model. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8689–8700.
37. Pang, D.; Ma, P.; Shan, T.; Li, W.; Tao, R.; Ma, Y.; Wang, T. STTM-SFR: Spatial–Temporal Tensor Modeling with Saliency Filter Regularization for Infrared Small Target Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18.
38. Zhao, M.; Cheng, L.; Yang, X.; Feng, P.; Liu, L.; Wu, N. TBC-Net: A real-time detector for infrared small target detection using semantic constraint. arXiv 2019, arXiv:2001.05852.
39. Li, B.; Xiao, C.; Wang, L.; Wang, Y.; Lin, Z.; Li, M.; An, W.; Guo, Y. Dense nested attention network for infrared small target detection. arXiv 2021, arXiv:2106.00487.
40. Liu, F.; Gao, C.; Chen, F.; Meng, D.; Zuo, W.; Gao, X. Infrared Small and Dim Target Detection with Transformer Under Complex Backgrounds. IEEE Trans. Image Process. 2023, 32, 5921–5932.
  41. Chen, G.; Wang, W.; Tan, S. Irstformer: A hierarchical vision transformer for infrared small target detection. Remote Sens. 2022, 14, 3258. [Google Scholar] [CrossRef]
  42. Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Attentional local contrast networks for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9813–9824. [Google Scholar] [CrossRef]
  43. Xin, J.; Cao, X.; Xiao, H.; Liu, T.; Liu, R.; Xin, Y. Infrared Small Target Detection Based on Multiscale Kurtosis Map Fusion and Optical Flow Method. Sensors 2023, 23, 1660. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Schematic diagram of an IR image and typical salient regions. (a) An IR image with a small target and heavy sea clutter. (b) Local images of the marked regions within five consecutive frames.
Figure 2. Flowchart of the proposed method.
Figure 3. Structure element. Effective pixels have a value of 1 and are marked in red; invalid pixels have a value of 0 and are marked in white.
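For illustration, the Top-Hat salient-region step can be sketched in a few lines of OpenCV. This is a minimal sketch, not the paper's exact configuration: the 5 × 5 all-ones kernel stands in for the specific structure element of Figure 3, and the file name is hypothetical.

```python
import cv2
import numpy as np

# Placeholder structuring element: effective pixels = 1 (cf. Figure 3).
# The paper's exact element shape is not reproduced here.
kernel = np.ones((5, 5), np.uint8)

frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# White Top-Hat: the original image minus its morphological opening,
# which suppresses large-scale background and keeps small bright blobs.
salient = cv2.morphologyEx(frame, cv2.MORPH_TOPHAT, kernel)
```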
Figure 4. Intermediate results of the proposed method. (a) Top-Hat image. (b) Binary image. (c) Candidate target array. (d) Visual HOG of the candidate target array. (e) Optical flow vectors of the candidate target array. (f) ASIM image.
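The binary image in Figure 4b comes from the preliminary threshold on the Top-Hat result. The sketch below assumes the common mu + k·sigma rule with an empirically chosen k; the paper's exact thresholding rule is not reproduced here.

```python
import numpy as np

def preliminary_threshold(top_hat_img: np.ndarray, k: float = 4.0) -> np.ndarray:
    """Binary candidate-target mask from a Top-Hat image (illustrative only)."""
    mu, sigma = float(top_hat_img.mean()), float(top_hat_img.std())
    # Pixels well above the global background statistics become candidates.
    return (top_hat_img > mu + k * sigma).astype(np.uint8)
```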
Figure 5. Related schematic diagrams of HOG. (a) Schematic diagram of HOG calculation; the two pixels marked with blue and red circles are used as examples. (b) A calculation result of HOG. (c) Schematic diagram of visual HOG.
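The per-patch gradient-orientation histogram behind Figure 5 can be sketched as follows; the bin count and the [0, 2π) orientation range are assumptions, not the paper's exact HOG configuration. An isotropic candidate spreads its gradient energy almost evenly over the bins, whereas edge-like clutter concentrates it in a few bins, which is the intuition GHEM quantifies.

```python
import numpy as np

def gradient_histogram(patch: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Magnitude-weighted orientation histogram of a candidate patch."""
    gy, gx = np.gradient(patch.astype(np.float64))       # row/column gradients
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), 2 * np.pi)        # orientation in [0, 2*pi)
    bins = (angle / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=magnitude.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-12)                   # normalized distribution
```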
Figure 6. Comparison of HOG and optical flow between a small target and sea clutter. (a) Visual HOG of a small target and an anisotropic clutter. (b) Visual optical flow vectors of a small target and an isotropic clutter. The green arrows represent the optical flow vectors.
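A local optical-flow consistency check in the spirit of Figure 6b can be sketched with OpenCV's Farneback flow. The parameter values below are assumptions for the sketch; the patches are expected to be equally sized 8-bit grayscale crops around a candidate.

```python
import cv2
import numpy as np

def local_flow_dispersion(prev_patch: np.ndarray, curr_patch: np.ndarray) -> float:
    """Mean deviation of per-pixel flow vectors from their average (illustrative)."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_patch, curr_patch, None,
        pyr_scale=0.5, levels=1, winsize=7,
        iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
    vectors = flow.reshape(-1, 2)
    mean_vec = vectors.mean(axis=0)
    # An appearance-stable target moves coherently, so its flow vectors
    # cluster around the mean; fluctuating clutter scatters them widely.
    return float(np.linalg.norm(vectors - mean_vec, axis=1).mean())
```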
Figure 7. Schematic diagram of characteristic neighborhood division.
Figure 8. First frame of the experimental sequences. The small targets are marked with red boxes, and their enlarged views are placed in the bottom-left corners of the images.
Figure 9. Resulting images of the different methods on Seq.1–6. The target area in each resulting image is marked with a red box, and its enlarged view is placed in the bottom-left corner.
Figure 10. Resulting images of the different methods on Seq.7–12. The target area in each resulting image is marked with a red box, and its enlarged view is placed in the bottom-left corner.
Figure 11. Receiver operating characteristic (ROC) curves of different methods.
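The Pd/Fa points behind ROC curves like those in Figure 11 are commonly computed by sweeping a threshold over each method's output map and averaging over the sequence. The conventions below (per-frame target hit for Pd, false pixels over total pixels for Fa) are the usual ones in this literature and are assumptions here, not the paper's stated protocol.

```python
import numpy as np

def roc_point(score_maps, target_masks, threshold):
    """One (Fa, Pd) point for a sequence of score maps and ground-truth masks."""
    hits, frames, false_px, total_px = 0, 0, 0, 0
    for score, mask in zip(score_maps, target_masks):
        detected = score >= threshold
        hits += int((detected & mask).any())       # target found in this frame?
        frames += 1
        false_px += int((detected & ~mask).sum())  # non-target pixels above threshold
        total_px += score.size
    return false_px / total_px, hits / frames      # (false alarm rate, detection rate)
```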
Table 1. Details of the experimental sequences.

Sequence | Image Size | Frame Number | Target Type | Target Size
Seq.1 | 512 × 640 | 1000 | Signal light | 5 × 4
Seq.2 | 512 × 640 | 1000 | Signal light | 3 × 3
Seq.3 | 512 × 640 | 1000 | Ship | 9 × 9
Seq.4 | 512 × 640 | 97 | Ship | 9 × 7
Seq.5 | 512 × 640 | 400 | Signal light | 5 × 5
Seq.6 | 512 × 640 | 500 | Ship | 7 × 7
Seq.7 | 512 × 640 | 1000 | Ship | 5 × 5
Seq.8 | 512 × 640 | 1000 | Signal light | 7 × 7
Seq.9 | 512 × 640 | 1000 | Signal light | 6 × 5
Seq.10 | 512 × 640 | 500 | Ship | 5 × 6
Seq.11 | 512 × 640 | 500 | Signal light | 4 × 4
Seq.12 | 512 × 640 | 500 | Ship | 6 × 6
Table 2. Background suppression factor (BSF) and signal-to-clutter ratio gain (SCRG) of different methods.

Sequence | LCM | IPI | NOLC | HWLCM | TLLCM | ADMD | STLCF | STLCM | MSL-STIPT | ASIM (Ours)

BSF
Seq.1 | 2.35 | 5.9603 | 4.6786 | 11.5765 | 13.2104 | 16.4142 | 13.1323 | 17.1572 | 14.2337 | 19.0300
Seq.2 | 4.3029 | 11.4957 | 3.9490 | 18.1301 | 12.8368 | 14.6817 | 7.9277 | 12.5154 | 22.4095 | 28.1576
Seq.3 | 4.4258 | 15.3305 | 11.1006 | 4.7119 | 14.8072 | 16.3769 | 19.2110 | 32.4433 | 19.3694 | 25.6798
Seq.4 | 0.9816 | 10.8962 | 5.2172 | 1.5451 | 5.2782 | 7.6194 | 3.6652 | 7.3866 | 5.2973 | 11.2543
Seq.5 | 4.2431 | 13.1191 | 3.6601 | 6.6224 | 19.0589 | 14.3481 | 5.2402 | 26.9579 | 36.2472 | 20.5585
Seq.6 | 4.0975 | 18.4130 | 9.9744 | 11.6162 | 19.1893 | 16.3994 | 10.5474 | 34.6185 | 32.4306 | 35.1543
Seq.7 | 3.3625 | 12.1870 | 4.4644 | 6.9145 | 13.1137 | 11.9082 | 5.0353 | 17.9317 | 15.4038 | 21.3181
Seq.8 | 1.5828 | 3.9114 | 3.4861 | 6.4551 | 6.6746 | 13.0041 | 6.3620 | 15.2050 | 14.8925 | 18.6672
Seq.9 | 4.3457 | 9.3311 | 5.9340 | 26.0391 | 12.5617 | 13.7088 | 11.3957 | 31.7395 | 15.6627 | 36.1225
Seq.10 | 1.1671 | 20.0762 | 6.2541 | 3.1570 | 5.4764 | 5.2878 | 1.7990 | 16.1800 | 40.0401 | 17.3463
Seq.11 | 1.1776 | 8.1043 | 2.7801 | 46.4676 | 9.3482 | 11.5604 | 4.5328 | 10.0845 | 14.8832 | 18.8061
Seq.12 | 2.9323 | 8.6763 | 5.9026 | 8.5013 | 14.5613 | 10.3476 | 8.5029 | 19.9019 | 17.4160 | 24.6129

SCRG
Seq.1 | 1.3695 | 11,297.0 | 15,147.0 | 5.0528 | 20.1239 | 0.7885 | 1.7908 | 11.7536 | 0.0028 | 19,515.0
Seq.2 | 0.7327 | 6.6738 | 1.9079 | 2.9436 | 3.0579 | 16.5467 | 2.9956 | 5.3132 | 0.0009 | 770,620.0
Seq.3 | 1.2000 | 4.7764 | 2.9948 | 1.2734 | 4.1211 | 2929.9 | 0.7823 | 7.6465 | 0.0082 | 11,631.0
Seq.4 | 0.9025 | 2.0318 | 1.8720 | 1.6600 | 103.5336 | 4017.0 | 33.9769 | 29.6237 | 0.0003 | 49,268.8
Seq.5 | 1.3870 | 19,649.0 | 3.9910 | 2.5819 | 6.7425 | 683.2696 | 4.5386 | 18.0894 | 0.0044 | 57,204.0
Seq.6 | 2.7673 | 3924.2 | 21.6796 | 9.2779 | 59.9930 | 2897.8 | 0.6596 | 50.1290 | 0.9172 | 7477.0
Seq.7 | 0.9780 | 1756.7 | 3.0760 | 4.6140 | 9.5489 | 1.6425 | 5.4671 | 20.5076 | 0.2858 | 55,392.0
Seq.8 | 0.7010 | 1.1465 | 1.2271 | 1.4607 | 2.5313 | 1315.8 | 1.0593 | 0.2334 | 0.7537 | 39,234.0
Seq.9 | 3.5557 | 9.0137 | 2.9227 | 5.1908 | 32.2832 | 12,069.0 | 3.0763 | 8.4399 | 16,002.0 | 20,609.0
Seq.10 | 1.5822 | 83.3607 | 5.7312 | 4.9129 | 12.4931 | 53.2729 | 4.8203 | 95.2062 | 0.5678 | 78,652.0
Seq.11 | 2.5551 | 8366.6 | 3.3482 | 5.2843 | 48.1007 | 570.8405 | 2.4935 | 43.9950 | 0.0759 | 7859.3
Seq.12 | 2.3255 | 18.5972 | 3.5414 | 5.9537 | 6.7523 | 9164.9 | 2.9079 | 14.6723 | 0.0540 | 7665.2
The red bold, blue, and green numbers represent the first, second, and third place on each sequence, respectively.
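The BSF and SCRG figures in Table 2 can be reproduced with the definitions that are standard in the infrared small-target literature; the exact target/background windowing around each target is an assumption in this sketch.

```python
import numpy as np

# SCR  = |mu_target - mu_background| / sigma_background
# SCRG = SCR_out / SCR_in;  BSF = sigma_background_in / sigma_background_out
def scr(patch: np.ndarray, target_mask: np.ndarray) -> float:
    t, b = patch[target_mask], patch[~target_mask]
    return abs(float(t.mean()) - float(b.mean())) / (float(b.std()) + 1e-12)

def scrg_and_bsf(patch_in, patch_out, target_mask):
    """Metrics for one target, comparing input and output (filtered) patches."""
    scrg = scr(patch_out, target_mask) / (scr(patch_in, target_mask) + 1e-12)
    bsf = float(patch_in[~target_mask].std()) / (float(patch_out[~target_mask].std()) + 1e-12)
    return scrg, bsf
```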
Table 3. Detection rates of different methods when the false alarm rate is 10⁻³.
Sequence | LCM | IPI | NOLC | HWLCM | TLLCM | ADMD | STLCF | STLCM | MSL-STIPT | ASIM (Ours)
Seq.1 | 0 | 0 | 0 | 0.7150 | 0 | 0 | 0 | 0 | 0.0739 | 0.7475
Seq.2 | 0 | 0.4943 | 0.3973 | 0.1650 | 0.6295 | 0 | 0 | 0 | 0.1684 | 0.9201
Seq.3 | 0 | 0.0724 | 0.3452 | 0.4049 | 0.5003 | 0.5415 | 0.5607 | 0.6622 | 0.2714 | 1
Seq.4 | 0.1029 | 0.5633 | 0.4900 | 0.6892 | 0.7477 | 0.2229 | 0 | 0.0316 | 0 | 0.5379
Seq.5 | 0 | 0.3437 | 0.4234 | 0.3393 | 0.5350 | 0.6957 | 0 | 0 | 0.3231 | 1
Seq.6 | 0 | 0.2956 | 0.3626 | 0.8668 | 0.5800 | 0 | 0.1935 | 0.4197 | 0.2814 | 1
Seq.7 | 0 | 0 | 0 | 0.6627 | 0.4506 | 0 | 0.4121 | 0.3474 | 0.1681 | 1
Seq.8 | 0.1716 | 0.5123 | 0.4016 | 0.5342 | 0.7925 | 0.6323 | 0.3535 | 0 | 0.2359 | 1
Seq.9 | 0.2210 | 0.7423 | 0.6756 | 1 | 0.8600 | 0.7581 | 0.5324 | 0.6448 | 1 | 1
Seq.10 | 0.5016 | 0.9114 | 0.7225 | 0.7100 | 0.8574 | 0 | 0.6861 | 0.9682 | 0.4055 | 0.9951
Seq.11 | 0.1773 | 0.6337 | 0.4169 | 0.3200 | 0.6825 | 0 | 0 | 0.0966 | 0.2575 | 1
Seq.12 | 0 | 0.6149 | 0.6130 | 0.8457 | 0.7375 | 0.5780 | 0.6130 | 0.4835 | 0.1723 | 1
The red bold number represents the maximum value on each sequence.
Table 4. False alarm rates of different methods when the detection rate is 1.
Sequence | LCM | IPI | NOLC | HWLCM | TLLCM | ADMD | STLCF | STLCM | MSL-STIPT | ASIM (Ours)
Seq.1 | 0.9917 | 0.0227 | 0.1174 | 0.3823 | 0.0618 | 1 | 0.6471 | 1 | 1 | 0.0017
Seq.2 | 0.9602 | 0.0127 | 0.1560 | 1 | 0.0762 | 0.0377 | 0.6531 | 0.0608 | 1 | 0.0021
Seq.3 | 0.9997 | 0.0069 | 0.0436 | 0.7130 | 0.0312 | 0.0178 | 0.4440 | 0.2949 | 1 | 0.0003
Seq.4 | 1 | 0.0044 | 0.0687 | 0.7914 | 0.0371 | 0.0387 | 0.4233 | 0.1480 | 1 | 0.0035
Seq.5 | 0.9995 | 0.0072 | 0.1552 | 0.7100 | 0.0793 | 0.0573 | 0.8377 | 0.1011 | 1 | 0.0009
Seq.6 | 1 | 0.0105 | 0.0957 | 0.3781 | 0.0496 | 0.0264 | 0.8411 | 0.0161 | 1 | 0.0004
Seq.7 | 1 | 0.0091 | 0.1644 | 0.6112 | 0.0680 | 0.0512 | 0.7353 | 0.1015 | 1 | 0.0005
Seq.8 | 0.9912 | 0.0176 | 0.0791 | 0.3980 | 0.0439 | 0.0305 | 0.5918 | 0.0302 | 1 | 0.0004
Seq.9 | 0.9990 | 0.0131 | 0.0774 | 0.0004 | 0.0257 | 0.0143 | 0.4058 | 0.0137 | 0.0003 | 0.0004
Seq.10 | 1 | 0.0035 | 0.0749 | 0.6108 | 0.0694 | 0.0564 | 0.8647 | 0.0323 | 1 | 0.0010
Seq.11 | 1 | 0.0109 | 0.1268 | 1 | 0.0394 | 0.0255 | 0.6525 | 0.1461 | 1 | 0.0007
Seq.12 | 0.9992 | 0.0141 | 0.0850 | 0.3586 | 0.0407 | 0.0295 | 0.3816 | 0.0234 | 1 | 0.0004
The red bold number represents the minimum value on each sequence.
Table 5. Calculation time (seconds/frame) of different methods.

LCM | IPI | NOLC | HWLCM | TLLCM | ADMD | STLCF | STLCM | MSL-STIPT | ASIM (Ours)
0.1427 | 139.3701 | 1071.0000 | 3.0719 | 28.4633 | 0.0054 | 0.0135 | 0.0376 | 84.3324 | 2.6118
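The seconds/frame figures in Table 5 are the kind of number produced by a plain per-frame timing loop such as the sketch below; `detect` is a hypothetical stand-in for any of the compared methods, not an API from the paper.

```python
import time

def seconds_per_frame(detect, frames):
    """Average wall-clock processing time per frame for a detector callable."""
    start = time.perf_counter()
    for frame in frames:
        detect(frame)
    return (time.perf_counter() - start) / len(frames)
```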