Article

Target Reconstruction Based on Attributed Scattering Centers with Application to Robust SAR ATR

1 Department of Computer Science, Qiqihar Medical University, Qiqihar 161006, China
2 Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(4), 655; https://doi.org/10.3390/rs10040655
Submission received: 18 March 2018 / Revised: 15 April 2018 / Accepted: 20 April 2018 / Published: 23 April 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

This paper proposes a synthetic aperture radar (SAR) automatic target recognition (ATR) method based on target reconstruction from attributed scattering centers (ASCs). The extracted ASCs can effectively describe the electromagnetic scattering characteristics of the target while eliminating background clutter and noise; therefore, the ASCs are discriminative features for SAR ATR. A neighbor matching algorithm was used to build the correspondence between the test ASC set and the corresponding template ASC set. Afterwards, the selected template ASCs were used to reconstruct the template image, whereas all the test ASCs were used to reconstruct the test image based on the ASC model. A similarity measure was further designed based on the reconstructed images for target recognition. Compared with traditional ASC matching methods, the complex one-to-one correspondence between two ASC sets is avoided. Moreover, all the attributes of the ASCs are utilized during the target reconstruction. Therefore, the proposed method can better exploit the discriminability of ASCs to improve ATR performance. To evaluate the effectiveness and robustness of the proposed method, extensive experiments on the moving and stationary target acquisition and recognition (MSTAR) dataset were conducted under both the standard operating condition (SOC) and typical extended operating conditions (EOCs).

Graphical Abstract

1. Introduction

With its all-weather, all-day, and high-resolution capabilities, synthetic aperture radar (SAR) has been widely applied in both military and civilian fields. Automatic target recognition (ATR) is one of the key steps in SAR image interpretation and has been studied intensively in past decades [1]. Generally, a SAR ATR method involves two parts: feature extraction and classification. Feature extraction aims to find low-dimensional representations of the original SAR images while maintaining their discriminability. Therefore, the efficiency and effectiveness of a SAR ATR method are closely related to the extracted features. Various features have been applied to SAR ATR, and they can generally be divided into three categories. The first is geometrical features, which describe the physical size and shape of the target. Park et al. constructed 12 discriminative features based on the binary target region for target recognition [2]. In [3], Zernike moments were extracted from the binary target region for target recognition. Elliptical Fourier series were used as outline descriptors for SAR targets in [4]. A matching scheme for binary target regions was proposed in [5] for SAR ATR. The shadow in SAR images was classified by Papson et al. for target recognition with good performance [6]. We call the second category “projection features”: features extracted by projecting the original SAR image onto low-dimensional manifolds using transformation algorithms. Typical methods for extracting projection features include principal component analysis (PCA) [7], linear discriminant analysis (LDA) [7], and non-negative matrix factorization (NMF) [8,9], among others. Mishra applied PCA and LDA to SAR feature extraction with application to SAR ATR and compared their performances [7]. NMF was employed for target recognition of SAR images in [8,9].
Some manifold learning methods have also been used for SAR feature extraction, with good ATR performance [10,11,12]. The third category is scattering center features. SAR sensors collect the electromagnetic scattering characteristics of the target, which appear as local phenomena in SAR images (i.e., the scattering centers) [13]. As a typical representative, the attributed scattering centers (ASCs) have been demonstrated to be very effective for SAR ATR because they provide rich, physically relevant descriptions of the target [14,15,16,17,18,19,20,21,22,23]. Chiang et al. built the correspondence between two ASC sets using the Hungarian algorithm and evaluated their similarity by the posterior probability [17]. Tang et al. proposed a sequential matching algorithm for ASC sets based on the Karhunen–Loeve (KL) features [18]. In [18], the Hopcroft–Karp (HK) algorithm was employed to solve the ASC matching problem. Ding et al. focused on the similarity evaluation between two ASC sets by exploiting the structural information contained in the ASC set [20,21,22]. In [23], Zhou et al. proposed a region-to-point matching method based on the 3D global scattering center model, thus avoiding the direct matching of two scattering center sets. In the classification stage, different classifiers are adopted to determine the target label based on the extracted features. With the fast development of machine learning techniques, many advanced classifiers have been successfully applied to SAR ATR, such as adaptive boosting (Adaboost) [24], discriminative graphical models [25], support vector machines (SVM) [26,27], sparse representation-based classification (SRC) [28,29,30], and convolutional neural networks (CNN) [31,32]. Zhao et al. used SVM for SAR ATR and achieved much better performance than traditional template-based methods [26].
Inspired by face recognition [33], SRC has also been successfully applied to SAR ATR with very good performance and robustness [28,29,30]. Because of their powerful classification capability, different kinds of CNNs have been designed for SAR target recognition with notably high effectiveness [31,32]. However, these classifiers may only adapt to features with uniform forms, for example, PCA feature vectors of the same dimensionality. For features with different forms (e.g., the unordered ASCs), a similarity measure is often designed to evaluate the similarity between the corresponding features, and the target type is then determined based on the maximum similarity.
In this study, a SAR ATR method is proposed based on the ASCs. Previous ATR methods using ASCs have several defects. First, building a precise one-to-one correspondence between two ASC sets is a complex and difficult problem, for two reasons. On one hand, the precise matching of two ASC sets involves much computation, which restricts the real-time processing capability of the ATR system. On the other hand, the two ASC sets may contain different numbers of ASCs; the resulting outliers during matching increase the difficulty and complexity of building the correspondence. Second, the previous methods do not make full use of all the attributes of the ASCs. For example, only the spatial positions and amplitudes were used in [18,20], so the discriminability of the ASCs may not be fully exploited. Third, the similarity evaluation is also complex: to account for the outliers and impreciseness during matching, the similarity measures designed in the previous methods often have complex forms. The proposed method addresses these defects. First, a neighbor matching algorithm was used to build the correspondence between two ASC sets extracted from the test image and the corresponding template image, respectively. The positions of the test ASCs were taken as baseline points, and the template ASCs within a certain neighborhood were selected. Then, the selected template ASCs were used to reconstruct the target characteristics based on the ASC model, while all the test ASCs were used to reconstruct the test image. Finally, a simple but effective similarity measure, based on the idea of image correlation, was defined to evaluate the similarity between the reconstructed images for target recognition. The similarity measure comprehensively considers the consistency and discrepancy between the two reconstructed images.
Figure 1 illustrates the technical flowchart of the proposed method. In detail, the template database selects the corresponding template samples according to the estimated azimuth of the test sample. Afterwards, the ASC sets of the test and template samples are extracted and matched using the neighbor matching algorithm. The template images are reconstructed based on the selected ASCs, whereas the test image is reconstructed based on all the test ASCs. Finally, the similarities are evaluated based on the reconstructed images for target recognition. Compared with previous works, the proposed method has several advantages. First, an efficient and effective neighbor matching algorithm is proposed to build the correspondence between two ASC sets. Under the general idea of the proposed method, it is not necessary to build a precise one-to-one correspondence between two ASC sets; the neighbor matching algorithm can efficiently build the correspondence, and the selected ASCs can be used for the subsequent target reconstruction. Second, all the attributes of the ASCs are used during the target reconstruction: the attributes of the test ASCs and the selected template ASCs are fully exploited when reconstructing the targets’ characteristics based on the ASC model. Third, a simple but effective similarity measure is designed to evaluate the similarities between the test ASC set and the template ASC sets from different classes based on the reconstructed images. On one hand, the reconstructed images can effectively eliminate background noise and clutter, so the true target characteristics can be compared. On the other hand, the similarity measure based on image correlation is notably efficient.
The remainder of this paper is organized as follows. Section 2 describes the ASC model and the extraction of ASCs. In Section 3, the main idea of the proposed target recognition method is presented. Experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset [34] in Section 4 to comprehensively evaluate the proposed method. Conclusions are drawn in Section 5 with some further discussions.

2. Extraction of Attributed Scattering Centers

2.1. Attributed Scattering Center Model

The high-frequency radar backscattering of an object can be well modeled as the sum of responses from individual scattering centers as follows [13].
$$E(f,\phi;\Theta)=\sum_{i=1}^{p}E_i(f,\phi;\theta_i) \qquad (1)$$
For a single ASC, the backscattering field is described as a function of the frequency f and aspect angle ϕ as Equation (2).
$$E_i(f,\phi;\theta_i)=A_i\left(\frac{jf}{f_c}\right)^{\alpha_i}\exp\left(-j\frac{4\pi f}{c}\left(x_i\cos\phi+y_i\sin\phi\right)\right)\operatorname{sinc}\left(\frac{2\pi f}{c}L_i\sin\left(\phi-\bar{\phi}_i\right)\right)\exp\left(-2\pi f\gamma_i\sin\phi\right) \qquad (2)$$
where c is the propagation velocity of the electromagnetic wave, f_c is the radar center frequency, and Θ = {θ_i} (i = 1, 2, …, p) with θ_i = [A_i, α_i, x_i, y_i, L_i, φ̄_i, γ_i] denotes the attribute set of the i-th ASC, in which A_i denotes the amplitude; (x_i, y_i) is the spatial position; α_i represents the frequency dependence; L_i and φ̄_i are the length and orientation of a distributed ASC; and γ_i is the aspect dependence of a localized ASC.
The ASCs provide rich, physically relevant descriptions of the whole target as well as its local structures. In detail, the amplitude reflects the relative intensities of different ASCs. The positions and length describe the spatial distribution of the ASCs, which can also reflect the target shape. The frequency dependence relates directly to the geometry of the scattering structure, such as a dihedral or trihedral. Therefore, the ASCs are discriminative features for SAR ATR. In addition, the local descriptions contained in the ASCs are beneficial for reasoning under some extended operating conditions (EOCs), where a part of the target is contaminated, such as configuration variance, partial occlusion, and others.
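The single-ASC response of Equation (2) and the summation of Equation (1) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; the default center frequency f_c = 9.6 GHz is an assumption (typical of the X-band MSTAR sensor), not a value given in this section.

```python
import numpy as np

def asc_response(f, phi, A, alpha, x, y, L=0.0, phi_bar=0.0, gamma=0.0,
                 fc=9.6e9, c=3e8):
    """Backscattered field of one attributed scattering center, Eq. (2).
    f: frequency in Hz; phi: aspect angle in rad (scalars or broadcastable arrays).
    fc = 9.6 GHz is an assumed X-band center frequency."""
    freq_term = A * (1j * f / fc) ** alpha
    pos_term = np.exp(-1j * 4 * np.pi * f / c * (x * np.cos(phi) + y * np.sin(phi)))
    # np.sinc(v) = sin(pi*v)/(pi*v), so pass v = 2*f*L*sin(phi - phi_bar)/c
    # to realize sinc(2*pi*f*L*sin(phi - phi_bar)/c) in the unnormalized sense
    dist_term = np.sinc(2 * f * L * np.sin(phi - phi_bar) / c)
    loc_term = np.exp(-2 * np.pi * f * gamma * np.sin(phi))
    return freq_term * pos_term * dist_term * loc_term

def total_field(f, phi, asc_params, **kw):
    """Total backscattering, Eq. (1): sum of the individual ASC responses.
    asc_params is a list of attribute dicts, one per ASC."""
    return sum(asc_response(f, phi, **params, **kw) for params in asc_params)
```

For a localized ASC (L = 0, γ = 0) at the scene origin, the response magnitude at f = f_c reduces to the amplitude A, which gives a quick sanity check on the model terms.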

2.2. ASC Extraction

Due to the complex form of the ASC model in Equations (1) and (2), the extraction of ASCs is a difficult problem. Since the proposal of the ASC model, several algorithms have been developed to improve the precision of ASC extraction [13,17,35,36]. In this study, the classical approximate maximum likelihood (AML) algorithm was adopted for ASC extraction, which was demonstrated to be effective on the measured SAR images (e.g., the MSTAR dataset). The detailed explanation and derivation of the AML algorithm can be found in the relevant references [17,20]. The main steps of the AML algorithm can be summarized as Algorithm 1.
Algorithm 1. AML algorithm.
Input: SAR image I , model order p , and residue threshold T r .
1. Segment the region R around the highest peak of I using the watershed algorithm.
2. Initialize the parameters of the segment as θ 0 .
3. Optimize the parameters of the segment to estimate θ e .
4. Reconstruct the image by θ e and subtract the reconstructed image from I .
5. Repeat steps 1–4 to estimate the parameters of the next ASC until the number of extracted ASCs reaches p or the residue energy falls below T r .
Output: The ASC set Θ .
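As a rough illustration of the iterative extract-and-subtract structure of Algorithm 1 — and only that structure, since the real AML algorithm segments a watershed region and optimizes the full attribute vector θ — a CLEAN-style sketch with ideal point scatterers (L = 0, α = 0) might look like:

```python
import numpy as np

def extract_point_scatterers(img, p=5, t_res=0.05):
    """Heavily simplified sketch of the AML loop of Algorithm 1.
    Each scattering center is modeled as an ideal point scatterer, so the
    'reconstruction' of one center is a single-pixel impulse that is
    subtracted from the residual. t_res plays the role of the residue
    threshold T_r, here as a fraction of the total image energy."""
    residual = img.astype(complex).copy()
    total_energy = np.sum(np.abs(img) ** 2)
    centers = []
    for _ in range(p):
        # step 1 (simplified): locate the highest peak of the residual
        h, w = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        # steps 2-3 (simplified): the 'estimate' is position + complex amplitude
        centers.append({'row': h, 'col': w, 'A': residual[h, w]})
        # step 4: subtract the reconstructed (impulse) image
        residual[h, w] = 0.0
        # step 5: stop when the residue energy is small enough
        if np.sum(np.abs(residual) ** 2) < t_res * total_energy:
            break
    return centers, residual
```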
Figure 2 shows the reconstruction of a measured BMP2 SAR image from the MSTAR dataset using the extracted ASCs. Figure 2a,b show the original image and reconstructed image, respectively. Intuitively, the dominant target characteristics are maintained in the reconstructed image. Quantitatively, the reconstructed image accounts for about 95% of the total energy of the centered 80 × 80 target region (assuming the target region is located at the rectangular region at the center of the image). The reconstruction residues are mainly caused by the background noises as well as small extraction errors. It is also notable that the extracted ASCs mainly convey the target’s characteristics, which can effectively eliminate the background noises.

3. Target Reconstruction for SAR ATR

3.1. Neighbor Matching

In practical applications, the ASC sets from the test image and its corresponding template image often contain different numbers of ASCs. As a result, building a precise one-to-one correspondence between the two ASC sets is a complex and difficult problem. In this study, a neighbor matching algorithm was used instead. Circles of radius R were centered at the positions of the test ASCs, and the union of these circles formed a binary region. A template ASC falling inside this binary region is selected; otherwise, it is discarded. The detailed procedure of neighbor matching is given in Algorithm 2.
Algorithm 2. Neighbor matching algorithm.
Input: Template ASC set A = [ a 1 , a 2 , , a M ] , test ASC set B = [ b 1 , b 2 , , b N ] , neighbor radius R .
for i = 1 : M
1. Compute the Euclidean distances between a i and b j ( j = 1 , 2 , … , N ) based on their spatial positions, denoted d j ( j = 1 , 2 , … , N ).
2. If min j = 1 , … , N d j ≤ R , a i is selected; otherwise, it is discarded.
end
Output: The selected template ASC set A s .
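Algorithm 2 reduces to a pairwise-distance threshold and can be sketched directly in NumPy; `neighbor_match` and its array layout are illustrative choices, not from the paper:

```python
import numpy as np

def neighbor_match(template_pos, test_pos, radius):
    """Neighbor matching (Algorithm 2): a template ASC is selected if its
    position lies within `radius` of at least one test ASC position.
    template_pos: (M, 2) and test_pos: (N, 2) arrays of (x, y) in metres."""
    template_pos = np.asarray(template_pos, float)
    test_pos = np.asarray(test_pos, float)
    # pairwise Euclidean distances, shape (M, N)
    d = np.linalg.norm(template_pos[:, None, :] - test_pos[None, :, :], axis=2)
    selected = d.min(axis=1) <= radius       # template ASCs kept (the set A_s)
    matched_test = d.min(axis=0) <= radius   # test ASCs covered by a template ASC
    return selected, matched_test
```

The second return value counts the matched test ASCs, which the similarity measure of Section 3.2 also needs.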
Figure 3 presents an illustration of the neighbor matching when the radius is set as 0.3 m. The test ASCs from a BMP2 image (shown in Figure 2a) are matched with the corresponding template ASCs from BMP2. The matched template ASCs, unmatched template ASCs, matched test ASCs, and unmatched test ASCs are represented by different markers. Notably, the radius R plays a very important role in the neighbor matching. By changing the radius from 0.3 m to 0.4 m and 0.5 m, the corresponding matching results are shown in Figure 4a,b, respectively. In comparison with Figure 3, a larger radius will result in more selections of the template and test ASCs. With the radius from small to large, the neighbor matching is conducted from fine to coarse. At the radius of 0.5 m, only four template ASCs are unmatched due to the local differences caused by noises, extraction errors, and other factors. The neighbor matching results between the test ASC set and the corresponding template ASC sets from T72 and BTR70 at the radius of 0.3 m are shown in Figure 5a,b, respectively. Compared with the corresponding result of BMP2, it is clear that more template and test ASCs are unmatched when the target class is not the right one. Therefore, the matching results can provide discriminability for distinguishing different targets.

3.2. Target Reconstruction for Recognition

The selected template ASCs were used to reconstruct a new image based on the ASC model in Equations (1) and (2). In detail, the attributes of each selected template ASC were put into Equation (2) to calculate its backscattering characteristics. Afterwards, the individual responses of all the selected ASCs were combined as Equation (1). Finally, the total backscattering field was transformed into the image domain using the same imaging algorithm as the original image [37]. Figure 6 shows the reconstructed template images based on the matching results from BMP2, T72, and BTR70, as shown in Figure 3 and Figure 5a,b, respectively. When more template ASCs were selected, more details of the target were present in the reconstructed image. Meanwhile, all the test ASCs were used to reconstruct the test image, as shown in Figure 2. The main objective was to eliminate the background noises in the original image. In addition, it is more reasonable to compare the reconstructed test image with the reconstructed template image. By comparing Figure 2 with Figure 6, it is clear that the reconstructed template image from the true class shares a much higher similarity with the reconstructed test image than those from incorrect classes. Therefore, it is easier to make correct decisions on the target type using the reconstructed images.
A similarity measure was designed based on the reconstructed images for target recognition. The image correlation between the reconstructed test image and the reconstructed template image was calculated as the preliminary similarity. Considering the outliers in the test ASC set, the proportion of matched test ASCs was also incorporated into the final similarity measure. Therefore, for the reconstructed test image I T and reconstructed template image I R , their normalized similarity was computed as follows.
$$f_s(I_T,I_R)=\mathrm{Cor}(I_T,I_R)\cdot\frac{M_s}{M} \qquad (3)$$
where M s denotes the number of matched test ASCs, M is the total number of test ASCs, and Cor ( I T , I R ) represents the image correlation between the two images, computed as Equation (4) [38].
$$\mathrm{Cor}(I_T,I_R)=\max_{\Delta x,\Delta y}\left(\frac{\sum_{x}\sum_{y}\left[I_T(x,y)-m_1\right]\left[I_R(x-\Delta x,y-\Delta y)-m_2\right]}{\left\{\sum_{x}\sum_{y}\left[I_T(x,y)-m_1\right]^2\sum_{x}\sum_{y}\left[I_R(x-\Delta x,y-\Delta y)-m_2\right]^2\right\}^{1/2}}\right) \qquad (4)$$
where m 1 and m 2 denote the mean pixel value of I T and I R , respectively; Δ x and Δ y are the shifts in the x , y directions. In this study, it was assumed that the MSTAR images were well aligned, thus Δ x and Δ y were set to be 0.
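A minimal sketch of the similarity of Equations (3) and (4) under the paper's zero-shift assumption (Δx = Δy = 0, since the MSTAR images are taken as aligned); the function names are illustrative:

```python
import numpy as np

def image_correlation(it, ir):
    """Normalized cross-correlation, Eq. (4) with Dx = Dy = 0
    (the images are assumed already aligned)."""
    a = it - it.mean()   # subtract m_1
    b = ir - ir.mean()   # subtract m_2
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)

def similarity(it, ir, m_s, m):
    """Final similarity of Eq. (3): correlation weighted by the proportion
    of matched test ASCs (m_s matched out of m total)."""
    return image_correlation(it, ir) * m_s / m
```

An image correlated with itself scores 1, so the weighting term M_s/M alone controls how much unmatched test ASCs (outliers) penalize the score.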
When the test ASCs were matched with the truly corresponding target class, more template ASCs were selected and the reconstructed template image shared a higher correlation with the reconstructed test image. Then, a higher similarity was produced by evaluating Equation (3). For the incorrect classes, only a few template ASCs were selected, and the reconstructed template images tended to share low similarity with the reconstructed test image. Therefore, the designed similarity measure is discriminative to separate different classes of targets.
The procedure of the proposed target recognition method is shown in Figure 1. As it is difficult to define a single proper radius, the neighbor matching was performed at several different radii, from fine to coarse. At each radius, a similarity was obtained using Equation (3); the average of all these similarities was then employed as the final similarity between the test image and its corresponding template image. In detail, the azimuth estimation method proposed in [24] was used to estimate the azimuth of the test image. Due to the azimuthal sensitivity of SAR images [39], the corresponding template samples could be effectively selected according to the estimated azimuth. Considering possible estimation errors, the template samples in the interval of [−3°, 3°] around the estimated azimuth were all selected and their average similarity was adopted. The radius set for neighbor matching was chosen to be {0.3 m, 0.4 m, 0.5 m} in this study, based on the resolution of the MSTAR images as well as experimental observations.
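The multi-radius, multi-template decision rule described above might be sketched as follows; `test_sim` is a hypothetical callable standing in for the full matching-reconstruction-similarity pipeline (it is not a function defined in the paper), and `template_db` maps each class to its azimuth-selected templates:

```python
import numpy as np

RADII = (0.3, 0.4, 0.5)  # metres, the radius set chosen in the paper

def classify(test_sim, template_db):
    """Decision-rule sketch: for every candidate class, average the Eq. (3)
    similarities over all matching radii and over the azimuth-neighbouring
    templates, then pick the class with the largest average score."""
    scores = {}
    for cls, templates in template_db.items():
        sims = [test_sim(t, r) for t in templates for r in RADII]
        scores[cls] = float(np.mean(sims))
    return max(scores, key=scores.get), scores
```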

4. Experiment

4.1. Data Preparation

The MSTAR dataset was used for the experiments, as it is a famous benchmark for the evaluation of SAR ATR methods. The dataset includes SAR images of 10 classes of military targets, whose optical images are shown in Figure 7. The MSTAR images were collected by X-band SAR sensors at a resolution of 0.3 m × 0.3 m, covering the full range of azimuth angles. Table 1 lists the template and test sets, which were collected at depression angles of 17° and 15°, respectively.
Several prevalent SAR ATR methods were used for comparison, as briefly described in Table 2. In detail, SVM [26] and SRC [28] were used to classify the 80-dimensional feature vectors extracted by PCA. CNN [31] was performed directly on the original intensities for training and testing. The ASC matching methods from [20] and [21] were also compared, denoted as “ASC1” and “ASC2”, respectively. In the remainder of this section, the proposed method is first evaluated under the standard operating condition (SOC) on the 10 classes of targets. Afterwards, experiments conducted under several typical EOCs (i.e., configuration variance, large depression angle variance, noise corruption, and partial occlusion) are discussed.

4.2. Recognition under SOC

The template and test sets in Table 1 were used for the recognition experiment under SOC. According to Table 1, the only difference between the test and template samples was a small depression angle variance of 2°. Table 3 presents the confusion matrix of the proposed method. All 10 targets could be recognized with percentages of correct classification (PCCs) over 98%, resulting in an average PCC of 99.04%.
The performances of the different methods are compared in Table 4. Under SOC, the test samples were quite similar to the template ones; as a result, all the methods achieved PCCs higher than 97%. CNN ranked first, owing to its excellent classification capability when the training data are sufficient [28]. With a slightly lower PCC, the proposed method achieved the second-best performance. Compared with the ASC1 and ASC2 methods, the proposed method outperformed them by a notable margin. The main reason is that the proposed method makes full use of all the attributes of the ASCs during target reconstruction, thus providing more discriminability for distinguishing different classes of targets. The results demonstrate that the proposed method can better exploit the discrimination capability of ASCs to improve ATR performance.

4.3. Recognition under EOCs

The various EOCs in real-world scenarios are the main obstacles to the practical application of SAR ATR systems. Therefore, the designed ATR methods should be robust to different EOCs. In this subsection, the performance of the proposed method under several typical EOCs (i.e., configuration variance, large depression angle variance, noise corruption, and partial occlusion) is discussed.

4.3.1. Configuration Variance

Due to physical differences and structural modifications, a certain target may have several different configurations [12]. The template set may contain only one specific configuration, whereas the test images may come from different configurations. Table 5 lists the template and test sets used for the examination under configuration variance. For BMP2 and T72, the configurations used for testing are not contained in the template set. Figure 8 shows the optical and SAR images of four configurations of the T72 tank. It can be observed from the optical images that the four configurations have some local structural modifications on the turret, fuel drums, and elsewhere. Table 6 presents the detailed recognition results of the proposed method under configuration variance. All the configurations used for testing can be classified with PCCs over 94%, resulting in an average of 96.61%. The average PCCs of the different methods are compared in Table 7. With the highest PCC, the proposed method is validated as the most robust to configuration variance. In addition, the ASC matching methods showed better performance than SVM, SRC, and CNN, mainly owing to the advantages of ASCs. As shown in Figure 8, only some local structures of the target were modified, and the majority of the ASCs remained stable under configuration variance. Therefore, by building the correspondence between two ASC sets, the local variations can be sensed, thus improving the robustness. For the proposed method, the ASCs that remain stable under configuration variance can be selected and used for target reconstruction, and their discriminability can be better maintained.

4.3.2. Large Depression Angle Variance

The test SAR images were collected at different depression angles from the template set. Considering that SAR images with large depression angle variance may have notably different appearances [40], it is crucial that ATR methods remain robust under this condition. As shown in Table 8, the SAR images of 2S1, BRDM2, and ZSU23/4 at 30° and 45° depression angles were tested, while their images at a 17° depression angle were used as the template set. Figure 9 shows the SAR images of 2S1 at different depression angles. The detailed recognition results of the proposed method under large depression angle variance are presented in Table 9. At a 30° depression angle, the performance of the proposed method remained at a high level, over 98%. However, when the depression angle changed to 45°, the average PCC decreased sharply to 73.93%. As shown in Figure 9, the SAR image at a 30° depression angle shares much more similar characteristics with the template SAR image at 17° than the one at 45°.
Table 10 compares the performances of the different methods. With the highest PCCs at both depression angles, the proposed method achieved the best robustness to large depression angle variance. The methods using ASCs performed better than the others, for two reasons. On one hand, the ASCs only describe the electromagnetic scattering characteristics in the target region, whereas for global features (e.g., the PCA features or image intensities), the shadow is also considered. As shown in Figure 9, the shadows in images with large depression angle variance have notably different shapes; the use of the shadow may therefore introduce confusing information, which worsens ATR performance. On the other hand, similar to the condition of configuration variance, large depression angle variance causes local variations of the target while some local ASCs remain stable. Therefore, the ASCs are more discriminative features for SAR ATR under large depression angle variance. In the proposed method, the stable template ASCs were selected and used for target reconstruction; then, the test image still shared the largest similarity with its truly corresponding class.

4.3.3. Noise Corruption

There is plenty of noise in real-world scenarios, arising from the background environment, the radar system, and other sources. Therefore, robustness to noise corruption is also a highly desired merit of an ATR method. To test the performance of the proposed method under noise corruption, different levels of additive white Gaussian noise (AWGN) [41,42] were added to the original test samples in Table 1 according to a preset signal-to-noise ratio (SNR), defined as follows.
$$\mathrm{SNR}\,(\mathrm{dB})=10\log_{10}\frac{\sum_{h=1}^{H}\sum_{w=1}^{W}\left|r(h,w)\right|^2}{HW\sigma^2} \qquad (5)$$
where r ( h , w ) denotes the pixel intensity of the original MSTAR image and σ 2 is the variance of the AWGN. Figure 10 shows the noisy images at different SNRs.
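The noise-injection step can be sketched as follows, solving Equation (5) for σ² given a target SNR. The sketch assumes real-valued intensity images; the function name is illustrative.

```python
import numpy as np

def add_awgn(img, snr_db, rng=None):
    """Corrupt an (H, W) intensity image with additive white Gaussian noise
    whose variance sigma^2 is derived from the target SNR via Eq. (5)."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    signal_power = np.sum(np.abs(img) ** 2) / (h * w)   # mean per-pixel energy
    sigma2 = signal_power / (10 ** (snr_db / 10))       # invert Eq. (5)
    return img + rng.normal(0.0, np.sqrt(sigma2), size=img.shape)
```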
The noisy test samples were classified by the different methods based on the original template samples (see Table 1), and their performances are plotted in Figure 11. With the highest PCC at each SNR, the proposed method achieved better robustness than the others. The good performance of the proposed method benefits from the advantages of ASCs as well as the proposed classification scheme. On one hand, ASC extraction can effectively eliminate background noise, as shown in Figure 2. In addition, the AML algorithm takes the noise into consideration during ASC extraction, so it still works robustly under noise corruption. This is also the reason why the ASC1 and ASC2 methods achieved higher PCCs than SVM, SRC, and CNN at SNRs lower than 5 dB. On the other hand, the target reconstruction in the proposed method fully exploits the noise-robust ASCs to further improve the recognition performance.

4.3.4. Partial Occlusion

The target may be occluded by obstacles in the background environment, in which case only a part of the target characteristics is present in the captured SAR image. As a simulation, occluded SAR images were generated by removing a certain proportion of the original target region from eight directions according to the occlusion model in [15,42]. Figure 12 shows the occluded SAR images from eight different directions at an occlusion level of 20%. By classifying the occluded test samples based on the original template set, the average PCCs of the different methods over all eight directions are compared in Figure 13. It is clear that the average PCCs of all the methods decreased sharply as the partial occlusion worsened. In comparison, the proposed method achieved the best performance under partial occlusion. In addition, the PCCs of the ASC1 and ASC2 methods surpassed those of SVM, SRC, and CNN when the occlusion level exceeded 30%. When the target is partially occluded, only some of its local structures remain discriminative for target recognition. As local descriptors, the ASCs are more capable of handling this condition than global features, such as PCA features or image intensities. For the proposed method, the remaining ASCs can still be selected for target reconstruction; hence, the test image tends to share the largest similarity with its truly corresponding class.
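A simplified sketch of directional occlusion follows. This is an assumption about the occlusion model of [15,42], which may differ in detail; here a fixed fraction of the nonzero target pixels is consumed column by column from one side, and only two of the eight directions are shown.

```python
import numpy as np

def occlude(img, level, direction='left'):
    """Zero out a fraction `level` of the nonzero target pixels,
    consumed column-by-column from the given direction
    (simplified stand-in for the eight-direction occlusion model)."""
    out = img.copy()
    n_remove = int(round(level * np.count_nonzero(img)))
    cols = range(img.shape[1]) if direction == 'left' \
        else range(img.shape[1] - 1, -1, -1)
    for c in cols:
        if n_remove <= 0:
            break
        nz = np.flatnonzero(out[:, c])      # remaining target pixels in column c
        take = nz[:n_remove]
        out[take, c] = 0.0
        n_remove -= take.size
    return out
```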

5. Conclusions

This paper proposes a target recognition method for SAR images based on ASCs. A neighbor matching algorithm was first designed to build the correspondence between the test and template ASC sets with high efficiency. Then, target reconstruction was performed to reconstruct the test image and template image based on the ASC model. A similarity measure was designed based on the reconstructed images for target recognition. The neighbor matching could efficiently find the association between the two ASC sets, and the target reconstruction could fully exploit the discriminability of the ASCs to improve ATR performance. Based on the experimental results on the MSTAR dataset, several conclusions can be drawn. First, the ASCs are highly discriminative features for SAR ATR under both SOC and various EOCs. Second, the proposed method can better exploit the discriminability of ASCs, with higher efficiency and effectiveness than traditional ASC matching methods. Third, the proposed method can achieve better performance than several state-of-the-art SAR ATR methods, which demonstrates its superior effectiveness and robustness.
Future works can be conducted from two aspects. On one hand, a more robust ASC extraction method should be developed to improve the precision and efficiency. On the other hand, the similarity measure for the reconstructed images should be further researched to improve the effectiveness and robustness.
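As a minimal illustration of the neighbor matching idea (the radius-based association shown in Figures 3–5), the sketch below selects the template ASCs that lie within a given radius of any test ASC position. The function name and the exact selection rule are our assumptions; the paper's algorithm operates on full ASC attribute sets, whereas this sketch uses 2-D positions only:

```python
import numpy as np

def neighbor_match(test_pos, template_pos, radius=0.3):
    """Return indices of template ASCs having at least one test ASC
    within `radius` (meters).

    `test_pos` and `template_pos` are arrays of (x, y) scattering
    center positions. A simplified sketch of neighbor matching."""
    test_pos = np.asarray(test_pos, dtype=float)
    template_pos = np.asarray(template_pos, dtype=float)
    # pairwise distances: template (rows) vs. test (columns)
    d = np.linalg.norm(template_pos[:, None, :] - test_pos[None, :, :], axis=-1)
    selected = d.min(axis=1) <= radius          # template ASCs with a test neighbor
    return np.nonzero(selected)[0]
```

The selected template ASCs would then drive the template image reconstruction, while all test ASCs reconstruct the test image, so the complex one-to-one assignment of traditional ASC matching is avoided.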

Acknowledgments

This work was supported by the Science and Technology Project under Grant No. 12531826 of Education Department, Heilongjiang, China.

Author Contributions

Jihong Fan proposed the general idea of the method and performed the experiments. Andrew Tomas reviewed the idea and provided the dataset. This manuscript was written by Jihong Fan.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. El-Darymli, K.; Gill, E.W.; McGuire, P.; Power, D.; Moloney, C. Automatic Target Recognition in Synthetic Aperture Radar Imagery: A State-of-the-Art Review. IEEE Access 2016, 4, 6014–6058. [Google Scholar] [CrossRef]
  2. Park, J.; Park, S.; Kim, K. New discrimination features for SAR automatic target recognition. IEEE Geosci. Remote Sens. Lett. 2013, 10, 476–480. [Google Scholar] [CrossRef]
  3. Amoon, M.; Rezai-rad, G. Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moment features. IET Comput. Vis. 2014, 8, 77–85. [Google Scholar] [CrossRef]
  4. Anagnostopoulos, G.C. SVM-based target recognition from synthetic aperture radar images using target region outline descriptors. Nonlinear Anal. 2009, 71, e2934–e2939. [Google Scholar] [CrossRef]
  5. Ding, B.Y.; Wen, G.J.; Ma, C.H.; Yang, X.L. Target recognition in synthetic aperture radar images using binary morphological operations. J. Appl. Remote Sens. 2016, 10, 046006. [Google Scholar] [CrossRef]
  6. Papson, S.; Narayanan, R.M. Classification via the shadow region in SAR imagery. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 969–980. [Google Scholar] [CrossRef]
  7. Mishra, A.K. Validation of PCA and LDA for SAR ATR. In Proceedings of the 2008 IEEE Region 10 Conference, Hyderabad, India, 19–21 November 2008; pp. 1–6. [Google Scholar]
  8. Cui, Z.Y.; Cao, Z.J.; Yang, J.Y.; Feng, J.L.; Ren, H.L. Target recognition in synthetic aperture radar via non-negative matrix factorization. IET Radar Sonar Navig. 2015, 9, 1376–1385. [Google Scholar] [CrossRef]
  9. Dang, S.H.; Cui, Z.Y.; Cao, Z.J.; Liu, N.Y. SAR target recognition via incremental nonnegative matrix factorization. Remote Sens. 2018, 10, 374. [Google Scholar] [CrossRef]
  10. Huang, Y.L.; Pei, J.F.; Yang, J.Y.; Liu, X. Neighborhood geometric center scaling embedding for SAR ATR. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 180–192. [Google Scholar] [CrossRef]
  11. Yu, M.T.; Zhao, L.J.; Zhang, S.Q.; Xiong, B.L.; Kuang, G.Y. SAR target recognition using parametric supervised t-stochastic neighbor embedding. Remote Sens. Lett. 2017, 8, 849–858. [Google Scholar] [CrossRef]
  12. Yu, M.T.; Dong, G.G.; Fan, H.Y.; Kuang, G.Y. SAR target recognition via local sparse representation of multi-manifold regularized low-rank approximation. Remote Sens. 2018, 10, 211. [Google Scholar] [CrossRef]
  13. Gerry, M.J.; Potter, L.C.; Gupta, I.J.; van der Merwe, A. A parametric model for synthetic aperture radar measurements. IEEE Trans. Antennas Propagat. 1999, 47, 1179–1188. [Google Scholar] [CrossRef]
  14. Potter, L.C.; Moses, R.L. Attributed scattering centers for SAR ATR. IEEE Trans. Image Process. 1997, 6, 79–91. [Google Scholar] [CrossRef] [PubMed]
  15. Bhanu, B.; Lin, Y. Stochastic models for recognition of occluded targets. Pattern Recognit. 2003, 36, 2855–2873. [Google Scholar] [CrossRef]
  16. Ding, B.Y.; Wen, G.J.; Huang, X.H.; Ma, C.H.; Yang, X.L. Data augmentation by multilevel reconstruction using attributed scattering center for SAR target recognition. IEEE Geosci. Remote Sens. Lett. 2017, 14, 979–983. [Google Scholar] [CrossRef]
  17. Chiang, H.; Moses, R.L.; Potter, L.C. Model-based classification of radar images. IEEE Trans. Inf. Theory 2000, 46, 1842–1854. [Google Scholar] [CrossRef]
  18. Tang, T.; Su, Y. Object recognition based on feature matching of scattering centers in SAR imagery. In Proceedings of the 5th International Congress on Image and Signal Processing, Chongqing, China, 16–18 October 2012. [Google Scholar]
  19. Tian, S.R.; Yin, K.Y.; Wang, C.; Zhang, H. An SAR ATR method based on scattering center feature and bipartite graph matching. IETE Tech. Rev. 2015, 32, 364–375. [Google Scholar] [CrossRef]
  20. Ding, B.Y.; Wen, G.J.; Zhong, J.R.; Ma, C.H.; Yang, X.L. Robust method for the matching of attributed scattering centers with application to synthetic aperture radar automatic target recognition. J. Appl. Remote Sens. 2016, 10, 016010. [Google Scholar] [CrossRef]
  21. Ding, B.Y.; Wen, G.J.; Huang, X.H.; Ma, C.H.; Yang, X.L. Target recognition in synthetic aperture radar images via matching of attributed scattering centers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3334–3347. [Google Scholar] [CrossRef]
  22. Ding, B.Y.; Wen, G.J.; Zhong, J.R.; Ma, C.H.; Yang, X.L. A robust similarity measure for attributed scattering center sets with application to SAR ATR. Neurocomputing 2017, 219, 130–143. [Google Scholar] [CrossRef]
  23. Zhou, J.X.; Shi, Z.G.; Cheng, X.; Fu, Q. Automatic target recognition of SAR images based on global scattering center model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3713–3729. [Google Scholar]
  24. Sun, Y.J.; Liu, Z.P.; Todorovic, S.; Li, J. Adaptive boosting for SAR automatic target recognition. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 112–125. [Google Scholar] [CrossRef]
  25. Srinivas, U.; Monga, V.; Raj, R.G. SAR automatic target recognition using discriminative graphical models. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 591–606. [Google Scholar] [CrossRef]
  26. Zhao, Q.; Principe, J.C. Support vector machines for SAR automatic target recognition. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 643–654. [Google Scholar] [CrossRef]
  27. Liu, H.C.; Li, S.T. Decision fusion of sparse representation and support vector machine for SAR image target recognition. Neurocomputing 2013, 113, 97–104. [Google Scholar] [CrossRef]
  28. Song, H.B.; Ji, K.F.; Zhang, Y.S.; Xing, X.W.; Zou, H.X. Sparse representation-based SAR image target classification on the 10-class MSTAR Data set. Appl. Sci. 2016, 6, 26. [Google Scholar] [CrossRef]
  29. Thiagarajan, J.; Ramamurthy, K.; Knee, P.P.; Spanias, A.; Berisha, V. Sparse representation for automatic target classification in SAR images. In Proceedings of the 4th International Symposium on Communications, Control and Signal Processing (ISCCSP), Limassol, Cyprus, 3–5 March 2010. [Google Scholar]
  30. Ding, B.Y.; Wen, G.J. Sparsity constraint nearest subspace classifier for target recognition of SAR images. J. Vis. Commun. Image Represent. 2018, 52, 170–176. [Google Scholar] [CrossRef]
  31. Chen, S.Z.; Wang, H.P.; Xu, F.; Jin, Y.Q. Target classification using the deep convolutional networks for SAR images. IEEE Trans. Geosci. Remote Sens. 2016, 47, 1685–1697. [Google Scholar] [CrossRef]
  32. Huang, Z.L.; Pan, Z.X.; Lei, B. Transfer learning with deep convolutional neural networks for SAR target classification with limited labeled data. Remote Sens. 2017, 9, 907. [Google Scholar] [CrossRef]
  33. Wright, J.; Yang, A.; Ganesh, A.; Sastry, S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar] [CrossRef] [PubMed]
  34. The Air Force Moving and Stationary Target Recognition Database. Available online: http://www.sdms.afrl.af.mil/datasets/mstar/ (accessed on 5 May 2014).
  35. Liu, H.W.; Jiu, B.; Li, F.; Wang, Y.H. Attributed scattering center extraction algorithm based on sparse representation with dictionary refinement. IEEE Trans. Antennas Propagat. 2017, 65, 2604–2614. [Google Scholar] [CrossRef]
  36. Cong, Y.L.; Chen, B.; Liu, H.W.; Jiu, B. Nonparametric Bayesian attributed scattering center extraction for synthetic aperture radar targets. IEEE Trans. Signal Process. 2016, 64, 4723–4736. [Google Scholar] [CrossRef]
  37. Ding, B.Y.; Wen, G.J. Target recognition of SAR images based on multi-resolution representation. Remote Sens. Lett. 2017, 8, 1006–1014. [Google Scholar] [CrossRef]
  38. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2008. [Google Scholar]
  39. Ding, B.Y.; Wen, G.J.; Huang, X.H.; Ma, C.H.; Yang, X.L. Target recognition in SAR images by exploiting the azimuth sensitivity. Remote Sens. Lett. 2017, 8, 821–830. [Google Scholar] [CrossRef]
  40. Ravichandran, B.; Gandhe, A.; Smith, R.; Mehra, R. Robust automatic target recognition using learning classifier systems. Inf. Fusion 2007, 8, 252–265. [Google Scholar] [CrossRef]
  41. Doo, S.; Smith, G.; Baker, C. Target classification performance as a function of measurement uncertainty. In Proceedings of the 5th Asia-Pacific Conference on Synthetic Aperture Radar, Singapore, 1–4 September 2015. [Google Scholar]
  42. Ding, B.Y.; Wen, G.J. Exploiting multi-view SAR images for robust target recognition. Remote Sens. 2017, 9, 1150. [Google Scholar] [CrossRef]
Figure 1. Technical flowchart of the proposed method.
Figure 2. Reconstruction using extracted attributed scattering centers (ASCs): (a) original image; (b) reconstructed image.
Figure 3. The neighbor matching results between the test and template ASC sets from BMP2 at the radius of 0.3 m.
Figure 4. The neighbor matching results between the test and template ASC sets from BMP2 at radii of (a) 0.4 m; (b) 0.5 m.
Figure 5. The neighbor matching results between the BMP2 test ASC set and the template ASC sets from other targets: (a) T72; (b) BTR70.
Figure 6. The reconstructed template images based on the matching results from different targets: (a) BMP2; (b) T72; (c) BTR70.
Figure 7. Optical images of the 10 military targets. (a) BMP2; (b) BTR70; (c) T72; (d) T62; (e) BRDM2; (f) BTR60; (g) ZSU23/4; (h) D7; (i) ZIL131; (j) 2S1.
Figure 8. Optical and synthetic aperture radar (SAR) images of four configurations of the T72 tank: (a) optical images; (b) SAR images.
Figure 9. SAR images of 2S1 at different depression angles of (a) 17°; (b) 30°; (c) 45°.
Figure 10. Noisy images at different levels of additive white Gaussian noise (AWGN): (a) original image; (b) 10 dB; (c) 5 dB; (d) 0 dB; (e) −5 dB; (f) −10 dB.
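The noise corruption in Figure 10 can be simulated by adding AWGN at a prescribed SNR. The sketch below assumes signal power is measured as the mean squared intensity; the paper's exact convention is not stated:

```python
import numpy as np

def add_awgn(image, snr_db, rng=None):
    """Add white Gaussian noise to `image` at the requested SNR in dB.

    Noise power is set so that mean(signal^2) / noise_variance equals
    the linear SNR; this power convention is our assumption."""
    rng = np.random.default_rng(rng)
    p_signal = np.mean(np.asarray(image, dtype=float) ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(p_noise), size=np.shape(image))
    return image + noise
```

At 0 dB the noise variance equals the mean signal power, which is why the −5 dB and −10 dB panels in Figure 10 are dominated by noise.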
Figure 11. Performance of different methods under noise corruption.
Figure 12. Twenty percent occluded images from different directions. (a) Original image, (b) direction 1, (c) direction 2, (d) direction 3, (e) direction 4, (f) direction 5, (g) direction 6, (h) direction 7, (i) direction 8.
Figure 13. Performance of different methods under partial occlusion.
Table 1. Template and test sets used in the experiments.

| Set | Depr. | BMP2 | BTR70 | T72 | T62 | BRDM2 | BTR60 | ZSU23/4 | D7 | ZIL131 | 2S1 |
| Template set | 17° | 233 (Sn_9563) | 233 | 232 (Sn_132) | 299 | 298 | 256 | 299 | 299 | 299 | 299 |
| Test set | 15° | 195 (Sn_9563) | 196 | 196 (Sn_132) | 273 | 274 | 195 | 274 | 274 | 274 | 274 |
Table 2. Reference methods for comparison. SVM: support vector machine; SRC: sparse representation-based classification; CNN: convolutional neural network; PCA: principal component analysis.

| Abbrev. | Feature | Classifier | Ref. |
| SVM | PCA features | SVM | [26] |
| SRC | PCA features | SRC | [28] |
| CNN | Original image intensities | CNN | [31] |
| ASC1 | ASCs | ASC matching method | [20] |
| ASC2 | ASCs | ASC matching method | [21] |
Table 3. Recognition results of the proposed method under standard operating condition (SOC).

| Class | BMP2 | BTR70 | T72 | T62 | BRDM2 | BTR60 | ZSU23/4 | D7 | ZIL131 | 2S1 | PCC (%) |
| BMP2 | 194 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 99.49 |
| BTR70 | 0 | 196 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| T72 | 0 | 1 | 194 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 98.98 |
| T62 | 0 | 0 | 0 | 271 | 0 | 0 | 2 | 0 | 0 | 0 | 99.27 |
| BRDM2 | 0 | 0 | 1 | 0 | 271 | 1 | 1 | 0 | 0 | 0 | 98.91 |
| BTR60 | 0 | 1 | 0 | 1 | 0 | 193 | 0 | 0 | 0 | 0 | 98.97 |
| ZSU23/4 | 1 | 0 | 0 | 1 | 0 | 0 | 272 | 0 | 0 | 0 | 99.27 |
| D7 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 272 | 0 | 0 | 99.27 |
| ZIL131 | 0 | 3 | 0 | 0 | 0 | 0 | 1 | 0 | 270 | 0 | 98.54 |
| 2S1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 272 | 99.27 |
| Average | | | | | | | | | | | 99.04 |
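The per-class PCCs in Table 3 follow directly from the confusion matrix (correct decisions on the diagonal divided by the row sums). A small helper, ours rather than the paper's code, makes the computation explicit; note that the paper does not state its exact averaging convention:

```python
import numpy as np

def pcc(confusion):
    """Per-class and average percentage of correct classification (PCC).

    `confusion` is a square confusion matrix with rows = true classes and
    columns = predicted classes. Returns (per_class_pcc, average_pcc)."""
    confusion = np.asarray(confusion, dtype=float)
    per_class = 100.0 * np.diag(confusion) / confusion.sum(axis=1)
    return per_class, per_class.mean()
```

For instance, the BMP2 row of Table 3 (194 of 195 samples correct) yields 194/195 ≈ 99.49%.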
Table 4. Comparison of different methods under SOC.

| Method | Proposed | SVM | SRC | CNN | ASC1 | ASC2 |
| PCC (%) | 99.04 | 98.34 | 97.56 | 99.12 | 97.24 | 98.12 |
Table 5. Dataset of different configurations.

| Set | Depr. | BMP2 | BRDM2 | BTR70 | T72 |
| Template set | 17° | 233 (Sn_9563) | 298 | 233 | 232 (Sn_132) |
| Test set | 15°, 17° | 428 (Sn_9566), 429 (Sn_c21) | 0 | 0 | 426 (Sn_812), 573 (Sn_A04), 573 (Sn_A05), 573 (Sn_A07), 567 (Sn_A10) |
Table 6. Recognition results of the proposed method under configuration variance. PCC: percentage of correct classification.

| Class | Serial No. | BMP2 | BRDM2 | BTR70 | T72 | PCC (%) |
| BMP2 | Sn_9566 | 410 | 13 | 4 | 1 | 95.79 |
| | Sn_c21 | 417 | 5 | 4 | 3 | 97.20 |
| T72 | Sn_812 | 13 | 1 | 1 | 411 | 96.48 |
| | Sn_A04 | 15 | 8 | 0 | 550 | 95.99 |
| | Sn_A05 | 12 | 2 | 2 | 557 | 97.21 |
| | Sn_A07 | 8 | 2 | 10 | 553 | 97.21 |
| | Sn_A10 | 12 | 5 | 0 | 550 | 97.00 |
| Average | | | | | | 96.61 |
Table 7. PCCs of different methods under configuration variance.

| Method | Proposed | SVM | SRC | CNN | ASC1 | ASC2 |
| PCC (%) | 96.61 | 94.12 | 93.64 | 94.65 | 95.12 | 95.67 |
Table 8. Dataset with large depression angle variance.

| Set | Depr. | 2S1 | BRDM2 | ZSU23/4 |
| Template set | 17° | 299 | 298 | 299 |
| Test set | 30° | 288 | 287 | 288 |
| | 45° | 303 | 303 | 303 |
Table 9. Recognition results of the proposed method under large depression angle variance.

| Depr. | Class | 2S1 | BRDM2 | ZSU23/4 | PCC (%) | Average (%) |
| 30° | 2S1 | 282 | 2 | 4 | 97.92 | 98.15 |
| | BRDM2 | 1 | 284 | 2 | 98.95 | |
| | ZSU23/4 | 2 | 5 | 281 | 97.57 | |
| 45° | 2S1 | 199 | 73 | 31 | 65.68 | 73.93 |
| | BRDM2 | 18 | 227 | 58 | 74.92 | |
| | ZSU23/4 | 12 | 45 | 246 | 81.19 | |
Table 10. Comparison with reference methods under different depression angles.

| Method | PCC (%) at 30° | PCC (%) at 45° |
| Proposed | 98.15 | 73.93 |
| SVM | 96.88 | 63.01 |
| SRC | 96.42 | 64.74 |
| CNN | 97.02 | 63.68 |
| ASC1 | 97.34 | 70.24 |
| ASC2 | 97.62 | 71.36 |
