Article

Sparse Weighting for Pyramid Pooling-Based SAR Image Target Recognition

Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, School of Electrical and Electronic Engineering, Tiangong University, Tianjin 300387, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(7), 3588; https://doi.org/10.3390/app12073588
Submission received: 4 March 2022 / Revised: 26 March 2022 / Accepted: 29 March 2022 / Published: 1 April 2022
(This article belongs to the Special Issue Optoelectronic Materials, Devices, and Applications)

Abstract

In this study, a novel feature learning method for synthetic aperture radar (SAR) image automatic target recognition is presented. It is based on spatial pyramid matching (SPM), which represents an image by concatenating the pooling feature vectors obtained from sub-regions at different resolutions. The method exploits the dependability of the weighted pooling features generated from the SPM sub-regions, where dependability is determined by the residuals obtained from sparse representation. The aim is to enhance the weights of the pooling features generated in sub-regions located on the target and to suppress the weights of those located in the background. The resulting feature representation for SAR image target recognition is discriminative and robust to speckle noise and background clutter. Experiments performed on the public Moving and Stationary Target Acquisition and Recognition dataset demonstrate the advantage of the presented algorithm over several state-of-the-art methods.

1. Introduction

Synthetic aperture radar (SAR) imagery has become a significant research subject in both civilian and military fields [1,2,3]. SAR offers the advantage of acquiring images in all weather conditions, by night as well as by day. Target recognition is a basic step in understanding and interpreting SAR images [4]. In this context, it is important to develop discriminative and robust methods for automatic target recognition (ATR) systems, and tremendous research attention has been paid to ATR for SAR images [5,6,7,8,9].
Recently, sparse representation (SR) has become a research focus and has been used in many areas [10]. It is robust to noise and maintains natural discrimination without any prior information; in SAR target recognition, SR can even remove the need for pose estimation. Thiagarajan et al. [11] applied SR to SAR image target recognition and used a local linear approximation of each target class manifold to generate classification predictions. This algorithm demanded no specific pose estimation or preprocessing, but its use of random projections in the high-dimensional space discarded some discriminative locality information, making occlusion more difficult to handle. In another study [12], descriptors were extracted from local patches, an image was treated as a collection of these unordered descriptors, and sparse representation was applied to represent the local patches for SAR image target classification in the framework of spatial pyramid matching (SPM). That work also confirmed that spatial pyramids are effective for SAR image classification.
The SPM model [13] for image classification is a statistics-based model whose objective is a better image representation. To obtain discriminant details of the images, SPM first extracts low-level local features through, for example, the scale-invariant feature transform (SIFT) [14] or the histogram of oriented gradients (HOG) [15]. However, local features are not fed directly to image classifiers because of their sensitivity to noise and their computational complexity. One solution is to represent images by integrating the local features into mid-level features. This image representation works well with linear classifiers and has achieved competitive performance in many image classification tasks. Nonetheless, the SPM model is not ideal for SAR images, because the variety of target poses in SAR images undermines the advantages of SPM. Notably, on the observation that locality is more essential than sparsity [16], locality-constrained linear coding (LLC) was proposed in place of vector quantization (VQ) coding and obtained a good approximation. Zhang et al. [17] applied a locality constraint to ensure that similar patches share similar codes in the coding scheme for SAR image target recognition; however, their codebook required a preprocessing step to estimate target poses.
In a previous study [5], a complementary spatial pyramid coding method achieved good performance in change detection. SAR targets suffer from the effects of speckle noise and background clutter; therefore, different parts of an image play different roles in image representation. Combining the advantages of SPM and SR, a novel SAR image target recognition method is proposed herein, which uses the dependability obtained by SR to weight the sub-regions at every pyramid level. Some sub-regions in each level of the spatial pyramid may consist mainly of background noise, while others represent the target. According to sparse representation theory, the target parts can be represented by the training samples from the same class [18]; therefore, a small residual value corresponds to the correct class, which indicates the dependability of the sub-region. We apply this dependability to weight the pooling features obtained from the SPM sub-regions [19], so that pooling features located on the target are enhanced while those located in the background are suppressed. The results obtained using the real Moving and Stationary Target Acquisition and Recognition (MSTAR) SAR database demonstrate that the method presented herein is more robust to varying unconstrained conditions than the methods reported in other recent related studies.
The organization of this paper is as follows: Section 2 introduces the presented sub-region weighting method. Section 3 reports the experimental results of the presented algorithm and compares it with some classical approaches. Finally, Section 4 concludes the paper.

2. SAR Image Recognition with Sparse Weighting Spatial Pyramid Pooling

The method proposed herein utilizes the SPM model and SR simultaneously to address SAR image recognition. Motivated by the idea that different parts of an image play different roles, a sparse weighting spatial pyramid pooling method is proposed to extract a new type of feature. The main objective is to reduce the influence of background clutter and enhance the target. Figure 1 shows the flowchart of the proposed SAR image target recognition method. Firstly, an image was divided into progressively finer sub-regions; the dense local features were then computed and, following SPM, coded and pooled to obtain a feature vector for each sub-region. The pooling feature vector of each sub-region at each pyramid level was weighted by its dependability, determined from the residuals obtained by SR. Finally, the representation of the SAR image was built by concatenating the weighted feature vectors. With sparse representation classification, the method is robust, in particular when dealing with speckle noise and large background clutter.

2.1. Local Feature Extraction and Descriptor Quantization

Considering that the SIFT feature is invariant to scale, orientation, and affine distortion, in this study, dense-SIFT descriptors were extracted to represent an SAR image [14]. The image $I$ can be denoted by a set of local feature descriptors as $I = [d_1, d_2, \ldots, d_N] \in \mathbb{R}^{D \times N}$, where $d_i$ is a $D$-dimensional SIFT descriptor vector. To build a codebook $B = [b_1, b_2, \ldots, b_M] \in \mathbb{R}^{D \times M}$, the K-means clustering algorithm [13] with the Euclidean distance was adopted to cluster the local features into $M$ groups, and the resulting group centers were taken as the codebook.
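As an illustration, the following sketch shows how such dense-SIFT descriptors and a K-means codebook could be computed with OpenCV and scikit-learn. The grid step of six pixels and the 16 × 16 patch scale follow Section 3.1; the function names and the codebook size M are illustrative assumptions, and descriptors are stored row-wise (N × D), i.e., the transpose of the D × N layout used in the text.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def dense_sift(img, step=6, scale=16):
    """Dense SIFT on a regular grid: one 128-D descriptor per grid point.

    img: uint8 grayscale SAR sub-image (e.g., the 64 x 64 center crop).
    """
    sift = cv2.SIFT_create()
    h, w = img.shape
    keypoints = [cv2.KeyPoint(float(x), float(y), float(scale))
                 for y in range(scale // 2, h - scale // 2, step)
                 for x in range(scale // 2, w - scale // 2, step)]
    keypoints, descriptors = sift.compute(img, keypoints)
    coords = np.array([kp.pt for kp in keypoints])  # (x, y) of each descriptor
    return descriptors, coords                      # (N, 128), (N, 2)

def build_codebook(all_descriptors, M=1024):
    """Codebook B: the M cluster centers of the pooled training descriptors."""
    km = MiniBatchKMeans(n_clusters=M, n_init=3).fit(all_descriptors)
    return km.cluster_centers_                      # rows are b_1, ..., b_M
```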
Sparse coding [10] was applied to encode the local feature vectors, as follows:
$$\arg\min_{V} \sum_{i=1}^{N} \|d_i - B v_i\|^2 + \lambda \|v_i\|_1. \quad (1)$$
Each descriptor $d_i$ is thus encoded as its corresponding $M$-dimensional sparse vector $v_i$. The sparse coding vectors are obtained by solving Equation (1) with the feature-sign search algorithm [20].
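A minimal sketch of this coding step, using scikit-learn's `SparseCoder` as a stand-in for the feature-sign search solver (both minimize the same $\ell_1$-regularized objective); the regularization value is an illustrative assumption:

```python
from sklearn.decomposition import SparseCoder

# codebook: (M, D) array with codewords b_m as rows (transpose of B in the
# text), assumed l2-normalized as lasso_lars expects.
# descriptors: (N, D) dense-SIFT descriptors d_i as rows.
coder = SparseCoder(dictionary=codebook,
                    transform_algorithm="lasso_lars",  # l1 solver, as in Eq. (1)
                    transform_alpha=0.15)              # plays the role of lambda
V = coder.transform(descriptors)                       # (N, M) sparse codes v_i
```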
The next step is spatial pooling, which aims to obtain a more discriminative image representation from each sub-region. This study constructed a three-level spatial pyramid: at every resolution $e$, $e = 0, 1, 2$, a grid was constructed with $2^e$ cells along each dimension, i.e., the 1 × 1, 2 × 2, and 4 × 4 grid structures, giving $K = 21$ sub-regions in total. Let $V$ be a collection of $T$ local feature codes acquired from a sub-region. The max-pooling strategy was used to combine all the codes in the sub-region, which yields features that are discriminative yet robust to spatial variations in SAR images. Max-pooling is expressed as follows:
$$t_k = \max(v_1, v_2, \ldots, v_T), \quad (2)$$
where $\max$ denotes element-wise maximization over the involved vectors. The local features were pooled in all the sub-regions and concatenated to form the image representation $f = [t_1; t_2; \ldots; t_K]$.
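To make the pyramid pooling concrete, the sketch below max-pools the sparse codes over the 1 × 1, 2 × 2, and 4 × 4 grids. It assumes the `codes` and `coords` arrays from the sketches above; the cell-assignment convention is an illustrative choice.

```python
import numpy as np

def pyramid_max_pool(codes, coords, img_shape, levels=(1, 2, 4)):
    """Spatial pyramid max-pooling (Eq. (2)) over K = 1 + 4 + 16 = 21 cells.

    codes:     (N, M) sparse codes v_i of the local descriptors
    coords:    (N, 2) (x, y) pixel locations of the descriptors
    img_shape: (height, width) of the image
    """
    h, w = img_shape
    pooled = []
    for g in levels:
        for row in range(g):
            for col in range(g):
                in_cell = ((coords[:, 0] >= col * w / g) &
                           (coords[:, 0] < (col + 1) * w / g) &
                           (coords[:, 1] >= row * h / g) &
                           (coords[:, 1] < (row + 1) * h / g))
                # Element-wise max over the codes falling in this cell;
                # an empty cell contributes a zero vector.
                t_k = (codes[in_cell].max(axis=0) if in_cell.any()
                       else np.zeros(codes.shape[1]))
                pooled.append(t_k)
    return np.concatenate(pooled)  # f = [t_1; ...; t_K], length K * M
```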
Consider a set of $G$ training images from $C$ classes with pooled features $F = [f_1, f_2, \ldots, f_G]$, where $f_g$ is the feature vector of the $g$-th image. Correspondingly, $F$ is partitioned as $F = [F_1^T, F_2^T, \ldots, F_K^T]^T$ with $F_k \in \mathbb{R}^{d \times G}$ and $d < G$. Specifically, the $g$-th column of $F_k$ is the feature vector of the $k$-th sub-region of the $g$-th image, and $G$ is the number of images in the training dataset. Analogously, a test image $y$ is partitioned as $y = [y_1; y_2; \ldots; y_K]$.
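Under this layout, slicing the stacked training matrix into the per-sub-region dictionaries $F_k$ is a simple partition along the feature axis; the shapes below are illustrative (random data only stands in for actual pooled features).

```python
import numpy as np

K, M = 21, 1024   # sub-regions and codebook size (illustrative)
d = M             # per-sub-region feature dimension after max-pooling
G = 698           # number of training images (cf. Table 1: 233 + 233 + 232)

# F: (K*d, G); column g is the pyramid feature f_g of the g-th training image.
F = np.random.randn(K * d, G)

# Per-sub-region dictionaries F_k (d x G), matching F = [F_1^T, ..., F_K^T]^T.
F_blocks = [F[k * d:(k + 1) * d, :] for k in range(K)]

# A test feature y splits the same way into y_1, ..., y_K.
y = np.random.randn(K * d)
y_blocks = [y[k * d:(k + 1) * d] for k in range(K)]
```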

2.2. Determination of Sub-Region Weights

Traditional SPM models use all the extracted local descriptors for image representation, yet some of them may not belong to the object to be recognized. It is therefore desirable to learn spatial weights that boost the target part and weaken the background parts. From the sparse representation point of view, the key observation is that the target part admits a sparser representation than the background. The feature vectors of the sub-regions at every pyramid level were weighted on the basis of residuals, which determine their dependability [21]. The following optimization problem [10] was formulated:
$$\hat{x}_k = \arg\min_{x_k} \left\{ \|F_k x_k - y_k\|_2^2 + \lambda \|x_k\|_1 \right\}, \quad 1 \le k \le K, \quad (3)$$
where $y_k$ is the pooling feature of the $k$-th sub-region and $F_k$ is the corresponding dictionary.
The residuals of a sub-region can be measured simply by using the $\ell_2$-norm as follows:
$$r_k^c = \|F_k \delta_c(\hat{x}_k) - y_k\|_2, \quad (4)$$
where $r_k^c$ is the residual of the $k$-th sub-region corresponding to the $c$-th class, $\delta_c(\hat{x}_k)$ is the selector function that keeps the elements of $\hat{x}_k$ associated with the $c$-th class, $c = 1, 2, \ldots, C$, and $C$ is the number of classes in the training dataset.
All the residuals of the $k$-th sub-region were concatenated to generate the residual vector $r_k$ as follows:
$$r_k = [r_k^1, r_k^2, \ldots, r_k^C]. \quad (5)$$
According to the theory of SR [10], a target sub-region can be accurately represented only by the training samples from its own class; the resulting residual vector therefore contains only one small element. A background sub-region, in contrast, tends to lie far from the subspace spanned by the training samples of any single class, yet nearly within the subspace spanned by the training samples of all classes, so its residual vector contains nearly equal elements. Based on this analysis, the following function was applied to evaluate the sub-region sparsity:
$$\varsigma_k = \frac{\min(r_k)}{\operatorname{mean}(r_k)}. \quad (6)$$
When $r_k$ has a single zero or near-zero residual, $\varsigma_k$ approaches its minimum value of 0, which shows that the sub-region is well represented by one class subspace. When all the residuals in $r_k$ are nonzero and equal, $\varsigma_k$ reaches its maximum value of 1, which indicates that the sub-region likely contains noise or belongs to the background.
This is verified by the numerical results shown in Figure 2. Figure 2a,b are example images that illustrate the distribution of sub-regions in the three-level pyramid, and Figure 2c,d show the $\varsigma_k$ of the sub-regions corresponding to Figure 2a,b, respectively. In the third pyramid level, the residual sparsity $\varsigma_k$ of the sub-regions located on the target (such as those marked 11 and 12 in Figure 2a) was smaller than that of the background sub-regions (such as those marked 9 and 10 in Figure 2a). Additionally, the residuals differed across resolutions: they were much greater at the low-resolution level than at the high-resolution ones, particularly for the target parts, presumably because a larger region includes more background clutter and speckle noise. Therefore, to differentiate the target sub-regions from the background ones, the residuals can be employed to weight the feature vectors.
A simple way to exploit this is to suppress the feature representation of the background sub-regions before classification. This was achieved by weighting each sub-region with the complement of its sparsity, $\omega_k$, which is viewed as its dependability:
$$\omega_k = 1 - \varsigma_k. \quad (7)$$
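The following sketch computes the dependability weights of Equations (3)–(7) end-to-end, with scikit-learn's `Lasso` standing in for the paper's $\ell_1$ solver. `F_blocks` and `y_blocks` follow the layout sketched above; the function name `subregion_weights`, the label array `class_index`, and the regularization value are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def subregion_weights(F_blocks, y_blocks, class_index, n_classes, lam=0.01):
    """Dependability weights w_k = 1 - varsigma_k for the K sub-regions.

    F_blocks:    list of K dictionaries F_k, each (d, G)
    y_blocks:    list of K query vectors y_k, each (d,)
    class_index: length-G array giving the class of each training column
    """
    weights = []
    for F_k, y_k in zip(F_blocks, y_blocks):
        # Eq. (3): sparse code of y_k over the sub-region dictionary F_k.
        x_hat = Lasso(alpha=lam, max_iter=5000).fit(F_k, y_k).coef_
        # Eqs. (4)-(5): per-class reconstruction residuals r_k^c, where the
        # column mask implements the class selector delta_c.
        r = np.array([
            np.linalg.norm(F_k[:, class_index == c] @ x_hat[class_index == c]
                           - y_k)
            for c in range(n_classes)])
        # Eq. (6): near 0 for target cells, near 1 for clutter cells.
        sigma = r.min() / r.mean()
        weights.append(1.0 - sigma)  # Eq. (7)
    return np.array(weights)
```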

2.3. Weighted SPM Sparse Representation

After the weights were obtained, the weighted sub-regions were integrated to reconstruct a global feature vector according to the SPM model. Each sub-region vector $y_k$ was multiplied by its corresponding weight to give the weighted query vector of that sub-region:
$$y_k^{\omega} = \omega_k y_k. \quad (8)$$
All the sub-regions' weighted query vectors were concatenated to produce the weighted feature vector
$$y^{\omega} = [(y_1^{\omega})^T, (y_2^{\omega})^T, \ldots, (y_K^{\omega})^T]^T. \quad (9)$$
Consider that $y^{\omega}$ can be represented linearly by $F$ as follows:
$$y^{\omega} = F u^{\omega}. \quad (10)$$
To extract the label information from the training dataset matrix and the weighted query feature vector, the SR classification method [13] was applied to complete the target image classification:
$$\hat{u}^{\omega} = \arg\min_{u^{\omega}} \left\{ \|F u^{\omega} - y^{\omega}\|_2^2 + \lambda \|u^{\omega}\|_1 \right\}. \quad (11)$$
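Continuing the sketch above, the weighted query of Equations (8)–(9) and the global sparse code of Equation (11) might look as follows (again with `Lasso` standing in for the $\ell_1$ solver, and `F`, `F_blocks`, `y_blocks`, and `class_index` carried over from the earlier sketches):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Eqs. (8)-(9): weight each sub-region feature, then re-concatenate.
weights = subregion_weights(F_blocks, y_blocks, class_index, n_classes=3)
y_w = np.concatenate([w_k * y_k for w_k, y_k in zip(weights, y_blocks)])

# Eq. (11): sparse code of the weighted query over the global matrix F.
u_hat = Lasso(alpha=0.01, max_iter=5000).fit(F, y_w).coef_
```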
Linear approximation [15] is capable of selecting significant training vectors to provide good discrimination among all the classes.

2.4. Classification Procedure

After the representation coefficient $\hat{u}^{\omega}$ was obtained, the weighted global query vector $y^{\omega}$ and the global training matrix $F$ were used to calculate the per-class reconstruction residuals; the query image was then assigned to the class $c$ with the global minimum residual [12]:
$$c = \arg\min_{j} \|F \delta_j(\hat{u}^{\omega}) - y^{\omega}\|_2. \quad (12)$$
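The decision rule of Equation (12) then reduces to a per-class residual comparison, continuing the sketch above:

```python
import numpy as np

# Eq. (12): the column mask implements delta_j (keep only class-j
# coefficients); assign the query to the class whose training columns
# reconstruct y_w best.
residuals = [np.linalg.norm(F[:, class_index == j] @ u_hat[class_index == j]
                            - y_w)
             for j in range(3)]  # 3 MSTAR classes
c_star = int(np.argmin(residuals))
```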

3. Comparative Experimental Results and Analysis

3.1. Dataset and Experimental Conditions

This section presents experimental results on the public MSTAR [22] database to assess the performance of the presented algorithm. Three types of vehicle are included in the dataset: the BMP2 infantry fighting vehicle, the BTR70 armored personnel carrier, and the T72 main battle tank, imaged at depression angles of 17 and 15 degrees. Every image in the dataset is 128 × 128 pixels, covering the full 0–360 degree aspect range, with a resolution of 0.3 m × 0.3 m. The training set contained the SAR target images at a 17-degree depression angle; the images at a 15-degree depression angle were used as the testing set. Visible light images of the targets are shown in Figure 3, and Table 1 lists the details of the training and testing sets.
To reduce the negative influence of redundant background, a 64 × 64 pixel sub-image was cropped from the center of every image, and its amplitude was normalized. Each reported result is the average classification accuracy over 10 runs.
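A minimal sketch of this preprocessing step; the normalization by the maximum amplitude is an assumption, since the paper does not specify the normalization scheme.

```python
import numpy as np

def preprocess(img, crop=64):
    """Center-crop a 128 x 128 MSTAR chip to crop x crop, normalize amplitude."""
    h, w = img.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    patch = img[top:top + crop, left:left + crop].astype(np.float64)
    return patch / (patch.max() + 1e-12)  # assumed max-amplitude normalization
```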
In this experiment, SIFT [14] descriptors were used as the local features. They were extracted densely from 16 × 16 patches located every six pixels on the image, and every SIFT descriptor was 128-dimensional.

3.2. Experiment 1: Investigation of the Proposed Method

To verify the superiority of the presented algorithm, the presented sparse weighting spatial pyramid pooling (SWSPP) algorithm was compared with two state-of-the-art methods: sparse coding SPM (ScSPM) [13] and locality-constrained linear coding SPM (LLC) [16]. The classification results are listed in Table 2, which compares performance for codebook dimensions dim from 100 to 1024. The greater the codebook dimension, the more discriminative the extracted features; accordingly, the results improved as the dimension increased. The method presented in this study was clearly superior to ScSPM and LLC at every codebook dimension. Since the proposed SWSPP takes the dependability of the sub-regions into account, it enhances the discrimination of the recognition target. These results verify that the dependability derived from sparse representation is effective for classification. The average classification results of the three algorithms are plotted in Figure 4.

3.3. Experiment 2: Comparison of SWSPP with Related Algorithms

To provide a more comprehensive analysis, the presented approach was compared with the following methods: the sparse representation classification (SRC) method [11], the PCA feature [23], and the LDA feature [24]. Table 3 gives the average classification results, which show that the presented method outperformed all the others: its average classification accuracy was roughly 2% higher than the best competing approach, and the recognition rate of the BTR70 reached 100%. These results validate the effectiveness of the proposed dependability-weighted feature learning method for SAR image recognition.

4. Conclusions

In this study, a robust synthetic aperture radar (SAR) automatic target recognition approach was presented that applies sparse representation to produce a weighted global feature vector based on spatial pyramid matching. SAR image target recognition was performed on the reconstructed image representation. The experimental results clearly and consistently showed that the proposed framework is significantly more discriminative than traditional SPM methods. Moreover, although the feature coding–pooling framework performs well in image classification tasks, the unavoidable information loss incurred by feature quantization in the coding step and the undesired dependence of pooling on the image spatial layout may severely limit recognition performance. More systematic investigation of new coding methods is therefore still required to improve classification performance in future research.

Author Contributions

Conceptualization, S.W. and Y.L.; methodology, S.W.; software, Y.L.; validation, L.L.; formal analysis, S.W.; investigation, L.L.; resources, S.W.; data curation, S.W.; writing—original draft preparation, Y.L.; writing—review and editing, S.W.; visualization, L.L.; supervision, S.W.; project administration, S.W.; funding acquisition, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China (Grant No. 61901297) and in part by the National Science Foundation of Tianjin Province of China (Grant No. 18JCQNJC70600).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Luo, Z.; Jiang, X.; Liu, X. Synthetic minority class data by generative adversarial network for imbalanced SAR target recognition. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2459–2462.
  2. Martone, A.; Innocenti, R.; Ranney, K. Moving Target Indication for Transparent Urban Structures; U.S. Army Research Laboratory: Adelphi, MD, USA, 2009.
  3. Zhang, J.; Xing, M.; Xie, Y. FEC: A feature fusion framework for SAR target recognition based on electromagnetic scattering features and deep CNN features. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2174–2187.
  4. Mishra, A.K.; Bernard, M. Automatic target recognition using multipolar bistatic synthetic aperture radar images. IEEE Trans. Aerosp. Electron. Syst. 2010, 46, 1906–1920.
  5. Wang, S.; Jiao, L.; Yang, S.; Liu, H. SAR image target recognition via complementary spatial pyramid coding. Neurocomputing 2016, 196, 125–132.
  6. O'Sullivan, J.A.; DeVore, M.D.; Kedia, V. SAR ATR performance using a conditionally Gaussian model. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 91–108.
  7. Owirka, G.J.; Verbout, S.M.; Novak, L.M. Template-based SAR ATR performance using different image enhancement techniques. Proc. SPIE 1999, 3721, 302–319.
  8. Chen, Y.; Blasch, E.; Qian, T. Experimental feature-based SAR ATR performance evaluation under different operational conditions. In Proceedings of SPIE 6968, Signal Processing, Sensor Fusion, and Target Recognition XVII, Orlando, FL, USA, 16–20 March 2008; pp. 69680F-1–69680F-12.
  9. Amoon, M.; Rezai-rad, G.-A. Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features. IET Comput. Vis. 2014, 8, 77–85.
  10. Wright, J.; Yang, A.Y.; Ganesh, A. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227.
  11. Thiagarajan, J.J.; Ramamurthy, K.N.; Knee, P. Sparse representations for automatic target classification in SAR images. In Proceedings of the IEEE 4th International Symposium on Communications, Control and Signal Processing, Limassol, Cyprus, 3–5 March 2010; pp. 1–4.
  12. Knee, P.; Thiagarajan, J.J.; Ramamurthy, K.N. SAR target classification using sparse representations and spatial pyramids. In Proceedings of the IEEE Radar Conference (RADAR), Kansas City, MO, USA, 23–27 May 2011; pp. 294–298.
  13. Yang, J.; Yu, K.; Gong, Y. Linear spatial pyramid matching using sparse coding for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1794–1801.
  14. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  15. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; pp. 886–893.
  16. Wang, J.; Yang, J.; Yu, K. Locality-constrained linear coding for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 3360–3367.
  17. Zhang, S.; Sun, F.; Liu, H. Locality-constrained linear coding with spatial pyramid matching for SAR image classification. In Foundations and Practical Applications of Cognitive Systems and Information Processing; Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2014; Volume 215, pp. 867–876.
  18. Lai, J.; Jiang, X. Modular weighted global sparse representation for robust face recognition. IEEE Signal Process. Lett. 2012, 19, 571–574.
  19. Lazebnik, S.; Schmid, C.; Ponce, J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 2169–2178.
  20. Lee, H.; Battle, A.; Raina, R.; Ng, A. Efficient sparse coding algorithms. In Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 4–7 December 2006; pp. 801–808.
  21. Needell, D.; Tropp, J.A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2008, 26, 301–321.
  22. Mossing, J.C.; Ross, T.D. An evaluation of SAR ATR algorithm performance sensitivity to MSTAR extended operating conditions. Proc. SPIE 1998, 3370, 554–565.
  23. Mishra, A.K.; Mulgrew, B. Radar signal classification using PCA-based features. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Toulouse, France, 14–19 May 2006.
  24. Lu, J.; Plataniotis, K.N.; Venetsanopoulos, A.N. Face recognition using LDA-based algorithms. IEEE Trans. Neural Netw. 2003, 14, 195–200.
Figure 1. Overview of the SAR image target recognition flowchart.
Figure 2. Illustration of the numerical results of the sub-region sparsity. (a,b) Example images illustrating the distribution of sub-regions in the three-level pyramid. (c,d) The $\varsigma_k$ of the sub-regions corresponding to images (a,b).
Figure 3. Visible light images of the three targets in the MSTAR database. (a) BMP2. (b) BTR70. (c) T72.
Figure 4. Curves of classification results of the three methods for different codebook dimensions.
Table 1. The dataset of training and testing targets.

| Target | Training Serial No. | Training Samples | Testing Serial No. | Testing Samples |
|--------|---------------------|------------------|--------------------|-----------------|
| BTR70  | Sn-c71              | 233              | Sn-c71             | 196             |
| BMP2   | Sn-9663             | 233              | Sn-9663            | 195             |
|        |                     |                  | Sn-9666            | 196             |
|        |                     |                  | Sn-c21             | 196             |
| T72    | Sn-132              | 232              | Sn-132             | 196             |
|        |                     |                  | Sn-812             | 195             |
|        |                     |                  | Sn-s7              | 191             |
Table 2. MSTAR classification results (%).

| dim  | Data    | ScSPM | LLC-SPM | SWSPP |
|------|---------|-------|---------|-------|
| 100  | BMP2    | 77.27 | 76.17   | 80.99 |
|      | BTR70   | 92.35 | 95.77   | 97.04 |
|      | T72     | 85.36 | 88.24   | 85.02 |
|      | Average | 84.99 | 86.72   | 87.68 |
| 200  | BMP2    | 79.66 | 80.82   | 83.79 |
|      | BTR70   | 94.13 | 95.36   | 98.03 |
|      | T72     | 86.17 | 88.09   | 87.29 |
|      | Average | 86.65 | 88.09   | 89.70 |
| 400  | BMP2    | 83.65 | 83.48   | 87.67 |
|      | BTR70   | 96.73 | 96.73   | 99.39 |
|      | T72     | 86.87 | 87.79   | 88.14 |
|      | Average | 89.08 | 89.33   | 91.73 |
| 600  | BMP2    | 84.16 | 85.33   | 89.27 |
|      | BTR70   | 97.19 | 97.86   | 100   |
|      | T72     | 87.63 | 88.36   | 89.00 |
|      | Average | 89.66 | 90.52   | 92.76 |
| 800  | BMP2    | 84.67 | 86.51   | 91.31 |
|      | BTR70   | 98.16 | 97.81   | 100   |
|      | T72     | 88.38 | 88.95   | 90.03 |
|      | Average | 90.41 | 91.09   | 93.78 |
| 1024 | BMP2    | 87.05 | 86.46   | 90.46 |
|      | BTR70   | 96.94 | 98.78   | 100   |
|      | T72     | 89.18 | 89.78   | 92.27 |
|      | Average | 91.06 | 91.67   | 94.24 |
Table 3. Classification results (%) of different methods.

| Algorithm | BMP2  | BTR70 | T72   | Average |
|-----------|-------|-------|-------|---------|
| SRC [11]  | 84.48 | 98.33 | 94.59 | 92.47   |
| PCA [23]  | 94.54 | 85.66 | 96.79 | 92.33   |
| LDA [24]  | 98.47 | 67.80 | 95.86 | 87.38   |
| SWSPP     | 90.46 | 100   | 92.27 | 94.24   |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
