*2.4. Classification Procedure*

After the representation coefficient *û<sup>ω</sup>* was obtained, the weighted global query vector *y<sup>ω</sup>* and the global training matrix *F* were used to compute the class-wise reconstruction residuals; the query image was then assigned to the subject *c* with the global minimum residual [12]:

$$c = \operatorname{argmin}\_{j} ||F\delta\_{j}(\hat{u}^{\omega}) - y^{\omega}||\_{2}. \tag{12}$$
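The minimum-residual rule of Equation (12) can be sketched as follows. This is a minimal illustration under assumed conventions (columns of *F* are training samples, `labels` holds each column's class); the function name `src_classify` and the data layout are illustrative, not from the paper.

```python
import numpy as np

def src_classify(F, y, u_hat, labels):
    """Assign a query to the class with the minimum reconstruction residual.

    F      : (d, n) global training matrix (columns are training samples).
    y      : (d,) weighted global query vector y^w.
    u_hat  : (n,) sparse representation coefficient u^w.
    labels : (n,) class label of each training column.
    """
    residuals = {}
    for c in np.unique(labels):
        # delta_c keeps only the coefficients associated with class c (Eq. 12).
        delta_c = np.where(labels == c, u_hat, 0.0)
        residuals[c] = np.linalg.norm(F @ delta_c - y)
    # Return the class label with the smallest residual.
    return min(residuals, key=residuals.get)
```

With a well-chosen sparse coefficient vector, most of the energy of `u_hat` concentrates on the true class, so its class-restricted reconstruction is closest to the query.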

#### **3. Comparative Experiment Results and Analysis**

#### *3.1. Dataset and Experimental Conditions*

This section presents experimental results on the public MSTAR [22] database to assess the performance of the presented algorithm. The dataset includes three types of vehicles: BMP2 infantry fighting vehicles, BTR70 armored personnel carriers, and T72 main battle tanks, imaged at depression angles of 17 degrees and 15 degrees. Every image in the dataset is 128 × 128 pixels, and the targets cover the full 0 to 360 degree aspect range. The resolution of each image is 0.3 m × 0.3 m. The SAR target images acquired at a 17 degree depression angle formed the training set, and the images acquired at a 15 degree depression angle were used as the testing set. Visible light images of the three targets are illustrated in Figure 3. Table 1 lists the details of the training and testing sets.

**Figure 3.** Visible light images for three targets in the MSTAR database. (**a**) BMP2. (**b**) BTR70. (**c**) T72.


**Table 1.** The dataset of training and testing targets.

To reduce the negative influence of the redundant background, a 64 × 64 pixel sub-image was cropped from the center of every image, and the amplitude of the sub-image was normalized. Each reported experimental result is the average classification result over 10 runs.
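The preprocessing step above (center crop plus amplitude normalization) can be sketched as follows. The paper does not specify the exact normalization, so unit L2-norm scaling is assumed here; the function name `preprocess` is illustrative.

```python
import numpy as np

def preprocess(image, crop=64):
    """Center-crop a SAR amplitude image and normalize its amplitude.

    image : 2-D array, e.g. a 128 x 128 MSTAR chip.
    crop  : side length of the centered sub-image.
    """
    h, w = image.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    sub = image[top:top + crop, left:left + crop]
    # Unit L2-norm amplitude normalization (an assumption; the paper does
    # not state which normalization is used).
    return sub / np.linalg.norm(sub)
```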

In this experiment, SIFT [14] was applied as the local feature. SIFT features were extracted from patches sampled densely on the image with a stride of six pixels, at a scale of 16 × 16 pixels. Every SIFT descriptor has a dimension of 128.
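The dense sampling grid described above (16 × 16 patches, six-pixel stride) can be sketched as follows; the 128-D SIFT descriptor itself would then be computed on each patch. The helper name `dense_grid` is illustrative.

```python
def dense_grid(height, width, patch=16, stride=6):
    """Top-left corners of densely sampled patches.

    Uses the experiment's settings by default: a 16 x 16 patch scale and a
    six-pixel stride; only patches fully inside the image are kept.
    """
    return [(r, c)
            for r in range(0, height - patch + 1, stride)
            for c in range(0, width - patch + 1, stride)]
```

On a 64 × 64 sub-image this yields a 9 × 9 grid, i.e., 81 local descriptors per image.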

#### *3.2. Experiment 1: Investigation of the Proposed Method*

In order to verify the superiority of the presented algorithm, the proposed sparse weighting spatial pyramid pooling (SWSPP) algorithm was compared with two state-of-the-art methods: sparse coding SPM (ScSPM) [13] and locality-constrained linear coding SPM (LLC) [16]. The classification results are listed in Table 2. The performance was compared under codebook dimensions *dim* from 100 to 1024. The greater the codebook dimensionality, the more discriminative the extracted features; therefore, the results improved as the dimension increased. Under all codebook dimensions, the method presented in this study was clearly superior to the ScSPM and LLC methods. Since the proposed SWSPP takes the dependability of the sub-regions into account, it enhances the discrimination of the recognition target. The experimental results verify that the dependability generated from the sparse representation is effective for classification. The curves of the average classification results of the three algorithms are shown in Figure 4.
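The spatial pyramid max-pooling step shared by ScSPM, LLC, and the proposed variant can be sketched as follows. This is a minimal illustration: the dependability weighting that distinguishes SWSPP is omitted, the pyramid levels (1 × 1, 2 × 2, 4 × 4) are the common default rather than stated in this section, and the function name `spm_max_pool` is illustrative.

```python
import numpy as np

def spm_max_pool(codes, positions, height, width, levels=(1, 2, 4)):
    """Spatial-pyramid max pooling over local sparse codes.

    codes     : (n_patches, dim) sparse codes of the local descriptors.
    positions : (n_patches, 2) patch (row, col) coordinates in the image.
    levels    : grid sizes of the pyramid; (1, 2, 4) gives 21 cells.
    """
    pooled = []
    for g in levels:                      # g x g grid at each pyramid level
        for i in range(g):
            for j in range(g):
                # Select the patches falling into cell (i, j) of this level.
                in_cell = [(positions[k][0] * g // height == i and
                            positions[k][1] * g // width == j)
                           for k in range(len(codes))]
                cell = codes[np.array(in_cell)]
                # Max-pool within the cell; empty cells contribute zeros.
                pooled.append(cell.max(axis=0) if len(cell)
                              else np.zeros(codes.shape[1]))
    return np.concatenate(pooled)         # (1 + 4 + 16) * dim feature vector
```

Concatenating the per-cell maxima preserves coarse spatial layout while staying invariant to small shifts within each cell, which is what makes the pyramid pooling effective for target images with varying aspect angles.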


**Table 2.** MSTAR classification results (%).

**Figure 4.** Curves of classification results of different codebook dimensions of the three different methods.

#### *3.3. Experiment 2: Comparison of SWSPP with Related Algorithms*

To give a more rounded analysis, the presented approach was compared with the following methods: the sparse representation classification (SRC) method [11], the PCA feature [23], and the LDA feature [24]. Table 3 gives the average classification results. The presented method outperformed all the other methods, with an average classification result about 2% higher than those of the other approaches; in particular, the correct rate for the BTR70 reached 100%. These results validate the effectiveness of the proposed dependability-weighted feature learning method for SAR image recognition.


**Table 3.** Classification results (%) of different methods.
